I’ve been experimenting with the OpenAI ChatGPT system, learning as I go about how to ask better questions (prompts) that elicit responses that are useful and not restatements of my prompt or a previous response. (It is easy to ask a leading question that results in a response that easily feeds a confirmation bias.)
Here is a Google Doc that you can comment on with my recent discussion with the AI system on the challenge of supporting investment in public education at all levels.
As my friend Andrea has helped me see, one of the opportunities created by incorporating an AI system in teaching and learning is developing the skill to create and test better questions. Perhaps that skill is a clue to the future of the human-machine relationship.
David Epstein, in his book Range, suggested that the future relationship between humans and machines might be formed from the human's ability to contribute strategy when solving "wicked problems," combined with the machine's ability to harness large volumes of data and patterns, useful for "kind problems." A good (framing) question can create a path to a novel and valuable strategy, as we've found in Strategic Doing and most education activities.
Education funding and public support are surely a wicked problem, whether it's the immediate political decision of a BSA funding level or the long-term priority of funding as a community and economic development strategy.
I’ve been having regular chats with ChatGPT, and I’d appreciate any feedback you have on my recent chat regarding funding public education. I’m sharing it for two reasons. One is that you might find the question-response exchange and its documentation interesting. I’ve formatted each of my prompts as a heading so that they all roll up into a table of contents at the beginning; you can scan the prompts and jump to one that interests you. Throughout the thread, the ChatGPT responses are indented from my prompts. It’s a bit time-intensive to copy, paste, and format, but I’ve not found a better way to save or share chat sessions that makes clear which text is the human prompt and which is the machine response, in a way that lets me analyze the trajectory of my questions against the content of the responses.
You might find it humorous, as I did, that further down in the discussion I pointed out that the AI response was incomplete, and the subsequent response acknowledged the missing information!
Finally, you might enjoy seeing how I finally broke ChatGPT with my last question. The result was something I posted in a social media note as a “#TuringTestFail.” (The Turing test is a proposed test of whether a machine’s responses are indistinguishable from a human’s.)
I’d enjoy any feedback you have on the approach of using and documenting ChatGPT sessions, on my specific questions and how they could be improved, or on the content of the discussion.
I’m pondering developing a seminar class exploring the use of AI systems that are increasingly available for anyone to use, including group prompt development along with response review and discussion.