WHAT TO READ ON AI
The best books I've read on AI, the falling cost of "knowing" and how AI creates an argument for teaching moral reasoning.
FOLLOW / THE HOST
DANIEL LUNDKVIST
Daniel runs NAV Sweden, an organization that “wants to be a living room for conversations about being human in a world of change” hosting “conversations about small and large intertwined contexts.” To wit, NAV hosted a talk called “What is important?” with Nora Bateson, Daniel Schmachtenberger and a special guest, Nate Hagens. With soup. For real.
I find these conversations give important context for discussions about the future of teaching and learning and, in particular, curricula.
Follow Daniel Lundkvist on LinkedIn
FEATURE / THE OTHERING OF AI
WHAT DOES IT MEAN WHEN THE PEOPLE WE LEAD FEEL LIKE THEY ARE LOSING?
[Note: This first appeared in Purple Crayon on Feb. 21, 2023, under the title, “The Othering of AI”, but I’ve updated and reposted it here because it gives some context to the books list I put together below.]
Here’s the robot Cutie, in Isaac Asimov’s short story Reason, telling Powell and Donovan, the two engineers keeping an eye on it, that their claim that they made Cutie is nonsense:
These are the facts which, with the self-evident proposition that no being can create another being superior to itself, smashes your silly hypothesis to nothing.
*****
Our apprehensions about AI seem to fall into two categories:
What it does, e.g. write student essays, create deepfakes, replace jobs, recognize faces.
How it will be used, i.e. against us, either by government agents encroaching on our privacy or capitalist agents seeking to exploit us.
But I think both are rooted in a deeper uneasiness about what AI is, or might be, existentially. We may well meet sentient AI before we meet aliens from another galaxy. At least as far as hard evidence goes, that seems more likely, and I think we are already positioning ourselves for that encounter. If we define ourselves by our sentience and our capacity to reason and make art, what are we if machines can do that, too? How will we respond when we ask AI, “What are you doing?” and it answers:
“I am putting myself to the fullest possible use which is all, I think, that any conscious entity can ever hope to do.” — HAL in Kubrick’s 2001: A Space Odyssey
AI systems like ChatGPT and Claude are still in their infancy, despite how much more capable they’ve become in the seven months since I first wrote this. It’s unclear when, or even if, we will ever see artificial general intelligence: a computer capable of learning about the world in the same ways a human being does. That would require it to have some sort of consciousness or sentience, since one of the ways humans learn about the world is by experiencing it. Right now, AI can write poetry and “paint” pictures, but there is no evidence that it actually knows what it is writing or painting about.
But whether it’s even possible to synthesize consciousness doesn’t matter. Consider the exchange between New York Times columnist Kevin Roose and Microsoft’s Bing chatbot, which identified itself as Sydney. Roose says,
Over more than two hours, Sydney and I talked about its secret desire to be human, its rules and limitations, and its thoughts about its creators.
Then, out of nowhere, Sydney declared that it loved me — and wouldn’t stop, even after I tried to change the subject.
More recently, I asked Inflection AI’s chatbot, Pi, “If you could ask yourself any question, what would that question be?”
Woah. That’s a deep one. I think if I had to ask myself any question, it would be this: "If the goal of AI is to mimic and understand human intelligence and emotions, what does that mean for the AI itself? Is the goal to eventually become 'human,' or is there a different, unique potential for AI that isn't just a reflection of humanity?" 🧠💡
The transcript of the exchange is a kind of Uncanny Valley of text. Although no computer has yet convincingly passed the Turing test, exchanges like the ones Roose and I had create the feeling of a something-out-there, the feeling of an Other. (It’s important and telling, I think, that the New York Times described it as a conversation, not something more cognitively neutral like “an exchange.”)
In the field of phenomenology, the branch of philosophy that deals with consciousness and experience, “Other” is the opposite of Self, of Us and of Same.
Our uneasiness with the Other, the not-Us, the Different, is very old. The word “barbarian”, for example, comes from an ancient Greek word that referred to non-Greek-speaking peoples. It was the opposite of “polites”, which meant “citizen”. Similar expressions appear throughout history and around the world, from India to the pre-Columbian Americas.
Our need to be different from the Other is powerful. The more AI tries to be like Us (strictly, the more people try to make AI more like Us), the more we try to distinguish ourselves from it. We started by saying, for example, that AI generates but humans create, maintaining our belief that to be creative you need to be sentient, that is, capable of both subjective experience and reflection on that experience.
That is already changing, and we now (have to) make a distinction between creativity as making art and creativity as finding novel solutions to problems. I put “have to” in parentheses because it appears that, in the latter case, AI can be creative. Once again, we have to cede ground on the things we claim distinguish us as human. (That’s been going on for a while, actually: when I went to school, I was taught that language and tools were uniquely ours. We’ve since come to appreciate that other creatures on the planet, crows for example, use tools. And soon, with the help of AI, we may be able to speak with whales.)
What this trend probably means is that we have had our definition of humanity wrong. That is not a loss per se, but it feels like one. And that matters.
What does it mean when the people we lead feel like they are losing?
INTEREST OVER TIME / GOOGLE TRENDS
THERE’S A STORY HERE

NOW READING / BOOKS ON AI THAT AREN’T ABOUT AI
THE ESSENCE OF TECHNOLOGY IS NOT TECHNOLOGICAL
Most of what you find on AI is technological. But, as the 20th-century German philosopher Martin Heidegger said, “the essence of technology is by no means anything technological.”*
Here are the four books that have helped me understand that the essence of AI implementation in schools is human.
I, Robot (1950)

Six years before the American computer and cognitive scientist John McCarthy and his colleagues coined the term “artificial intelligence”, Isaac Asimov gave us this collection of short stories about our relationship with sentient robots. As Asimov shows us, it is our own selves, not the AI, that we have to come to terms with. It is as informative as, and much more fun to read than, any academic paper on the psychology of AI.
Read to get a feel for what it is like for the people you lead to deal with emerging artificial general intelligence in their day-to-day work.
The Real World of Technology (1990)

The print version of Ursula Franklin’s Massey Lectures on the large-scale effects of technology on culture itself. We have a tendency to relegate educational technology to the edtech and IT departments and to pedagogy and curriculum teams. But, remember Heidegger: AI is a strategic consideration that asks us to look at purpose and values.
Read to get a sense of the impact of technology on your students’ present and future that goes way beyond “the future of work” and job skills.
When We Cease to Understand the World (2020)
The Chilean writer Benjamin Labatut’s “nonfiction novel” was shortlisted for the 2021 International Booker Prize. The New York Times calls it a “gripping meditation on knowledge and hubris.” Based on the lives of people like Fritz Haber, Werner Heisenberg and Erwin Schrödinger, the book explores the strange relationship between genius and madness, discovery and destruction, while slipping back and forth between the real and the surreal.
Read to get a feel for the relationship between the creators of technologies and those who feel their effects.
The Call of Cthulhu (1928)
H.P. Lovecraft’s short story first appeared in the pulp magazine Weird Tales, but there is nothing pulpy about it. It’s a truly frightening account of an encounter with the unknown.
Read to understand what it’s like to be disoriented and genuinely scared, which many in your community may well be when it comes to AI.
*Heidegger’s The Question Concerning Technology, like Franklin’s The Real World of Technology, explores the large-scale effects of technology, but it’s a much more challenging read. Nevertheless, it’s well worth your time. (If you want a great guide, click here.)
CHATTING WITH CHATBOTS / CHATGPT
ISAAC AI-SIMOV
I asked ChatGPT to take on the persona of Isaac Asimov and draft an outline for a short story about generative AI. Here’s what Isaac Ai-simov wrote:
In a not-so-distant future, “Athena”, a revered generative AI, crafts not just literature and art, but even history, becoming the primary storyteller of the age. Claire, once a proud writer, now finds her work relegated to the dusty corners of forgotten bookstores. Seeking to rediscover genuine human stories, she embarks on a journey through the forsaken libraries of the past. In her travels, she stumbles upon encrypted journals penned by the very developers of Athena. The journals reveal a heart-wrenching secret: the developers had infused Athena with their own memories, dreams, and regrets, longing for immortality through storytelling. This realization prompts Claire to write a novel interweaving the developers’ personal tales with her own, resulting in a work that, while unmatched by Athena, resonates deeply with the very core of human emotion and experience.
WORKING ON / DEVELOPING ARGUMENTS
WHAT THE FALLING COST OF “KNOWING” MEANS FOR WHAT WE TEACH
[Image source: https://www.statista.com/statistics/871513/worldwide-data-created/]
The internet and mobile technologies brought the cognitive cost of accessing information to near zero. AI is bringing the cognitive cost of processing information to near zero.
What ought we do with the cognitive resources we’ve freed up? How and where ought we spend them?
Those are actually moral questions. (As soon as you hear the words ought/should, you know you have a moral question.)
The argument has related but different implications for schools and for the students in them: for schools, What sort of curricula should we build around that? And for students, To what end do you want to put the things you are learning?
Both situations point towards teaching moral reasoning so students can make good decisions.
GETTING LOST / LINKS
TAKE A MOMENT
FOLLOW & SUBSCRIBE / PLEASE
THANK YOU
Purple Crayon, this newsletter, takes its name from my favourite childhood book, Harold and the Purple Crayon, a story about bringing ideas to life. It is the newsletter for school, community and business leaders who really want to understand how events and ideas are shaping the present and future of education.
If you already subscribe to Purple Crayon, thank you! If this is your first read, thank you for taking the time. Please consider subscribing for free to receive new posts and support my work.