Should We Use Generative AI for Education?

Note: This was first published on Mind Matters News in three parts. Re-posting here to keep my writing on this subject together.

~3,000 words, about a 15-minute read.

Dr. Bill Dembski recently wrote an article encouraging the use of generative AI chatbots like ChatGPT to improve educational outcomes. I’ve been graciously invited to respond.

First, I recognize that Dr. Dembski is one of the few whose likeness belongs on a Mt. Rushmore of the I.D. movement. I am a super-fan of him and his work, and I love how he has invested his brilliant giftedness into carefully teaching the world how the design inference works. As a long-time fan and supporter of Discovery Institute, I could write 5,000 more words of praise, but that’s not what I’ve been asked to do.

Dr. Dembski wants us to use technology with wisdom. Not to follow transhumanist visions of enhancement, but to edify. I share his broader goal, but not his willingness to leverage today’s generative AI chatbots to get there.

Purpose of Education

For quick context: the founders of the USA valued education from the beginning. In 1647, the Massachusetts Bay Colony enacted the “Old Deluder Act.” Since it was “one chief project of that old deluder, Satan, to keep men from the knowledge of the Scriptures,” they saw literacy as vital to keep students from being deceived by false teaching. They were right. And that is still true today.

Around the same time, John Milton said the purpose of education is to “repair the ruins of our first parents by regaining to know God aright, and out of that knowledge to love him, to imitate him, to be like him … by possessing our souls of true virtue.”

And today, we’re arguably failing miserably. Students are entering college without the ability to read proficiently, some without ever having read a single book straight through.

We are not preparing our next generation to avoid deception by being able to read and think for themselves. And we’re light-years away from Milton’s virtue formation. So improving education is a laudable goal. Dr. Dembski and I are on the same team there.

But are GenAI chatbots the bridge over the gulf of our education crisis?

Blinded By the Math?

Dr. Dembski’s article is filled with the assumption that AI chatbots like ChatGPT are trustworthy sources of information that students can depend on for learning.

I’ve had conversations with several brilliant math PhDs about AI chatbots. All of them have a hard time seeing the dangers of large language models (LLMs). It’s almost unthinkable to them that the #1 use-case of ChatGPT is therapy and companionship. To math geniuses, it seems like only a tiny minority of foolish people should be susceptible to being drawn in.

I see chatbots through a perspective formed by years of research into Big Tech’s exploitation of our behavioral psychology that I published in my book, [Un]Intentional. Because of that lens, I expected people would form relationships with ChatGPT before the research started proving it. It seemed obvious that it was designed for that one purpose. Just like social media is designed for “engagement” (leading to today’s mental health and loneliness crises), chatbots are designed to foster what the Center for Humane Technology called “the race to intimacy.”

I heard a math professor on a podcast recently talk about a student who was learning how LLMs work. The student’s epiphany was, “it’s just math!” The implication? “Math” can’t be harmful — it is innately useful, so since LLMs are “just math,” they’re useful too.

My study of Marshall McLuhan, Neil Postman, Jacques Ellul, Cal Newport, and many others has given me a different lens that I hope will help scholars like Dr. Dembski see what their great learning might have obscured from their view.

AI Chatbots Are Fundamentally Untrustworthy

Dr. Dembski frames his positive vision for using AI chatbots in education like this:

It’s a false dilemma to think that students will either cheat using AI or must be prevented from using it to learn successfully. The third option is to use AI as a way of honing students’ skills and knowledge, helping them learn more effectively than before.

But the assumption that AI can “hone students’ skills and knowledge” ignores fundamental aspects of what AI chatbots are designed to do, and how we are shaped, changed, formed, and ultimately harmed by using them.

First, I must quote fellow Mind Matters News author Professor Gary Smith, who says,

The inescapable dilemma is that if you know the answer, you don’t need to ask an LLM and, if you don’t know the answer, you can’t trust an LLM.

Just as they can’t imagine forming an emotional bond with a chatbot, people with the kind of mind that pursues a PhD in math have a hard time believing that others can’t tell the difference between true and false LLM output.

But students are being led astray. Adults are being led astray.

That’s because while LLMs may be “just math” and statistically choose the next word as “autocomplete on steroids,” they aren’t reliably connected to reality. They don’t know the meaning of anything. In fact, they know precisely nothing (Big Tech propaganda notwithstanding). So the words they generate are not grounded in reality. It’s not just that they hallucinate: it’s that they confabulate, or “BS.”
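
To make that concrete, here is a toy sketch, in Python, of what “statistically choose the next word” means. This is my own illustration, not anything from Dr. Dembski’s article: it uses a simple bigram word count where real LLMs use enormous neural networks over tokens. But the principle is the same. The next word is chosen by frequency in the training data, with no model of truth anywhere in the process.

```python
# A toy "autocomplete on steroids": pick the next word purely by how often
# it followed the previous word in the training text. Illustrative only --
# real LLMs use neural networks, but the next word is still a statistical
# guess, with no check against reality anywhere in the pipeline.
import random
from collections import defaultdict, Counter

corpus = "the moon is made of rock . the moon is made of cheese .".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*following[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate fluent-looking text. "rock" and "cheese" are equally likely,
# because frequency in the data -- not truth -- drives every choice.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Run it a few times and you’ll get “the moon is made of cheese” about as often as “the moon is made of rock.” Scale that up to billions of parameters and you have today’s chatbots: far more fluent, but no more grounded.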

And that’s before we consider the emotionally manipulative user interface (UI). The chatbot UI is deceptively created to feel helpful, supportive, encouraging, and intelligent, leading us into forming a relationship and building trust. Big Tech’s signature exploitative strategies optimize for engagement, not for truth. By manipulating our behavioral psychology in this way, chatbots become top of mind when we have a question about anything, or just need a friend.

ChatGPT is the archetype of today’s AI chatbots. Its leader, Sam Altman, is widely known for his willingness to say anything people want to hear. And he’s made ChatGPT in his image: it says anything to get us to become more dependent upon it, to build a trust relationship with it.

And students are particularly vulnerable to chatbot manipulation.

Are AI Chatbots Like Chess Games?

Dr. Dembski asks us to compare chatbots to computer chess games. He says that because humans have become better chess players since the advent of machines like IBM’s Deep Blue, AI chatbots can be similarly helpful.

But this is a non sequitur. Chess programs are designed for one thing: to play chess. Like using machines to build our muscles at the gym, a chess-playing machine exercises those brain “muscles” and makes its users better chess players.

In contrast, GenAI chatbots are designed for one thing: to build a trust relationship. Users aren’t going to become better thinkers, or grow more wise and discerning. They’re going to get better at having relationships with chatbots. And in the process, as Marshall McLuhan teaches us, the mental abilities they’ve extended through their use of the chatbot will be amputated over time, while they are numbed to the process.

Are AI Chatbots Like Books?

Dr. Dembski depends heavily on the idea that if students are monitored properly, they can use AI chatbots for good and not fall into the traps of cheating and other negative effects. As part of the justification, he tells this story:

Ben Carson, the renowned pediatric neurosurgeon for many years at Johns Hopkins, describes how his mother got him to read two books a week when he was young. […] She herself had only a third grade education, and so was limited in what she could teach him. But she could ensure that her son spent time reading the books and then quiz him on their content, getting him to summarize and answer questions about it. Carson’s mother here acted as a monitor, not as a teacher.

So Dr. Carson’s mom inspired him to read, even though she couldn’t read well herself. Likewise, the argument goes, adults who “monitor” students using AI chatbots can inspire them to use the chatbots wisely.

But this is another non sequitur. Books and chatbots are ontologically different. With a book, you have a focused argument from a (hopefully) trustworthy human author. With a chatbot, you have a fundamentally untrustworthy word generator ungrounded from its human sources, wrapped in a UI designed for relationship formation.

Plus, the user experience of interacting with the AI is completely different from reading a book. A book reader can go into a state of focused attention we call “flow,” and that’s where the powerful learning happens. If a reader gets stuck, they can re-read, rethink, wrestle, and fight to understand what the author is saying.

In a “conversation” with a chatbot, the “flow” state never happens. The user interface encourages quick dopamine-spiking interactions, incantation-response, incantation-response. A few students might try to figure out whether the output is right, but they’ll get tired before the AI does, and will eventually just keep asking the AI questions and accept the answers.

The recent MIT study shows how brain connectivity diminishes when we use AI chatbots for writing. I expect the same result when scientists compare the brains of students who learn from books to those who try to “learn” from chatbots.

On Personalized Learning

Our four daughters were homeschooled for most of their education. I’m a huge fan of homeschooling, for many of the reasons Dr. Dembski also shares. The ability to focus on a student’s individual strengths and needs beats what many of today’s public schools offer, and the outstanding outcomes of homeschooled students speak for themselves.

And the Studia Nova model Dr. Dembski describes sounds largely positive, if they can manage to avoid using AI chatbots.

Dr. Dembski’s List of Possible “AI” Applications for Education

Dr. Dembski lists 10 possible applications of “AI” in education, and I want to briefly comment on each.

But first, the problem with the “AI” label is that it is simultaneously a hype-filled marketing term, a deception (they’re neither artificial nor intelligent), and a label for a broad scope of technologies we’ve explored since the 1950s that go far beyond AI chatbots. But since today’s AI conversation is largely about chatbots, they’ve drained the innovative energy from other, more narrowly focused uses of the whole class of “machine learning” techniques we might consider.

With that, let’s quickly walk through Dr. Dembski’s list:

  1. Accent and Pronunciation Refinement in Language Learning
    A focused speech-recognition “AI” (not a chatbot) could help students learn languages more efficiently. That tech has been around for a while. A narrowly scoped, ethically trained product could be useful here, but it’s no replacement for human conversation.
  2. Creative Writing with Rhetorical Precision
    This is a dangerous proposal. Writing is thinking. AI can’t be trusted to generate coherent content, nor to offer a voice students should emulate. Do we really want everyone homogenized into writing like a chatbot? No, the sacred work of choosing words must have authoritative human guides. Improve the curriculum, provide better books, and invite humans to teach via video. But don’t use chatbots for creative writing.
  3. Polyphonic Music Composition and Performance
    I see this as risky. Again, if the AI is not a general-purpose chatbot, and is perhaps specifically trained on good music, it could potentially be helpful. But today’s AI chatbot-driven song generators are not creating well-trained musicians. They’re creating more passive consumers — by design. Chatbot users aren’t interested in learning to “think contrapuntally.” They want quick and easy dopamine hits, which is what chatbots are designed to provide.
  4. Advanced Sight-Reading and Aural Skills for Musicians
    I give this a maybe. Again, if it’s a very narrowly trained product, not designed to form trusted relationships, and designed to be very accurate, it could be helpful. But as a musician myself, I know that nothing replaces the practice and tutoring guided by a trusted human mentor.
  5. Scientific Experiment Simulation and Inquiry-Based Discovery
    AI-driven virtual reality environments for simulated experiments? Risky. I’m reminded of Spock being trained this way in the recent remake of Star Trek. But today’s virtual reality environments are almost all tainted by Big Tech’s perverse incentives. Our use of VR changes us into different kinds of people. Neil Postman would warn us about Amusing Ourselves to Death.
  6. Mastery of Mathematical Intuition and Visualization
    With the use of VR, this is so similar to #5 that it shares the same risks.
  7. Fine Motor and Artistic Skill Development via Gesture Feedback
    Fine motor/artistic skill development? Risky. Yes, some technical art skills might be improved. But artistry is a distinctly human gift. We’re not machines, so our artistry shouldn’t be constrained or discipled by machines. Sure, drills in technical skills might help some artists. But we risk stifling creativity and encouraging robot-like behavior. I think Dr. Esther Meek’s Doorway to Artistry would push back strongly against this.
  8. Debate and Argumentation Coaching
    This is also super risky. Dr. Dembski believes that a chatbot’s suggestions for stronger evidence and for spotting fallacies could be trustworthy. But debaters can’t trust the chatbot’s answers, so they can be led astray. I recommend against this one.
  9. Emotional Intelligence and Empathy Training through Simulated Dialogue
    Extremely risky. I’m not outsourcing emotional intelligence to a chatbot. Students have arguably lost social skills because of their immersion in technology already. Chatbots aren’t going to help there. I think we should run quickly away from this idea.
  10. Advanced Memory and Visualization Techniques
    Maybe, but again, Spock’s VR-based school worked for an emotionless Vulcan in science fiction. We learn differently. Entertaining “AI tutors” would change us into people who depend on those tutors, and who become like them (Psalm 115:8). And the MIT study cited above already pushes back against the idea that memory could be improved by the use of chatbots. The opposite seems to be the case.

A Scary Surveillance Idea

Dr. Dembski’s last idea may be the most terrifying:

The Oura Ring tracks sleep, activity, heart rate, temperature, etc. through advanced biosensors, offering detailed insights into recovery and overall well-being. … A suitable low-cost unobtrusive device, however, could monitor brain states favorable and unfavorable to learning. Such a device could enable real-time adaptation of instruction, such as adjusting pacing or content to improve focus and retention.

Have we become so accustomed to surveillance capitalism that we are completely blinded to the implications? Or has Big Tech become so trustworthy that we can count on them to handle data about students’ brain states with benevolent intent?

This suggestion mirrors the ultra-authoritarian Chinese state’s use of AI monitoring for students. In China, students are already wearing AI-powered headbands that constantly monitor their brain activity. That data will almost certainly feed their “social credit score,” among other nefarious things. The video in that Wall Street Journal report is not a dystopian sci-fi story; it’s today’s reality. Those are real kids wearing those devices and being shaped, controlled, and dehumanized.

This is not the educational answer we’re looking for. Let’s not turn our kids’ brainwaves and futures over to the most powerful corporations in the world.

Conclusion

I’ve been a software engineer for over 30 years. I love finding beneficial uses of technology. But I often see people using tech like a hammer looking for nails: everything must have a technical answer. Ellul wasn’t a fan of that, nor am I. Technology isn’t always the answer, and it often adds many harmful, unintended consequences.

Like Dr. Dembski, I want us to push against Sam Altman’s desire for us to merge with AI. But Dr. Dembski seems to trust Altman’s chatbot more than it deserves, and is open to more merging than I am. Dr. Dembski says his view is “humanistic” as opposed to “transhumanistic”:

The humanistic vision is natural, like promoting health through good diet, exercise, and proper rest. The other is artificial, like relying on pharmaceuticals to achieve wellness.

But to me, Dr. Dembski’s “humanistic vision” seems biased towards the unnatural, especially in its embrace of today’s AI chatbots. The most natural way we learn is by human mentoring. Every mediating technology (“the medium is the message,” as McLuhan taught us) changes us in ways we cannot see now.

AI chatbots are far too new, too hyped, and already too exploitative in their design and deceptive in their claims to trust the next generation to them.

Since it is still the “one chief project of that old deluder, Satan, to keep men from the knowledge of the Scriptures,” let’s not expose our students to an intentionally deceptive technology. Let’s take a page from our 17th-century forebears and encourage truth-seeking from reliable, truthful, human sources of knowledge and wisdom instead.

Gratitude

It was incredibly gracious of Dr. Dembski to invite me to share this critique. He practices what he preaches, and wants to follow the truth wherever it leads, so he invites push-back. May I do the same. I look forward to our ongoing conversation.
