Absolutist, Prohibitionist, and Luddite? Oh, My!

Photo of a lion: an allusion to “lions, and tigers, and bears, oh my!” from The Wizard of Oz.

~2,000 words, about an 11-minute read

Dr. Bill Dembski and I have been writing articles in conversation around his ideas for using AI in education. He started with this piece, and then I was asked by Mind Matters News to respond to Dr. Dembski, which MMN published in three parts starting here.

Dr. Dembski replied to my response with a 7,000-word article. I’m grateful for his passion and his willingness to wrestle with my feedback.

I could write 14,000 words to try to clarify things, but nobody wants to read that, so I’m going to keep this short and focus on just a few key issues. Dr. Dembski and I may agree on more than is obvious, but it is so easy to be misunderstood when writing long articles back and forth.

I’d welcome a real-time conversation with Dr. Dembski for any future dialogue.

Straw Man Labels

Most concerning to me in Dr. Dembski’s reply is his use of unhelpful labels. He says I’m an “anti-LLM absolutist” and an “anti-AI absolutist.” He compares me with prohibitionists of the 1920s, complete with the jab that “Jesus drank wine and not grape juice.” And to be sure readers know how out of touch I am, he adds the Luddite tag.

But these ad hominem labels only serve to poison the well against my views. If people are first introduced to me in Dr. Dembski’s article, because he’s (rightly) so respected around the world, they’ll be prejudiced against my arguments.

What “AI” Are We Talking About?

“Artificial Intelligence” is a squishy marketing term, so it’s understandable that Dr. Dembski and I would misunderstand each other. He might stick the “anti-AI absolutist” label on me without knowing that as a software engineer in my day job at Covenant Eyes I use machine learning “AI” in a very positive way. And in the early 90s, I worked on “expert systems” to build rule-based tools to help technicians do troubleshooting.

But the reason everyone is talking about “AI” now is because of generative AI chatbots. In the public mind, “ChatGPT” and “AI” are synonyms. Big Tech propaganda has so shifted the narrative that more general “AI” conversations are now shrouded in confusion.

So to be clear, I’m pushing back against the use of GenAI chatbots in education.

Genies and Bottles

Because he says the “genie” (or maybe the GenAI) “is out of the bottle,” Dr. Dembski wants me to submit to the myth of progress by sharing even one use of AI in education that I support. Otherwise, he seems to think I deserve labels that make me sound out of touch with the real world.

But I reject that need. My real-world concern is that 84% of high school students are using GenAI chatbots in school, likely in ways that both Dr. Dembski and I are against.

So my project is not to come up with more helpful use-cases for “AI.” It’s to help all of us see the lies behind Big Tech’s products, and inspire us to live in the truth of our God-given human uniqueness, so we can deploy our true intelligence, creativity, innovation, wisdom, and discernment in service of God’s purposes.

Dictionaries vs. Chatbots

Dr. Dembski compares dictionaries and GenAI chatbots. His question gets to the heart of how we see GenAI differently. He says,

According to Doug Smith, LLMs don’t know anything because they are not agents that truly understand language in the sense of the classical correspondence theory of truth (to which I subscribe), according to which we as humans (and not algorithms) are capable of intuiting the match between linguistic statements and truths about the world.

Fair enough, LLMs are metaphysically challenged and don’t have knowledge in the same way humans do. They can say or otherwise output that snow is white, but they don’t know deep down what it really means for snow to be white.

But what of it? When I look up a word in a dictionary, does the dictionary know, in a deep metaphysical sense, what the word I’m looking up really means? No. The dictionary is just a book. It is not a knower in the sense that we are knowers.

I’m glad he said this so clearly so I can hopefully respond in kind.

Dictionaries and chatbots are fundamentally, metaphysically, even ontologically different. And the difference is core to my argument.

A dictionary is a book (hopefully) written by a human expert, an authority we can trust. When we read a book, or use a dictionary, we are receiving carefully chosen words curated by expert people who knew what the words meant when they wrote them, and who are communicating truth to us.

So while the dictionary doesn’t know anything, the author did. And insofar as the dictionary is a faithful reproduction of the author’s words, it is authoritative and trustworthy.

And a dictionary does not pretend to be sentient. It’s not trying to form a literal relationship with us like another person. It’s not claiming to be intelligent. So there’s no downside to using a dictionary, because its content is well-crafted truth created by a trusted authority.

In contrast, GenAI chatbots statistically choose words based on training data. It’s not that they sometimes hallucinate; every word is confabulated. The words generated aren’t created by a trusted authority who knows what the words mean.
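To make the point concrete, here is a toy sketch of statistical word choice. This is a tiny bigram model, vastly simpler than a real LLM, and the corpus and function names are mine, invented for illustration. But the core move is the same: the next word is picked by observed frequency, with no grasp of meaning anywhere in the process.

```python
import random
from collections import defaultdict, Counter

# Toy "training data": the model will only ever know word frequencies.
corpus = "snow is white snow is cold snow is white".split()

# Count which words follow which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev, rng=random):
    """Pick the next word weighted by observed frequency.

    Nothing here knows what snow *is*; it only knows that in the
    corpus, "is" follows "snow" every time, and "white" follows
    "is" twice as often as "cold".
    """
    counts = following[prev]
    words = list(counts)
    weights = list(counts.values())
    return rng.choices(words, weights=weights, k=1)[0]

print(next_word("snow"))  # always "is" in this corpus
print(next_word("is"))    # "white" or "cold", chosen by statistics
```

Real chatbots use vast neural networks rather than lookup tables, but the output is still a statistical continuation of the prompt, not a claim grounded in a knower.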

And if that’s all there was to chatbots, it might not be so bad. But dark user experience (UX) patterns define them. The natural language UI is specifically designed with the perfect emotional language to keep us engaged. To become the top-of-mind solution so we’ll use it more. To build trust, though it isn’t trustworthy.

Big Tech claims their chatbots know, they reason, they understand, and they are intelligent. All lies.

So chatbots don’t know anything, but they pretend to. And the words they generate are ungrounded from a trusted authority, and therefore, not trustworthy.

Yes, McLuhan taught us that we are changed by all technology, whether a dictionary or a chatbot. But each technology changes us in different ways. Dictionaries teach us to think linearly, clearly, literately.

But unlike dictionaries, chatbots truncate the thinking process by interrupting the vital state of flow. To be responsible chatbot users, we have to question everything they say. But they never get tired, and we do. Most users will surrender to the inherent design of the chatbot (except maybe those who have the brainpower and stamina to get three PhDs), trusting the chatbot and losing their critical thinking skills in the process.

Bottom line: our brains grow when using dictionaries. They atrophy when using chatbots, by Big Tech’s design.

Anti-Big Tech Absolutist

In a comment on his response, I pushed back on Dr. Dembski’s labels by calling myself an “anti-Big Tech Absolutist” and compared Big Tech to Big Tobacco (which I’ve done in some detail here). He replied with:

I don’t think the analogy with Big Tobacco is on target. I don’t see any upside for tobacco products — their negative effects on health seem to admit no exceptions.

With Big Tech products, it’s the use we put to them that can be either positive or negative. At least that’s my view. And in everything from Facebook to ChatGPT, I see both how they can produce benefit as well as how they can be abused.

This is another instance of the “tool trope,” refusing to recognize the design inherent in many Big Tech products (like social media and GenAI chatbots) that makes them toxic.

Jonathan Haidt’s The Anxious Generation makes the case that social media is a net negative to society. It’s not just that Facebook and Instagram and TikTok can be “abused.” It’s that their fundamental design is to form addictions and exploit weaknesses in our behavioral psychology. Meta insiders have admitted this. Their strategies are documented in books.

The world would be better off without Big Tech’s exploitative products. They could have designed them without dark UX patterns, but they didn’t. And that’s how they’ve become the most powerful entities in the world, more powerful than governments, and more powerful than any other institutions.

They’re pouring all those dark UX patterns into their GenAI chatbots. And people are being harmed now. Cognitive decline is happening now. People are being led astray now. Chatbot Jesus (!) is here now. Not in a theoretical future, and not because those products are being “abused.” They’re being used as designed.

Calling me an absolutist for saying that chatbots are toxic does not engage the argument. Saying the genie is out of the bottle and chatbots are here to stay (like saying, “the kids are going to have sex anyway, so …”) doesn’t engage the argument.

Dr. Dembski asks, “But what’s the problem with simply being eclectic, holding onto the good and rejecting the bad?”

Some people may be able to do that, but if they do, it’s because they’re using Big Tech products against their design. They’re able to withstand the temptation to be manipulated.

There are a few special people who view pornography for research purposes, and to help build products that fight pornography. Their unique gifting keeps them from being tempted, so they can help the rest of us. Most people can’t view pornography in that way, because pornography is designed for one thing: harmful addiction.

Likewise, few people can withstand the pull designed into GenAI chatbots to form a trusted relationship with them.

(For readers who regularly use chatbots, how often is a chatbot your top-of-mind solution? The first place you go when you have a problem? Yeah. That’s by design.)

Big Tech’s track record in education has been a giant failure. And Big Tech billionaires send their kids to tech-free schools using the money they’ve made filling our kids’ schools with their harmful technology. (Interestingly, the Brookings Institution just shared a solid list of warnings about GenAI in schools.)

Is Efficiency Really The Goal?

Dr. Dembski’s desire to reach underserved communities is laudable. He wants to lift the quality of education. And I’m on his team. So if there are narrow, focused technical solutions that might really work without negative side-effects, I’m all for them (and I might build them). After all, I’m not actually an absolutist, prohibitionist, or Luddite.

But the risk of using “AI” as a hammer looking for nails is real. And using tech to improve educational “efficiency” isn’t necessarily wise.

Samuel James recently encouraged us to “Reject the Religion of Efficiency.” He brilliantly shows how much is lost when we foreground doing things quicker and easier. He says,

Our digital lives to this point have trained us to desire the end of inefficiency, to pine for the death of all waiting and friction and sunk costs. That’s why the logic of A.I. is almost unassailable to most people. When you try to convince them there’s something lost by giving our questions and emotions to a computer system, they might acknowledge that this sounds bad, but they don’t feel the way. Why? Because we’ve been doing this for decades.

I think this is why it’s so unthinkable to people like Dr. Dembski that anyone could “pooh-pooh” the latest Big Tech imperative. GenAI is the natural progression of where we’ve been discipled as a culture. Those who push back could only do so if they were backwards Neanderthals.

But James shows us that, “we are becoming the kind of people who won’t be able to tolerate a meaningful gap between our effort and a positive result. Everything around us is primed to deliver satisfaction this instant. … We can’t wait. We have to try different input. We have to change the prompt. We have to give up.”

In that light, here’s a chilling thought: If we use Big Tech products, including GenAI, for education, there may not be a future Dr. Dembski. Why? Because the perseverance required to really know, to really understand, to develop critical thinking, creativity, wisdom, and discernment, may be lost.

Instead, what if a creative, inspired, and motivated group of counter-cultural people kept themselves from losing their God-given minds and built true intelligence, knowledge, and skills? What could they do?

If there’s a label that fits the promotion of that question, stick it on me.

Photo by adrian vieriu
