Should We Use Generative AI To Improve Creativity?

Photo of a gold rush ghost town.

~3,000 words, approx 15 minute read

The CEO of a medium-sized tech company asked for my thoughts on Jeremy Utley’s YouTube video: “How Stanford Teaches AI-Powered Creativity in Just 13 Minutes.”

I’m grateful for the chance to interact with Utley’s presentation and work. It deserves a careful response, and this is mine.

Context: How Humans Know What We Know

Before I dig into GenAI, I want to establish the importance of how we as humans come to know what we know.

Epistemology, the study of how we know what we know, has long been a fascination of mine. Here’s what I’ve learned after ten years of study: True knowing comes through a process of discipleship. And my definition of discipleship is broader than religion: a disciple performs embodied practices in pursuit of a trusted authority.

True knowing comes through a process of discipleship.

When humans write, they do their best to communicate what they know so that readers, in a trusting relationship with the author, are changed as they begin to see the world the way the author does. Readers must trust the author, as student to teacher, in order to come to know what the author knows.

Dr. Esther Meek, philosopher of epistemology, has written several important books that help us understand how we come to know what we know. In her 2011 book Loving to Know, she shares her distinctive model: “covenantal epistemology.” She shows that we come to know through relationship with trusted, authoritative guides.

Book cover: "Loving to Know" by Dr. Esther Lightcap Meek

Meek shows how knowing is a messy process, not a formula. It depends upon trusting an authoritative guide until we experience “some kind of integrative and transformative shift.” Like learning to read or ride a bike, when we trust our authoritative guide we eventually reach a point where our actions embody what we have come to know.

According to Meek, Western civilization has adopted a false epistemology, claiming that knowledge is nothing more than mental acceptance of objective, disembodied information, no active practices or trusted guides required. In other words, we’re merely meat machines as shown in movies like The Matrix. Plug in the brain, load the program, and out comes Kung Fu.

GenAI Breaks Human Epistemology

GenAI pushes the false, disembodied epistemology to the ultimate end. It is — by definition — a disembodied information system. It scrambles human words into statistical relationships that are completely unplugged from reality, and especially decoupled from a trustworthy human guide. In response to incantation-like prompts, GenAI re-presents disembodied words from its training data as plausible sounding phrases which are often confidently wrong.

GenAI pushes the false, disembodied epistemology to the ultimate end. It is — by definition — a disembodied information system.

The chatbot user interface (UI) is made to feel person-like. Designers intentionally craft the UI to foster a trust relationship with unsuspecting human users. In subtle and overt ways, the chatbot UI carries the authority of a person, even a teacher. Users begin to trust that its words are authoritative to guide them.

Humans bring their queries to the chatbot, submitting in a posture of apprentice to master, of disciple to rabbi, and are subsequently dehumanized by a scrambled, statistical confabulation of words uncoupled from any human relationship.

There’s at least a twofold risk here: Not only is GenAI’s content often confidently wrong, but the UI is deceptive as well. It’s attempting to convince us a relationship exists. That it is a trusted guide when it is not.

Big Tech’s endgame is for their chatbot to be the most trustworthy guide in our lives. They are like the makers of silver and gold idols in Psalm 115 who are warned, “Those who make [idols] will become like them” and ominously for the rest of us the verse ends with: “so will all who trust in [idols]” (Psalm 115:8). The idol of our age is GenAI, and we’re already trusting it on a worldwide scale.

And by placing our trust in the machine, we become machine-like ourselves.

With that context, let’s see where Jeremy Utley wants to guide us.

Who is Jeremy Utley?

Utley is a Silicon Valley insider. He wears many hats: Stanford professor, best-selling author, venture capitalist, and worldwide keynote speaker on the topic of AI. He’s one of the top people selling the utopian vision of AI that Big Tech wants us to embrace.

His about page title says: “Jeremy facilitates epiphanies.” He’s not wrong. Merriam-Webster’s second definition of epiphany is “an appearance or manifestation especially of a divine being.” Utley offers “ah-ha moments” of an almost spiritual dimension.

And creating such epiphanies aligns with Big Tech execs like Sam Altman who believe they are “creating God.” I’ll have more to say about the spiritual dimension below.

Utley’s three-minute speaker reel is filled with super-slick, emotionally charged language and visuals of his family, speaking events, and happy clients. It’s über-relatability in service of one message: trust me. And trust me when I say to trust AI.

Creating epiphanies aligns with Big Tech execs who believe they are “creating God.”

Consider some of the quotes from his speaker reel:

Parenting Advice?

I’m not embarrassed to say that GenAI helped me be a better parent.


That first time GenAI blew my mind and made me a better parent, in one of those ‘how’s daddy going to respond’ crisis moments we all face as parents, GenAI made me realize I could bring my kids onto my own team by reinforcing family values — something I’d never imagined until that moment.

GenAI will never have kids, nor will it ever know what it is to be a parent. GenAI won’t ever even know what a child is, because it knows nothing at all. But by scrambling words statistically connected to the topic of parenting from across the internet, Utley’s mind was blown when he learned that his kids should be on his team.

If thinking of his family as a team is mind-blowing, what would he think after reading a parenting book written by a good parent?

GenAI Superpowers

I’ve seen technology companies double or triple the revenues of their key product lines by infusing GenAI superpowers.

These are amazing promises, founded on the notion of “collaborating” with GenAI. If we invite GenAI in as an authoritative guide, we can have incredible results. How can an executive, manager, or rank-and-file employee not be caught up in these promises from someone so authoritative and winsome?

But Utley doesn’t consider the full cost. In exchange for GenAI superpowers, we lose the ability to think and reason for ourselves. We become so disconnected from human ways of knowing that real-life relationships, desires, and reality itself begin to shrivel away.

Spiritual Promises

You’ll undoubtedly have your own epiphanies too. I wonder what ideas you’ve never imagined are waiting on the other side of your own AI transformation.

Epiphanies. Transformation. This is unmistakably spiritual, almost salvation-like language. GenAI is a savior, and Utley is a prophet. If we become “transformed by the renewing of our minds” (Romans 12:2), maybe even “repent and be baptized” (Acts 2:38) to get to Utley’s “other side,” will we have a transformation beyond “what we could ask or imagine” (Ephesians 3:20)?

What exactly is this “AI transformation” Utley promises? Is it anything like the transformation described in the Rolling Stone article published May 4, 2025, titled “People are losing loved ones to AI-fueled spiritual fantasies”?

One woman’s “partner of seven years fell under the spell of ChatGPT in just four or five weeks.” She continues:

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,” she says.

Are these edge cases that only imbalanced people fall into? I could share many more stories, so I don’t think so. I think a better question is: Why are GenAI chatbots designed to foster this level of intimacy, or to enter the spiritual domain at all?

Big Tech’s Recent Big Admission

OpenAI had to roll back a recent update to GPT-4o in ChatGPT because customers were creeped out by its sycophancy. They went too far in their persuasive design. But it’s okay, they assured us. They rolled it back, and they are “revising how we collect and incorporate feedback to heavily weight long-term user satisfaction and we’re introducing more personalization features.”

They want us to trust them because they are committed to us being long-term users. Isn’t their transparency wonderful?

Utley’s obviously spiritual language, excitement, and enthusiasm as he promises “your own AI transformation” set off alarm bells that I think I’d hear whether or not I was a Christian. Why would we trust Big Tech products to shape us spiritually?

Bought and Paid For

Stanford is heavily endowed by Silicon Valley, with huge investments from Google, Marc Andreessen, and other AI visionaries. The university’s unique link with Big Tech makes it obvious why it would enthusiastically promote the unbridled embrace of GenAI.

And beyond Utley’s Stanford connections, with his venture capital firm, international speaking schedule, and large public platform, it seems that he’s well compensated in money and fame for his role as an evangelist for the industry.

Engaging with the Presentation

With that context, let’s consider Utley’s presentation about using AI to enhance our creativity. I’m going to interact with a few of his key quotes.

Winston Churchill

I’ve always been jealous of Winston Churchill.

Utley spends almost 10% of the 13-minute presentation in his intro, starting with some impressive impressions of Gary Oldman’s outstanding performance in the movie Darkest Hour. Utley’s story is compelling, and is designed to hook the viewer with a promise: You can be just as powerful as Churchill if you take Utley’s advice.

But this makes me wonder: Did Utley, like Churchill, dictate the script of this presentation from his bathtub? Or did he generate it by passing a few prompts to GenAI?

More importantly, what level of trust would we have to imbue in Big Tech to let them choose our words for us? Should GenAI be a trusted assistant with world-changing speechwriting powers?

Given that the first 10% of the presentation is an emotionally rich story that draws on images of a legendary leader, we know we’re not watching an objective, dispassionate training. This is straight-up advocacy. Even calling it propaganda isn’t too strong.

Canonical Book

To me, the fact that I wrote the canonical book on idea generation just prior to AI is like writing the best book about retail just before the internet.

The next segment, another 10% or so of the video, introduces Utley himself. He makes quite a claim: that his book is “the canonical book on idea generation.” No lack of self-esteem there. But then he parlays that into the claim that GenAI makes even his own important book obsolete.

Again, this is credibility-building, but through exaggerated, hype-filled marketing language rather than objective, reasoned argument. Red flags go up for me here.

Do Not Ask GenAI, Let It Ask You

[To the chatbot]: I want to ask how I should answer this question. What’s the best way of framing that question to an AI?

This seems to be the heart of Utley’s advice. And to me, it’s chilling in its implications.

He wants us to ask AI how to use AI: a meta-guide, a guide to being guided by a chatbot. Going beyond the already risky move of asking a question, to asking what questions we should ask, puts us in a position of submission to GenAI.

  • We are the student; GenAI is the teacher.
  • We are the padawan; GenAI is the Jedi.
  • We are the disciple; GenAI is the prophet.

Utley advocates that we race to intimacy with AI, just as Tristan Harris and Aza Raskin warned us about in their important “AI Dilemma” presentation a couple of years ago.

Because we only ask important questions of people we trust, and as trust grows, intimacy follows. When we need an answer, when we want to know, we confer authority on the person we trust enough to query. When we ask a chatbot, it feels personal to us, and intimacy blossoms. The more we trust the chatbot, the more the McLuhanian transformation takes place: our minds are extended into the chatbot, while our own mental capabilities are amputated, and we are numbed to the process.

Because we only ask important questions of people we trust, and as trust grows, intimacy follows.

But GenAI is not worthy of our trust, nor are the Big Tech behemoths who are pushing them everywhere.

Consider how GenAI is often confidently wrong (putting aside the fact that GenAI has no grounding to reality, so it has no idea whether anything it says is right).

We’re constantly told that GenAI is getting exponentially better, and that it’s just a matter of time before hallucinations are solved. But just this month (May 2025), Gary Marcus documented several egregious yet simple errors that show the need for constant vigilance as we use GenAI. And OpenAI’s own recent documents show that their latest models are hallucinating more than the previous ones, not less, with hallucination rates ranging from 16% to 79%. So the claim that scale would make everything better isn’t working out.

Why would Big Tech create a system that pretends to be a personal agent when it’s not, that pretends to care when it doesn’t, that is always confident even when wrong, but that quickly responds to everything and is available everywhere? Because they want us to build an intimate relationship that turns into full-on dependency. It’s for their good, not ours.

But why? Because of all the hype-filled gains we’re supposed to realize. Outsource our minds to Big Tech, and they’ll give us back all of this incredible efficiency.

Adam, the “Back-Country” National Park Ranger, and 20 Years to 45 Minutes

The National Park Service is estimating that the tool that Adam built in 45 minutes is going to save the service 7,000 days of human labor this year. That’s the kind of impact that normal professionals can have, even without any technical ability, if only they’re given very basic foundational training.

This is an incredible story. What manager can resist the promise of that kind of productivity gain? Even if he’s half wrong, or 10x wrong? Sign me up!

But there are so many problems with this. First, when Utley documented this story on his blog in “The Story of An Unlikely AI Hero,” he didn’t make the productivity claim under the authority of the National Park Service. He said,

I did a quick, real-time back-of-the-envelope calculation. “If this tool saves just one or two days per request across the parks in the system, that’s over 7,000 days of labor saved annually.”

Somewhere between telling the story on his blog and retelling the same story in the video, the “back-of-the-envelope” calculation became a US Federal Government endorsement. The claim is that they’re saving millions of dollars a year because one “back-country park ranger” spent 45 minutes with a chatbot.

When people wonder why I say “propaganda,” this is what I’m talking about.

The claim seems to be that across all the national parks, rangers have to make about 3,500 requests for materials, each requiring an average of two days to prepare the paperwork. So now that Adam “built” a “tool” with ChatGPT, those documents take literally no time to create? And uniformly across every request, at every park, for every ranger? That seems like a huge back-of-the-envelope exaggeration.
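The envelope math itself is easy to reproduce, and reproducing it shows how many assumptions it stacks. Here is a minimal sketch; the request count, the two-day savings, and the daily labor rate are all assumptions for illustration, not National Park Service figures:

```python
# Reproducing the "7,000 days saved" back-of-the-envelope claim.
# Every input below is an assumption inferred from Utley's quote, not NPS data.
requests_per_year = 3_500       # implied request volume across all parks (assumed)
days_saved_per_request = 2      # upper bound of "one or two days per request"
days_saved = requests_per_year * days_saved_per_request
print(days_saved)  # 7000, but only if savings are total and uniform for every request

# The "millions of dollars" framing needs yet another assumption: a daily labor rate.
assumed_daily_labor_cost = 400  # hypothetical figure, purely for illustration
print(days_saved * assumed_daily_labor_cost)  # 2800000, so "millions" rests on stacked guesses
```

Change any one input and the headline number swings wildly, which is the nature of envelope math.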

What is the value of these AI-generated documents if they can be so fully automated? Who is reading them? What are they reading? Are they okay with an average 33%+ error rate (from the OpenAI doc shared earlier)? Is it just that a document with certain words has to show up in some government inbox, then some other agency cuts checks? Why not spend another 45 minutes writing a tool for that agency to read the AI-generated documents? Then, why not cut out both tools and just let rangers order what they want from Amazon with an unlimited federal credit card?

To me, selling AI with such exaggerated claims is gold rush mentality. Stake your claim, get rich quick, and ignore the collateral damage. We like to think we’re different from people in the 19th century who risked life and limb and abused people and animals to cross the Yukon in search of gold. But gold rush messaging still tempts us.

Utley later claims that “AI makes people 25% faster [with] 40% better quality.” No sources are offered; no industries, no context or relevance. These numbers are more propaganda, like his back-of-the-envelope calculation turned federal endorsement. We’re simply asked to trust his authority as a Stanford professor.

To me, selling AI with such exaggerated claims is gold rush mentality.

These claims are a sandy foundation on which to build trust. I wouldn’t do what he wants me to do based on these shaky promises.

Utley’s Grand Finale

The only correct answer to the question “How do you use AI?” is: I don’t. I don’t use AI; I work with it. When you start working with AI, it’ll change everything.

This is the transformation he talked about in his speaker video. Change everything.

I don’t trust Big Tech to change everything into their image. They have too many perverse incentives to make this work for them, but not for us. They broke the world’s trust with social media. They’ve turned hundreds of millions of people into distracted, compulsive users of their products using deception and powerfully dehumanizing strategies. And they’re taking those same strategies to the next level in their race to world domination through GenAI.

Utley, as evangelist, uses the same hype as the industry, making huge claims of world-changing power if we only surrender ourselves to GenAI as our authoritative, trusted guide.

I don’t trust Big Tech to change everything into their image.

So what do I recommend instead? That’s out of this article’s scope. My quick encouragement: meditate on the epistemological considerations above, and avoid being captivated by Big Tech promises as you consider if/where GenAI fits into your workflow.

Photo by Drei Kubik; I’ll never use AI-generated photos.
