What Could Possibly Be Wrong With AI Summaries?

~1,500 words; about an 8-minute read.

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

– Dr. Malcolm, Jurassic Park

Drowning in Data

AI summaries? They seem so harmless, right? So useful. Surely, Doug, if this is a tool that can be used for good or evil, this is one of the good things? (Of course, if you know the tool trope, you know.)

Most distributed offices are using Zoom, Slack, and other remote collaboration software. They’re the way we stay connected at work these days. And under Big Tech’s hypnotic spell, these companies are adding AI summaries to everything.

You weren’t paying attention or taking notes in that meeting? Here’s the AI summary. Missed that Slack thread? Here’s the summary. All that chatty customer feedback? It’s so much more efficient to just read the summary.

That podcast? That’s right, AI can make a summary for your subscribers to read as show notes. We don’t have time to summarize it ourselves. Better than nothing, right?

That text thread from your spouse? Yeah, summarize it—what could go wrong?

And we are busy. We do have too much input. We can’t possibly keep up with everything that demands our attention. We’re drowning in data, and we need help!

But are AI summaries really the rescue we need?

What Do We Believe About AI Summaries?

Because AI summaries are pushed on the world from on high, few of us ask the “should we?” question. Or stop to question the tacit beliefs we hold.

But what must we believe about AI summaries to use them? To trust them?

  • We must believe they are accurate. But what would accuracy mean for a machine-generated summary? It’s close enough? Even with GenAI’s continually high “hallucination” rates? When chatbots don’t know the meaning of a single word?
  • We must believe that AI prioritizes the same thoughts we would. The chatbot picks what is really most important. But what would lead us to that level of trust? How could an algorithm without understanding know what we should focus on?
  • We must believe that the most actionable information was chosen by the chatbot. If we read an AI summary and act on it, we will have trusted the chatbot to influence our behavior. But if we don’t act on it, why have the summary at all?

The common thread is trust. We must trust the chatbot to choose the most important thing for us to think about. We confer upon AI the sacred task of choosing what we attend to.

But that trust is misplaced, because while AI is sold as something that knows, that understands, that reasons, and that is intelligent, it is none of those things.


We have no good reason to believe that an AI summary gives us accurate, prioritized, and actionable data. We’re expecting meaning from a machine that cannot understand meaning. It’s like asking a colorblind man to pick matching clothes.

Stifling, Cancelling, and Erasing the Human Voice

By creating AI summaries, what are we saying to the people who must read them? If, as Marshall McLuhan said, the medium is the message, what is the message of the AI summary medium?

The message is clear: human words aren’t as important as machine words.

Or the other way around: machine words are more important than human words.

Whatever that customer, coworker, or product reviewer actually said isn’t really worth your time. It’s what the chatbot picks that matters.

AI summaries stifle, cancel, and erase the human voice.

What kind of AI game does a customer or employee need to play to be heard in a world of AI summaries? Their real words will never be heard, but maybe if they hack enough, something will pierce through the veil of algorithmic significance.

The medium of the AI summary shouts this message: the human’s voice isn’t worth hearing. And if a summary is really needed (e.g., for a podcast), the reader isn’t worth the cost of a high-quality, human-generated summary. The machine between us devalues sender and receiver: the sender is never heard, and the receiver is deceived into thinking they heard something important.

And we are shaped by the chatbot into prioritizing efficiency over care for our fellow humans. We are discipled by the AI, changed into people who care more about quick and easy than human and real.


Why Is Big Tech Pushing AI Summaries Everywhere?

We are flooded with information, and have been for a long time. We can’t possibly keep up with everything. And Big Tech’s “cloud” continually adds to the data deluge.

Into that melee come AI summaries to save us.

If we’re drowning in data, AI summaries are like trying to rescue a drowning man by throwing him a laminated poster of boat maintenance tasks. Sure, boat maintenance is arguably relevant to a person in the water, but the drowning man still dies.

So Big Tech swamps us in data, creates a real problem, and now they “fix” it with AI. Just like they paved the way for the #1 use-case of ChatGPT to be therapy and companionship by creating the crises of mental health and loneliness with social media.

Here’s why they push AI summaries everywhere: summaries are low-hanging fruit. Bait, actually. The quick and easy way to lure us into trusting them. They lead to us forming a dependency—the “engagement” they monetize. In fact, AI summaries may be the first thing many of us do with GenAI.

AI summaries are a gateway drug. The thing that leads us to trust AI with more of our lives, until we ask it dozens of questions a day, or to critique our work, or to offer us advice, or to brainstorm, or help us write the email to fire our employees. (Yes, all of those things are happening.)

It is essential for Big Tech’s survival that we all use GenAI because they’ve bet everything on it. It’s not essential for us.

Their worldwide power is sustained by our ongoing trust and compliance. Our belief that whatever they create is “progress.” Our acceptance of their “solutions” to the problems they’ve created.

The Intentional Human Difference

The truth is we are busy. We can’t possibly keep up with everything that demands our attention, and we need help. Cal Newport has been trying to help us navigate information overload in a better way: by taking our lives back from Big Tech. Books like Deep Work, Digital Minimalism, and A World Without Email call us to ways of working that lead to world-class creativity and performance because of the tech we leave behind, not by embracing the latest fad.

That’s what I try to do in [Un]Intentional too: to help us become more intentional with our lives than Big Tech would have us be.

What should we do instead of using AI summaries?

  • Prioritize true human connection. Invest our precious time and attention in the relationships and communication that matter.
  • If you’re in a role where you need to keep up with conversations and meetings that you can’t attend, hire people you can trust to attend and summarize for you. (Good experience for them, and trustworthy, actionable data for you.)
  • If you’re in meetings or discussions that aren’t worth your time, then excuse yourself. Prioritizing human connection means you must choose some humans over others.
  • Don’t use a chatbot summary to give you the illusion that you’re staying up-to-date with what is being summarized. You’re not.

Wherever you can, turn off those AI summaries, and refuse to send them to anyone else. You’ll stand out in a world of fake content, with a human voice that’s all your own.
