12 Reasons I Will Not Use GenAI For Software Development




I’ve been a software developer for over 30 years. I love the craft of well-written software. A quest for high quality, clean architecture, and business value has defined my career. Employers have consistently promoted me to their highest senior developer level.

Given the trend of using generative AI tools like ChatGPT, Gemini, Copilot, and Claude for software development, many companies have decided that developers must use GenAI to succeed. I strongly disagree. But my view is very unpopular.

So why would I hold such a conviction, even if it means losing a job that I loved?

This post is to share my rationale. Mostly, I wrote this to bring clarity to myself in the face of a difficult employment decision. But it’s also for anyone curious to see why someone might reasonably not jump on this particular bandwagon.

First, here are some false accusations people might make when they hear my position:

  • You’re out of touch with reality, out-of-date, past your prime. Nope. I may be older than some readers, but I’m still at the cutting edge of software development and continue to produce top-quality software.
  • You’re afraid of GenAI. My views are not based in fear. They are based in research, science, experience, and — I hope — ethical and moral clarity.
  • You’re anti-technology. I’ve been a technologist for a long time. It’s what I wanted to be when I grew up, and I’m still a top performer. But I didn’t just stay in my techie lane. I also studied philosophy, theology, psychology, economics, and more. Technology only dominates other disciplines in a technopoly. Technology isn’t my bedrock, but that doesn’t make me anti-tech.

So, why won’t I use GenAI for development?

Bottom line? It’s too risky. Today’s apparent short-term boost may lead to long-term regret.

To explain further, I’ve grouped twelve risks into five categories.

We risk being deceived into forming an undeserved trust relationship.

  1. Addictive Workflow. GenAI-powered software development is backed by chatbots that are created specifically to form an undeserved trust relationship. Our brain’s dopamine system is exploited by the non-deterministic output of GenAI, just as with social media and slot machines. We’re surprised when it’s right, we’re angry when it’s wrong, and we’re scared when it goes beyond the boundaries we think we’ve configured. All of these trigger dopamine spikes and foster addiction.
  2. Deceptive Design. Every aspect of GenAI chatbots is built on deception. By design, they trick us into believing they are sentient, that they know, reason, and understand, all with their signature sycophantic conversation style. And we are lulled into dependency by these clever deceptions. We can’t help it. GenAI never gets tired, but we do. And the advice to “treat it as a junior” or a “proficient intern” or whatever not only insults junior developers and interns, but may deceive us into deeper dependency.

We risk losing our ability to focus and create.

  1. Distracting Interface. The new GenAI-powered workflows in tools like Codex and Claude Code interrupt the profoundly productive and creative state of cognitive flow. GenAI tools encourage multitasking, something impossible for the human brain. But when an expert developer is in a state of focused flow, code flows from the fingers as easily as language. That’s when we innovate. That’s when we surprise ourselves with creativity in code, just as authors do with words. So why would we allow Big Tech to impose a new workflow that mediates, interrupts, blocks, distracts, and prevents us from understanding the code we’re writing?
  2. Declining Creativity. Only humans are truly creative. But innovation only arises from a deep, intimate, expert knowledge of the domain. We can only know what is good and unique by becoming experts ourselves, and by using our minds (in collaboration with other experts) to explore novel ideas. We risk losing our ability to innovate when instead of pursuing deep understanding, we take the posture of consumers or curators of GenAI output.

We risk losing positive human character traits.

  1. Becoming Impatient. We’re told that the skill of the future is “prompt engineering.” I call it “incantation creation” or “spell casting” instead. Yes, casting spells as “prompts” is exciting, especially when it’s new. And GenAI promoters say that if GenAI isn’t working for you, it’s a “you problem.” You need to learn to “engineer” better, clearer prompts, with narrower contexts — to cast better spells. But this may lead us to lose the patience needed to learn how complex software systems work. Instead, we may prefer the quick and easy power that comes with the promised wizardry.
  2. Becoming Tyrannical. Success in “agentic” workflows seems to require users to command a chatbot like a dictator over a servant. And what we practice is who we become. Developers are already writing berating, emotion-filled directives to tell an AI “agent” not to ship to production, not to write bugs, not to leave its “sandbox,” or not to delete databases. If we’re commanding these agents, thinking of them as juniors or interns (i.e., as less-than), then we are shaped by that activity. When we are frustrated that a chatbot “disobeys” and craft harsher commands, our tyrannical attitude shapes us.

We risk harming novice developers, social skills, and business value.

  1. Harming Novices. Some conventional wisdom says that experts can get more benefit from GenAI than novices, because experts know good code. But if experts use GenAI, they risk losing their expertise. And if novices use GenAI, they might not develop the skills and capacity to understand a software system and become experts. GenAI may be killing off the next generation of senior developers. So senior developers who use GenAI risk, by their example, hurting those who do not yet share their expertise.
  2. Eliminating Debate. Sycophantic chatbots eliminate the interpersonal friction that arises when developers disagree about everything from goals to architecture to design to style. But we only grow from the resistance formed by struggle. By spending most of our time interacting with chatbots, we risk preferring the always-agreeable artificial to messy, complex, but vital human relationships. And we lose the trust and camaraderie formed by enduring hardship, working through challenges, and coming out stronger and better.
  3. Stunting Mentorship. The “secret sauce” of any development team is their ability to work together. And the code itself is a team’s unique artifact that describes a precious asset: their shared understanding. “Tribal knowledge” is what makes a team (and its company) special, and what creates economic value. But chatbots degrade or eliminate a team’s need or desire to share its knowledge with others. This reduces a company’s ability to compete.

We risk falling prey to predatory Big Tech companies.

  1. Dominating Influence. Big AI companies seek to dominate every industry. The supposedly “neutral tools” they’ve created are sold as powerful oracles of knowledge, efficiency, and profitability. But GenAI tools are not neutral at all. They are seductive lures, shiny and enticing, all designed to increase Big Tech’s power over everyone who forms a dependency upon them.
  2. Craving Survival. Big AI is fighting for its life, so it must entice everyone into using GenAI for its survival, not ours. GenAI is not profitable, and Big AI has no viable path to profitability as it spends itself into oblivion. It will fail if it doesn’t make all of us believe that we can’t live without its products. So it has targeted a lucrative market: software development. And at today’s prices, AI seems like a bargain. But …
  3. Exploding Prices. GenAI cannot stay at current prices and pay off. If Big Tech is going to make money from its colossal “investments,” it must raise subscription prices by orders of magnitude. So it is luring us with bargain rates now, until our companies really can’t live without GenAI (if we surrender our skills in pursuit of speed and power).

What will I do instead?

By the grace of God, I’m going all-in on our uniquely made-in-the-image-of-God human intelligence, creativity, innovation, wisdom, and discernment. And I’ll be encouraging all of us to do the same.

Nobody knows how GenAI will turn out. But it seems clear to me that Big AI companies don’t have our best interests in view as they overwhelm the world with their propaganda and power.

I have too much faith in the God who gave us our minds and hearts, and I believe that he will lead those of us who are betting on humanity along a better path than the one Big Tech has ordained.


2 responses to “12 Reasons I Will Not Use GenAI For Software Development”

  1. I agree with the points you made in this article. I am sure that your employment decisions were difficult, and I hope they carry you well into the future.

    I was similarly contemplating hiring on at a company pushing generative AI for development and wasn’t sure how to feel. On one hand it felt like a “pride thing,” because I wouldn’t be running my own show anymore. However, I do not think that is the real reason. Using GenAI switches from a people-focused perspective to a bottom-line perspective, especially when it is being dictated by people not actually performing the engineering at the end of the day.

    • Thank you for the comment, Nate! I appreciate you thinking through this too.

      I agree that it’s not a “pride thing” to want to know how to do the work we’re doing as developers. There are many more issues underlying the growing dependency on GenAI that we need to care about.
