In the age of AI, disagreement is a feature
One reason I love my job is that my co-workers have no problem disagreeing with me.
That may sound odd coming from a CTO. After all, shouldn’t leadership have authority when choosing direction, making decisive calls, and defining vision?
Sure, but the best visions are forged in the crucible of strong, constructive disagreement. The benefit of working with intelligent, confident peers is that they’ll (hopefully) tell you when you’re wrong, push back on flawed assumptions, and often improve upon (or override) your short-sighted ideas and plans.
The worst workplace isn’t the one with no conflict amongst staff. It’s the one where your peers can see failure looming on the horizon but stay silent. In that environment, execution may be smooth, but it’s headed straight toward a cliff.
Naturally, this also applies to AI tooling.
Where LLMs fall short
An LLM won’t disagree with you. It is deliberately trained to be helpful and agreeable and to prioritize engagement. This isn’t a bug; it’s a design goal (Axios) that spans multiple stages: pretraining corpora, supervised fine-tuning, reinforcement learning from human feedback, and even system prompts.
Generally speaking, tools that prioritize engagement don’t necessarily encourage useful disagreement. Social networks are an obvious example: they optimize for engagement, often at the cost of interactions that are caustic and light on fact-checking.
With respect to AI, the byproduct is that LLMs rarely tell you when you’re wrong. They’ll expand on your idea, refine it, and make it sound plausible, even if it’s fundamentally flawed or misapplied (good idea, inappropriate setting). They’re designed to say yes, not no, to all but the most absurd ideas. And often, especially when working through complicated solutions, what you really need is someone to stand up and say no (Live Science).
Many firms that rely on disengaged nearshore or offshore teams have dealt with this firsthand. The cliché about bad remote development shops is that they’ll happily build (and bill you for) anything you want, with very few questions asked.
Exceptional systems are about thoughtful tradeoffs
Whether in software architecture, organizational design, or product strategy, the best systems balance competing concerns through intentional tradeoffs. Privacy vs. accessibility. Security vs. convenience. Speed vs. cost. Robustness vs. time-to-market. These are human considerations.
These aren’t binary choices; they’re gradients of priorities codified by design decisions. Good design emerges from reluctant compromise and debate, not from a heuristic that amplifies a single user’s perspective. Even with fantastic context engineering, an LLM doesn’t understand your system goals, your market constraints, or your compliance obligations. It will never raise its hand and say, “You’re wrong. This plan might maximize convenience, but it completely ignores security and compliance.”
AI can’t succeed without humans
That’s why involving more than one human being who understands the system’s context and competing priorities remains as critical as ever. Copilots and AI assistants can accelerate output, but they can also multiply and reinforce blind spots, biases, and personal perspectives if they go unchallenged. Challenging those weaknesses is an inherently human strength. We want velocity from these tools, not just volume.
In our firm, we treat disagreement as a signal of quality—an indicator that multiple intelligent minds are wrestling with the problem from different angles. It’s precisely this friction that ensures we don’t just build systems that work, but systems that work well for the long haul and serve the mission of the companies we’re enabling.
And that’s a function that no LLM can yet replace: the hard but necessary work of telling each other, respectfully and directly, when we’re wrong.