AI Increases Output. It Does Not Increase Judgment.

Producing more output at higher speed is not the same as making better decisions.

Part of the Phase II — Understanding series

By Michael E. Ruiz

There is a category error embedded in most discussions of AI productivity. Output and judgment are being treated as if they exist on the same scale, as if producing more output at higher speed is equivalent to making better decisions. They are not the same thing, and conflating them leads organizations to optimize for the wrong variable.

What AI has done is compress the time between question and first answer. A research task that would have taken an analyst two days now takes two hours. A report that would have taken a consultant a week to draft takes an afternoon. The output arrives faster, and in many cases it is structurally competent: well-organized, well-cited, covering the obvious ground.

Faster and structurally competent is not the same as right. Whether the right question was asked, whether the sources were weighted appropriately, whether the conclusion is supported by the evidence rather than merely suggested by it: none of that is answered by the tool.

This matters most in high-stakes domains, including security, healthcare, legal, and financial advisory, where the cost of a confident wrong answer is measured in consequences that cannot be undone. AI tools produce confident outputs regardless of the quality of the reasoning behind them. A language model asked to analyze a security posture will produce an analysis that reads like the work of a competent analyst, whether or not the underlying reasoning is sound. The person reviewing that output needs enough domain knowledge to evaluate it critically, not just enough fluency to understand it. If the reviewer lacks that knowledge, the AI has not improved decision quality. It has accelerated the production of authoritative-seeming errors.

The organizations that will use AI most effectively are those that treat it as an input to judgment rather than a substitute for it. That means maintaining investment in the development of domain expertise even as AI reduces the time required to execute domain tasks. It means building review processes that hold AI outputs to the same standard as human outputs, which is harder than it sounds because humans are naturally more skeptical of other humans than of machines that speak with equal confidence. And it means being honest about what judgment actually is: the capacity to evaluate incomplete information against experienced understanding of what matters and why. That capacity is not in the model.
