AI Didn't Reduce the Work. It Changed the Demand Profile

The capacity cost of AI is already on your P&L. It just isn't on a dashboard yet.

Your AI productivity dashboard says the deployment is working. Throughput up. Cycle times shorter. Sprint velocity rebaselined. The CFO has the slide.

Your other dashboards say something different. Errors up. Attrition above forecast. The senior engineer who was supposed to move into a lead role just took a recruiter call. Two AI-augmented projects shipped on time and missed downstream targets, and your post-mortems can't quite explain why.

Both dashboards are accurate. They measure different sides of the same operational change. The research caught up in 2025-2026. The dashboards haven't.

What the research is now saying

Four studies, different labs, converging on the same finding.

UC Berkeley Haas (Ranganathan & Ye, HBR Feb 2026), in an eight-month embedded ethnography, documented workload creep - AI didn't reduce work; it intensified it through task expansion, blurred boundaries, and parallel multitasking. Workers reported feeling more productive without feeling less busy.

BCG Henderson Institute (HBR Mar 2026, N=1,488) named AI brain fry - mental fatigue from monitoring AI output, not from using it. Affected workers showed 39% more major errors and 39% higher intent to quit. They identified a three-tool cliff: productivity peaks at three concurrent AI tools and declines above four.

Stanford / BetterUp (HBR Sep 2025, N=1,150) named workslop - AI-generated content that masquerades as good work but lacks substance. Forty percent of workers had received it in the previous month. Average decode time: nearly two hours per instance. Estimated cost: $186 per worker per month, roughly $9M per year per 10,000 employees.
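The per-worker and organization-level figures line up once the 40% incidence rate is applied. A quick sketch of the arithmetic (reading the $9M figure as the $186/month cost applied only to the ~40% of workers affected is our interpretation, not something the study spells out):

```python
# Sketch of the workslop cost arithmetic from the Stanford/BetterUp figures.
# Assumption (ours): the ~$9M org-level number comes from applying the ~40%
# incidence rate to the $186/worker/month estimate.

COST_PER_AFFECTED_WORKER_MONTH = 186   # dollars, per the study
INCIDENCE = 0.40                       # share of workers receiving workslop
HEADCOUNT = 10_000
MONTHS = 12

annual_org_cost = COST_PER_AFFECTED_WORKER_MONTH * MONTHS * HEADCOUNT * INCIDENCE
print(f"${annual_org_cost:,.0f} per year")  # ≈ $8.9M, i.e. "roughly $9M"
```

The point of the sketch is only that the two published figures are internally consistent, not that this is the study's exact methodology.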

MIT Media Lab (Kosmyna et al., 2025) used EEG and found that AI-assisted writers showed up to 55% lower neural connectivity than writers working without AI, particularly in regions associated with critical thinking and memory. They named the long-horizon effect cognitive debt. The lead author's summary: there is no cognitive credit card.

What the convergence means for operations

These are four readings of the same change: AI altered the demand profile your workforce metabolizes.

The Zones Framework™ - the operating model behind the Capacity Audit - has always made one claim: performance is state-dependent. 🟢Green Zone produces deliberate work. 🟡Yellow Zone produces narrowed work. 🔴Red Zone produces reactive work. AI doesn't change that claim. It changes which demands compound it.

Three new amplifiers on existing capacity taxes

Validation load is the new dominant cognitive demand. Monitoring AI output is denser work than generating the same output, and it consumes prefrontal capacity faster.

Workslop is demand exported from producers to recipients. Every AI-augmented document that isn't quite right is a tax on whoever has to decode it.

Cognitive debt is the multi-year version. The cognitive muscle that distinguishes acceptable AI output from accepted AI output atrophies when too much gets offloaded.

These aren't new capacity taxes. They're amplifiers on the five taxes most organizations were already paying - Decision Density, Meeting Debt, Recovery Debt, Manager Load, Forfeited Upside. AI didn't add a sixth tax. It made the five run hotter.

A practical example: the math on a 200-person team

A 200-person professional-services team running three concurrent AI tools - at BCG's productivity peak - produces a combined annual capacity-drift cost in the $2-4M range, depending on industry and compensation.

Hold everything constant except AI tool count. Push it from three to seven. Same workforce, same compensation, same zone hours. The capacity cost moves up by approximately 35% - roughly $1.1M per year of additional drag, all from amplification on Yellow and Red zones. Not from new headcount. Not from new initiatives. Just from cognitive overhead the dashboards aren't counting.

That number is conservative. It does not include cognitive-debt compounding, trust erosion in colleague-to-colleague perception, or the senior people whose intent-to-quit numbers turn into actual attrition the following year.
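The back-of-envelope version of that example, using only the figures above (taking the midpoint of the $2-4M baseline range is our assumption for illustration):

```python
# Back-of-envelope for the 200-person example. The $2-4M baseline range and
# the ~35% amplification are from the text; the $3M midpoint is an assumed
# illustrative value, not a measured one.

baseline_low, baseline_high = 2_000_000, 4_000_000
baseline_mid = (baseline_low + baseline_high) / 2   # $3.0M assumed midpoint
amplification = 0.35                                # pushing tools from 3 to 7

added_drag = baseline_mid * amplification           # ≈ $1.05M, "roughly $1.1M"
per_head = added_drag / 200                         # annual drag per employee
print(f"added drag ≈ ${added_drag:,.0f}/yr, ${per_head:,.0f} per head")
```

Run against the low and high ends of the range instead of the midpoint and the added drag spans roughly $0.7M to $1.4M, which is why the figure in the text is directional rather than precise.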

What an operations leader does

Count the tools.

Most organizations cannot answer how many AI tools the average cognitive worker runs concurrently. The line where AI starts depleting more than it augments sits between three and four. Most knowledge-work environments are well past it because tool deployment was decentralized. Counting is the diagnostic.
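The count itself is trivial once the data exists; a minimal, illustrative sketch (the worker names and tool counts are hypothetical survey data, and the threshold is the three-tool cliff from the BCG study):

```python
# Illustrative tool-count diagnostic. Data is hypothetical; the only
# grounded input is the three-tool cliff identified in the BCG research.

from statistics import mean

tool_counts = {"A": 2, "B": 5, "C": 3, "D": 7, "E": 4}  # worker -> concurrent AI tools
CLIFF = 3  # productivity peaks at three concurrent tools, declines above it

avg = mean(tool_counts.values())
past_cliff = [w for w, n in tool_counts.items() if n > CLIFF]
print(f"avg tools: {avg:.1f}; past cliff: {len(past_cliff)}/{len(tool_counts)}")
```

The hard part is not the arithmetic but assembling the inventory, because decentralized deployment means no single system of record knows what each worker is running.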

Budget validation as scheduled cost.

A morning with three meetings and eleven AI-drafted communications to evaluate is a different day than three meetings and an open afternoon. Capacity-aware scheduling means front-loading discriminating work into the hours when discrimination is still available.

Design the demand architecture deliberately.

AI without an explicit architecture runs whatever architecture develops accidentally, and the data says accidental architectures concentrate demand in ways that exceed cognitive capacity.

Count workslop as a P&L line item.

The Stanford methodology gives you a credible per-worker monthly figure. The dollar number matters less than the recognition: every AI-augmented producer is now a demand event for every recipient.

The number that should be on your P&L next quarter

The Capacity Cost Calculator on this site produces a directional estimate of the combined cost. The Capacity Audit converts that estimate into numbers tied to your specific operating data. Findings, not a pitch.

There is a version of the AI story where it returns capacity at a scale we haven't experienced. That version requires an operations function that knows what its workforce can metabolize. The other version - the one most organizations are currently living - accelerates everything except the capacity to evaluate what's being accelerated.

The number is already on your P&L. It just isn't on a dashboard yet.