In 2026, Silicon Valley has been mired in a wave of layoffs. Amazon confirmed cutting around 16,000 jobs in January; Block laid off nearly half of its workforce in February; and Meta was reported to be planning a 16,000-person layoff in March.
Anxiety that AI will boost work efficiency and replace white-collar workers has swept through workplaces. Against this backdrop, an article titled "AI Fatigue Is Real and Nobody Talks About It" by Siddhant Khare, a software engineer at the tech company Ona (formerly Gitpod) based in Jabalpur, India, has sparked extensive discussion among global media and readers.
In it, he pointed to the wide gap between AI's real-world performance and its lofty promises, arguing that the efficiency gains attributed to AI are overestimated while workplace employees fall victim to "AI fatigue".
Recently, Khare shared his insights on this phenomenon in an exclusive interview with National Business Daily (NBD), advising people to change their AI usage habits and avoid being trapped in a vicious cycle of generating, reviewing, regenerating, and re-reviewing with AI.
Human Workload Surges Tenfold
NBD: You describe AI fatigue as an increasingly common issue. In your view, is this primarily a technical problem, a management problem, or a structural issue in the design of modern work systems?
Siddhant Khare: It is structural. The technical layer works: the models generate code, write emails, summarize documents. The management layer is trying: companies buy licenses, run training sessions, set adoption targets. But the structure is broken. We made one part of the workflow ten times faster (generation) without making the next part any faster (verification). The human is still the bottleneck, and now this bottleneck has ten times more volume flowing through it.
Think of a factory: you install a machine that stamps parts ten times faster, but the quality inspector at the end of the line is still one person checking each part by hand. Output goes up, the inspector's workload skyrockets, the defect rate stays the same, and eventually the inspector burns out. That is exactly what happened with AI in knowledge work. We automated production but not verification, making humans the quality gate for a system that produces far faster than any human can review.
This is not a management failure. Most managers do not even see it happening. The metrics look good: more code shipped, more documents produced, more emails sent. But employee exhaustion is invisible in the dashboards.
NBD: When companies adopt AI and productivity rises, yet employee workload does not decrease, where do you think the core issue lies?
Siddhant Khare: The issue is that productivity gains get absorbed as new expectations, not as freed time. Before AI, an engineer might write 20 pull requests a week; with AI, they can write 50. The company does not say "finish your 20 and go home early". It sets 50 as the new normal. AI has raised the floor of what is considered standard output, so the workload does not drop, and employees work the same hours at a much higher intensity.
There is a second layer: AI-generated work still requires human review. As an open source maintainer, I used to receive 20-25 pull requests per week, and now the number has surged to over 100, most of which are AI-generated. Each one still needs careful checking. Generation was automated, but judgment was not. So employees now both produce more and review more. Both sides of the equation have become heavier, and the only thing that got lighter is the typing work.

AI Coding Tools Lead to a 19% Drop in Actual Work Efficiency
NBD: From your experience, what aspects of enterprise AI adoption are most commonly overestimated, and what risks are most frequently underestimated?
Siddhant Khare: The most common overestimation is the speed of adoption and immediate productivity gains. Companies expect that equipping engineers with AI tools will deliver productivity leaps in just a few weeks, but real-world data tells a completely different story.
DX, a platform for engineering efficiency and developer productivity analysis, conducted a comprehensive survey covering 121,000 developers across more than 450 companies. The data shows that even though 93% of developers use AI assistants, productivity gains have plateaued at 10% and failed to move further. The results of a randomized controlled trial by METR, a model evaluation and risk research institution, are even more striking: developers using AI coding tools were actually 19% slower, despite subjectively believing they were 24% faster.
This gap between perceived and actual productivity is the most dangerous overestimation. The 2025 Stack Overflow Developer Survey confirms this trend: 84% of developers use AI tools, yet 46% actively distrust the accuracy of AI-generated outputs, and 66% report frustration with results that are "almost right, but not quite".
What enterprises most easily underestimate, first and foremost, is the cost of verification. Every AI output needs human checking, yet this work is invisible in most project plans – no one budgets for "time spent reviewing AI-generated code" or tracks "hours spent fixing AI hallucinations". The verification cost is real, substantial, and almost always missing from AI adoption plans.
The second underestimated factor is the cultural cost. When AI generates most of the work output, employees who once took pride in their professional craft start feeling like inspectors on an assembly line. This identity shift is gradual and hard to quantify, but it directly leads to talent attrition.

Reviewing AI Output Is More Exhausting Than Doing the Work Yourself
NBD: Many white-collar workers now worry that they are effectively "training systems that may eventually replace them". From a technical standpoint, how valid is this concern? Which types of roles are actually closest to being replaced by AI?
Siddhant Khare: The concern is partly valid but widely misunderstood. Most workers are not training the model directly. When you use ChatGPT or Copilot, your inputs do not automatically become training data for the next version; most enterprise user agreements explicitly prevent this. Technically, the fear of "I am training my replacement" is inaccurate for most users.
What is accurate is that your role is being redefined. The tasks you do today may be automated, but the judgment you apply to those tasks is harder to automate than most people think.
The roles closest to substitution are those where the output is standardized and the quality bar is low: first-draft copywriting, basic data entry, simple code generation from clear specifications, and template-based reporting. These are tasks where "good enough" is the standard, and AI is already more than capable.
In contrast, the roles furthest from substitution are those requiring taste, context, and judgment that cannot be specified in a prompt, such as system architecture design, product strategy formulation, business negotiation, and any work where the hard part is deciding what to build, not the actual building process.
Most people fall into the middle ground: their roles will not disappear, but they will change. The core value of their work shifts from "I can produce this output" to "I can judge whether this output is correct, appropriate, and aligned with what we actually need".
NBD: In your observation, is AI primarily replacing human labor, restructuring labor distribution, or simply increasing the output intensity per worker?
Siddhant Khare: By a wide margin, it is increasing output intensity. I have not seen large-scale human replacement; what I have seen is the same number of people doing significantly more work. Engineers who used to only write code now write code and review AI-generated code; writers who used to only draft articles now draft and edit AI-generated ones; analysts who used to only build reports now build and validate AI-generated reports.
Labor has not disappeared. It has shifted: from production to verification, from creation to judgment, from doing to checking. This is a restructuring of what work feels like, not a reduction in the total amount of work. What is more, the new work (verification, judgment, quality control) is more cognitively demanding than the old work (production). Catching a subtle error in someone else's code is harder than writing the code yourself. You need to understand the intent, context, and edge cases without having the mental model that comes from writing it yourself.
So employees are doing more work, and the work is harder. This is the complete opposite of what the productivity narrative promised.
NBD: Compared with previous waves of automation or internet tools, why does AI seem to create a more persistent cognitive and psychological burden?
Siddhant Khare: The core reason is that previous automation was deterministic, while AI is not. When you automated a spreadsheet with a macro, you knew exactly what it would do every time – the same input produced the same output, and you could trust it and move on.
AI is non-deterministic: the same prompt can generate completely different outputs, and the output looks confident even when it is wrong. You cannot trust it and move on; you have to check every single time. That constant checking is the source of the burden. It is not just the volume of work, but the perpetual vigilance. You must stay alert because the system can fail in unpredictable ways. Unlike a broken macro that fails obviously, AI fails subtly: the code compiles, the email reads well, the report looks professional, but there may be a factual error on page three, a logic bug on line 47, or a hallucinated statistic in the second paragraph.
Previous tools failed loudly; AI fails quietly. Quiet failures demand constant attention, and constant attention is utterly exhausting.
There is a second factor: previous automation did not produce work that resembled your own. A macro did not write in your voice, and a calculator did not generate prose. But AI does. Its output is so close to human work that reviewing it requires the same cognitive effort as producing it. You are not checking a machine's output; you are checking something that looks like a colleague's work but may contain invisible errors. That resemblance makes AI psychologically different from any previous tool.
NBD: If AI outputs cannot be fully trusted but have to be used at scale, what mechanisms do companies currently rely on to bridge this "trust gap"?
Siddhant Khare: Regrettably, most companies rely on the worst possible mechanism: human review as the only quality gate (though this may be starting to improve). Engineers read every line of AI-generated code, editors check every paragraph, and analysts verify every number. This approach is costly, slow, and unscalable. Yet it is the default at most organizations.
The companies that handle this well have built what I call "backpressure": automated feedback mechanisms that catch errors before they reach humans. The hierarchy of these mechanisms, from strongest to weakest, is as follows:
1. Type systems: A strong type system (such as TypeScript in strict mode, Rust, or Go) instantly catches entire categories of errors, allowing AI to get immediate feedback and self-correct without human involvement.
2. Test suites: A failing test tells the AI "what you just did broke something". Fast test suites (under two minutes) enable AI to iterate and fix its own mistakes.
3. Linters and pre-commit hooks: Every issue caught by a linter is one that never reaches your review queue.
4. Human review: This should be the last step, not the first. By the time code reaches a human, trivial issues should already be resolved, so humans can focus their attention on judgment calls rather than catching missing imports.
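As a rough illustration (mine, not from the interview), the ordering above can be sketched as a small pipeline that runs automated gates from strongest to weakest and stops at the first failure, so the AI gets the feedback and only fully passing output reaches a human. The individual checks here are hypothetical placeholders for a real type checker, test runner, and linter.

```python
# Sketch of a "backpressure" pipeline: automated gates run first, in order
# of strength, so human review only sees output that already passed them.
# The concrete checks below are hypothetical placeholders, not real tools.

from typing import Callable

Check = Callable[[str], list[str]]  # a gate returns a list of error messages

def type_check(code: str) -> list[str]:
    # Placeholder: a real gate would invoke a compiler or type checker.
    return [] if "def " in code else ["no function definitions found"]

def run_tests(code: str) -> list[str]:
    # Placeholder: a real gate would execute the test suite.
    return [] if "return" in code else ["function returns nothing"]

def lint(code: str) -> list[str]:
    # Placeholder: a real gate would run a linter or pre-commit hooks.
    return ["line too long"] if any(len(l) > 100 for l in code.splitlines()) else []

def review_gate(code: str, gates: list[tuple[str, Check]]) -> tuple[bool, list[str]]:
    """Run gates in order; stop at the first failure so the AI (not a
    human) receives the feedback and can self-correct."""
    for name, gate in gates:
        errors = gate(code)
        if errors:
            return False, [f"{name}: {e}" for e in errors]
    return True, []  # only now does the output reach a human reviewer

ok, errors = review_gate(
    "def add(a, b):\n    return a + b",
    [("types", type_check), ("tests", run_tests), ("lint", lint)],
)
print(ok, errors)  # True []
```

The design point is the ordering: each gate that fires is an issue that never consumes human attention, which is exactly the "reduction in AI fatigue" the interview attributes to this infrastructure.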
Companies with this infrastructure report a dramatic reduction in AI fatigue, while those without it are burning out their best employees.

NBD: In your opinion, what is the most important missing safety or constraint mechanism in current AI product design?
Siddhant Khare: It is authorization. Specifically, agent identity and runtime security, and fine-grained control over what an AI agent is allowed to do.
Today, most AI agents operate with the full permissions of the user who launched them. If I give an AI agent access to my codebase, it can read any file, modify any file, and run any command. There is no way to specify "you can edit files in the src/ directory but not in the config/ directory" or "you can run tests but not deploy to production". This is equivalent to giving a new employee the root password on their first day and hoping they do not break anything.
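To make the missing control concrete, here is a minimal sketch (my illustration, not OpenFGA's API or any shipping product) of the kind of scoped policy the answer describes: an agent allowed to edit files under src/ but not config/, and to run tests but not deploy. All names and rules are hypothetical.

```python
# Minimal sketch of scoped agent permissions: allow/deny rules over file
# paths plus a command allowlist. Illustrative only; a real fine-grained
# authorization system is far more expressive than glob matching.

from fnmatch import fnmatch

class AgentPolicy:
    def __init__(self, writable_globs: list[str], denied_globs: list[str],
                 allowed_commands: set[str]):
        self.writable_globs = writable_globs
        self.denied_globs = denied_globs
        self.allowed_commands = allowed_commands

    def can_write(self, path: str) -> bool:
        # Deny rules take precedence over allow rules (least privilege).
        if any(fnmatch(path, g) for g in self.denied_globs):
            return False
        return any(fnmatch(path, g) for g in self.writable_globs)

    def can_run(self, command: str) -> bool:
        # Anything not explicitly allowed is forbidden.
        return command in self.allowed_commands

policy = AgentPolicy(
    writable_globs=["src/*"],
    denied_globs=["config/*", "*.env"],
    allowed_commands={"test"},  # "deploy" is deliberately absent
)

print(policy.can_write("src/main.py"))    # True
print(policy.can_write("config/db.yml"))  # False
print(policy.can_run("deploy"))           # False
```

Even a crude policy like this replaces the binary on/off model with scoped, auditable decisions, which is the gap the interview identifies.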
I work on this problem directly: I maintain OpenFGA, a CNCF Incubating project for fine-grained authorization (based on Google's Zanzibar system and deployed by hundreds of companies), and I built Agentic-AuthZ, an authorization gateway for AI agents. The technology to solve this problem already exists – what is missing is adoption.
Most AI products ship with a binary permission model: the agent is either on or off, with no middle ground, no scoped permissions, no audit trail of what the agent accessed, and no way to enforce least-privilege access. As agents evolve from assisted tools (where a human approves every action) to autonomous systems (where agents act on their own), this gap becomes extremely dangerous. An AI agent that can read your entire codebase, access your API keys, and execute arbitrary commands is a security incident waiting to happen.
NBD: With AI playing an increasingly important role in production, do you think the definition of an employee's value is shifting from "output capacity" to "judgment and constraint capabilities"?
Siddhant Khare: Yes, and this shift is already happening, even if most employee evaluation systems have not caught up.
When AI can generate a first draft of anything – code, copy, analysis, design – the ability to generate content is no longer scarce. What is scarce is the ability to judge whether the output is correct, appropriate, and aligned with what the organization actually needs.
The most valuable engineers I work with are not the fastest coders, but those who can look at AI-generated code and say "this works, but it is the wrong approach for our system". That kind of judgment requires deep context, rich experience, and professional taste – none of which can be achieved by optimizing prompts.
The shift is from "how much can you produce?" to "how well can you evaluate what was produced?"; from output to judgment, from volume to quality, from typing speed to thinking speed. This has far-reaching implications for hiring, evaluation, and career development. If you evaluate employees by the number of lines of code or documents they produce, you are measuring the wrong thing – AI can generate infinite lines of code, and the real question is whether those lines should exist at all.
The employees who will thrive are the ones who can answer that question, who can say "no, this is wrong" with confidence and explain why. That is judgment – and it is becoming the primary unit of an employee's value.

The Most Important Work Rarely Needs AI
NBD: Faced with the work pressure and mental internal friction brought by AI, how should ordinary white-collar workers properly interact with AI?
Siddhant Khare: I recommend three adjustments.
First, stop using AI for tasks where your own thinking is the core value. For example, when drafting a strategic plan, the value lies in the thinking process, not the typing. Using AI to skip that thinking weakens the value of your work. AI is better suited to repetitive tasks where the result matters and the process is secondary.
Second, set clear boundaries on review time. If you spend more than two hours a day reviewing AI-generated output, something is wrong with your workflow: the prompts are unclear, the context is insufficient, the working rules are too loose, or the company lacks automated checking mechanisms. Never accept unlimited review of AI output as the normal state of work.
Third, protect your deep work time. AI can trap you in a loop of generating, reviewing, regenerating and re-reviewing, which constantly interrupts your attention. You need to deliberately set aside a period of time to work without using AI at all. The most important work you do rarely relies on prompts – it is completed through independent thinking.
NBD: For individuals who are already dependent on AI, what should they do to change this situation?
Siddhant Khare: The first and most important change is to adjust your AI usage habits. Many people now open ChatGPT reflexively the moment they encounter a problem, asking AI to generate content before they have even started to think independently.
You must reverse this order: think independently first, clarify your work goals, and then judge whether AI is needed. Many times, a blank page and twenty minutes of independent deep thinking yield better results.
The anxiety people feel about AI is essentially a loss of a sense of control. When AI is constantly generating content and giving suggestions, you will feel like a passive executor. But once you regain the right to decide "whether and when to use AI", your sense of control will gradually return, your anxiety will naturally decrease, and you can truly break free from the predicament of AI fatigue.
NBD: Many people know they should limit their AI usage but find it difficult due to performance pressure and corporate evaluation systems. Are there any low-cost, practical constraints that individuals can adopt to reduce AI fatigue without compromising output quality?
Siddhant Khare: Yes, and here are four that cost nothing at all.
First, the two-minute rule: Before using AI for any task, spend two minutes thinking about it yourself. Write down what you want to achieve, what the constraints are, and what good output looks like. This takes almost no time but completely changes the dynamic – you become the director, not the audience.
Second, review windows instead of review streams: Do not review AI output as it arrives; batch it and set two or three review windows per day. Let the output accumulate outside these windows. This protects your deep work time and reduces the context-switching that causes fatigue.
Third, encode your most common catch: Think about the recurring mistakes you find in AI output – a formatting error, a wrong assumption, a missing edge case. Then turn that into a rule: a linter config, a checklist, a template. Automate the detection so you never have to catch it manually again. Each rule you encode is one less decision you have to make.
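As a sketch of the third tip (my example, not the interviewee's), a recurring catch can be encoded as a tiny rule set run over every draft; the patterns below are hypothetical stand-ins for whatever mistakes you keep finding in AI output.

```python
# Sketch of "encoding your most common catch": recurring mistakes found in
# AI output become automated rules, so they never have to be caught by
# hand again. The patterns below are hypothetical examples.

import re

RULES = [
    ("placeholder left in", re.compile(r"\b(TODO|TKTK|lorem ipsum)\b", re.I)),
    ("hedged filler",       re.compile(r"\bAs an AI\b")),
    ("fabricated citation", re.compile(r"\[citation needed\]", re.I)),
]

def check_draft(text: str) -> list[str]:
    """Return the name of every encoded rule the draft violates."""
    return [name for name, pattern in RULES if pattern.search(text)]

issues = check_draft("Intro paragraph. TODO: add numbers here.")
print(issues)  # ['placeholder left in']
```

Each rule added to the list is, as the interview puts it, one less decision to make manually.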
Fourth, keep a "did not use AI" list: Track the tasks you completed without AI every week. This may sound trivial, but it matters a lot. It reminds you that you are capable of working without the tool, counteracts the anxiety of dependence, and often reveals that your best and proudest work is done without a single prompt.
These are not productivity hacks – they are boundaries. And it is these boundaries that prevent a useful tool from becoming an exhausting obligation.
Note: All views and opinions expressed in this interview are Siddhant Khare's and do not represent those of his employer, Ona.
