The JetBrains Blog

Give AI Something Worth Amplifying: Three Priorities for Technical Leaders


Colette Des Georges

April 22, 2026

Most organizations are asking what AI can do for their developers. The harder question is what your developers are bringing to AI.

DORA’s _2025 AI Capabilities Model_ report argues that AI does not level teams out – it amplifies what is already there. Stronger, high-performing organizations get more out of it, whereas the weaknesses of struggling ones become more apparent.

The same seems to be true at the individual level. GitClear’s January 2026 analysis of 2,172 developer-weeks found that regular AI users produced about 320% (4.2x) more durable code than non-users did. Those same users also increased their output by 25% from 2024 to 2025. That is a meaningful improvement, but it still doesn’t explain the entire gap on its own. GitClear’s conclusion is that AI widened a performance difference that already existed. Experienced, high-output engineers were often the first to adopt it.

That matters because AI does not operate in a vacuum. It lands on top of whatever habits, systems, and constraints your team already has. If the foundation is solid, AI can help. If it is weak, AI gives the weakness more room to spread.

Some of those foundations are easy to see. Infrastructure shows up on the balance sheet, and tooling can be inventoried. This article focuses on three areas that are easier to overlook: code review, technical debt, and developer judgment. These aren’t new problems, but AI makes it easier to ignore them while making the consequences harder to escape.

**Priority 1: Strengthen code review**

AI enables developers to write more code in less time, but this only helps if the review process can handle the extra volume. If the review pipeline is already under strain, AI does not solve the problem – it just pushes more code into the same bottleneck. Larger changesets arrive faster, ownership gets blurrier, and reviews fall back to scanning for obvious red flags instead of understanding the change holistically.

This is already showing up in the data. LinearB’s _2026 Software Engineering Benchmarks Report_ found that pull requests containing AI-assisted code wait about 2.5 times longer for review than pull requests written without AI. Where AI agents wrote the code autonomously, the delay rose to 5.3 times. Their explanation is straightforward – developers hesitate to take on what appears to be a larger, riskier review task.

So the issue is not just volume. It is volume plus uncertainty, flowing into a process that has not changed.

That is why the answer is process, not headcount.

**Next moves to consider:**

  • **Establish or tighten your batch size standard for pull requests.** DORA identified small batches as one of the best protections against hard-to-review code dumps, and AI makes those dumps easier to create. A minimal CI gate along these lines is sketched after this list.
  • **Make static analysis a real gate, not a suggestion.** Reviewer assessments alone do not reliably separate high-quality pull requests from weak ones. A 2019 study of 4.7 million code quality issues across 36,000 pull requests found that accepted and rejected pull requests looked surprisingly similar on code quality measures. A January 2026 study confirmed the same pattern in AI-generated code. Static analysis reduces the amount of subjective guesswork in the accept-or-reject decision.
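
If you want the batch size standard to be enforceable rather than aspirational, a small CI check can hold the line. The sketch below is a hypothetical starting point, not a tool mentioned in this article: the 400-line budget and the base branch name are placeholder assumptions you would tune to your own standard.

```python
# pr_size_gate.py - fail a CI job when a pull request exceeds a batch-size budget.
# Hypothetical values: the base branch name and the 400-line budget are examples,
# not figures from DORA or this article.
import subprocess
import sys

BASE_BRANCH = "origin/main"   # assumed default branch
MAX_CHANGED_LINES = 400       # example budget; set this to your team's standard


def changed_lines(base: str) -> int:
    """Sum insertions and deletions between the merge base and HEAD."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":          # binary files report "-" for line counts
            total += int(added)
        if deleted != "-":
            total += int(deleted)
    return total


if __name__ == "__main__":
    n = changed_lines(BASE_BRANCH)
    if n > MAX_CHANGED_LINES:
        print(f"PR changes {n} lines, over the {MAX_CHANGED_LINES}-line budget. "
              "Consider splitting it into smaller, reviewable batches.")
        sys.exit(1)
    print(f"PR changes {n} lines - within budget.")
```

Run as a required status check, a script like this turns the batch size standard into a gate the same way static analysis should be one: the decision to accept an oversized change becomes a deliberate override rather than a silent default.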

**Priority 2: Stay disciplined about technical debt**

AI can help reduce existing technical debt, improve documentation, add tests, and work through repetitive cleanup. But it can also make it easier to pile up new debt faster than the codebase can absorb it. Three independent data sets point in the same direction.

  • GitClear’s 2025 longitudinal analysis of 211 million lines of code found that code churn grew 84% between 2020 and 2024. Refactoring dropped sharply, from 25% of changed lines in 2021 to under 10% in 2024. Duplicated code blocks appeared more than 10 times as often in 2024 as in 2022. GitClear links those shifts to the rise of AI-assisted coding after 2022.
  • Faros AI’s 2025 analysis of telemetry from more than 10,000 developers found that teams moving from low AI adoption to high AI adoption saw 9% more bugs per developer, a 154% larger average pull request size, and 91% longer review times. Individual output increased, but organizational performance did not.
  • A 2025 Carnegie Mellon study found something similar. Researchers tracked 806 teams that adopted an AI coding tool and compared them with 1,380 teams that did not. Static analysis warnings rose by about 30% following adoption and remained elevated. Cognitive complexity rose approximately 41% and also stayed up, even after the researchers controlled for codebase growth. The code was not just larger – it was harder to understand.

The pattern is hard to miss – output rises faster than the quality controls around it.

That does not mean teams need less AI. It means they need more discipline about where time goes.

**Next moves to consider:**

  • **Explicitly protect sprint capacity for technical debt reduction.** If AI increases pressure to ship, that is when refactoring and static analysis become more important, not less.
  • **Map your knowledge risk before AI makes it worse.** Ask your developers which areas of the codebase they would hesitate to modify without a long ramp-up. Use that to prioritize refactoring, documentation, and deliberate reviewer rotation. One rough way to start that mapping is sketched below.
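
Version control history can turn that question into a concrete list. The sketch below is a rough, hypothetical heuristic, not a method from any of the studies cited above: it flags files that only one author has touched in the lookback window, which are reasonable candidates for documentation, refactoring, or reviewer rotation.

```python
# knowledge_risk.py - flag files that only one developer has touched recently.
# Hypothetical heuristic: "one author in the last year" is an example threshold,
# not a metric from the article; treat the output as conversation fodder, not truth.
import subprocess
from collections import defaultdict

SINCE = "1 year ago"  # assumed lookback window


def authors_per_file(since: str) -> dict[str, set[str]]:
    """Map each file to the set of commit authors who changed it since `since`."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--format=AUTHOR:%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    files: dict[str, set[str]] = defaultdict(set)
    author = None
    for line in out.splitlines():
        if line.startswith("AUTHOR:"):      # commit header emitted by --format
            author = line[len("AUTHOR:"):]
        elif line.strip() and author:       # remaining non-empty lines are paths
            files[line.strip()].add(author)
    return files


if __name__ == "__main__":
    for path, authors in sorted(authors_per_file(SINCE).items()):
        if len(authors) == 1:
            print(f"single-owner: {path} ({next(iter(authors))})")
```

The output is a starting point for the conversation above, not a verdict: a single-owner file may be trivial, or it may be the piece of the system nobody else would dare to change.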

**Priority 3: Give developer judgment a proving ground**

Better code review and better debt discipline both depend on something more basic – people who can actually tell what they’re looking at. That capability does not appear automatically, and AI makes it easier to avoid building it.

  • Anthropic’s January 2026 randomized controlled trial makes that risk concrete. Researchers split junior developers learning an unfamiliar Python library into AI-assisted and manual-coding groups. Developers who delegated code generation to AI and moved on scored between 24% and 39% on a later comprehension assessment. Developers who used AI while actively building understanding scored between 65% and 86%. Anthropic’s conclusion was blunt: “Productivity gains may come at the cost of the skills necessary to validate AI-written code.”
  • Microsoft’s Azure CTO and VP of Developer Community described this as a “seniority-biased technological change.” Senior engineers benefit because they already have the judgment to direct, verify, and integrate AI output. Less experienced engineers often do not. For them, the same tools can add drag instead of leverage.

That creates a tempting short-term logic: hire senior people, automate more junior work, and move on. It might make sense for a quarter or two, but it may also hollow out the pipeline you need in the future. The judgment that makes AI useful has to be developed before it can be amplified.

**Next moves to consider:**

  • **Review your junior developer hiring against business continuity.** Where will your senior developers come from in four years? A 2025 Stanford Digital Economy study found a 16% relative employment decline among workers aged 22-25 in AI-exposed occupations, even after controlling for broader firm-level hiring trends.
  • **Close the mentoring gap AI is creating.** _LeadDev’s AI Impact Report 2025_ found that 38% of engineering leaders agreed that AI tools have reduced direct mentoring for junior engineers. That knowledge transfer does not happen on its own when both parties have an AI assistant at hand.
  • **Screen and onboard for AI judgment, not just AI familiarity.** The question is no longer whether someone uses AI, but whether they know when not to.

**Tactical note: Set the direction, not the tools**

How you apply these priorities matters. Developers will feel the difference between leadership and control. The _2025 Stack Overflow Developer Survey_ found that “autonomy and trust” was the strongest driver of developer job satisfaction.

That same independent streak shows up in AI tool choice. The _2025 State of Software Delivery report_ from Harness found that 52% of developers do not use the AI tools their IT department provides. This is probably the wrong place to fight for standardization.

Do not standardize the model choice. Do not standardize the assistant. Standardize the access layer.

**Give your team a workspace where any AI tool fits**

JetBrains IDEs give your developers one consistent environment to work with virtually any LLM and ACP-compatible coding agent. That means they can try new tools without rebuilding their workflow every time the market shifts.

Underneath that flexibility is something more stable: the IDE’s structural model of your codebase. It is not guessing what the code probably means – it understands the code as code. When AI generates something that looks plausible but breaks something elsewhere, the IDE can flag it before the change reaches review. Add JetBrains Qodana, and the same structural model extends into ongoing static analysis across your entire codebase.

That is the practical point: standardize only what should be standardized, and leave the rest flexible. The All Products Pack supports the same logic. It covers every JetBrains IDE in one subscription, each purpose-built for one of the major languages in use now or the one needed next. When a developer moves across the stack, there’s no license request, no procurement delay, and no fresh approval cycle.

The decision is already made. The tools are already there.

Learn more about the All Products Pack.


