---
title: "Fragments: April 29"
source_name: "Martin Fowler"
original_url: "https://martinfowler.com/fragments/2026-04-29.html"
canonical_url: "https://www.traeai.com/articles/7c73f4f5-d984-441e-b1ed-cc2ce553ce08"
content_type: "article"
language: "English"
score: 8.5
tags: ["AI","Software Development","Engineering Practices"]
published_at: "2026-04-29T13:23:00+00:00"
created_at: "2026-04-30T02:17:48.418771+00:00"
---

# Fragments: April 29

Canonical URL: https://www.traeai.com/articles/7c73f4f5-d984-441e-b1ed-cc2ce553ce08
Original source: https://martinfowler.com/fragments/2026-04-29.html

## Summary

This post covers Chris Parsons's updated guide to coding with AI, which stresses keeping changes small, building guardrails, documenting ruthlessly, and verifying every change, and discusses the key role programmers play in training the AI to write correct code.

## Key Takeaways

- Keep changes small, build guardrails, document ruthlessly, and make sure every change is verified.
- Verification is the thing to focus on: a team should aim to verify several approaches quickly rather than a single one.
- Senior engineers should train the AI so the diffs are right the first time, reducing the need for human review.

## Content

Martin Fowler: 29 Apr 2026

Chris Parsons has updated [his guide on using AI to code](https://www.chrismdp.com/coding-with-ai/). This is his third update; what I like about it is that he gives a lot of concrete information about how he uses AI, with sufficient detail that we can learn from him. His advice also resonates with the better advice I’ve seen out there, so the article makes a good overview of the state of using AI for software development.

> I wrote the previous version of this post in March 2025, updated it once in August, and it has been linked from almost everything I have written about AI engineering since. The fundamentals from that post still hold: keep changes small, build guardrails, document ruthlessly, and make sure every change gets verified before it ships. One thing has had to move with the volume. “Verified” used to mean “read by you”. With modern agent throughput, it has to mean “checked by tests, by type checkers, by automated gates, or by you where your judgement matters”. The check still happens; it just does not always happen in your head.
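
To make that notion of “verified” concrete, here is a minimal sketch of an automated gate, my own illustration rather than anything from Parsons’s guide: a change only counts as verified once the tests, the type checker, and the linter all come back clean, leaving human judgement for what the tools cannot settle. The specific tools (`pytest`, `mypy`, `ruff`) are assumed for the example.

```python
# Hypothetical verification gate: a change only counts as "verified"
# once every automated check passes. Tool choices (pytest, mypy, ruff)
# are assumptions for the sake of the example.
import subprocess
import sys

CHECKS = [
    ["pytest", "-q"],          # behaviour: the tests still pass
    ["mypy", "src"],           # types: the type checker finds no errors
    ["ruff", "check", "src"],  # static analysis: no outstanding warnings
]

def verify() -> bool:
    """Run each check in turn; fail fast on the first one that breaks."""
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"gate failed: {' '.join(cmd)}")
            return False
    return True

if __name__ == "__main__":
    # An agent (or a pre-commit hook) can run this before asking a human.
    sys.exit(0 if verify() else 1)
```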

Like Simon Willison, he makes a clear distinction between vibe coding, where you don’t look at or care about the code, and agentic engineering. He recommends either Claude Code or Codex CLI. He considers the inner harness provided by his preferred tools to be a key part of their advantage.

He sees verification as the key thing to focus on:

> A team that can generate five approaches and verify all five in an afternoon will outpace a team that generates one and waits a week for feedback. The game is not “how fast can we build” any more. It is “how fast can we tell whether this is right”. That shifts where to invest. Build better review surfaces, not better prompts. Make feedback unnecessary where you can by having the agent verify against a realistic environment before it asks a human, and make feedback instant where you cannot.

The key role of the programmer is in training the AI to write software properly, and the most important thing skilled agentic programmers can do is pass that skill on to other developers.

> And if you are a senior engineer worried that your job is quietly turning into approving diffs: it is. The way out is to train the AI so the diffs are right the first time, to make yourself the person on the team who shapes the harness, and to make that work the visible thing you are measured on. That role compounds in a way that reviewing never will.

❄❄❄❄❄

Early this month Birgitta Böckeler wrote a superb [article on Harness Engineering](https://martinfowler.com/articles/harness-engineering.html). (That’s not just my opinion, judging by the crazy traffic it’s attracted.) Birgitta has now recorded a video [discussion with Chris Ford on Harness Engineering](https://www.youtube.com/watch?v=uLWOLmeHOSE), which is well worth a watch.

In it they focus on discussing the role of computational sensors in the harness, such as static analysis and tests.

> LLMs are great for exploratory and fuzzy rules, but once you have something that really is objective, converting it to a formal, unambiguous, deterministic format can give you more assurance

Birgitta did some experiments to explore the benefits of adding sensors, including a deep dive on using static analysis. She found it’s even more useful with agents, since they can actually address _every_ warning and don’t slack off the way humans do.
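
As a rough illustration of what plugging such a sensor into a harness might look like (my own sketch, not Birgitta’s actual setup), the loop below runs a linter after the agent’s change and feeds every remaining finding back as the next instruction, stopping only when the sensor reports nothing. The choice of `ruff` and the `run_agent` call are assumptions for the example.

```python
# Sketch of a deterministic "sensor" in an agent loop: after each change,
# run static analysis and hand *every* finding back to the agent.
import json
import subprocess

def run_agent(prompt: str) -> None:
    """Hypothetical stand-in for the agent call (Claude Code, Codex CLI, ...)."""
    raise NotImplementedError("wire this up to whatever agent you use")

def lint_findings(path: str) -> list[str]:
    """Run ruff (an assumed choice of linter) and collect its findings."""
    result = subprocess.run(
        ["ruff", "check", path, "--output-format", "json"],
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout or "[]")
    return [f"{f['filename']}:{f['location']['row']}: {f['message']}" for f in findings]

def drive(path: str, task: str, max_rounds: int = 5) -> bool:
    run_agent(task)  # ask the agent to make the change
    for _ in range(max_rounds):
        findings = lint_findings(path)
        if not findings:
            return True  # the deterministic sensor is satisfied
        # Unlike a human reviewer, the agent is asked to clear every finding.
        run_agent("Fix all of these findings:\n" + "\n".join(findings))
    return False
```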

❄❄❄❄❄

Adam Tornhill considers an age-old question: [how long should a function be?](https://adamtornhill.substack.com/p/how-long-should-a-function-be-and) This question is still relevant in the age of agentic programming.

> AI models do not “understand” code the way humans do. They infer meaning from patterns in tokens and depend heavily on what is explicitly expressed in the code.
> 
> 
> Research shows that naming plays a critical role. When meaningful identifiers are replaced with arbitrary names, model performance drops significantly. Current models rely heavily on literal features—names, structure, and local context—rather than inferred semantics.

Like me, he doesn’t think the answer is to think about how many lines should be in a function; instead it’s all about providing better structure. He has a good example of how a well-chosen function defines useful concepts, where a function wraps four lines of code, returning a new concept that enters the vocabulary of the program.

> Functions are the first unit of structure in a codebase. They define how logic is grouped, how intent is communicated, and how change is localized. If the function boundaries are wrong, everything built on top of them becomes harder to understand and harder to evolve.

This fits with my writing that the key to function length is the [separation between intention and implementation](https://martinfowler.com/bliki/FunctionLength.html):

> If you have to spend effort into looking at a fragment of code to figure out what it’s doing, then you should extract it into a function and name the function after that “what”. That way when you read it again, the purpose of the function leaps right out at you, and most of the time you won’t need to care about how the function fulfills its purpose - which is the body of the function.
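
A small illustration of that separation (my own example, not one from Adam’s article): the inline version forces the reader to decode the discount logic just to learn what the total is made of, while the extracted version names the fragment after its intention and adds that name to the program’s vocabulary.

```python
# Before: the reader has to work out *how* the subtotal is computed
# just to learn *what* checkout_total returns.
def checkout_total(order):
    subtotal = sum(item.price * item.quantity for item in order.items)
    if order.customer.is_loyalty_member and subtotal > 100:
        subtotal *= 0.95
    return subtotal + order.shipping_cost

# After: the fragment is extracted and named after its intention, so
# "discounted subtotal" becomes part of the program's vocabulary.
def checkout_total(order):
    return discounted_subtotal(order) + order.shipping_cost

def discounted_subtotal(order):
    """Loyalty members get 5% off once the subtotal passes 100."""
    subtotal = sum(item.price * item.quantity for item in order.items)
    if order.customer.is_loyalty_member and subtotal > 100:
        subtotal *= 0.95
    return subtotal
```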

❄❄❄❄❄

Many folks in my feeds recommended Nilay Patel’s post on [Why People Hate AI](https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation). He thinks that many people in the software world have “software brain”:

> The simplest definition I’ve come up with is that it’s when you see the whole world as a series of databases that can be controlled with the structured language of software code. Like I said, this is a powerful way of seeing things. So much of our lives run through databases, and a bunch of important companies have been built around maintaining those databases and providing access to them.
> 
> 
> Zillow is a database of houses. Uber is a database of cars and riders. YouTube is a database of videos. The Verge’s website is a database of stories. You can go on and on and on. Once you start seeing the world as a bunch of databases, it’s a small jump to feeling like you can control everything if you can just control the data.

Software Brain sorts people into databases, and oddly enough, a lot of people don’t like that. Which is why so many polls reveal the negative feelings folks have about the AI movement.

> Even taking the time to consider how much of your life is captured in databases makes people unhappy. No one wants to be surveilled constantly, and especially not in a way that makes tech companies even more powerful. But getting everything in a database so software can see it is a preoccupation of the AI industry. It’s why all the meeting systems have AI note takers in them now.

Patel draws a similarity that I’ve often made - that between programmers and lawyers. Lawyers who draw up contracts are creating a protocol for how the parties in the contract should behave. As Patel puts it:

> If the heart of software brain is the idea that thinking in the structured language of code can make things happen in the real world, well, the heart of lawyer brain is that thinking in the structured legal language of statutes and citations can also make things happen. Hell, it can give you power over society.

The difference, of course, is that law is non-deterministic. Litigation is resolving what happens when people have different ideas about how those contracts should execute.

❄❄❄❄❄

I was chatting recently with a company who wanted to use AI to make sense of their internal data. The potential was great, but the problem was that the data was a mess. People put stuff into fields that didn’t make sense, and there was little consistency about how people classified important entities. As someone commented:

> the hardest problem with internal data is precise, consistent definitions

You can imagine my astonishment. (i.e. none at all - this has been a constant theme during all my decades with computers.) The difficulty of getting such definitions undermines many of the hopes of Software Brain.

This resonates with our relationship with LLMs when programming. Precise and consistent definitions strike me as crucial to effective communication with The Genie. These definitions need to grow in the conversation, and be tended over time. Conceptual modeling will be a key skill for agentic programming and whatever comes next. (At least I hope it will, since it’s a part of programming I really enjoy.)
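
One modest way to tend those definitions (a sketch of my own, not something from that conversation) is to pin the shared vocabulary down in code, so that people and the LLM work from the same closed set of terms rather than free-text fields:

```python
# Hypothetical example of making a shared definition precise: instead of a
# free-text "status" field that everyone fills in differently, the concept
# is pinned down as an explicit, closed vocabulary.
from dataclasses import dataclass
from enum import Enum

class CustomerStatus(Enum):
    PROSPECT = "prospect"    # has shown interest, no signed contract yet
    ACTIVE = "active"        # has at least one contract currently in force
    LAPSED = "lapsed"        # all contracts ended more than 90 days ago

@dataclass
class Customer:
    name: str
    status: CustomerStatus   # not free text: only the terms defined above

# Anything outside the agreed vocabulary fails loudly instead of quietly
# becoming another inconsistent classification.
alice = Customer(name="Alice", status=CustomerStatus.ACTIVE)
```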

❄❄❄❄❄

Patel’s article refers to Ezra Klein’s post about the [new feeling in San Francisco](https://www.nytimes.com/2026/03/29/opinion/ai-claude-chatgpt-gemini-mcluhan.html?unlocked_article_code=1.eFA.abX-.lGEOqsmKZVY_&smid=url-share).

> You might think that A.I. types in Silicon Valley, flush with cash, are on top of the world right now. I found them notably insecure. They think the A.I. age has arrived and its winners and losers will be determined, in part, by speed of adoption. The argument is simple enough: The advantages of working atop an army of A.I. assistants and coders will compound over time, and to begin that process now is to launch yourself far ahead of your competition later. And so they are racing one another to fully integrate A.I. into their lives and into their companies. But that doesn’t just mean using A.I. It means making themselves legible to the A.I.

That legibility is the heart of Patel’s observation. That’s why I see many colleagues of mine dumping all their email, meeting notes, slide decks and everything else into files that AI can read and work with. This plays to the strengths of AI: we know that AI is really good at querying unstructured information. So I can figure out what’s buried in my notes in a way that’s far more effective than hoping I’m typing the right search regex.

I’ve been using Gemini a fair bit for exactly this on the web, finding it easier to write a question to it than to throw search terms at Google. Gemini keeps a record of my past requests, and uses that to help it tune what I’m looking for. As Klein observes:

> [The AI] is constantly referring back to other things it knows, or thinks it knows, about me. Sycophancy, in my experience, has given way to an occasionally unsettling attentiveness; a constant drawing of connections between my current concerns and my past queries, like a therapist desperate to prove he’s been paying close attention.
> 
> 
> The result is a strange amalgam of feeling seen and feeling caricatured.

Like me, Klein is a writer, and faces the same temptation that I do when I think about AI and writing. Maybe instead of toiling over articles, I should ask an LLM to create an `AGENTS.md` file that summarizes my writing style, and every few days ask it to compose an article on some subject, read it, tweak it, and then publish my erudite musings. But that’s not at all appealing to me. I want understanding to grow in _my brain_, not the LLM’s transient session. Writing to explain my thinking to others is how I refine that thinking, “chiseling that idea into something publishable” as Klein puts it. To have an AI write for me is to cripple my own mind.

