I’m finding that whether this process works well is a measure (and a function) of how well-factored and disciplined a codebase is in the first place. Funnily enough, LLMs do seem to have a better time extending systems that are well-engineered for extensibility.
That’s the part which gives me optimism, and even more enjoyment of the craft: quality pays back so immediately that the extra effort becomes much easier to justify, and having these tools at our disposal lowers the ‘activation energy’ for necessary rework that may previously have seemed too monumental.
If a codebase is in good shape for people to produce high-quality work, then the machines can too. Clear, up-to-date, close-to-the-code, low-redundancy documentation; self-documenting code and tests that prioritize expressing intent over cleverness; consistent patterns of abstraction that don’t force jarring context switches from one area to the next; etc.
All this stuff is so much easier to lay down with an agent loaded up on the relevant context too.
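To make the ‘intent over cleverness’ point concrete, here’s a hypothetical sketch (the domain, names, and rule are mine, not from the comment or the article): the same check written twice, once compressed, once with obvious seams for an agent or a new teammate to extend.

```python
# A deliberately "clever" version: correct, but the intent is implicit
# and adding a new criterion means reverse-engineering the expression.
def eligible(u):
    return u.get("age", 0) >= 18 and not u.get("banned") and len(u.get("orders", [])) > 0


# The same rule, written to express intent. Each predicate names a
# business concept, so extending eligibility is an obvious, local change.
MINIMUM_AGE = 18  # hypothetical threshold for this example

def is_adult(user: dict) -> bool:
    return user.get("age", 0) >= MINIMUM_AGE

def is_in_good_standing(user: dict) -> bool:
    return not user.get("banned", False)

def has_order_history(user: dict) -> bool:
    return len(user.get("orders", [])) > 0

def is_eligible_for_offer(user: dict) -> bool:
    """A user qualifies if they are an adult, in good standing,
    and have purchased before."""
    return is_adult(user) and is_in_good_standing(user) and has_order_history(user)
```

The second version is the kind an agent loaded up on context can extend safely: the seams are named, so a new criterion slots in without disturbing the rest.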
Edit: oh, I see you said as much in the article :)