The tradeoff of higher velocity for less enjoyment may feel less welcome once it becomes the new baseline and the expectation of employers and customers. The excitement of getting a day's work done in an hour* (for example) is likely to fade once the expectation is to produce eight days' worth of the old output every day.
I suspect it doesn't matter how we feel about it mind you. If it's going to happen it will, whether we enjoy the gains first or not.
* setting aside whether this is currently possible, or whether we're actually trading away more quality than we realise.
> The excitement of getting a day's work done in an hour* (for example) is likely to fade once the expectation is to produce eight days' worth of the old output every day.
That dumb attitude (which I understand you’re criticising) of “more more more” always reminds me of Lenny from the Simpsons moving fast through the yellow light, with nowhere to go.
The most challenging thing I'm finding about working with LLM-based tools is the reduction in enjoyment. I'm in this business because I love it, and I'm worried about that going forward.
> The tradeoff of higher velocity for less enjoyment may feel less welcome once it becomes the new baseline and the expectation of employers and customers. The excitement of getting a day's work done in an hour* (for example) is likely to fade once the expectation is to produce eight days' worth of the old output every day.
That's why we should be against it but hey, we can provide more value to shareholders!
Short-term, automated tech debt creation will yield gains.
Long term, the craftsperson writing excellent code will win. It is now easier than ever to write excellent code, for those who are able to choose their pace.
Given that it's 2025 and companies saddled with tech debt continue to prioritize speed of delivery over quality, I doubt the craftsperson will win.
If anything we'll see disposable systems (or parts), and the job of an SE will become even more like a plumber's: connecting prebuilt business logic to prebuilt systems libraries. When one of those fails, have AI whip up a brand new one instead of troubleshooting the existing one(s). After all, for business leaders it's the output that matters, not the code.
For 20+ years business leaders have been eager to shed the high overhead of developers via any means necessary while ignoring their most expensive employees' input. Anyone remember Dilbert? It was funny as a kid, and is now tragic in its timeless accuracy a generation later.
Yes there will be a class of developer like that, but it would only be considered winning if you're satisfied with climbing some artificial institutional hierarchy.
From the masterpiece The Tragedy of Man, describing a future where everything is done in the name of efficiency:
THE GREYBEARD
You left your workroom in great disarray.
MICHELANGELO
Because I had to fabricate the chair-legs
To a quality as poor as it can be.
I appealed at length: let me modify it,
Let me engrave some ornaments on it.
They did not permit it. I wished at least
To change the chair-back, but all was in vain.
I was very close to becoming a madman
And I left the pains and my workroom, too. (stands back)
THE GREYBEARD
You get house arrest for this disorder
And will not enjoy this nice and warm day.
In my experience, listening to music engages the creative part of your brain and severely limits what you can do, but this is not readily apparent.
If I listen to music, I can spend an hour CODING YEAH! and be all smug and satisfied, until I turn the music off and discover that everything I've coded is unnecessary and there is an easier way to achieve the same goal. I just didn't see it, because the creative part of my brain was busy listening to music.
From the post, it sounds like the author discovered the same thing: if you use AI to perform menial tasks (like coding), all that is left is thinking creatively, and you can't do that while listening to music.
I describe it slightly differently. Similar to what the author described, I'll first plan and solve the problem in my head, lay out a broad action plan, and then put on music to implement it.
But for me, the music serves something akin to the clock in a microcontroller (or even a CPU): it provides a rhythm my brain syncs to. I'm not even paying attention to the music itself, but it stops me from getting distracted and keeps me focused on the task at hand.
I just think it's distracting. I get caught up listening to the lyrics and kind of mentally singing along, stuff like that which disrupts my thought and distracts from what I actually want to be thinking about.
I think this is individual; I have the same problem in social settings - if I'm having a conversation and a song I like is playing in the background, I sometimes stop listening to the conversation and focus on the music instead, unintentionally.
My solution is to listen to music without vocals when I need to focus. I've had phases where I listen to classical music, electronic stuff, and lately I've been using an app I found called brain.fm, which I think just plays AI-generated lo-fi or whatever, and there's some binaural beats thing going on as well that's supposed to enhance focus, creativity, etc. I like it, but sometimes I go back to regular music just because I miss listening to something I actually like.
I'm sorry but that's nonsense. Listening to music is not a creative process, it does not at all take away creativity from somewhere else.
I've never, ever, ever once in 40 years of coding listened to music while coding and later found the code "unnecessary" or anything of the sort.
I engage in many creative pursuits outside of coding, always while listening to music, and I can confidently say that music has never once interfered in the process or limited the result in any way.
I’d probably drop GenAI before I dropped the music that allows me to focus. Also, at this stage of my career, I mainly code for fun, and blasting music across the house is part of it.
> writing a blurb that contains the same mental model
Good nugget. Effective prompting, aside from context curation, is about providing the LLM with an approximation of your world model and theory, not just a local task description. This includes all your unstated assumptions, interaction between system and world, open questions, edge cases, intents, best practices, and so on. Basically distill the shape of the problem from all possible perspectives, so there's an all-domain robustness to the understanding of what you want. A simple stream of thoughts in xml tags that you type out in a quasi-delirium over 2 minutes can be sufficient. I find this especially important with gpt-5, which is good at following instructions to the point of pedantry. Without it, the model can tunnel vision on a particular part of the task request.
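As an illustration (the project, constraints, and tag names here are hypothetical, not from the original comment), such a stream-of-thought prompt might look like:

```xml
<context>
We sync orders from a storefront into an ERP. Latency matters far less than
never double-writing an order; the ERP API is flaky, and retries are cheap.
</context>
<assumptions>
Order IDs are globally unique. Partial failures must be resumable.
Don't add new dependencies; the team dislikes heavy retry frameworks.
</assumptions>
<task>
Write the retry wrapper for the sync step. Prefer idempotent upserts.
</task>
```

The point isn't the tags themselves but that trade-offs, constraints, and unstated assumptions get spelled out instead of being left for the model to guess.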
It's not parody. I'm trying to provide the LLM with what's missing, which is a theory of how the system fits into the world: https://pages.cs.wisc.edu/~remzi/Naur.pdf
Without this it defaults to being ignorant about the trade-offs that you care about, or the relevant assumptions you're making which you think are obvious but really aren't.
The "simple stream" aspect is that each task I give to the LLM is narrowly scoped, and I don't want to put all aspects of the relevant theory that pertains just to that one narrow task into a more formal centralized doc. It's better off as an ephemeral part of the prompt that I can delete after the task is done. But I also do have more formal docs that describe the shared parts of the theory that every prompt will need access to, which is fed in as part of the normal context.
I use Zoom rather than Teams, but have no problems playing background music with Spotify. Just have to make sure that "share computer audio" is not enabled when sharing your screen. Also, when I was using the mic of my Bluetooth headphones, any music played would be mono and lower quality due to Bluetooth bandwidth. Since moving to a dedicated mic on my desk, the Bluetooth headphones are output-only and back to good-quality stereo (macOS and Bose QC35).
Not too long ago, I installed a system mixer/EQ on my work computer so I could mix ambient-ish music with Zoom calls while giving space to the (meeting room's) vocals. It works great.
Sounds great…
I’m a hobbyist audio engineer so can’t resist tinkering.
My mic input goes to Rogue Amoeba’s Audio Hijack which allows a chain of VST plugins to be used. I use a gentle noise gate, followed by Supertone Clear. This is excellent for removing any noise mixed in with my voice. It handles the dehumidifier near me with ease, and can also handle the portable aircon on heatwave days, which can be very loud. Next in the chain is some gentle and transparent compression, and then the output goes to the free Blackhole virtual audio device. Zoom then picks up that device. Tweaked Zoom settings and “original sound for musicians” button then stops it from compromising the sound as much as possible. I am tempted to have some EQ plugin tweaks too, but I love the character of the mic so I don’t mess around with it.
EQing the music played sounds interesting. I’ll look at the options. I tend to just have it at a level low enough that I can hear all speakers on the call.
It definitely changed how I get into a flow state. But music still works, maybe even better, when coding with AI (listening to: techno, electro, EDM). Generally my flow is to sit down, make a small plan of what I will work on, and fire off 2 agents to work on different lower-hanging-fruit parts of the code (it takes 2-10 mins for them to complete). Then while that's running, I map out some bigger tasks.
Agents finish, I queue them up with new low-hanging fruit while I architect the much bigger tasks, then fire that off -> review smaller tasks. It really is a dance, but flow is much more easily achieved when I do get into it; hours really just melt together. The important thing is to put my phone away and block any social media or sites I frequent, because it's easy to get distracted when agents are just producing code and you're sitting on the sidelines.
Whenever I need some sort of quick data pipeline to convert a file into another format, or do some batch transformation, or transform some sort of interface description into another syntax - things that would normally require me to craft a grep, awk, tr, etc. pipeline - I can usually just paste a sample of the data and, with a plain-language description, get what I need. If it's not working well, I can break it up into smaller steps.
In my experience, it seems the people who have bad results have been trying to get the AI to do the reasoning. I feel like if I do the reasoning, I can offload menial tasks to the AI, and little annoying things that would take one or two hours start to take a few minutes.
The ones who know what they want to do, how it should be done, but can't really be arsed to read the man pages or API docs of all the tools required.
These people can craft a prompt (prompt engineering :P) for the LLM that gets good results pretty much directly.
LLMs are garbage in garbage out. Sometimes the statistical average is enough, sometimes you need to give it more details to use the available tools correctly.
Like the fact that `fd` has the `--exec` (`-x`) and `--exec-batch` (`-X`) options; there's no need to use xargs or pipes with it.
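A minimal sketch of the difference (paths hypothetical; the fd invocations are shown as comments next to their find/xargs equivalent, in case fd isn't installed):

```shell
# Hypothetical demo dir: compress every .log file under it.
mkdir -p /tmp/fd_demo
touch /tmp/fd_demo/a.log /tmp/fd_demo/b.log

# The classic hand-crafted pipeline:
find /tmp/fd_demo -name '*.log' -print0 | xargs -0 gzip -f

# With fd the pipe is unnecessary:
#   fd -e log . /tmp/fd_demo -x gzip -f    # --exec: one gzip per file
#   fd -e log . /tmp/fd_demo -X gzip -f    # --exec-batch: one gzip for all files
```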
Every kind of project is faster with AI, because it writes the code faster.
Then you have to QA it for ages to discover the bugs it wrote, but the initial perception of speed never leaves you.
I think I'm overall slower with AI, but I could be faster if I had it write simple functions that I could review one by one, and have the AI compose them the way I wanted. Unfortunately, I'm too lazy to be faster.
With web apps playwright-mcp[0] is essential IMO. It lets the AI Agent check its own work before claiming it's done.
With that it can see any errors in the console, click through the UI and take screenshots to analyse how it looks giving it an independent feedback loop.
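For reference, wiring it up is typically just an MCP server entry in the agent client's config (the exact file location and shape vary by client; this is the form from the playwright-mcp README):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```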
Pretty much what somebody else said: AI takes over simple tasks, the "fluff" around the business logic, error handling, stuff like that, so I can focus on doing the harder stuff at the core.
> 90% of what the average (or median) coder does isn't in any way novel or innovative. It's just API Glue in one form or another.
I hear this from people extolling the virtue of AI a lot, but I have a very hard time believing it. I certainly wouldn't describe 90% of my coding work as boilerplate or API glue. If you're dealing with that volume of boilerplate/glue, isn't it incumbent upon you to try and find a way to remove that? Certainly sometimes it isn't feasible, but that seems like the exception encountered by people working on giant codebases with a very large number of contributors.
I don't think the work I do is innovative or even novel, but it is nuanced in a way I've seen Claude struggle with.
To be more exact, 90% of the _code_ I write is mostly just different types of API glue. Get data from this system, process it and put it in another system.
It's the connectors that are 90-95% AI chow, just set it to task with a few examples and it'll have a full CRUD interface for your data done while you get more snacks.
Then you can spend _more_ of your limited time on the 10% of code that matters.
That said, less than 50% of my actual time spent on the clock is spent writing code. That's the easiest part of the job. The rest is coordinating and planning and designing.
Because I need to have a controller that does CRUD operations.
There's a certain amount of code I need to write just for the basic boilerplate of receiving the data and returning a result from the endpoint before I can get to the meat of it.
IIRC there are no languages where I can just open an empty file and write "put: <business logic>" and it magically knows how to handle everything correctly.
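A rough sketch of that ceremony in plain Python (names and validation rules hypothetical; a real framework adds routing and serialization on top, but the shape is the same):

```python
import json


def handle_put(raw_body: str) -> dict:
    """Boilerplate around a single PUT endpoint:
    parse, validate, call the business logic, wrap the response."""
    try:
        payload = json.loads(raw_body)  # deserialize the request body
    except json.JSONDecodeError:
        return {"status": 400, "body": {"error": "invalid JSON"}}
    if "name" not in payload:  # minimal input validation
        return {"status": 422, "body": {"error": "missing 'name'"}}
    result = update_record(payload)  # the actual business logic
    return {"status": 200, "body": result}  # wrap the result for the caller


def update_record(payload: dict) -> dict:
    # Placeholder for the "meat" - everything above it is ceremony.
    return {"name": payload["name"], "updated": True}
```

Everything except `update_record` is the kind of glue an LLM can churn out reliably.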
I'm slowed down (but perhaps sped up overall due to lower rewrites/maintenance costs) on important bits because the space of possibilities/capabilities is expanded, and I'm choosing to make use of that for some load bearing pieces that need to be durable and high quality (along the metrics that I care about). It takes extra time to search that space properly rather than accept the first thing that compiles and passes tests. So arguably equal or even lower velocity, but definitely improved results compared to what I used to be capable of, and I'm making that trade-off consciously for certain bits. However that's the current state of affairs, who knows what it'll look like in 1-2 years.
I’m building a moderately complex system with FastAPI + PG + Prefect executing stuff on Cloud Run, and so long as I invest in getting the architecture and specs right, it’s really a dream how much of the heavy lifting and grunt work I can leave to Claude Code. And thank god I don’t have to manage Alembic by myself.
There's a local website that sells actual physical Blu-rays. Their webshite is a horror show of Javascript.
I had Claude Code build me a Playwright+Python-based scraper that goes through their movie section and stores the data locally in an SQLite database, plus a web UI for me to watchlist specific movies and add price ranges so I'm alerted when a price changes.
Took me maybe a total of 30 minutes of "active" time (4-5 hours real-time, I was doing other shit at the same time) to get it to a point where I can actually use it.
Basically small utilities for limited release (personal, team, company-internal) is what AI coding excels at.
Like grabbing results from a survey tool, adding them to a google sheet, summarising the data to another tab with formulas. Maybe calling an LLM for sentiment analysis on the free text fields.
Half a day max from zero to Good Enough. I didn't even have to open the API docs.
Is it perfect? Of course not. But the previous state was one person spending half a day for _each_ survey doing that manually. Now the automation runs in a minute or so, depending on whether Google Sheets API is having a day or not =)
I set up a model in DBT that has 100 columns. I need to generate a schema for it (old tools could do this) with appropriate tests and likely data types (old tools struggled with this). AI is really good at this sort of thing.
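For reference, this is the kind of dbt schema.yml an LLM can draft from a model (column names hypothetical; recent dbt versions also accept `data_tests:` in place of `tests:`):

```yaml
version: 2
models:
  - name: orders_wide        # hypothetical 100-column model
    columns:
      - name: order_id
        data_type: integer
        tests: [not_null, unique]
      - name: order_total
        data_type: numeric
        tests: [not_null]
      # ...repeated for the remaining ~98 columns, which is
      # exactly the grunt work an LLM can draft for review
```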
It’s a lot more high-level executive functioning now, instead of grinding through endless syntax and boilerplate. Easy to mindlessly code to music, much harder to think about what you want to do next, and evaluate if the result you just got is what you really wanted.
One problem I've encountered is that my social media usage has increased. The time spent waiting for the agent to write code is kinda silly :) You can't switch to doing other things, as that shifts your focus away from the code, so usually I just dumb-scroll for a minute.
I let the AI first generate an outline of how it would do it as markdown. I adapt this and then let it add details in additional markdown files about technical stuff, e.g. how to use a certain SDK and so on. I correct all of these.
And then I let the AI generate the classes of the outline one by one.
What happened to coding for joy in your free time? At work I do whatever the company wants as long as I get my money at the end of the month. Java? Sure boss. Golang? Let’s do it. LLMs? Whatever you want. TDD? Yep.
At home I still plan and devise my own worlds with joy. I may use LLMs for boring or repetitive tasks, or help or explanation; but I still can code better than the day before.
The line that stood out for me was that "a 4-hour session of AI coding is more cognitively intense than a 4-hour session of non-AI coding."
Many programmers are rejecting AI coding because they miss the challenge they enjoy getting from conventional programming but this author finds it even more challenging. Or perhaps challenging in a different way?
There is a distinction, I believe, between challenging and focusing. The difference lies in difficulty (the former being more difficult) and workload (the latter being more intellectually labor-intensive), which is an interesting way to frame intellectual menial labor as distinct from intellectual craft.
I think there are (at least) two types of programmer - I am the kind of programmer who wants everything done right. Others don't care as long as it works and the boss is happy.
I suspect that the type of programmer who enjoys vibe coding is the latter. For me it's pretty tiring to explain everything in excruciating detail, it's often easier to just write the code myself rather than explain in English how to write it.
It feels like I am just doing the hard part of programming all the time - deciding how the app should work and how the code should be structured etc, and I never get those breaks where I just implement my plan.
My theory, as a non-scientist, is that you need a different part of the brain to think about AI prompts compared to coding yourself. Or maybe whatever thought process you need for coding intersects with the part that enjoys listening to music, and because of that intersection you can't focus on both at the same time.
I actually still listen to music heavily when in the zone with AI coding. Totally agree that the focus time feels much more intense now than it did before AI.
CAN you get into a state of flow when directing an LLM? I don't have a lot of experience using LLMs to code, but it always feels like I'm coaching a junior staff member. No way to flow that, IMHO.
It’s the opposite for me. I’ve never been able to listen to music while coding as my thoughts would drown it out or it would keep me from thinking so I’d shut it off. However if I am vibe coding my brain is basically idle and can handle some music
I'm curious which models the OP is using that produce code so quickly and accurately? I mostly use Claude Code, which is accurate, but it isn't very fast. I certainly don't feel like I'm producing piles of code with it.
> Absolutely—I feel like I can ship at a crazy velocity now, like I have a team of interns at my disposal to code up my every silly demand.
I also wonder what type of simple CRUD apps people build that see such a performance gain. They must be building well-understood projects or be incredibly slow developers for LLMs to have such an impact, as I can't relate to this at all.
I wonder whether, looking back on this period in 10 years' time, we'll be able to say definitively whether the wide spectrum of responses to LLMs was a matter of perception or a real feature of our differing jobs.
That’s right, it’s good at things that are common. If your job is mostly filled with uncommon tasks, it won’t be good at helping you.
But for the rest of us, who have a mix of common/boring and uncommon/interesting tasks, accelerating the common ones means spending more time (proportionally) on less common tasks.
Unfortunately we don’t seem to great at classifying tasks as common or uncommon, and there are bored engineers who make complex solutions just to keep their brain occupied.
I dunno, it is a bit different leveraging a model, but I still listen to music coding. It does depend on the music. I need to listen to really brutal stuff (Arsis, Thyrfing, Dissection, etc.) to focus, though
It's hard to get in the zone with an LLM doing crazy stuff.
For instance, this week when setting up a Django/Wagtail project, GPT helpfully went ahead and wrote the migration files out by hand instead of running "makemigrations". Otherwise it did a bang-up job and saved me a couple of hours.
Just no way I can get in the zone wrangling that kind of thing all day.
But I'm not sure getting in the coding zone frequently was all that mentally healthy so oh well.
> For frontend code and my side projects, AI coding seems to be even more effective and actually reduces the cognitive load, winning in all dimensions.
Can we see this frontend code? For research purposes, of course.
That's an argument I had with a friend last year. I told him generative AI will make writing code easier, but the life of whoever is writing it far worse. Writing code without AI is done with some sort of due diligence: you memorize some stuff, look up other stuff in the docs or online, and you take some time actually solving the problem you have. If you succeed, you'll have spent the needed time at YOUR pace, with the intrinsic reward of feeling good that you achieved something. With AI, on the other hand, you are in semi-cheat mode, throwing prompt after prompt, trying to match someone/something else's pace - zero reward, and more mental exhaustion.
The best approach is to use AI only when you are stuck and looking for potential solutions, but we all know that is not going to happen unless you have extreme self-control.
I don't listen to music while doing code reviews either. It also happens to be my least favourite part of the job. The LLM agents just make it feel like I'm constantly code reviewing and I don't think it makes me more productive overall.