
It's not even true that the RSA authors were the founders of the company we know as RSA. The RSA founders' company was acquired by Security Dynamics in the mid-1990s, which then took over the name.

Tip: I very often use AI for inspiration. In this case, I ended up keeping a lot (not all) of the UI code it made, but I will very often prompt an agent, throw away everything it did, and redo it myself (manually!). I find the "zero to one" stage of creation very difficult and time consuming and AI is excellent at being my muse.

This right here is the single biggest win for coding agents. I see and directionally agree with all the concerns people have about maintainability and sprawl in AI-mediated projects. I don't care, though, because the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races. It's getting to that golden moment that constitutes 80% of what's costly about programming for me.

This is the part where I simply don't understand the objections people have to coding agents. It seems so self-evidently valuable --- even if you do nothing else with an agent, even if you literally throw all the code away.

PS

Put a weight on that bacon!


> This is the part where I simply don't understand the objections people have to coding agents. It seems so self-evidently valuable --- even if you do nothing else with an agent, even if you literally throw all the code away.

It sounds like the blank page problem is a big issue for you, so tools that remove it are a big productivity boost.

Not everyone has the same problems, though. Software development is a very personal endeavor.

Just to be clear, I am not saying that people in category A or category B are better/worse programmers. Just that everyone’s workflow is different so everyone’s experience with tools is also different.

The key is to be empathetic and trust people when they say a tool does or doesn’t work for them. Both sides of the LLM argument tend to assume everyone is like them.


Just this week I watched an interview with Mitchell about his dev setup and when asked about using neovim instead of an IDE he said something along the lines of "I don't want something that writes code for me". I'm not pointing this out as a criticism, but rather that it's worth taking note that an accomplished developer like him sees value in LLMs that he didn't see in previous intellisense-like tooling.

Not sure exactly what you're referring to, but I'm guessing it may be this interview I did 2 years ago: https://youtu.be/rysgxl35EGc?t=214 (timestamp linked to the LLM-relevant section). I'm genuinely curious because I don't quite remember saying the quote you're attributing to me. I'm not denying it, but I'd love to know more of the context. :)

But, if it is the interview from 2 years ago, it revolved more around autocomplete and language servers. Agentic tooling was still nascent so a lot of what we were seeing back then was basically tab models and chat models.

As the popular quote goes, "When the Facts Change, I Change My Mind. What Do You Do, Sir?"

The facts and circumstances have changed considerably in recent years, and I have too!


It was this one: https://sourcegraph.com/blog/dev-tool-time-mitchell-hashimot...

They even used the quote as the title of the accompanying blog post.

As I say, I didn't mean this as a gotcha or anything; I totally agree with the change and have done similarly. I've always disabled autocomplete, tooltips, suggestions, etc., but now I am actively using Cursor daily.


Yeah understood, I'm not taking it negatively, I just genuinely wanted to understand where it came from.

Yeah this is from 2021 (!!!) and is directly related to LSPs. ChatGPT didn't even get launched until Nov 2022. So I think the quote doesn't really work in the context of today, it's literally from an era where I was looking at faster horses when cars were right around the corner and I had not a damn clue. Hah.

Off topic: I still dislike [most] LSPs and don't use them.


Cognitive Dissonance. Still there, even in the best of us.

> the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races. [...] This is the part where I simply don't understand the objections people have to coding agents.

That's what's valuable to you. For me the zero to one part is the most rewarding and fun part, because that's when the possibilities are near endless, and you get to create something truly original and new. I feel I'd lose a lot of that if I let an AI model prime me into one direction.


Surely there are some things which you can’t be arsed to take from zero to one?

This isn’t selling your soul; it is possible to let AI scaffold some tedious garbage while also dreaming up cool stuff the old fashioned way.


> Surely there are some things which you can’t be arsed to take from zero to one?

No, not really: https://news.ycombinator.com/item?id=45232159

> This isn’t selling your soul;

There is a plethora of ethical reasons to reject AI even if it were useful.


OP is considering output productivity, but your comment is about the personal satisfaction of the process.

That's true, but when the work is rewarding, I also do it quite fast. When it's tedious tweaking, I have to force myself to keep on typing.

Also: productivity is for machines, not for people.


Tedious tweaking is my favorite thing to outsource to coding agents these days.

I concur here and would like to add that I worry less about sprawl when I know I can ask the agent to rework things in future. Yes, it will at times implement the same thing twice in two different files. Later, I’ll ask it to abstract that away to a library. This is frankly how a lot of human coding effort goes too.

I was talking about this the other day with someone - broadly I agree with this, they're absolutely fantastic for getting a prototype up so you can play with the interactions and just have something to poke at while testing an idea. There are two problems I've found with that, though - the first is that it's already a nightmare to convince management that something that looks and acts like the thing they want isn't actually ready for production, and the vibe-coded code is even less ready for production than my previous prototyping efforts.

The second is that a hand-done prototype still teaches you something about the tech stack and the implementation - yes, the primary purpose is to get it running quickly so you can feel how it works, but there's usually some learning you get on the technical side, and often I've found my prototypes inform the underlying technical direction. With vibe coded prototypes, you don't get this - not only is the code basically unusable, but you really are back to starting from scratch if you decide to move forward - you've tested the idea, but you haven't really tested the tech or design.

I still think they're useful - I'm a big proponent of "prototype early," and we've been able to throw together some surprisingly large systems almost instantly with the LLMs - but I think you've gotta shift your understanding of the process. Non-LLM prototypes tend to be around step 4 or 5 of a hypothetical 10-step production process, LLM prototypes are closer to step 2. That's fine, but you need to set expectations around how much is left to do past the prototype, because it's more than it was before.


> the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races

AI is an absolute boon for "getting off the ground" by offloading a lot of the boilerplate and scaffolding that one tends to lose enthusiasm for after having to do it for the 99th time.

> AI is excellent at being my muse.

I'm guessing we have a different definition of muse. I admit I'm speaking more about writing than coding here, but for myself, a muse is the veritable fount of creation - the source of ideas.

Feel free to crank the "temperature" on your LLM until the literal and figurative oceans boil off into space; at the end of the day you're still getting the ultimate statistical distillation.

https://imgur.com/a/absqqXI
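(For what it's worth, "temperature" is a concrete sampling parameter: the logits are divided by it before the softmax. A minimal sketch, with made-up logits for three tokens - nothing here comes from any particular model:

    // Divide logits by temperature, then softmax.
    function softmaxWithTemperature(logits: number[], temperature: number): number[] {
      const scaled = logits.map((l) => l / temperature);
      const max = Math.max(...scaled);                 // subtract max for numerical stability
      const exps = scaled.map((s) => Math.exp(s - max));
      const sum = exps.reduce((a, b) => a + b, 0);
      return exps.map((e) => e / sum);
    }

    console.log(softmaxWithTemperature([2.0, 1.0, 0.1], 0.5)); // sharp: the favoured token dominates
    console.log(softmaxWithTemperature([2.0, 1.0, 0.1], 5.0)); // flat: close to uniform

Higher temperature flattens the distribution toward uniform; it reshuffles the same learned statistics rather than adding information, which is the point being made above.)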


Agree – my personal rule is that I throw away any branches where I use LLM-generated code, and I still find it very helpful because of the speed of prototyping various ideas.

People get into this field for very different reasons.

- People who like the act and craftsmanship of coding itself. AI can encourage slop from other engineers and it trivializes the work. AI is a negative.

- People who like general engineering. AI is positive for reducing the amount of (mundane) code to write, but still requires significant high-level architectural guidance. It’s a tool.

- People who like product. AI can be useful for prototyping but won't be able to make a good product on its own. It's a tool.

- People who just want to build a MVP. AI is honestly amazing at making something that at least works. It might be bad code but you are testing product fit. Koolaid mode.

That’s why everyone has a totally different viewpoint.


Real subtle. Why not just write "there are good programmers and bad programmers and AI is good for bad programmers and only bad programmers"? Think about what you just said about Mitchell Hashimoto here.

I'm not sure that's a fair take.

I don't think it's an unfair statement that LLM-generated code typically is not very good - you can work with it and set up enough guard rails and guidance and whatnot that it can start to produce decent code, but out of the box, speed is definitely the selling point. They're basically junior interns.

If you consider an engineer's job to be writing code, sure, you could read OP's post as a shot, but I tend to switch between the personas they're listing pretty regularly in my job, and I think the read's about right.

To the OP's point, if the thing you like doing is actually crafting and writing the code, the LLMs have substantially less value - they're doing the thing you like doing and they're not putting the care into it you normally would. It's like giving a painter an inkjet printer - sure, it's faster, but that's not really the point here. Typically, when building the part of the system that's doing the heavy lifting, I'm writing that myself. That's where the dragons live, that's what's gotta be right, and it's usually not worth the effort to incorporate the LLMs.

If you're trying to build something that will provide long-term value to other people, the LLMs can reduce some of the boilerplate stuff (convert this spec into a struct, create matching endpoints for these other four objects, etc) - the "I build one, it builds the rest" model tends to actually work pretty well and can be a real force multiplier (alternatively, you can wind up in a state where the LLM has absolutely no idea what you're doing and its proposals are totally unhinged, or worse, where it's introducing bugs because it doesn't quite understand which objects are which).

If you've got your product manager hat on, being able to quickly prototype designs and interactions can make a huge, huge difference in what kind of feedback you get from your users - "hey try this out and let me know what you think" as opposed to "would you use this imaginary thing if I built it?" The point is to poke at the toy, not build something durable.

Same with the MVP/technical prototyping - usually the question you're trying to answer is "would this work at all", and letting the LLM crap out the shittiest version of the thing that could possibly work is often sufficient to find out.

The thing is, I think these are all things good engineers _do_. We're not always painting the Sistine Chapel, we also have to build the rest of the building, run the plumbing, design the thing, and try to get buy-in from the relevant parties. LLMs are a tool like any other - they're not the one you pull out when you're painting Adam, but an awful lot of our work doesn't need to be done to that standard.


> This is the part where I simply don't understand the objections people have to coding agents

Because I have a coworker who is pushing slop at unsustainable levels, and proclaiming to management how much more productive he is. It’s now even more of a risk to my career to speak up about how awful his PRs are to review (and I’m not the only one on the team who wishes to speak up).

The internet is rife with people who claim to be living in the future where they are now a 10x dev. Making these claims costs almost nothing, but it is negatively affecting my day-to-day and that of many others.

I’m not necessarily blaming these internet voices (I don’t blame a bear for killing a hiker), but the damage they’re doing is still real.


I don't think you read the sentence you're responding to carefully enough. The antecedent of "this" isn't "coding agents" generally: it's "the value of an agent getting you past the blank page stage to a point where the substantive core of your feature functions well enough to start iterating on". If you want to respond to the argument I made there, you have to respond to the actual argument, not a broader one that's easier (and much less interesting) to take swipes at.

My understanding of your argument is:

Because agents are good on this one specific axis (which I agree with and use fwiw), there’s no reason to object to them as a whole

My argument is:

The juice isn’t worth the squeeze. The small win (among others) is not worth the amounts of slop devs now have to deal with.


Sounds like a very poorly managed team.

I have to agree. My experience working on a team with mixed levels of seniority and coding experience is that everybody got some increase in productivity and some increase in quality.

The ones who spend more time developing their agentic coding as a skillset have gotten much better results.

In our team people are also more willing to respond to feedback because nitpicks and requests to restructure/rearchitect are evaluated on merit instead of how time-consuming or boring they would have been to take on.


In tech? Say it ain't so.

in any organization???

Not sure what to tell you, if there's a problem you have to speak up.

And the longer you wait, the worse it will be.

Also, update your resume and get some applications out so you’re not just a victim.


Maybe it's possible to use AI to help review the PRs, and claim it's the AI making the PRs hyperproductive?

Yes, this. If you can describe why it is slop, an AI can probably identify the underlying issues automatically.

Done right you should get mostly reasonable code out of the "execution focused peer".


In climate terms, or even simply in terms of $cost, this very much feels like throwing money on a bonfire.

Should we really advocate for using AI to both create and then destroy huge amounts of data that will never be used?


I don't think it is a long term solution. More like training wheels. Ideally the engineers learn to use AI to produce better code the first time. You just have a quality gate.

Edit: Do I advocate for this? 1000%. This isn't crypto burning electricity to make a ledger. This objectively will make the life of the craftsmanship focused engineer easier. Sloppy execution oriented engineers are not a new phenomenon, just magnified with the fire hose that an agentic AI can be.


The environmental cost of AI is mostly in training, afaik. The inference energy cost is similar to that of the Google searches and Reddit page loads you might do during handwritten dev, last I checked. This might be completely wrong though.

I hear this argument a lot, but it doesn't hold water for me. The use of the AI is obviously the thing that makes it worthwhile to do the training, so you need to amortize the training cost over the inference. I don't know whether or not doing so makes the environmental cost substantially higher, though.
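A back-of-envelope sketch of that amortization, with entirely hypothetical placeholder numbers (none of these figures are measurements of any real model):

    // Hypothetical numbers, assumed purely for illustration.
    const trainingEnergyKWh = 10_000_000;     // assumed one-time training cost
    const inferenceEnergyKWh = 0.001;         // assumed energy per query
    const lifetimeQueries = 100_000_000_000;  // assumed queries served over the model's life

    const amortizedPerQueryKWh =
      trainingEnergyKWh / lifetimeQueries + inferenceEnergyKWh;

    console.log(amortizedPerQueryKWh); // 0.0011 kWh: training adds ~10% per query here

Under these assumptions training adds about 10% to the per-query cost; with different assumptions it could dominate or vanish, which is exactly why the amortization matters.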

My trouble is that the remaining 20% of the work takes 80% of my time, AI assistance or not. The edge

100% agree. LLMs do have many blind spots and high confidence, which makes them hard to really trust without checking.

100% agreed.

> "Put a weight on that bacon!" ?


Mitchell included a photograph of his breakfast preparation, and the bacon was curling up on the frying pan.

These are great and everybody should own a bunch of them.

https://www.thechefspress.com/


It's invaluable if you don't know how to work with it

This is an artefact of a language ecosystem that does not prioritize getting started. If you pick php/laravel, a few commands put you ahead of the days of work that wiring up golang or node requires to get to a starting point.
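For reference, the "few commands" in the Laravel case are roughly the standard quickstart (assuming composer and PHP are already installed):

    composer create-project laravel/laravel my-app
    cd my-app
    php artisan serve    # dev server at http://localhost:8000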

I guess it depends on your business. I rarely start new professional projects, but I maintain them for 5+ years - a few pieces of production software I started are now in the double digits. Ghostty definitely aims to be in that camp of software.

He's saying more than that the companies are going to collapse; he's making pronouncements about the underlying technology, which are claims that are much harder to defend. I'm not entirely sure he understands the distinction between the companies and the technology, though.

Respectfully... what?? Ed at this point is one of the most well-read people on Earth on this topic. Of course he knows the difference between the companies and the technology. He goes in depth both on why he thinks the companies are financially unviable AND why he's unimpressed by LLMs technologically alllll the time.

Even as someone who is generally inclined to agree with his thesis, I find Ed Zitron's discussions as to why AI does not and will never work deeply unconvincing.

I don't think he fundamentally gets what's going on with AI on the tech level and how the Moore's-law-type improvements in compute have driven this and will keep doing so. He just kind of sees that LLM chatbots are not much good and assumes things will stay like that. If that were so, investing $1tn would make no sense. But it's not true.

Having a large audience does not imply being the most well informed or correct.

"Saying what a lot of people want to hear" is not a good proxy for truthfulness or correctness.

LeCun pretty much says the same things, as do most experts actually. Only the execs and marketing teams keep yapping about AGI

I didn't say anything about AGI. I think AGI is very silly.

The original meaning of AI is what some now call AGI. Some of us choose not to follow meaning shifts forced on the language by large companies for advertising purposes. Same as with Full* Self** Driving***.

Is he right on the AI stuff? Like, on the OpenAI company stuff he could be? I don't know? But on the technology? He really doesn't seem to know what he's talking about.

> But on the technology? He really doesn't seem to know what he's talking about.

That puts him roughly on par with everyone who isn't Gerganov or Karpathy.


No, that's not at all clear. Ruby Central owns the AWS account whose root password Arko is (pretty clearly) being accused of changing after having his access revoked.

I don't think for a second Arko will be charged, but there isn't a "nuh-uh, you did this gross thing in our open source community" defense for 18 USC 1030.


I didn't say it was clear, and I never said there was a defense. I implied that the wronged party in one case might want to be careful about raising the specter of liability or criminality.

Isn't the subtext of this post pretty clearly that the unauthorized actor was Andre Arko, who had until days prior all the same access to RubyGems.org already?

The impression I have reading this is that they're going out of their way to make it clear they believe it was him, but aren't naming him because doing so would be accusing him of a criminal act.


Let's say that they are 100% correct, we parse the subtext as text, it was totally him.

We still do not know the critical details of how (and when) he stored the root password he copied out of their password manager (encrypted in his own password manager? on his pwned laptop? in dropbox? we'll never know!) therefore the whole chain of custody is still broken.


The leading contender to replace RubyGems has Andre Arko as a charter member, so this all seems very salient.

Right but that speaks more to Andre's character, IMO.

Why are you copying a password out of a shared vault that should only be used in break-glass type scenarios? If that's not planning for possible malicious action in the future, I don't know what is.

You can try and excuse it as having your own break-glass for the break-glass, but that's on the spectrum between irresponsible and incompetent.

Again, if the accusation is true, removing him was justifiable from any possible perspective you might have.


The other subtext is that they literally have no idea how to run rubygems securely... And what to do in case of a security incident...

I'm addressing the question of whether we all had better assume all the RubyGems published after this incident were compromised, and my response is "that is probably not rational since the actor in this scenario had all this access legitimately just days beforehand". The rest, I don't care.

Look, it's enough to know that Rubygems did not require 2FA before August 2022. There were gems with millions of downloads with owners without 2FA on their accounts. I think your initial assumption is pretty safe even without the ongoing fiasco.

It seems to me like the inherent trust in open source software is a big problem. Reliance on software maintained by strangers, sometimes just one individual, and not reading/understanding the code before running it.

The other other subtext is that this sure is an effective distraction from their governance problems, and muddies the waters. Given the utter lack of trust I have for anything the Ruby Central folks say at this point, given the amount of spin and misinformation they've spread already, my default assumption is that this is an excuse to malign someone who may well have had legitimate access, in the process of claiming that you're locking things down, which was always the excuse being made for kicking people out.

Update: https://andre.arko.net/2025/10/09/the-rubygems-security-inci... is pretty much exactly the kind of thing I expected here. Person with legitimate access doing their job, organization flailing around in the process of kicking people out that should never have been kicked out in the first place.

He changed the AWS root account password; RC implies they had to go through a reset flow to recover the account. This apparently went on for more than a week. I don't know how to reconcile what Arko is claiming with what RC is claiming.

Arko believed he was in the right to do so, and while he probably should've reached out sooner to notify them of the "precaution" he was taking, the fact that they didn't notice for almost two weeks shows how unserious they are about security

At this point, it looks like everyone involved, not just RubyCentral, contributed to the governance problems over many years https://archive.md/SEzoV

> Regarding Arko’s blog post about his removal, McQuaid [Homebrew Maintainer] told me it’s good that Arko is crediting other people for their contribution and that he’s following open source principles of community and transparency, but that “his ‘transparency’ here has been selective to things that benefit him/his narrative, he seems unwilling or unable to admit that he failed as a leader in being unwilling or unable to introduce a formal governance process long before this all went down or appoint a meaningful successor and step down amicably.”


Presuming, as a group full of security peers kibitzing about this in a chat right now all do, that the "unauthorized actor" here is Andre Arko, this is Ruby Central pretty directly accusing Arko of having hacked Rubygems.org; it depicts what seems to be a black letter 18 USC 1030 violation.

Any part of this narrative could be false, but I don't see a way to read it and take it as true where Arko's actions would be OK.


Putting myself in Arko’s shoes, I can imagine (charitably!) the following choice, realizing that I still have access and shouldn’t:

1. Try to get in touch, quickly, with someone with the power to fix it and explain what needs to be rotated.

2. Absent 1, especially if it cannot be done quickly, rotate the credentials personally to get them back to a controlled state (by someone who actually understands the security implications) with the intent to hand them off. Especially if you still _think_ of yourself as responsible for the infrastructure, this is a no-brainer compared to letting anyone else who might be in the same “should have lost access but didn’t, due to negligence” maintain access.

Not a legal defense, but let’s not be too hasty to judge.


Care providers make massively, massively more money than insurance providers.

You can look up Dr. Elizabeth Potter on YouTube, who publicly details what it's like dealing with insurance, and all the ways insurance screws her and her patients. United Health actively threatened and retaliated against her business when she started getting publicity.

The total industry wide profit numbers aren't relevant at all if you're running a small clinic going up against an insurance provider. Heck even if a single clinic made more money than an insurance provider, it would barely matter - the insurance providers have the power to stop covering your practice and kill it, a clinic does not have any such power over insurance providers.


Or I could just look at the numbers and see that providers make more than 8x what insurers do.

And yet this has absolutely nothing to do with the claim that "Insurance companies hold tremendous leverage over care providers, up to and including the power to effectively put them out of business on a whim.", you're not even engaging with the argument at all.

It doesn't? All the money is going to them, and they're massively larger than the insurers, but it's the insurers with all the leverage? Why isn't more of the money going to the insurers then?

https://nationalhealthspending.org/


Do you make this argument in any other scenario? I'm sure all merchants who accept credit cards combined make WAY more than Visa/MC, but I think most would agree Visa has much more leverage over a corner shop that accepts Visa than the other way around.

There are 5 or 6 big insurance companies - maybe 2,000 if we count all of the small ones - and 400K medical practices. So even by this very simple money=leverage argument, each individual practice has far less money than the insurance company it is dealing with. So if more money = more leverage, then these same numbers prove the opposite claim.

So its probably fair to say that the picture isn't as simple as money=leverage.

If a medical practice and an insurance company get into a dispute and one of them decides to stop working with the other, the practice loses, say, 1/5th to 1/10th of its customers, while the insurance company loses 1/100,000th of its revenue. I call that leverage.
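A toy calculation of that asymmetry, with illustrative numbers only (assumed, not sourced from any filing):

    // Illustrative, assumed numbers - not real financials.
    const clinicRevenue = 2_000_000;          // one small practice, annual
    const insurerRevenue = 200_000_000_000;   // one large insurer, annual

    const clinicLossShare = 0.2;              // ~1/5 of the clinic's patients go through this insurer
    const insurerLossShare = (clinicRevenue * clinicLossShare) / insurerRevenue;

    console.log(clinicLossShare);   // 0.2  -> clinic loses 20% of its revenue
    console.log(insurerLossShare);  // 2e-6 -> insurer loses 0.0002% of its revenue

The clinic stands to lose a fifth of its revenue; the insurer, a few millionths of its own. Aggregate industry revenue says nothing about this per-negotiation asymmetry.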


Care providers also likely spend much more time and labor on making that money than the insurance providers spend making their end, though I only have anecdotal evidence of this through my involvement in healthcare providers’ practices as an MSP.

It's $2.5Tn vs $0.3Tn. It's more than 8x more.

That’s one half of the proportion. What is the time/labor spent?

No, it's not. Phishing isn't a social problem, it's a technological problem. Whether or not you can intercept my credentials shouldn't be a question of how much I trust my IT department or how well I'm trained; the credentials simply shouldn't allow that to happen. That's the entire reason U2F was invented, and then WebAuthn and FIDO2.
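A minimal sketch of why that is, using the browser-side WebAuthn API (the API is real; the domain and flow details here are made-up placeholders - in real use the challenge comes from your server):

    // Browser-side TypeScript sketch; runs in a secure context.
    const challenge = crypto.getRandomValues(new Uint8Array(32)); // placeholder; server-issued in real use

    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge,                   // random bytes the server will verify
        rpId: "example.com",         // the credential is scoped to this domain
        userVerification: "preferred",
      },
    });
    // The authenticator's signature covers the origin along with the
    // challenge, so the resulting assertion is worthless on a look-alike
    // phishing domain: there is nothing to intercept and replay.

Because the signature is bound to the origin, the phishing resistance lives in the protocol rather than in user training.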

Just rubbing it in, eh?
