It is worth it to buy the fast CPU (howardjohn.info)
249 points by ingve 1 day ago | 482 comments




At my former job at a FAANG, I did the math on allocating developer machines with 16GB vs 64GB based on actual job tasks, with estimates of how much thumb-twiddling wait time this would save, multiplied out by the cost of the developer's time. The cost-benefit showed a reasonable ROI that was realized in weeks for senior dev salaries (months for juniors).

Based on this, I strongly believe that if you're providing hardware for software engineers, it rarely if ever makes sense to buy anything but the top spec Macbook Pro available, and to upgrade every 2-3 years. I can't comment on non desktop / non-mac scenarios or other job families. YMMV.
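
(For what it's worth, the break-even arithmetic behind this kind of decision is easy to sketch. A minimal Python model; every number below is an assumption for illustration, not a figure from the analysis above.)

    # Back-of-envelope ROI for a RAM upgrade. All inputs are assumed values.
    upgrade_cost = 400.0          # extra cost of 64GB over 16GB, USD (assumed)
    minutes_saved_per_day = 15.0  # swap/wait time avoided per dev per day (assumed)
    dev_cost_per_hour = 100.0     # fully loaded senior dev cost, USD/hour (assumed)
    working_days_per_month = 21

    value_per_day = (minutes_saved_per_day / 60) * dev_cost_per_hour
    breakeven_days = upgrade_cost / value_per_day
    print(f"Time saved is worth ${value_per_day * working_days_per_month:.0f}/month")
    print(f"Upgrade pays for itself in ~{breakeven_days:.0f} working days")

With these made-up inputs the upgrade pays for itself in roughly three working weeks, which is the same shape of result as described above.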


No doubt the math checks out, but I wonder if developer productivity can be quantified that easily. I believe there's a lot of research pointing to people having a somewhat fixed amount of cognitive capacity available per day, and that aligns well with my personal experience. A lot of times, waiting for the computer to finish feels like a micro-break that saves up energy for my next deep thought process.

Your brain tends to do better if you can stay focused on your task for consecutive, though not indefinite, periods of time. This varies from person to person, and depends on how long a build/run/test takes. But the challenge for many is that the 'break' often becomes a context switch, a potential loss of momentum, and worse, may open you up to a distraction rather than a productive use of the time.

For me, personally, a better break is one I define on my calendar, one that helps me defragment my brain for a short period of time before re-engaging.

I recommend investigating the concept of 'deep work' and drawing your own conclusions.


This is true, but I find my train of thought slips away if I have to wait more than a handful of seconds, let alone two minutes.

Tying this back to your point, those limited hours of focus time come in blocks, in my experience, and focus time is not easily "entered", either.


One person's micro breaks are another person's disruption of flow state

Simple estimates work surprisingly well for a lot of things because a lot of the 'unquantifiable' complexity being ignored behaves like noise. When you have dozens of factors pulling in different directions—some developers multitask better, some lose flow more easily, some codebases are more memory-hungry, and so on—it all tends to just average out, and the result is reasonably accurate. Accurate enough that it's useful data to make a decision with, at least.

That sounds reasonable, but there are also factors pulling in the opposite direction, for example Wirth's Law [1], that suggests devs with powerful computers create inefficient software.

1. https://en.wikipedia.org/wiki/Wirth%27s_law


For me the issue is that at work, with 16GB of RAM, I'm basically always running into swap and having things grind to a halt. My personal workstation has 64GB, and the only time I experience issues is when something's leaking memory.

There’s also the time-to-market and bureaucracy cost. I took over a place where there was a team of people devoted to making sure you had exactly the PC you needed.

Configuring devices more generously often lets you get some extra life out of them for people who don’t care about performance. If the beancounters make the choice, you’ll buy last year's hardware at a discount and get jammed up when there’s a Windows or application update. Saving money costs money because of the faster refresh cycle.

My standard for sizing this in huge orgs is: count how many distinct applications launch per day. If it’s greater than 5-7, go big. If it’s less, cost optimize with a cheaper config or get the function on RDS.


Also worth factoring in that top-spec hardware will have a longer usable life, especially for non-power users.

Well, it depends what kind of time periods you're talking about. I've seen one in the past that was 60 minutes vs. 20 minutes (for a full clean compile, but often that is where you find yourself) - that is far more than a micro-break, that is a big chunk of time wasted.

You’re not waiting for the end of a thing though. You might hope you are, but the truth is there’s always one little thing you still have to take care of. So until the last build is green and the PR is filed, you’re being held hostage by the train of thought that’s tied to this unit of work. Thinking too much about the next one just ends up adding time to this one.

You’re a grownup. You should know when to take a break and that’ll be getting away from the keyboard, not just frittering time waiting for a slow task to complete.


>"A lot of times, waiting for the computer to finish feels like a micro-break that saves up energy for my next deep thought process."

As an ISV I buy my own hardware, so I do care about expenses. I can attest that to me, waiting for the computer to finish feels like a big irritant that can spoil my programming flow. I take my breaks whenever I feel like it and do not need a computer to help me. So I pay for top-notch desktops (within reason, of course).


The hours I sometimes spend waiting on a build are time that won't come back. Sometimes I've done other tasks in the meantime, but I can only track so much, and often it isn't worth it.

A faster machine can get me to productive work faster.


I always bought a really large monitor for work with my own cash. When most devs had 19" or 20" monitors, I got a 30" for $1500.

Best money ever spent. It lasted years and years.

For CPUs, I wonder how the economics work out when you get into, say, 32- or 64-core Threadrippers? I think it still might be worth it.


Most of my friends at FAANG all do their work on servers remotely. Remote edit, remote build. The builds happen in giant networked cloud builders, 100s to 1000s per build. Giving them a faster local machine would do almost nothing because they don't do anything local.

Wish my employers did the same calculation.

Gave developers 16GB RAM and 512MB storage. Spent way too much time worrying about available disk space and needlessly redownloading docker images off the web.

But at least they saved money on hardware expenses!


You mean 512GB storage?

FAANG manages the machines. Setting aside the ethics of this level of monitoring, I'd be curious to validate this by soft-limiting OS memory usage and tracking metrics like number of PRs and time someone is actively on the keyboard.

To put a massive spanner in this, companies are going to be rolling out seemingly mandatory AI usage, which has huge compute requirements... which are often fulfilled remotely. And it has varying, possibly negative, effects on productivity.

I think those working on user-facing apps could do well having a slow computer or phone, just so they can get a sense of what the actual user experience is like.

When I worked at a FAANG, most developers could get a remote virtual machine for their development needs. They could pick the machine type and size. It was one of the first things you'd learn how to do in your emb^H^H^H onboarding :)

So it wasn't uncommon to see people with a measly old 13" MacBook Pro doing the hard work on a 64-CPU/256GB remote machine. Laptops were essentially machines used for reading/writing emails, writing documents and doing meetings. The IDEs had proprietary extensions to work with the remote machines and the custom tooling.


More than that, in the FAANG jobs I've had, you could not even check code out onto your laptop. It had to live on the dev desktop or virtual machine, and be edited remotely.

Ah so the coding was done locally but run remotely?

I nearly went insane when I was forced to code using Citrix.


It sounds more like doing embedded development with a TFTP boot to an NFS mounted root filesystem.

> Ah so the coding was done locally but run remotely?

Both, depending on the case and how much you were inclined to fiddle with your setup. And on what kind of software you were writing (most software had a lot of linux-specific code, so running that on a macbook was not really an option).

A lot of colleagues were using either IntelliJ or VScode with proprietary extensions.

A lot of my work revolved around writing scripts and automating stuff, so IntelliJ was absolute overkill for me, not to mention that the custom proprietary extensions created more issues than they solved ("I just need to change five lines in a script for christ's sake, I don't need 20GB of stuff to do that")... So I ended up investing some time in improving my GNU Emacs skills and reading the GNU Screen documentation, and did all of my work in Emacs running in Screen for a few years.

It was very cool to almost never have to actually "stop working". Even if you had to reboot your laptop, your work session was still there, uninterrupted. Most updates were applied automatically without needing a full system reboot. And I could still add my systemd units to the OS to start the things I needed.

Also, building onto that, I later integrated stuff like treemacs and eglot mode (along with the language servers for specific languages) and frankly I did not miss much from the usual IDEs.

> I nearly went insane when I was forced to code using Citrix.

Yeah I can see that.

In my case I was doing most of my work in a screen session, so I was using the shell for "actual work" (engineering) and the work MacBook for everything else (email, meetings, web browsing, etc).

I think that the ergonomics of GNU Emacs are largely unchanged whether you're using a GUI program locally or remotely, or a shell session (again, locally or remotely), so for me the user experience was largely unchanged.

Had I had to do my coding in some GUI IDE on a remote desktop session I would probably have gone insane as well.


> it rarely if ever makes sense to buy anything but the top spec Macbook Pro available

God I wish my employers would stop buying me Macbook Pros and let me work on a proper Linux desktop. I'm sick of shitty thermally throttled slow-ass phone chips on serious work machines.


Just Friday I was dealing with a request from purchasing asking if a laptop with an ultra-low-power 15W TDP CPU and an iGPU with "8GB DDR4 graphics memory (shared)" was a suitable replacement for one with a 75W CPU (but also a Core i9) and an NVidia RTX 4000 mobile 130W GPU in one of our lead engineer's CAD workstations.

No, those are not the same. There's a reason one's the size of a pizza box and costs $5k and the other's the size of an iPad and costs $700.

And yes, I much prefer to build tower workstations with proper thermals and full-sized GPUs, that's the main machine at their desk, but sometimes they need a device they can take with them.


Curious perspective. Apple silicon is both performant and very power efficient. Of course there are applications where even a top spec MacBook would be unsuitable, but I imagine that would be a very small percentage of folks needing that kind of power.

Sadly, the choice is usually between Mac and Windows—not a Linux desktop. In that case, I’d much prefer a unix-like operating system like MacOS.

To be clear, I am not a “fanboy” and Apple continues to make plenty of missteps. Not all criticisms against Apple are well founded though.


I have a 7950X desktop and an M3 Max; they are very distant in performance for development, although I'll give Apple credit for good single-core performance that shows in some contexts.

I have a decent rig I built (5900X, 7900 XT); of course it blows my M1 MacBook out of the water.

You seem like a reasonable person that can admit there’s some nice things about Apple Silicon even though it doesn’t meet everyone’s needs.


You very clearly have no experience on powerful desktop machines. A 9950x will absolutely demolish an M3 or M4 Macbook Pro in any possible test, especially multicore testing. And I don't care how "performant" or "efficient" you think it is, those M series chips will be thermally throttled like anything else packaged into a laptop.

Oh, and the vastly superior desktop rig will also come out cheaper, even with a quality monitor and keyboard.


That’s my bad for not clarifying I am talking solely about the laptop form factor here. It’s a given that laptops are not comparable in performance to desktops. In terms of laptop hardware, Apple Silicon performs quite well.

Nice assumptions though.

It’s not just my opinion that Apple silicon is pretty performant and efficient for the form factor; you can look up the stats yourself if you cared to. Yet, it seems you may be one of those people that is hostile towards Apple for less well-founded reasons. It’s not a product for everyone, and that’s ok.


No doubt you mean well. In some cases it’s obvious: a low-memory machine can’t handle some Docker setup, etc.

In reality, you can’t even predict time to project completion accurately. Rarely is a fast computer a “time saver”.

Either it’s a binary “can this run that” or a work environment thing “will the dev get frustrated knowing he has to wait an extra 10 minutes a day when a measly $1k would make this go away”


Is it worth it to keep your old CPU?

I still run a 6600 (65W peak) from 2016 as my daily driver. I have replaced the SSD once (the MLC drive lasted 5 years; hopefully the SLC drive from 2011 lasts forever?), the 2x 32GB DDR4 sticks (the Kingston/Micron ones lasted 8 years, replaced with AliExpress "Samsung" sticks for $50 a pop) and the monitor (the Eizo FlexScan 1932 lasted 15! years, RIP, replaced with an Eizo RadiForce 191M; highly recommended with f.lux/redshift for exceptional image quality without blue light).

It's still powerful enough to play any games released this year that I throw at it at 60 FPS (with a low-profile 3050 from 2024), let alone compile any bloat.

Keep your old CPU until it breaks, completely... or actually until the motherboard breaks; I have a Kaby Lake 35W replacement waiting for the 6600 to die.


If you don't care about power efficiency, sure ;)

I bought a geforce RTX 3080 at launch and boy was I surprised at the power draw and heat/noise it pumps out. I wonder why anybody bothers with the 90 series at all.

I actually run it ~10% underclocked, barely affects performance, but greatly reduces heat/noise. These cards are configured to deliver maximum performance at any cost (besides system instability).

My next GPU I am probably going mid-range to be honest, these beefy GPUs are not worth it anymore cost and performance-wise. You are better off buying the cheaper models and upgrading more often.


> I bought a geforce RTX 3080 at launch and boy was I surprised at the power draw and heat/noise it pumps out. I wonder why anybody bothers with the 90 series at all.

More VRAM, and NVLink (on some models). You can easily run them at lower power limits. I've run CUDA workloads with my dual 3090s set as low as 170W to hit that sweet spot on the efficiency curve. You can actually go all the way down to 100W!
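
For reference, this kind of power cap can be scripted rather than set by hand. A minimal sketch using nvidia-smi's power-limit option (it assumes nvidia-smi is on PATH, the process has the privileges needed to change limits, and two GPUs at indices 0 and 1 as described above):

    # Cap each GPU's enforced power limit via nvidia-smi (sketch, not a polished tool).
    import subprocess

    POWER_LIMIT_W = 170  # the sweet spot mentioned above; tune per card and workload

    def set_power_limit(gpu_index: int, watts: int) -> None:
        # -i selects the GPU by index, -pl sets the enforced power limit in watts
        subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

    for gpu in (0, 1):  # dual 3090s, per the setup described above
        set_power_limit(gpu, POWER_LIMIT_W)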


Oh well, for GPU programming sure as VRAM is king depending on task. But for gaming I won't go high end again.

When it comes to "is it better to throw away the 6600 and replace it with a 5600, or keep running the 6600?" I'm torn, but: you probably need to use the 5600 for maybe 20 years to compensate for its manufacturing energy cost (which is not directly linked to $ cost), and I think the 6600 might last that long with new RAM 10 years down the road. I'm not so sure the newer 5600 motherboard and the CPU itself will make it that long, because they were probably made to break to a larger extent.

Also, the 6600 can be passively cooled in the Streacom case I already have; the 5600 is too hot.


This is a little confusing because you were referring to a Core i5-6600 and presumably an i5-7500T or i7-7700T above but now you mention a 5600. Are you referring to a Ryzen 5 5600?


As we discussed elsewhere you could put the Ryzen 5 5600 into 45W ECO mode and cool it passively. Or a Ryzen 5 7600 if you decide to jump up to AM5 (which is probably a good idea even though you'll need new RAM).

You could buy an SSD with an SLC cache that is bigger than the 2011 pure-SLC one, and probably cheaper.

We have gone through this if you look at my comment history.

Yes you can do everything, but not without added complexity, that will end up failing faster.

We have peaked in all tech. Nothing will ever get as good as the raw peak in longevity:

- SSDs ~2011 (pure SLC)

- RAM ~2013 (DDR3 fast low latency but low Hz = cooler = lasts longer)

- CPUs ~2018 (debatable but I think those will outlast everything else)


What metric are you using to determine peak? Just long life?

I don't know about most people, but how long a wafer of silicon keeps working past its obsolescence, is just not that important


It is when we peak in nanometers too.

I'd expect every new generation of computers to not last as long as the last one, we keep reducing the transistor size and that means more fragility. I'm half surprised modern GPUs make it through shipping without melting from static.

My guess is that the most long lived computer gen could be one that still uses through hole components. Not a very useful machine by any metric though I bet.


On the flip side, a lot of those old through-hole memory chips have failed. I'm not sure what the mechanism of action is, but it likely leads back to some kind of (at the time unknown) manufacturing defect. Every new generation requires higher purity and better quality control (because those tiny transistors are less tolerant of defects). If we optimized for longevity rather than flops per dollar or per watt, we would likely keep making the same hardware for a very long time, optimizing the process along the way and learning about the pitfalls. Maybe you can see such things in the military or industrial computing spaces.

Really depends on your use case. I personally still run my 17-year-old 2.4GHz Core 2 Duo with 4GB of RAM as my daily driver. I simply do not hit the walls even on that thing. Most folks here simply would not accept that, and not because they are spoilt but because their workloads demand more.

A 17-year-old Core 2 Duo machine is definitely less powerful than a Raspberry Pi 5, which is going to use 1/7th of the power.

Raspberry 5 is a dud, either go 4 or 3588 CM:

http://move.rupy.se/file/radxa_works.mp4

Or in a uConsole.

Also Risc-V tablet:

http://move.rupy.se/file/pinetab-v_2dmmo.png

Not as battle hardened as my Thinkpad X61s but maybe we'll get a GPU driver soon... 3 years later...


> Raspberry 5 is a dud

cf. the Pi 4: 2–3X CPU performance, full 5Gbps USB 3.0 ports, a PCIe Gen2 x1 connector, dual 4-lane MIPI connectors, support for A2-class SD cards, an integrated RTC...

A dud?? What's the issue? The price?


I do not know the specifics but a large issue with the 5 is that a lot of hardware acceleration for encoding and decoding video was removed, making it slower for anything to do with video.

Ah, yes, it's definitely a poor choice for most video-encoding and some video-decoding use-cases. Just not sure how GP goes from that to "dud"...

Absolutely, but until I can get that easily in a battle-hardened ThinkPad design, I will probably still be using this. I am not against upgrading at all, I just haven't needed it yet. That said, this last year a lot of applications have finally grown to the point that I can see the horizon creeping closer.

I love those as routers, firewalls, and other random devices on my mess of a home network where I just set things up for fun. Or as little NASes for friends and family that I can give to them for free or whatever.

Nothing older than Nehalem and Bulldozer got microcode mitigations for Spectre, so I'd say running a C2D online would be a liability by now.

…IF anyone really bothered to develop exploits against something practically no one uses anymore at this point.

I am still waiting to hear about a proven exploit in the wild for all that stuff we’re mitigating against.

It very much depends on the games you play though. When I upgraded from a 7700k, it was struggling with my Factorio world. My AMD 5700X3D handles it completely smoothly. Though now in Path of Exile 2, my CPU can barely maintain 30 fps during big fights.

The CPU is now the bottleneck for games that struggle, which makes sense: GPU load is usually configurable through graphics settings, while the gameplay simulation itself is hardcoded.

See PUBG, which has bloated Unreal so far past what any 4-core computer can handle because of anti-cheat and other incremental changes.

Factorio could add some "how many chunks to simulate" config then? If that does not break gameplay completely.


I upgraded two years ago to a Ryzen 5700 rather than a 5800 specifically for the lower TDP. I rarely max out the cores and the cooler system means the fan rarely spins up to audible levels.

Most (if not all?) BIOSes today will let you limit the TDP - on AMD it’s often called eco mode.

Cool, can you tune exactly how many watts or just on/off?

There are usually a few presets, e.g. 65W or 45W ECO modes for a 105W part, or you can set your own specific values for PPT/TDC/EDC.

Nice!

E5-2650v2 in a Chinese mATX motherboard for me. Got the cpu years ago for like $50 as an eBay server pull. 970 Evo SSD. 24GB of mismatched DDR3. Runs all my home server junk and my dev environment (containerized with Incus). Every year I tell myself I should grab a newer Ryzen to replace it but it honestly just keeps chugging along and doesn't really slow me down.

This article skips a few important steps - how a faster CPU will have a demonstrable improvement on developer performance.

I would agree with the idea that faster compile times can significantly improve productivity. 30s is long enough for a developer to get distracted and go off and check their email, look at social media, etc. Basically, turning 30s into 3s can keep a developer in flow.

The critical thing we’re missing here is how increasing the CPU speed will decrease the compile time. What if the compiler is IO bound? Or memory bound? Removing one bottleneck will get you to the next bottleneck, not necessarily get you all the performance gains you want
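
A rough way to see which bottleneck you're actually hitting is to compare how much CPU time a build consumes against how long it takes on the wall clock. A minimal sketch for a Unix-like system; the build command is a placeholder, so substitute whatever you actually run:

    # Compare CPU time vs wall time for a build to see if it's CPU-bound (sketch).
    import os, resource, subprocess, time

    BUILD_CMD = ["make", "-j", str(os.cpu_count())]  # placeholder build command

    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    start = time.perf_counter()
    subprocess.run(BUILD_CMD, check=True)
    wall = time.perf_counter() - start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)

    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    utilization = cpu / (wall * os.cpu_count())
    print(f"wall {wall:.1f}s, cpu {cpu:.1f}s, core utilization {utilization:.0%}")
    # Cores sitting mostly idle during the build points at an IO/memory/serial
    # bottleneck rather than a lack of raw CPU speed.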


A lot of people miss the multi-core advantage. A lot of the time, more cores gives an almost linear decrease in compile time.

You do need a good SSD though. There is a new generation of PCIe 5.0 SSDs that came out that seem like they might be quite a bit faster.
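
The "almost linear" part does have a ceiling, though: any serial step (linking, one huge translation unit) caps the speedup per Amdahl's law. A small illustration, with the parallel fraction assumed rather than measured:

    # Amdahl's law: speedup from n cores when a fraction p of the build parallelizes.
    def speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    for n in (4, 8, 16, 32, 64):
        print(f"{n:>2} cores: {speedup(0.9, n):.1f}x")  # p = 0.9 is assumed
    # Even with 90% of the work parallel, 64 cores only get ~8.8x, which is why
    # a serial link step ends up dominating at high core counts.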


In some cases, the bottlenecks are external.

I've seen a test environment which has most assets local but a few shared services and databases accessed over a VPN which is evidently a VIC-20 connected over dialup.

The dev environment can take 20 seconds to render a page that takes under 1 second on prod. Going to a newer machine with twice the RAM brought no meaningful improvement.

They need a rearchitecture of their dev system far more than faster laptops.


I got my boss to get me the most powerful server we could find, $15000 or so. In benchmarks there was minimal benefit, and sometimes a loss, going with more than 40 cores even though it has 56 (52? I can't check now). Sometimes using more cores slows the build down. We have concluded that memory bandwidth is the limit, but are not sure how to prove it.

If that's true, then have you looked at Threadripper or the new Ryzen AI Max+ 395? I think it has north of 200GB/s of memory bandwidth.

I have not (the above machine was an Intel); someone else did get a Threadripper, though I don't know which. He reported similar numbers, though I think he was able to use more cores, still not all.

The larger point is the fastest may not be faster for your workload so benchmark before spending money. Your workload may be different.


An IO-bound compiler would be weird. Memory-bound, perhaps, but newer CPUs also tend to be able to communicate with RAM faster, so...

I think just having LSP give you answers 2x faster would be great for staying in flow.


The compiler is usually IO-bound on Windows due to NTFS, the small files in the MFT, and the lock contention problem. If you put everything on a ReFS volume it goes a lot faster.

Applies to git operations as well.


by "IO bound" you mean "MS defender bound"

Dev Drive can help with that as well

I've seen gcc+ld use a large amount of disk (dozens of GB) during LTO.

I wish I was compiler bound. Nowadays, with everything being in the cloud or whatever, I'm more likely to be waiting for Microsoft's MFA (forcing me to pick up my phone, the portal to distractions) or getting some time-limited permission from PIM.

The days when 30-second pauses for the compiler were the slowest part are long over.


The circuit design software I use, Altium Designer, has a SaaS cloud for managing libraries of components, and version control of projects. I probably spend hours a year waiting for simple things like "load the next 100 parts in this list" or "save this tiny edit to the cloud" as it makes API call after call to do simple operations.

And don't get me started on the cloud ERP software the rest of the company uses...


You must be a web developer. Doing desktop development, nothing is in the cloud for me. I’m always waiting for my compiler.

More likely in an enterprise company using MS tooling (AD/Entra/Outlook/Teams/Office...) with "stringent" security settings.

It gets ridiculous quickly, really.


Another thing to keep in mind when compiling is adding more cores doesn't help with link time, which is usually stuck to a single core and can be a bottleneck.


I don’t think that we live in an era where a hardware update can bring you down to 3s from 30s, unless the employer really cheaped out on the initial buy.

Now, in TFA they compare a laptop to a desktop, so I guess the title should be “you should buy two computers”.


There are peaks in long-term CPU value. That is, CPUs that are 1) performant enough to handle general purpose computing for a decade and 2) outperform later chips for a long time.

The i7-4770 was one. It reliably outperformed later Intel CPUs until near 10th gen or so. I know shops that are still plugging away on them. The first comparable replacement for it is the i7-12700 (but the i5-12400 is a good buy).

At 13th gen, Intel swaps E for P cores. They have their place but I still prefer 12th gen for new desktops.

Past all that, the author is right about the AMD Ryzen 9950x. It's a phenomenal chip. I used one in a friend's custom build (biz, local llm) and it'll be in use in 2035.


> The i7-4770 was one. It reliably outperformed later Intel CPUs until near 10th gen or so.

Per which benchmarks?

> At 13th gen, Intel swaps E for P cores.

One nit: Intel started adding (not swapping) E-cores to desktop parts with 12th gen, but i3 parts and most i5 parts were spared. More desktop i5 parts got them starting with 13th gen.


What's wrong with E cores? They're the best bang for the buck for both baseline low-power usage (and real-world systems are idle a lot of the time) and heavy multicore workloads. An E-core cluster takes a tiny fraction of area and power compared to a P-core, so it's not just a one-on-one comparison.

Important caveat that the author neglects to mention since they are discussing laptop CPUs in the same breath:

The limiting factor on high-end laptops is their thermal envelope. Get the better CPU as long as it is more power efficient. Then get brands that design proper thermal solutions.


You simply cannot cram enough cooling and power into a laptop to have it equal a high-end desktop CPU of the same generation. There is physically not enough room. Just about the only way to even approach that would be to have liquid cooling loop ports out the back that you had to plug into an under-desk cooling loop and I don't think anyone is doing that because at that point just get a frickin desktop computer + all the other conveniences that come with it (discrete peripherals, multiple monitors, et cetera). I honestly do not understand why so many devs seem to insist on doing work on a laptop. My best guess is this is mostly the Apple crowd, because Apple "desktops" are for the most part just the same hardware in a larger box instead of actually being a different class of machine. A little better on the thermals, but not the drastic jump you see between laptops and desktops from AMD and Intel.

Carrying a desktop in a backpack is kind of hard for the work I do as a developer; not everyone is sitting at a desk the full day, or has an assigned work desk at a specific office.

I work mostly remote, and also need to jump between office locations and customer sites as well.

Member of the Windows/UNIX crowd for several decades.


If you have to do any travel for work, a lightweight but fast portable machine that is easy to lug around beats any productivity gains from two machines (one much faster) due to the challenge of keeping two devices in sync.

Having a backup system can be kind of priceless sometimes. Also, if you have a desktop and a laptop you probably use one of them 80-90% of the time, so you rarely lose time syncing stuff.

I do this regularly and it's really not a big concern.

> the only way to even approach that would be to have liquid cooling loop ports out the back that you had to plug into an under-desk cooling loop and I don't think anyone is doing that

It is (maybe was) done by XMG and Schenker. Called Oasis IIRC. Yep

https://www.xmg.gg/en/xmg-oasis/


> I honestly do not understand why so many devs seem to insist on doing work on a laptop.

Their employers made it the culture so that working from home/vacation would be easy.


A better look at it is to say that they started allowing working from home.

I've worked fully remotely at a couple of global remote-first companies since 2006, for a decade-plus. It was my choice how I wanted to set up my working conditions, with the company paying for a "laptop refresh" every 3 years which I did not have to spend on a laptop.


My experience is that, when it started (for me, 2004 was my first work-issued laptop), it was more about what the GP was saying. The first time I had a laptop for work, I was expected to be in the office from 9 to 5 (or longer), and was expected to be available to respond to occasional emails in my off hours, sometimes taking calls, facilitated by having that laptop that I could bring home. I was fairly junior; others more senior than I was would be taking calls (for which they'd need their laptop) from home and doing substantial work nearly nightly. (We did a lot of work with hardware and software firms in east & southeast Asia, so we'd have calls at US-weird hours.)

But even in the few years before the pandemic, I was where you're talking about, working from home a lot more often, and replacing office-work hours with home-work hours, not just adding extra hours at home to my office-work hours.

I think this still depends a lot on the company, though. I know people who are expected to be in the office for 8+ hours a day, 5 days a week, but still bring their laptop home with them and do work in their off hours, because that's just what's expected of them. But fortunately I also know a lot of people with flexible hours, flexible home/office work, and aren't forced to work much more than 40 hrs/wk.


A laptop as the primary/only machine had been normalized at tech companies a long time before covid made working from home very common.

It's also nice to be able to change environments where I work.

> I honestly do not understand why so many devs seem to insist on doing work on a laptop.

I hate having more than one machine to keep track of and maintain. Keeping files in sync, configuration in sync, everything updated, even just things like the same browser windows with the same browser tabs, organized on my desktop in the same way. It's annoying enough to have to keep track of all that for one machine. I do have several machines at home (self-built NAS, media center box, home automation box), and I don't love dealing with them, but fortunately I mainly just have to ensure they remain updated, not keep anything in sync with other things.

(I'm also one of those people who gets yelled at by the IT security team when they find out I've been using my personal laptop for work... and then ignores them and continues to do it, because my personal laptop is way nicer than the laptop they've given me, I'm way more productive on it, and I guarantee I know more about securing a Linux laptop and responsibly handling company data than the Windows/Mac folks in the company's IT department. Yes, I know all the reasons, both real and compliance-y, why this is still problematic, but I simply do not care, and won't work for a company that won't quietly look the other way on this.)

I also rarely do my work at a desk; I'm usually on a couch or a comfy chair, or out working in a public place. If all I had was a desktop, I'd never do any work. If I had a desktop in addition to my laptop, I'd never use the desktop. (This is why I sold my personal home desktop computer back in the late '00s: I hadn't even powered it on in over a year.)

> ...why so many devs seem to insist...

I actually wonder if this was originally driven by devs. At my first real job (2001-2004) I was issued a desktop machine (and a Sun Ray terminal!), and only did work at the office. I wouldn't even check work email from home. At my second job (2004-2009), I was given a Windows laptop, and was expected to be available to answer the odd email in my off hours, but not really do much in the way of real work. I also had to travel here and there, so having the laptop was useful. I often left the laptop in the office overnight, though. When I was doing programming at that company, I was using a desktop machine running Linux, so I was definitely not coding at home for work.

At the following job, in 2009, I was given a MacBook Pro that I installed Linux on. I didn't have a choice in this, that's just what I was given. But now I was taking my work laptop home with me every day, and doing work on my off hours, even on weekends. Sneaky, right? I thought it was very cool that they gave me a really nice laptop to do work on, and in return, I "accidentally" started working when I wasn't even in the office!

So by giving my a laptop instead of a desktop, they turned me from a 9-5 worker, into something a lot more than that. Pretty good deal for the company! It wasn't all bad, though. By the end of the '10s I was working from home most days, enjoying a flexible work schedule where I could put in my hours whenever it was most convenient for me. As long as I was available for meetings, spent at least some time in the office, and produced solid work in a timely manner, no one cared specifically when I did it. For me, the pandemic just formalized what I'd already been doing work-wise. (Obviously it screwed up everything outside of work, but that's another story.)

> My best guess is this is mostly the apple crowd...

Linux user here, with a non-Apple laptop.


* Me shamefully hiding my no-fan MBA used for development... *

Whenever I've built a new desktop I've always gone near the top in performance, with some consideration given to cache and power consumption (remember when peeps cared about that? lol).

From dual Pentium Pros to my current desktop: a Xeon E3-1245 v3 @ 3.40GHz built with 32GB of top-end RAM in late 2012, which has only recently started to feel a little pokey, I think largely due to CPU security mitigations added to Windows over the years.

So that extra few hundred up front gets me many years extra on the backend.


I think people overestimate the value of a little bump in performance. I recently built a gaming PC with a 9700X. The 9800X3D is drastically more popular, for an 18% performance bump on benchmarks but double the power draw. I rarely peg my CPU, but I am always drawing power.

Higher power draw means it runs hotter, and it stresses the power supply and cooling systems more. I'd rather go a little more modest for a system that's likely to wear out much, much slower.


Is it really 2x, or is it 2x at max load? Since, as you say, you're not pegging the CPU, it would be interesting to compare power usage on a per-task basis and the duration. Could be that the 3D cache is really adding that much overhead even to an idle CPU.

Anyway, I've never regretted buying a faster CPU (GPUs are a different story; I burned some money there on short-time-window gains that were marginally relevant), but I did regret saving on it (going with the M4 Air vs the M4 Pro).


I recently had some fun overclocking my old i5 4690.

IIRC, running the base frequency at 3.9GHz instead of 3.5GHz yielded a very modest performance boost but added 20% more power consumption and higher temperatures.

I then underclocked it to 3.1GHz and the thing barely ran at more than 40°C under load, and power consumption was super low! The performance was rather mediocre though...
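
That tracks with the usual first-order model: dynamic power scales roughly with frequency times voltage squared, so a small clock bump that also needs more voltage costs disproportionate power. A rough sketch, with the voltage figures assumed purely for illustration:

    # First-order dynamic power model: P is proportional to V^2 * f
    # (the capacitance term cancels in the ratio). Voltages are assumed values.
    def relative_power(f_ghz: float, volts: float,
                       f_base: float = 3.5, v_base: float = 1.10) -> float:
        return (f_ghz / f_base) * (volts / v_base) ** 2

    print(f"3.9 GHz @ 1.20 V: {relative_power(3.9, 1.20):.2f}x baseline power")
    print(f"3.1 GHz @ 1.00 V: {relative_power(3.1, 1.00):.2f}x baseline power")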


Devil's Canyon and Haswell-E were great overclockers. I had an i7-4790K re-lidded with liquid metal stable at 4.7GHz all-core and (a bit later) an i7-5960X stable at 4.5GHz all-core. But yes power consumption and thermal output were through the roof.

Employers, even the rich FANG types, are quite penny-wise and pound-foolish when it comes to developer hardware.

Limiting the number and size of monitors. Putting speedbumps (like assessments or doctor's notes) on ergo accessories. Requiring special approval for powerful hardware. Requiring special approval for travel, and setting hotel and airfare caps that haven't been adjusted for inflation.

To be fair, I know plenty of people that would order the highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs, which is just a small fraction of one year's salary for a developer.


Every well funded startup I’ve worked for went through a period where employees could get nearly anything they asked for: New computers, more monitors, special chairs, standing desks, SaaS software, DoorDash when working late. If engineers said they needed it, they got it.

Then some period of time later they start looking at spending in detail and can’t believe how much is being spent by the 25% or so who abuse the possibility. Then the controls come.

> There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs,

You would think, but in the age of $6,000 fully specced MacBook Pros, $2,000 monitors, $3,000 standing desks, $1500 iPads with $100 Apple pencils and $300 keyboard cases, $1,000 chairs, SaaS licenses that add up, and (if allowed) food delivery services for “special circumstances” that turns into a regular occurrence it was common to see individuals incurring expenses in the tens of thousands range. It’s hard to believe if you’re a person who moderates their own expenditures.

Some people see a company policy as something meant to be exploited until a hidden limit is reached.

There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.


If $20k is misspent by 1 in 100 employees, that's still $200 per employee per year: peanuts, really.

Just like with "policing", I'd only focus on uncovering and dealing with abusers after the fact, not on everyone — giving most people "benefits" that instead makes them feel valued.


So then just set a limit of $200 per head instead of allowing a few bad apples to spend $20k all on themselves.

This was extra on top of whatever the average cost really is for employees who are not abusing the system.

So, if other engineers get their equipment for $6k (beefed-up laptop, 32" or 30" 5k widescreen screen, ergonomic chair, standing desk — in theory amortized over 3-10 years, but really, on the retention period which is usually <3 years in software), we are talking about an increase of $200 on that.

Maybe not peanuts, but the cost of administration to oversee spending and the cost to employees to provide proof and follow due process (in their hourly rate for time used) will quickly add up and usually negate any "savings" from stopping abuse altogether — since now everybody needs to shoulder the cost.

Any type of cap based on average means that those who needed something more special-cased (more powerful machine, more RAM vs CPU/storage, more expensive ergonomic setup due to their anatomy [eg. significantly taller than average]...) can't really get it anymore.

Obviously, having no cap and requiring manager approval is usually enough to get rid of almost all abuse, though it is sometimes important to be able to predict expenses throughout the year.


You could just have an average cap and require manager approval above that.

The effect on morale shouldn't be ignored either, though.

Don’t you think the problem there is that you hired the wrong people?

I was trying to remember a counterexample of good hires and wasted money.

Alex St. John, Microsoft Windows 95 era: created DirectX annnnd also built an alien spaceship.

I dimly recalled it as a friend in the games division telling me about someone getting a 5 and a 1 review score in close succession.

Facts I could find (yes, I asked an LLM):

- 5.0 review: Moderately supported. St. John himself hosted a copy of his Jan 10, 1996 Microsoft performance review on his blog (the file listing still exists in archives). It reportedly shows a 5.0 rating, which in that era was the rare top-box mark.

- Fired a year later: Factual. In an open letter (published via GameSpot) he states he was escorted out of Microsoft on June 24, 1997, about 18 months after the 5.0 review.

- Judgment Day II alien spaceship party: Well documented as a plan. St. John’s own account (quoted in Neowin, Gizmodo, and others) describes an H.R. Giger–designed alien-ship interior in an Alameda air hangar, complete with X-Files cast involvement and a Gates “head reveal” gag.

- Sunk cost before cancellation: Supported. St. John says the shutdown came “a couple of weeks” before the 1996 event date, after ~$4.3M had already been spent/committed (≈$1.2M MS budget + ≈$1.1M sponsors + additional sunk costs). Independent summaries repeat this figure (“in excess of $4 million”).

So:

- 5.0 review: moderate evidence

- Fired 1997: factual

- Alien spaceship build planned: factual

- ≈$4M sunk costs: supported by St. John’s own retrospective and secondary reporting


I’m not quite sure I see how building DirectX and building an alien spaceship are incompatible.

Nor how either translates to being a bad hire.


Well partly, yes.

But also, when I tell one of my reports to spec and order himself a PC, there should be several controls in place.

Firstly, I should give clear enough instructions that they know whether they should be spending around $600, $1500, or $6000.

Second, although my reports can freely spend ~$100 no questions asked, expenses in the $1000+ region should require my approval.

Thirdly, there is monitoring of where money is going; spending where the paperwork isn't in order gets flagged and checked. If someone with access to the company amazon account gets an above-ground pool shipped to their home, you can bet there will be questions to be answered.


Basic statistics. You can find 10 people that will probably not abuse the system but definitely not 100.

It’s like your friend group and time choosing a place to eat. It’s not your friends, it’s the law of averages.


Maybe so but it's not like that's something you can really control. You can control the policy so that is what's done.

As a company grows, it will undoubtedly hire some "wrong people" along the way.

Absolutely, but then you fire them again. Saves both salaries and expenses.

Which is a process that takes time.

"$1,000 chairs"

Not an expert here, but from what I heard, that would be a bargain for a good office chair. And having a good chair or not - you literally feel the difference.


I've been using the same $25 chair I bought 45 years ago. I've always thought the "ergonomic chair" was a scam.

I think ergonomic chairs are good for people who have poor posture. If you have a strong core and sit up straight all the time, you can probably sit on just about anything and be fine.

(I'm not saying you're wrong. I think the real solution is that people should take better care of their physical selves. Certainly there are also people with particular conditions and do need the more ergonomic setup, but I expect that's a small percentage of the total.)


Well, I do lots of sport and can sit comfortably on hard ground meditating for quite some time. I still enjoy a good chair way more than something "normal" for any longer computer sessions.

If a chair is too comfortable, I just fall asleep in it. It doesn't make me work better.

A chair isn't any answer to poor posture. The answer is exercising your core muscles, being aware of your posture, and constantly correcting it.


Well, I found a good chair helps me with keeping a good posture. Otherwise, good advice.

For sure. $1000 Herman Miller Aeron has been worth every penny considering the time spent sat on it.

I've been on those fancy chairs when I worked at a faang.

Honestly, they aren't any better than the IKEA office chair I stole from my first house when I was a student (and that's been with me for the last 15 years). It probably cost less than 100 €/$.

Ikea stuff is really underrated in this sense.


Ergonomics is definitely something to skimp on!

> It’s hard to believe if you’re a person who moderates their own expenditures.

Yeah, it's hard to convey to people who've never been responsible for setting (or proposing) policy that it's not a game of optimizing the average result, but of minimizing the worst-case result.

You and I and most people are not out to arbitrage the company's resources but you and I and most people are also not the reason policy exists.

It was depressing to run into that reality myself as policy controls really do interfere sometimes in allowing people to access benefits the organization wants them to have, but the alternative is that the entire budget for perks ends up in the hands of a very few people until the benefit goes away completely.


Is it “soft fraud” when a manager at an investment bank regularly demands unreasonable productivity from their junior analysts, causing them to work late and effectively reduce their compensation rate? Only if the word “abuse” isn’t ambiguous and loaded enough for you!

Lying about a laptop being stolen is black and white. I'm not sure how you are trying to say that is ambiguous.

I don't know what the hell you mean by the term unreasonable. Are you under the impression that investment banking analysts do not think they will have to work late before they take the role?


> Lying about a laptop being stolen is black and white. I'm not sure how you are trying to say that is ambiguous.

I've been at startups where there's sometimes late night food served.

I've never been at a startup where there was an epidemic about lying about stolen hardware.

Staying just late enough to order dinner on the company, and theft by the employee of computer hardware plus lying about it, are not in the same category and do not happen with equal frequency. I cannot believe the parent comment presented these as the same, and is being taken seriously.


Nah the laptop and the dinner are exactly the same, they only differ in timing.

You can steal $2000 by lying about a stolen laptop or lying about working late. The latter method just takes a few months.


We're not discussing lying about working late, we're discussing actually working late.

The person way upthread said:

> people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.

That doesn't sound like actually working late?

(I still agree with you, though, that this isn't the equivalent of stealing a laptop, even if you do it enough to take home $2,000 worth of dinner.)


> Lying about a laptop being stolen is black and white.

Well, it was stolen. The only lie is by whom.


Your employer being unreasonable is not an excuse to defraud him in return.

Negotiate for better conditions. If agreement cannot be reached, find another job.


The pay and working hours are extremely well known to incoming jr investment bankers

Working late is official company policy in investment banking.

Is this meant to be a gotcha question? Yes, unpaid overtime is fraud, and employers commit that kind of fraud probably just as regularly as employees doing the things up thread.

none of it is good lol


> unpaid overtime is fraud

GP was talking about salaried employees, who are legally exempt from overtime pay. There is no rigid 40-hour ceiling for salary pay.

Salary compensation is typical for white-collar employees such as analysts in investment banking and private equity, associates at law firms, developers at tech startups, etc.


The overtime is assumed and included in their 6-figure salaries.

Overtime can't be assumed by definition. If it's an expectation, it should be written into the contracted working hours, and then it's not overtime. (Or if there are no contracted working hours, then overtime could only be defined in relation to legally required maximum working hours, in which case it can't be an expectation for employees to exceed these without appropriate compensation.)

> contracted working hours,

> legally required maximum working hours

Neither of these apply in the context of full-time salaried US investment banking jobs that the parent comment is referring to.

People work these jobs and hours because the compensation and career advancement can be extremely lucrative.

People who worry about things like limiting their work hours do not take these jobs.


Sure, but in that case they are not working 'overtime', they are just working in the absence of any effective regulations governing reasonable working hours.

2 things:

1. My brothers (I have a number of them) mostly work in construction somehow. It feels like most of them drive a VW Transporter, a large pickup or something, each carrying at least $30,000 in equipment.

Seeing people I work with get laptops that take multiple minutes to connect to a Postgres database that I connect to in seconds feels really stupid. (I'm old enough that I get what I need; they'd usually rather pay for a decent laptop than start a hiring process.)

2. My previous employer did something really smart:

They used to have a policy that you got a basic laptop and an inexpensive phone, but you could ask for more if you needed it. Which of course meant some people got nothing and some people got custom keyboards and whatnot.

That was replaced with a $1000 budget on your first day and $800 every year, which was meant to cover phones and everything you needed. You could also borrow from next year. So if someone felt they needed the newest iPhone or Samsung? Fine, save up one year (or borrow from next year) and you have it.

Others like me who don't care that much about phones could get a reasonably priced one + a good monitor for my upstairs office at home + some more gear.

And now the rules are the same for everyone, so even I get what I need. (I feel I'm hopeless when it comes to arguing my case with IT, but now it's a simple: do you have budget for it? Yes/no.)


$3,000 standing desks?? It's some wood, metal and motors. I got one from IKEA in about 2018 for 500 gbp and it's still my desk today. You can get Chinese ones now for about 150 gbp.

The people demanding new top spec MacBook Pros every year aren’t the same people requesting the cheapest Chinese standing desk they can find.

I can understand paying more for fast processors and so on but a standing desk just goes up and down. What features do the high end desks have that I am missing out on?

Furniture for managed office space has different requirements.

If someone's unstable motorized desk tips over and injures someone at the office, it's a big problem for the company.

A cheap desk might have more electrical problems. Potential fire risk.

Facilities has to manage furniture. If furniture is a random collection of different cheap desks people bought over the years they can't plan space without measuring them all. If something breaks they have to learn how to repair each unique desk.

Buying the cheapest motorized desk risks more time lost to fixing or replacing it. Saving a couple hundred dollars but then having the engineer lose part of a day to moving to a new desk and running new cables every 6 months while having facilities deal with disposal and installation of a new desk is not a good trade.


I went with Uplift desks which are not $150 but certainly sub $1000. I think what I was paying for was the stability/solidity of the desk, the electronics and memory and stuff is probably commodified.

$300 electronic leg kit from Amazon + Ikea top is pretty solid and has memory etc

Doesn't matter; some people just want whatever the company will spring for.

Stability and reliability.

Stability is a big one, but the feel of the desk itself is also a price point. You're gonna be paying a lot depending on the type of tabletop you get. ($100-1k+ just for the top)

Mine is very stable. Top is just some kind of board. It took a bit of damage from my cat's claws but that's not a risk most corporate offices have.

What price point did you buy at?

I paid a premium for my home height-adjustable desk because the frame and top are made in America, the veneer is much thicker than competitors, the motors and worm gears are reliable, and the same company makes coordinating office furniture.

The same company sells cheap imported desks too. Since my work area is next to the dining table in my open-plan apartment, I considered the better looks worth the extra money.


500 GBP in 2018. It looks functional but not stylish, which is all I needed. You make a good point about appearance: companies that want to create a certain impression for visitors are going to spring for better looking furniture.

If you buy from a dealer/manufacturer they come and set up the desk for you. You can also get stuff like really good sound-absorbing panels, better integrated electricity, and other stuff like that. If you buy system furniture like connected desks and cubicles it is probably the way to go.

Breaking news: "Trump tariffs live updates: Trump says US to tariff furniture imports following investigation"<https://finance.yahoo.com/news/live/trump-tariffs-live-updat...>

> individuals incurring expenses in the tens of thousands range

peanuts compared to their 500k TC


Very few companies pay $500K. Even at FAANG a lot of people are compensated less than that.

I do think a lot of this comment section is assuming $500K TC employees at employers with infinite cash to spend, though.


But at the FAANGy companies I’ve worked at this issue persists. Mobile engineers working on 3yo computers and seeing new hires compile 2x (or more) faster with their newer machines.

If they care that much about compile time, they would work on a desktop instead of a laptop.

Then the company would issue a desktop and a laptop, since they want engineers to be able to use computers in places other than their desk.

Yep, but then the laptop just becomes a screen and keyboard and can be the cheapest on the market. Remoting into your desktop to code is much more efficient than actually coding on a laptop, especially if you care about compile time. Then you can even set something compiling, shut your laptop and jump on your bike home, and have it done by the time you get there!

...and we're back to trying to convince a penny-wise pound-foolish company to buy twice the computing hardware for every developer.

Or just buy everyone desktops. Honestly I think laptops are completely superfluous for every business I've ever worked at. Nobody is truly getting value out of bringing a laptop to meetings, they just like them.

I think whatever companies you were at just didn't have very effective meetings. There's a time for "laptops down" and there's a time for laptops. If we can't prototype, brainstorm, outline ideas... why even have meetings in the first place?

> why even have meetings in the first place

Exactly. I personally have never been in a meeting which I thought was absolutely necessary. Except maybe new fire regs.


Nope, laptops are just very cheap thin clients to remote onto the desktops with much higher power. This gives the advantage of being able to leave things compiling whilst you shut your laptop at the end of the day.

Not only are most developers (let alone other employees) making nowhere near that, why should spending $500k mean you waste $10k? Even saving small amounts matters when you add it up.

Why waste? If you get more than 2% value increase out of your 10k it’s a net gain.

500k is not the average, and anyone at that level+ can get fancy hardware if they want it.

One, not everybody gets 500K TC.

Two, several tens of thousands are in the 5%-10% range. Hardly "peanuts". But I suppose you'll be happy to hear "no raise for you, that's just peanuts compared to your TC", right?


Netflix, at least in the Open Connect org, still had open-ended spending adjacent to whatever NTech provided (your issued laptop and remote-working stuff). It was very easy to get "exotic" hardware. I really don't think anyone abused it. This is an existence proof for the comment parents: it's not a startup, and I haven't seen engineers screwing the wheels off the bus anywhere I've ever worked.

There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.

Ehh. Neither of these are soft fraud. The former is outright law-breaking, the latter…is fine. They stayed till they were supposed to.


> the latter…is fine. They stayed till they were supposed to.

This is the soft fraud mentality: If a company offers meal delivery for people who are working late who need to eat at the office and then people start staying late (without working) and then taking the food home to eat, that’s not consistent with the policies.

It was supposed to be a consolation if someone had to (or wanted to, as occurred with a lot of our people who liked to sleep in) stay late to work. It was getting used instead for people to avoid paying out of pocket for their own dinners even though they weren’t doing any more work.

Which is why we can’t have nice things: People see these policies as an opportunity to exploit them rather than use them as intended.


Good grief, no. They got an extra hour of productive (or semi-productive; after 8 hours most people are, unsurprisingly, kind of worn down) time out of us while waiting for dinner to arrive, plus a bit of team-building as we commiserated over a meal about whatever was causing us to stay late. That more than offsets the cost of the food.

If an employee or team is not putting in the effort desired, that's a separate issue and there are other administrative processes for dealing with that.


Are you saying the mentality is offensive? Or is there a business justification I am missing?

Note that employers do this as well. A classic one is a manager setting a deadline that requires extreme crunches by employees. They're not necessarily compensating anyone more for that. Are the managers within their rights? Technically. The employees could quit. But they're shaving hours, days, and years off of employees without paying for it.


It’s basic expense fraud.

If a company policy says you can expense meals when taking clients out, but sales people started expensing their lunches when eating alone, it’s clearly expense fraud. I think this is obvious to everyone.

Yet when engineers are allowed to expense meals because they’re working late and eating at the office, and people who are neither working late nor eating at the office start expensing their meals, that’s expense fraud too.

These things are really not a gray area. It seems more obvious when we talk about sales people abusing budgets, but there’s a blind spot when we start talking about engineers doing it.


Frankly this sort of thing should be ignored, if not explicitly encouraged, by the company.

Engineers are very highly paid. Many are paid more than $100/hr if you break it down. If a salaried engineer paid the equivalent of $100/hr stays late doing anything, expenses a $25 meal, and during the time they stay late you get the equivalent of 20 minutes of work out of them (including intangibles like team bonding via just chatting with coworkers or chatting about some bug), then the company comes out ahead.

That you consider the above "expense fraud" is a fundamentally penny-wise, pound-foolish way to look at running a company. Like you say, it's not really a gray area. It's a feature, not a bug.


> Like you say, it's not really a gray area. It's a feature not a bug.

Luckily that comes down to the policy of the individual company and is not enforced by law. I am personally happy to pay engineers more so they can buy this sort of thing themselves and we don't open the company to this sort of abuse. Then it's a known cost, and the engineers can decide for themselves if they want to spend that $30 on a meal or something else.


To give them enough money to buy that $30 meal as a personal expense, you need to pay them around $50 in marginal comp expenses.

It can be a win for both sides for the employees to work an extra 30-90 minutes and have some team bonding and to feel like they’re getting a good deal. (Source: I did this for years at a place that comp’d dinner if you worked more than 8 hours AND past 6 PM; we’d usually get more than half the team staying for the “free” food.)


I have found that the success of things like this depends greatly on many factors: office type, location, team morale, management style, individual personalities, even mean age.

I have worked in places where the exact opposite of what you describe happens. As OP says, people just stop working at 6 and just start reading reddit or scrolling their phones. No team bonding and chat because everyone is wiped out from a hard day. Just people hanging around, grabbing their food when it arrives, and leaving.

We too had more than half the team staying for the “free” food, but they definitely didn't do much work whilst they were there.


> It’s basic expense fraud.

I'm making the case that mandatory unpaid overtime is effectively wage theft. It is legal in the US because half of jobs there are "exempt" from the usual overtime protections. There's no ethical reason for that, just political ones.

At any rate, I think people who want to crack down on meal expenses out of a sense of justice should get at least as annoyed by employers taking advantage of their employees in technically allowed ways.


Tragedy of the Commons is a real thing. The goto solution that most companies use is to remove all privileges for everyone. But really, this is a cultural issue. This is how company culture is lost when a company gets larger.

A better option is for leadership to enforce culture by reinforcing expectations and removing offending employees if need be to make sure that the culture remains intact. This is a time sink, without a doubt. For leadership to take this on, it has to believe that the unmeasurable benefit of a good company culture outweighs the drag on leadership's efficiency.

Company culture will always be actively eroded in any company, and part of the job of leadership is to enforce culture so that it can be a defining factor in the company's success for as long as possible.


soft fraud mentality

This isn’t about fraud anymore. It’s about how suspiciously managers want to view their employees. That’s a separate issue (but not one directed at employees).


If a company says you have permission to spend money on something for a purpose, but employees are abusing that to spend money on something that clearly violates that stated purpose, that’s into fraud territory.

This is why I call it the soft fraud mentality: When people see some fraudulent spending and decide that it’s fine because they don’t think the policy is important.

Managers didn’t care. It didn’t come out of their budget.

It was the executives who couldn’t ignore all of the people hanging out in the common areas waiting for food to show up and then leaving with it all together, all at once. Then nothing changed after the emails reminding them of the purpose of the policy.

When you look at the large line item cost of daily food delivery and then notice it’s not being used as intended, it gets cut.


This might come as a bit of a surprise to you, but most (really all) employees are in it for money. So if you are astonished that people optimize for their financial gain, that’s concerning. That’s why you implement rules.

If you start trying to tease apart the motivations people have even if they are following those rules, you are going to end up more paranoid than Stalin.


> This might come as a bit of a surprise to you

> So if you are astonished that people optimize for their financial gain, that’s concerning.

I’m not “surprised” nor “astonished” nor do you need to be “concerned” for me. That’s unnecessarily condescending.

I’m simply explaining how these generous policies come to an end through abuse.

You are making a point in favor of these policies: Many will see an opportunity for abuse and take it, so employers become more strict.


I find your tone commendable, and I hope I can extend you the same courtesy of being respectful while disagreeing.

The idea that a company offering food in some capacity can be seen as generous is, at best, confusing and possibly naïve. A company does this because it expects such a policy will extract more work for less pay. There is no benevolence in the relationship between a company and an individual — only pure, raw self-interest.

In my opinion, the best solution is not to offer benefits at all, but simply to overpay everyone. That’s far more effective, since individuals then spend their own money as they choose, and thus take appropriate care of it.


> but most (really all) employees are in it for money

Yes, but some also have a moral conscience and were brought up to not take more than they need.

If you are not one of these types of people, then not taking undue advantage of an offer like free meals probably seems like an alien concept.

I try to hire more people like this, it makes for a much stronger workforce when people are not all out to get whatever they can for themselves and look out for each others interests more.


This is disingenuous but soft-fraud is not a term I’d use for it. Fraud is a legal term. You either commit fraud or you do not. There is no “maybe” fraud—you comply with a policy or law or you don’t.

As you mentioned, setting policy that isn’t abused is hard. But abuse isn’t fraud—it’s abuse—and abuse is its own rabbit hole that covers a lot of these maladaptive behaviors you are describing.


It’s called expense fraud.

I call the meal expense abuse “soft fraud” because people kind of know it’s fraud, but they think it’s small enough that it shouldn’t matter. Like the “eh that’s fine” commenter above: They acknowledged that it’s fraud, but also believe it’s fine because it’s not a major fraud.

If someone spends their employer’s money for personal benefit in a way that is not consistent with the policies, that is legally considered expense fraud.

There was a case local to me where someone had a company credit card and was authorized to use it for filling up the gas tank of the company vehicle. They started getting in the habit of filling up their personal vehicle’s gas tank with the card, believing that it wasn’t a big deal. Over the years their expenses weren’t matching the miles on the company vehicle and someone caught on. It went to court and the person was liable for fraud, even though the total dollar amount was low five figures IIRC. The employee tried to argue that they used the personal vehicle for work occasionally too, but personal mileage was expensed separately so using the card to fill up the whole tank was not consistent with policy.

I think people get in trouble when they start bending the rules of the expense policy thinking it’s no big deal. The late night meal policy confounds a lot of people because they project their own thoughts about what they think the policy should be, not what the policy actually is.


Fraud is also used colloquially and it doesn't seem we're in a court of justice rn.

Where do you even get a $3,000 standing desk? I don't even compare prices and I got mine from Amazon for $200-$300. Sure, the quality might not be the best, but I just can't see people buying $3,000 standing desks.

This desk (used to?) fit the budget, for example: https://www.architonic.com/en/p/holmris-b8-milk-classic-1070...

Essentially, you pay a lot for fancy design.


Early in the pandemic I bought a decent motorized standing desk for $520. It's nice, but I could very easily imagine a desk that costs 6x that. I would never buy that desk, but some people go for that sort of thing.

Seriously? You can't budget an extra $10-20k for hardware upgrades for the software engineer that you are paying $300k a year for?

Not to mention the return on investment you get from retaining the talent and the value they add to your product and organization.

If you walk into a mechanic shop, just the Snap On primary tool kit is like 50k.

It always amazes me that companies go cheap on basic tools for their employees, yet waste millions in pointless endeavors.


I know a FAANG company whose IT department, for the last few years, has been "out of stock" for SSD drives over 250GB. They claim it's a global market issue (it's not). There's constant complaining in the chats from folks who compile locally. The engineers make $300k+ so they just buy a second SSD from Amazon on their credit cards and self-install them without mentioning it to the IT dept. I've never heard a rational explanation for the "shortage" other than chronic incompetence from the team supplying engineers with laptops/desktops. Meanwhile, spinning up a 100TB cloud VM has no friction whatsoever there. It's a cushy place to work though, so folks just accept the comically dumb aspects everyone knows about.

I've wondered if that's to make dealing with full disk backup/forensic collections/retention legal hold/etc easier: keep the official amount of end-user device storage to a minimum. And/or it forces the endpoint to depend on network/cloud storage, giving better business intelligence on what data is "hot".

Unfortunately, there isn’t much you can do other than fuss at some random director or feedback form. Or quit, I guess. But that seems a little extreme.

Anyway, your choices of what to do about idiocy like this are pretty limited.


I think you're maybe underestimating the aggregate cost of totally unconstrained hardware/travel spending across tens or hundreds of thousands of employees, and overestimating the benefits. There need to be some limits or speedbumps to spending, or a handful of careless employees will spend the moon.

It's the opposite.

You're underestimating the scope of time lost by losing a few percent in productivity per employee across hundreds of thousands of employees.

You want speed limits not speed bumps. And they should be pretty high limits...


I don't believe anyone is losing >1% productivity from these measures (at FANG employers).

When Apple switched to their own silicon, I was maintaining the build systems at a scaleup.

After I saw the announcement, I immediately knew I needed to try out our workflows on the new architecture. There was just no way that we wouldn't have x86_64 as an implicit dependency all throughout our stack. I raised the issue with my manager and the corporate IT team. They acknowledged the concern but claimed they had enough of a stockpile of new Intel machines that there was no urgency and engineers wouldn't start to see the Apple Silicon machines for at least another 6-12 months.

Eventually I do get allocated a machine for testing. I start working through all the breakages but there's a lot going on at the time and it's not my biggest priority. After all, corporate IT said these wouldn't be allocated to engineers for several more months, right? Less than a week later, my team gets a ticket from a new-starter who has just joined and was allocated an M1 and of course nothing works. Turns out we grew a bit faster than anticipated and that stockpile didn't last as long as planned.

It took a few months before we were able to fix most of the issues. In that time we ended up having to scavenge under-specced machines from people in non-technical roles. The amount of completely avoidable productivity wasted from people swapping machines would have easily reached into the person-years. And of course myself and my team took the blame for not preparing ahead of time.

Budgets and expenditure are visible and easy to measure. Productivity losses due to poor budgetary decisions, however, are invisible and extremely difficult to measure.


> I raised the issue with my manager and the corporate IT team.

> And of course myself and my team took the blame for not preparing ahead of time.

If your initial request was not logged and then able to be retrieved by yourself in defence, then I would say something is very wrong at your company.


> able to be retrieved by yourself in defence

You are suggesting a level of due process that is wildly optimistic for most companies. If you are an IC, such blame games are entirely resolved behind closed doors by various managers and maybe PMs. Your manager may or may not ask you for supporting documentation, and may or may not be able to present it before the "retrospective" is concluded.


I could perhaps have been clearer with that point - this was more about public perception. People have a tendency to jump to conclusions - build system is not working, must be the build system team's fault.

But regardless, I already left there a few years back.


Actually, just the time spent compiling or waiting for other builds to finish makes investing in the top-level MacBook Pro worth it every 3 years. I think the calculation assumed something like 1-2% of my time was spent compiling, and I cost like $100k per year.

Who is still doing builds on developer laptops instead of a remote build farm? You can have so much more compute available when it doesn't need to be laptop form factor.

Scaling cuts both ways. You may also be underestimating the aggregate benefits of slight improvements added up across hundreds or thousands of employees.

For a single person, slight improvements added up over regular, e.g., daily or weekly, intervals compound to enormous benefits over time.

XKCD: https://xkcd.com/1205/


The breakeven rate on developer hardware is based on the value a company extracts, not their salary. Someone making $X/year directly has a great deal of overhead in terms of office space, managers, etc., and above that the company only employs them because the company gains even more value.

Saving 1 second/employee/day can quickly be worth $10+/employee/year (or even several times that). But you rarely see companies optimizing their internal processes based on that kind of perceived benefit.
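
As a rough sanity check on that figure (a back-of-envelope with my own assumptions of ~230 workdays/year and ~$150/hr fully loaded cost, not numbers anyone above quoted):

    # 1 second saved per day, valued at a fully loaded hourly rate
    awk 'BEGIN { print 230 * (1/3600) * 150 }'   # ≈ $9.6 per employee per year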

Water cooler placement in a cube farm comes to mind as a surprisingly valuable optimization problem.


The cost of a good office chair is comparable to a top tier gaming pc, if not higher.

Not for an enterprise buying (or renting) furniture in bulk it isn’t. The chair will also easily last a decade and be turned over to the next employee if this one leaves… unlike computer hardware which is unlikely to be reused and will historically need to be replaced every 24-36 months even if your dev sticks around anyway.

> computer hardware which is unlikely to be reused and will historically need to be replaced every 24-36 months

That seems unreasonably short. My work computer is 10 years old (which is admittedly the other extreme, and far past the lifecycle policy, but it does what I need it to do and I just never really think about replacing it).


> My work computer is 10 years old... but it does what I need it to do and I just never really think about replacing it

It depends what you're working on. My work laptop is 5 years old, and it takes ~4 minutes to do a clean compile of a codebase I work on regularly. The laptop I had before that (which would now be around 10 years old) would take ~40 minutes to compile to the same codebase. It would be completely untenable for me to do the job I do with that laptop (and indeed I only started working in the area I do once I got this one).


Right, the employee with unlimited spend would want to sit in a used chair.

That’s more or less my point from a different angle: unlimited spend isn’t reasonable, and the justification “but $other_thing is way more expensive!” is often incorrect.

An Aeron chair that's not been whacked with baseball bats looks pretty much the same after many, many years.

Are there any FANG employers unwilling to provide good office chairs? I think even cheap employers offer these.

I think my employer had a contest to see which of 4 office chairs people liked the most, then they bought the one that everyone hated. I’m not quite sure anymore what kind of reason was given.

There are many that won’t even assign desks, much less provide decent chairs. Amazon and LinkedIn are two examples I know from personal experience.

It's not abuse to open 500 Chrome tabs if they're work-related and increase my productivity.

I am 100x more expensive than the laptop. Anything the laptop can do instead of me is something the laptop should be doing instead of me.


I agree with your overall point, but:

> But that abuse is really capped out at a few thousand

That abuse easily goes into the tens of thousands of dollars, even several hundred thousand, even at a relatively small shop. I just took a quick look at Apple's store, and wow! The most expensive 14" MacBook Pro I could configure (minus extra software) tops out at a little over $7,000! The cheapest is at $1,600, and a more reasonably-specced, mid-range machine (that is probably perfectly sufficient for dev work), can be had for $2,600.

Let's even round that up to $3,000. That's $4,000 less than the high end. Even just one crazy-specced laptop purchase would max out your "capped out at a few thousand" figure.

And we're maybe not even talking about abuse all the time. An employee might fully earnestly believe that they will be significantly more productive with a spec list that costs $4,000, when in reality that $3,000 will be more or less identical for them.

Multiply these individual choices out to a 20 or 40 or 60 person team, and that's real money, especially for a small startup. And we haven't even started talking about monitors and fancy ergonomic chairs and stuff. 60 people spending on average $2,000 each more than they truly need to spend will cost $120k. (And I've worked at a place that didn't eliminate their "buy whatever you think you'll need" policies until they had more than 150 employees!)


Just to do web development? I regularly go into swap running everything I need on my laptop. Ideally I'd have VScode, webpack, and jest running continuously. I'd also occasionally need playwright. That's all before I open a chrome tab.

This explains a lot about why the modern web is the way it is.

I do think a lot of software would be much better if all devs were working on hardware that was midrange five years ago and over a flaky WiFi connection.

Always amuses me when I see someone use web development as an example like this. Web dev is very easily in the realm of game dev as far as required specs for your machine; otherwise you're probably not doing much actual web dev. If anything, engineers doing nothing but running little Java or Python servers don't need anything more than a Pi and a two-color external display to do their job.

What would be a good incentivizing strategy to prevent overspending on hardware? I can think of giving a budget where the amount not spent is paid out to them (but when the salary is that high it might not make sense), or having an internal dashboard where everybody can see everybody's spending on hardware, so people feel bad when they order too much.

Probably better to just request an unreviewed but detailed justification, and then monitor spend and police the outliers after the fact (or when requesting above an invisible threshold, e.g. any fully-specced Apple products).

The outliers will likely be two kinds:

1) People with poor judgement or just an outright fraudulent or entitled attitude. These people should be watched for performance issues and managed out as needed. And their hardware reclaimed.

2) People that genuinely make use of high end hardware, and likely have a paper trail of trying to use lower-end hardware and showing that it is inefficient.

This doesn't stop the people that overspend slightly so that they are not outliers, but those people are probably not doing substantial damage.


It's straightforward to measure this; start a stopwatch every time your flow gets interrupted by waiting for compilation or your laptop is swapping to keep the IDE and browser running, and stop it once you reach flow state again.

We managed to just estimate the lost time and management (in a small startup) was happy to give the most affected developers (about 1/3) 48GB or 64GB MacBooks instead of the default 16GB.

At $100/hr minimum (assuming lost work doesn't block anyone else) it doesn't take long for the upgrades to pay off. The most affected devs were waiting an hour a day sometimes.

This applies to CI/CD pipelines too; it's almost always worth increasing worker CPU/RAM while the reduction in time is scaling anywhere close to linearly, especially because most workers are charged by the minute anyway.
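
For anyone who wants to reproduce that kind of estimate, the same back-of-envelope at the shell (using the $100/hr and hour-a-day figures above; the ~220 workdays and ~$1,000 upgrade price are my assumptions):

    echo $(( 100 * 1 * 220 ))   # ≈ $22,000/yr of waiting per affected dev at $100/hr, 1 hr/day
    echo $(( 22000 / 1000 ))    # ≈ 22x the cost of a one-off ~$1,000 RAM upgrade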


FANG is not monolithic. Amazon is famously cheap. So is Apple, in my opinion, based on what I have heard (you get whatever random refurbished hardware is available, not some standardized thing; sometimes with 8GB RAM, sometimes something nicer). Apple is also famously cheap on their compensation. Back in the day they proudly said shit to the effect of "we deliberately don't pay you top of the market because you have to love Apple", to which the only valid answer is "go fuck yourself."

Google and Facebook I don't think are cheap for developers. I can speak firsthand for my past Google experience. You have to note that the company has like 200k employees and there needs to be some controls and not all of the company are engineers.

Hardware -> for the vast majority of stuff, you can build with blaze (think bazel) on a build cluster and cache, so local CPU is not as important. Nevertheless, you can easily order other stuff should you need to. Sure, if you go beyond the standard issue, your cost center will be charged and your manager gets an email. I don't think any decent manager would block you. If they do, change teams. Some powerful hardware that needs approval is blanket whitelisted for certain orgs that recognize such need.

Trips -> Google has this interesting model where you have a soft cap for trips, and if you don't hit the cap, you pocket half of the trip credit in your account, which you can choose to spend later when you are over cap or want to get something slightly nicer the next time. Also, they have clear and sane policies on mixing personal and corporate travel. I encourage everyone to learn about and deploy things like that in their companies. The caps are usually not unreasonable, but if you do hit them, it is again an email to your management chain, not some big deal. Never seen it blocked. If your request is reasonable and your manager is shrugging about this stuff, that should reflect on them being cheap, not the company policy.


iOS development is still mostly local which is why most of the iOS developers at my previous Big Tech employer got Mac Studios as compiler machines in addition to their MacBook Pros. This requires director approval but is a formality.

I read Google is now issuing Chromebooks instead of proper computers to non-engineers, which has got to be corrosive to productivity and morale.


Google-issued Chromebooks are not crap with 2GB of RAM and a Celeron. There were even engineers who voluntarily preferred them. From a security standpoint they are superb.

If you're not a developer and everything you need for your job runs in a browser, what's wrong with a Chromebook?

And has the upside of not having to force an antivirus or Crowdstrike or similar corporate spyware.

> Chromebooks ... to non-engineers

"AI" (Plus) Chromebooks?


Google used to be so un-cheap they had a dedicated ergo lab room where you could try out different keyboards.

They eventually became so cheap they blanket paused refreshing developer laptops...


Yahoo was cheap/stingy/cost-conscious as hell. They still had a well-stocked ergo team, at least for the years I was there. You'd schedule an ergo consult during new hire orientation, and you'd get a properly sized seat and your desk height adjusted if needed, etc. Lots of ergo keyboards, although I didn't see a lot of Kinesis back then.

Proper ergo is a cost-conscious move. It helps keep your employees able to work, which saves on hiring and training. It reduces medical expenses, which affects the bottom line because large companies are usually self-insured; they pay a medical insurance company only to administer the plan, not for insurance --- claims are paid from company money.


Some BigCos would benefit from <Brand> version numbers to demarcate changes in corporate leadership, culture and fiscal policy.

The soft cap thing seems like exactly this kind of penny-wise, pound-foolish behavior though. I’ve seen people spend hours trying to optimize their travel to hit the cap — or dealing with flight changes, etc. that come from the “expense the flight later” model.

All this at my company would be a call or chat to the travel agent (which, sure, kind of a pain, but they also paid for dedicated agents so wait time was generally good).


> sometimes with 8GB RAM

Apple have long thought that 8GB of RAM is good enough for anything, and will continue to think so for some time.


> Back in the day they proudly said shit to the effect of "we deliberately don't pay you top of the market because you have to love Apple" to which the only valid answer is "go fuck yourself."

So people started slacking off, because "you have to love your employees"?


Not sure what you are talking about re amzn.

I have a pretty high end MacBook Pro, and that pales in comparison to the compute I have access to.


The OP was talking beyond just compute hardware. Stuff like this: https://www.reddit.com/r/womenintech/comments/1jusbj2/amazon...

That’s fair criticism. I only corrected the hardware aspect of it all.

All of OPs posts in that thread are blatantly Chat GPT output

Because.. em-dashes? As many others have mentioned, ios/mac have auto em-dashes so it's not really a reliable indicator.

It’s so annoying that we’ve lost a legit and useful typographic convention just because some people think that AI overusing it means that all uses indicate AI.

Sure, I’ve stopped using em-dashes just to avoid the hassle of trying to educate people about a basic logical fallacy, but I reserve the right to be salty about it.


I find adding some typos and 1 or 2 bad grammer things lets you get away with whatever you want

> 1 or 2 bad grammer things

1 or 2 bed gamer things


Several things:

1) Em-dashes

2) "It's not X, it's Y" sentence structure

3) Comma-separated list that's exactly 3 items long


>1) Em-dashes

>3) Comma-separated list that's exactly 3 items long

Proper typography and hamburger paragraphs are canceled now because of AI? So much for what I learned in high school English class.

>2) "It's not X, it's Y" sentence structure

This is a pretty weak point because it's n=1 (you can check OP's comment history and it's not repeated there), and that phrase is far more common in regular prose than some of the more egregious ones (eg. "delve").


You sound like a generated message from a corporate reputation AI defense bot

How do you know someone worked at Google?

Don’t worry, they’ll tell you


With compiler development work, a low end machine will do just fine, as long as it has a LARGE monitor. (Mine is 3840x2160, and I bought a satellite monitor to extend it.)

P.S. you can buy a satellite monitor often for $10 from the thrift store. The one I bought was $10.

I don't buy used keyboards because they are dirty and impossible to clean.


> highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse.

Why is that abuse? Having many open browser tabs is perfectly legitimate.

Arguably they should switch from Chrome to Safari / lobby Google to care about client-side resource use, but getting as much RAM as possible also seems fine.


This is especially relevant now that docker has made it easy to maintain local builds of the entire app (fe+be). Factor in local AI flows and the RAM requirements explode.

I have a whisper transcription module running at all times on my Mac. Often, I'll have a local telemetry service (langfuse) to monitor the 100s of LLM calls being made by all these models. With AI development it isn't uncommon to have multiple background agents hogging compute. I want each of them to be able to independently build, host, and test their changes. The compute load adds up quickly. And I would never push agent code to a cloud env (not even a preview env) because I don't trust them like that, and neither should you.

Anything below an M4 pro 64GB would be too weak for my workflow. On that point, Mac's unified VRAM is the right approach in 2025. I used windows/wsl devices for my entire life, but their time is up.

This workflow is the first time I have needed multiple screens. Pre-agentic coding, I was happy to work on a 14 inch single screen machine with standard thinkpad x1 specs. But, the world has changed.


> On that point, Mac's unified VRAM is the right approach in 2025. I used windows/wsl devices for my entire life, but their time is up.

AMD's Strix Halo can have up to 128GB of unified RAM, I think. The bandwidth is less than half the Mac one, but it's probably going to accelerate.

Windows doesn't inherently care about this part of the hardware architecture.


Not providing 2 monitors to those who want is hare-brained. As far as I'm concerned, 2 monitors = 2x more efficient working.

Isn't it about equal treatment? You can't buy one person everything they want, just because they have high salary, otherwise the employee next door will get salty.

I previously worked at a company where everyone got a budget of ~$2000. The only requirement was you had to get a mac (to make it easier on IT I assume), the rest was up to you. Some people bought a $2000 macbook pro, some bought a $600 mac mini and used the rest on displays and other peripherals.

Equality doesn't have to mean uniformity.


I saw this tried once and it didn’t work.

Some people would minimize the amount spent on their core hardware so they had money to spend on fun things.

So you’d have to deal with someone whose 8GB RAM cheap computer couldn’t run the complicated integration tests but they were typing away on a $400 custom keyboard you didn’t even know existed while listening to their AirPods Max.


I mean; looks like someone volunteered to make the product work on low spec machines. That's needed.

I've been on teams where corporate hardware is all max spec, 4-5 years ahead of common user hardware, and provided phones are all flagships replaced every two years. The product works great for corporate users, but not for users with earthly budgets. And they wonder how competitors swallow the market in low-income countries.


> I mean; looks like someone volunteered to make the product work on low spec machines. That's needed.

The developer integration tests don’t need to run on a low spec machine. That is not needed.


That's probably another reason why we were limited to a set menu of computer options.

I've often wondered how a personal company budget would work for electrical engineers.

At one place I had a $25 no question spending limit, but sank a few months trying to buy a $5k piece of test equipment because somebody thought maybe some other tool could be repurposed to work, or we used to have one of those but it's so old the bandwidth isn't useful now, or this project is really for some other cost center and I don't work for that cost center.

Turns out I get paid the same either way.


If we're talking about rich faang type companies, no, it's not about equal treatment. These companies can afford whatever hardware is requested. This is probably true of most companies.

Where did this idea about spiting your fellow worker come from?


That doesn’t matter. If I’m going to spend 40% of my time alive somewhere, you bet a requirement is that I’m not working on ridiculously outdated hardware. If you are paying me $200k a year to sit around waiting for my PC to boot up, simply because Joe Support that makes 50k would get upset, that’s just a massive waste of money.

I don't think so. I think mostly just keeping spend down in aggregate.

"even the rich FANG types"

I think you wanted to say "especially". You're exchanging clearly measurable amounts of money for something extremely nebulous like "developer productivity". As long as the person responsible for spend has a clear line of view on what devs report, buying hardware is (relatively) easy to justify.

Once the hardware comes out of a completely different cost center - a 1% savings for that cost center is promotion-worthy, and you'll never be able to measure a 1% productivity drop in devs. It'll look like free money.


Multi-core operations like compiling C/C++ could benefit.

Single thread performance of 16-core AMD Ryzen 9 9950X is only 1.8x of my poor and old laptop's 4-core i5 performance. https://www.cpubenchmark.net/compare/6211vs3830vs3947/AMD-Ry...

I'm waiting for >1024 core ARM desktops, with >1TB of unified gpu memory to be able to run some large LLMs with

Ping me when someone builds this :)


Yes, just went from an i7-3770 (12 years old!) to a 9900X, as I tend to wait for a doubling of single-core performance before upgrading (got through a lot of PCs in the 386/486 era!). It's actually only 50% faster according to cpubenchmark [0] but is twice as fast in local usage (multithread is reported about 3 times faster).

Also got a Mac Mini M4 recently and that thing feels slow in comparison to both these systems - likely more of a UI/software thing (only use M4 for xcode) than being down to raw CPU performance.

[0] https://www.cpubenchmark.net/compare/Intel-i9-9900K-vs-Intel...


M4 is amazing hardware held up by a sub-par OS. One of the biggest bottlenecks when compiling software on a Mac is notarization, where every executable you compile causes a HTTP call to Apple. In addition to being a privacy nightmare, this causes the configure step in autoconf based packages to be excruciatingly slow.

They added always-connected DRM to software development, neat

Exactly. They had promised to make notarization opt-out but reneged.

Notarization isn’t called for the vast majority of builds on the vast majority of build systems.

Your local dev builds don’t call it or require it.

It’s only needed for release builds, where you want it notarized (required on iOS, highly recommended for macOS). I make a Mac app and I call the notarization service once or twice a month.


Does this mean that compilation fails without an internet connection? If so, that's horrifying.

Yes, of course it does, isn't it nice?

Even better: if you want to automate the whole notarization thing, you don't have a "nice" notarize-this-thing command that blocks until it's notarized and fails if there's an issue. You send a notarization request... and wait, and then you can write a nice for/sleep/check loop in a shell script to figure out whether the notarization finished and whether it did so successfully. From time to time the error/success message changes, so that script will of course break every so often; have to keep things interesting.

Xcode does most of this as part of the project build - when it feels like it, that is. But if you want to run this in CI it's a ton of additional fun.


None of this comment is true.

Compilation works fine without notarization. It isn't called by default for the vast majority of compilations. It is only called if you submit to an App Store, or manually trigger notarization.

The notarization command definitely does have the wait feature you claim it doesn't: `xcrun notarytool ... --wait`.
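
For reference, a hedged sketch of what a scripted notarization step can look like with notarytool (the profile name is a placeholder; check `xcrun notarytool --help` on your Xcode version):

    # submit, block until Apple returns a verdict, then staple the ticket to the app
    xcrun notarytool submit MyApp.zip --keychain-profile "NOTARY_PROFILE" --wait
    xcrun stapler staple MyApp.app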


Wait wait wait wait wait. So you're saying that a configure script that compiles a 5-line test program (and does this 50 times) to check if a feature is present or a compiler flag works... will have to call out to Apple for permission to do so??

Ugh. Disgusting. So glad I stopped using macOS years ago. (Even if this isn't actually true... still glad I stopped using Apple's we-know-better-than-you OS years ago.)

It is amazing to me that people put up with this garbage and don't use an OS that respects them more.


I jumped ahead about 5 generations of Intel, when I got my new laptop and while the performance wasn't much better, the fact that I changed from a 10 pound workstation beast that sounded like a vacuum cleaner, to a svelte 13 inch laptop that works with a tiny USB C brick, and barely runs its fans while being just as fast made it worthwhile for me.

Tangential: TIL you can compile the Linux kernel in < 1 minute (on top-spec hardware). Seems it’s been a while since I’ve done that, because I remember it being more like an hour or more.

I suspect that's a very slimmed-down .config and probably only e.g. `make bzImage`, versus something like a Debian kernel .config with modules, headers, etc. A full Ubuntu kernel takes quite a bit longer on my 5950X, which is admittedly quite a bit slower than a 9950X but still no laggard. I'll time it and update this comment later...

ETA: After dropping caches it took 18m40s to build Ubuntu mainline 6.16.3 with gcc-14 on my 5950X, with packages for everything including linux-doc, linux-rust-lib, linux-source, etc. I'd expect the same operation to take about 11m on a 9950X.
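
If anyone wants to see the gap themselves, a rough sketch (these are standard kernel make targets; times will vary wildly by machine):

    # slim config, kernel image only -- this is where the "under a minute" numbers come from
    make defconfig && time make -j"$(nproc)" bzImage

    # distro-style config with most things built as modules -- expect tens of minutes
    # cp /boot/config-"$(uname -r)" .config && make olddefconfig && time make -j"$(nproc)"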

It's also not clear whether OP used Clang or GCC to build the kernel. Clang would be faster.


I remember I was blown away by some machine that compiled it in ~45 minutes. Pentium Pro baby! Those were the days.

Painful memories of trying to build the kernel on my 486DX2/50, letting it run overnight and waking up to a compile-time error or non-booting kernel...

My memory must be faulty, then, because I was mostly building it on an Athlon XP 2000+, which is definitely a few generations newer than a Pentium Pro.

I’m probably thinking of various other packages, since at the time I was all-in on Gentoo. I distinctly remember trying to get distcc running to have the other computers (a Celeron 333 MHz and Pentium III 550 MHz) helping out for overnight builds.

Can’t say that I miss that, because I spent more time configuring, troubleshooting, and building than using, but it did teach me a fair amount about Linux in general, and that’s definitely been worth it.


Linux kernel compilation time depends heavily on what you're compiling though. You can have wildly different compilation times just by enabling or disabling some drivers or subsystems.

Yep - I have a 9950X desktop. Building the kernel as part of NixOS builds almost every possible option as a module - for ARM64 that takes something like 15 minutes.

Let's also not forget that kernel was not stagnant, but grew a lot over the years.

Also true! Hard to say which was moving faster, though: CPU speed increases & RAM amount increase, or the additional code and complexity written into the kernel.

Spinning hard drives were soooo slow! Maybe very roughly an order of magnitude from SSDs and an order of magnitude from multi-core?

i think you nailed the central point.

rotational disks usually top out at ~85 MB/sec, with seek time being up to 12 msec for consumer drives and ~6 msec for enterprise drives (15k rpm).

ssd could saturate the sata bus and would top out at 500-550 MB/sec, with essentially no seek time. latency would be anything between 50 and 250 microseconds (depending on the operation).

nvme disks instead can sometimes fully utilise a pci-e lane and reach multiple gigabytes/second in sequential read (ie: pci-e gen5 nvme disks can peak at 7 GB/sec), with latencies as low as 10–30 microseconds for reads and 20–100 microseconds for writes.

As compiling the kernel meant/means doing a lot of i/o on small files, you can see why disk access is a huge factor.

A friend of mine did work on LLVM for his PhD... The first thing he did when he got funding for his PhD was to get a laptop with as much memory as possible (i think it was 64gb on a dell workstation, at the time) and mount his work directory in tmpfs.
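
The tmpfs trick is easy to reproduce today; a minimal sketch, assuming you have the RAM to spare and remember that the contents vanish on reboot:

    mkdir -p ~/ramdisk
    sudo mount -t tmpfs -o size=16G tmpfs ~/ramdisk
    # check out and build inside ~/ramdisk, then copy artifacts somewhere persistent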


I'd like to know why making debian packages containing the kernel now takes substantially longer than a clean build of the kernel. That seems deeply wrong and rather reduces the joy of finding the kernel builds so quickly.

Are you building the Debian kernel with the same .config as when you build the kernel directly? If not, that's probably why. A Debian kernel build will build basically everything, mostly as modules. A kernel tailored for your specific hardware and needs will be much faster to build.

Also disabling building the _dbg package for the Debian build will significantly speed things up. Building that package takes a strangely long amount of time.
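
A hedged sketch of that second suggestion for a mainline tree (the exact debug-info config symbols have moved around between kernel versions, so treat this as a starting point):

    # debug info is what makes the -dbg package (and linking) so slow
    scripts/config --disable DEBUG_INFO
    make olddefconfig
    make -j"$(nproc)" bindeb-pkg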


In 2003, Egenera boxes could compile Linux in around 15 seconds.

Even my dual 7402 with 96 threads and 512 GiB of RAM can't compile a maximal Linux config x86 build in RAM in under 3 minutes.

What I find helps repeated builds is maintaining a memcache server instance for sccache.
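
In case it's useful, a sketch of that sccache setup (the memcached variable name has changed across sccache releases, so check the docs for the version you run):

    # older releases use SCCACHE_MEMCACHED, newer ones SCCACHE_MEMCACHED_ENDPOINT
    export SCCACHE_MEMCACHED="tcp://127.0.0.1:11211"
    export RUSTC_WRAPPER=sccache                 # Rust builds
    export CC="sccache gcc" CXX="sccache g++"    # C/C++ builds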


> Top end CPUs are about 3x faster than the comparable top end models 3 years ago

I wish that were true, but the current Ryzen 9950 is maybe 50% faster than the two generations older 5950, at compilation workloads.


Not even. Probably closer to 30%, and that's if you are doing actual many-core compile workloads on your critical path.

Phoronix has actual benchmarks: https://www.phoronix.com/review/amd-ryzen-9950x-9900x/2

It's not 3x, but it's most certainly above 1.3x. Average for compilation seems to be around 1.7-1.8x.


Oh, fair enough -- you are correct compared to 5950x. I was mentally comparing my personal experience with 7950x. Yes 1.7-1.9x seems fair for a bump from 5950x to 9950x (albeit at a higher power draw).

The author used kernel compilation as a benchmark. Which is weird, because for most projects a build process isn't as scalable as that (especially in the node.js ecosystem), even less after a full build.

Depends on the workload.

I spent a few grand building a new machine with a 24-core CPU. And, while my gcc Docker builds are MUCH faster, the core Angular app still builds a few seconds slower than on my years old MacBook Pro. Even with all of my libraries split into atoms, built with Turbo, and other optimizations.

6-10 seconds to see a CSS change make its way from the editor to the browser is excruciating after a few hours, days, weeks, months, and years.


Web development is crazy. Went from a Java/C codebase to a webdev company using TS. The latter would take minutes to build. The former would build in seconds and you could run a simulated backtest before the web app would be ready.

It blew my mind. Truly this is more complicated than trading software.


For this reason the TypeScript 7 compiler will be written in Go.

That won't help Angular, because its design doesn't lend itself to such speedups. The compiler produces a change detector function for every reactive structure, such as a class field, so the final app is necessarily huge.


No ccache?
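
For the C/C++ side it's usually a one-line change to try (a sketch, assuming a CMake project and an installed ccache; it won't do anything for the Angular part):

    cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ..
    ccache -s   # hit/miss stats after a rebuild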

A lot of this seems to have gotten a lot better with esbuild for me, at least, and maybe tsgo will be another big speed-up once it's done...

This compares a new desktop CPU to older laptop ones. There are much more complete benchmarks on more specialized websites [0, 1].

> If you can justify an AI coding subscription, you can justify buying the best tool for the job.

I personally can justify neither, but I'm not seeing how one translates into the other: is a faster CPU supposed to replace such a subscription? I thought those are more about large and closed models, and that GPUs would be more cost-effective as such a replacement anyway. And if it is not, it is quite a stretch to assume that all those who sufficiently benefit from a subscription would benefit at least as much from a faster CPU.

Besides, usually it is not simply "a faster CPU": sockets and chipsets keep changing, so that would also be a new motherboard, new CPU cooler, likely new memory, which is basically a new computer.

[0] https://www.cpubenchmark.net/

[1] https://www.tomshardware.com/pc-components/cpus


I wish developers, and I'm saying this as one myself, were forced to work on a much slower machine, to flush out those who can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.

Yeah, I recognize this all too well. There is an implicit assumption that all hardware is top-tier, all phones are flagships, all mobile internet is 5G, everyone has regular access to free WiFi, etc.

Engineers and designers should compile on the latest hardware, but the execution environment should be capped at the 10th percentile compute and connectivity at least one rotating day per week.

Employees should be nudged to rotate between Android and iOS on a monthly or so basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms.


If they get the latest hardware to build on the build itself will become slow too.

My biggest pet peeve is designers using high end apple displays.

Your average consumer is using an ultra-cheap LCD panel that has nowhere near the contrast ratio you are designing your mocks on; all of your subtle tints get saturated out.

This is similar to good audio engineers back in the day wiring up a dirt cheap car speaker to mix albums.


Those displays also have a huge resolution and eye-blindingly bright contrast by default, which is also how you get UI elements which are excessively large, tons of wasted space padding, and insanely low contrast.

> This is similar to good audio engineers back in the day wiring up a dirt cheap car speaker to mix albums.

Isn't that the opposite of what's happening?

I have decent audio equipment at home. I'd rather listen to releases that were mixed and mastered with professional grade gear.

Similarly, I'd like to get the most out of my high-end Apple display.

Optimizing your product for the lowest common denominator in music/image quality sounds like a terrible idea. The people with crappy gear probably don't care that much either way.


Ideally, you do both. Optimize on crap hardware, tweak on nice hardware.

They shouldn't work on a slower machine - however they should test on a slower machine. Always.

Even better is to measure real performance at your customers.

Yes!

> were forced to work on a much slower machine

I feel like that's the wrong approach. Like telling a music producer to always work with horrible (think car or phone) speakers. True, you'll get a better mix and master if you test it on speakers you expect others to hear it through, but no one sane recommends you default to using those for day-to-day work.

Same goes for programming, I'd lose my mind if everything was dog-slow, and I was forced to experience this just because someone thinks I'll make things faster for them if I'm forced to have a slower computer. Instead I'd just stop using my computer if the frustration ended up larger than the benefits and joy I get.


Although, any good producer is going to listen to mixes in the car (and today, on a phone) to be sure they sound at least decent, since this is how many consumers listen to their music.

Yes, this is exactly my point :) Any good software developer who doesn't know exactly where their software will run will test on the type of device their users are likely to run it on, or at least something with similar characteristics.

The car test has been considered a standard by mixing engineers for the past 4 decades

That's actually a good analogy. Bad speakers aren't just slow good speakers. If you try to mix through a tinny phone speaker you'll have no idea what the track will sound like even through halfway acceptable speakers, because you can't hear half of the spectrum properly. Reference monitors are used to have a standard to aim for that will sound good on all but the shittiest sound systems.

Likewise, if you're developing an application where performance is important, setting a hardware target and doing performance testing on that hardware (even if it's different from the machines the developers are using) demonstrably produces good results. For one, it eliminates the "it runs well on my machine" line.


> can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.

Efficiency is a good product goal: Benchmarks and targets for improvement are easy to establish and measure, they make users happy, and thinking about how to make things faster is a good way to encourage people to read the code that's there, instead of focusing only on new features (aka code that's not there yet).

However, efficiency doesn't sell very well: Your next customer is probably not going to be impressed that your latest version is 20% faster than the last version they also didn't buy. This means that unless you have enough happy customers, you are going to have a hard time convincing yourself that I'm right, and you're going to continue to look for backhanded ways of making things better.

But reading code, and re-reading code is the only way you can really get it in your brain; it's the only way you can see better solutions than the compiler, and it's the only way you remember you have this useful library function you could reuse instead of writing more and more code; It's the only guaranteed way to stop software bloat, and giving your team the task of "making it better" is a great way to make sure they read it.

When you know what's there, your next feature will be smaller too. You might even get bonus features by making the change in the right place, instead of as close to the user as possible.

Management should be able to buy into that if you explain it to them, and if they can't, maybe you should look elsewhere...

> a much slower machine

Giving everyone laptops is also one of those things: They're slow even when they're expensive, and so developers are going to have to work hard to make things fast enough there, which means it'll probably be fine when they put it on the production servers.

I like having a big desktop[1] so my workstation can have lots of versions of my application running, which makes it a lot easier to determine which of my next ideas actually makes things better.

[1]: https://news.ycombinator.com/item?id=44501119

Using the best/fastest tools I can is what makes me faster, but my production hardware (i.e. the tin that runs my business) is low-spec because that's cheaper, and higher-spec doesn't have a measurable impact on revenue. But I measure this, and I make sure I'm always moving forward.


Perhaps the better solution would be to have the fast machine but have a pseudo VM for just the software you are developing that uses up all of those extra resources with live analysis. The software runs like it is on a slower machine, but you could potentially gather plenty of info that would enable you to speed up the program for everyone.

Why complicated? Incentivize the shit out of it at the cultural level so they pressure their peers. This has gotten completely out of control.

Develop on a fast machine, test and optimise on a slow one?

Efficiency costs development time and thus money. Computers getting faster is what made software development cheaper and possible to use for solving problems.

Usually software gets developed to be so fast that people just barely accept it with the computers of their time. You can do better by setting explicit targets like the RAIL model by google. Optimizing any further usually is just a waste of resources.


The beatings will continue until the code improves.

I get the sentiment, but taken literally it's counterproductive. If the business cares about perf, put it in the sprint planning. But they don't. You'll just be writing more features with more personal pain.

For what it's worth, console gamedev has solved this. You test your game on the weakest console you're targeting. This usually shakes out as a stable perf floor for PC.


I'll agree with one modification: developers should be forced to test on a much slower machine.

My final compiled binary runs much faster than something written in, say, python or javascript, but my oh my is the rust compiler (and rust-analyzer) slow compared to the nonexistent compile steps in those other languages.

But for the most part the problem here isn't developers. It's product management and engineering managers. They just do not make performance a priority. Just like they often don't make bug-fixing and robustness a priority. It's all "features features features" and "time to market" and all that junk.

Maybe make the product managers use 5-year-old mid-range computers. Then when they test the stuff the developers have built, they'll freak out about the performance and prioritize it.


I came here to say exactly this.

If developers are frustrated by compilation times on last-generation hardware, maybe take a critical look at the code and libraries you're compiling.

And as a sibling comment notes, absolutely all testing should be on older hardware, without question, and I'd add with deliberately lower-quality, lower-speed data connections, too.


This is one of the things which cheeses me the most about LLVM. I can't build LLVM on less than 16GB of RAM without it swapping to high heaven (and often it just gets OOM killed anyways). You'd think that LLVM needing >16GB to compile itself would be a signal to take a look at the memory usage of LLVM but, alas :)

The thing that causes you to run out of memory isn't actually anything in LLVM; it's all in ld. If you're building with debugging info, you end up pulling in all of the debug symbols for deduplication purposes during linking, and that easily takes up a few GB. Now link a dozen small programs in parallel (because make -j) and you've got an OOM issue. But the linker isn't part of LLVM itself (unless you're using lld), so there's not much that LLVM can do about it.

(If you're building with ninja, there's a cmake option to limit the parallelism of the link tasks to avoid this issue).
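
For anyone hitting this, a minimal sketch of that setup (assuming the Ninja generator, which is what the option's job-pool limit relies on; the source path is illustrative):

    # Compile jobs still use every core, but only two link jobs run at once,
    # so the debug-info-heavy links don't exhaust RAM.
    cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
          -DLLVM_PARALLEL_LINK_JOBS=2 \
          ../llvm-project/llvm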


Contrarian here. I wish all product managers were forced to work on a much slower machine, so that "make this shit fast" becomes the highest priority issue in the backlog.

Nobody is writing slow code specifically to screw over users with old devices. They're doing it because it's the easiest way to get through their backlog of Other Things. As an example, performance is a priority for a lot of competitive games, and they perform really well on everything from the latest 5090 to pretty old laptop integrated graphics. That's not because they only hired rockstar performance experts, but because it was a product priority.


it's absolutely the wrong approach.

Software should be performance tested, but you don't want a situation where the time of a single iteration is dominated by the duration of functional tests and builds. The faster software builds and tests, the quicker solutions get delivered. If giving your developers 64GB of RAM instead of 32GB halves test and build time, you should happily spend that money.


Assuming you build desktop software: you can build it on a beastly machine, but run it on a reasonable machine. Maybe do local builds for special occasions; since it's special, you can wait.

Sure, occasionally run the software on the build machine to make sure it works on beastly machines; but let the developers experience the product on normal machines as the usual.


I wish this hell on other developers, too. ;-)

Yeah, but working with Windows, Visual Studio, and corporate security software on an 8GB machine is just pain.

Right, optimize for horrible tools so the result satisfies the bottom 20%. Counterpoint: id Software produced amazingly performant programs using top-of-the-line gear. What you're trying to do is enforce a cultural norm by hobbling the programmer's hardware. If you want fast programs, you need to make that a criterion; slow hardware isn't going to get you there.

Something I find weird is that this article compares a 9950x with two different laptop CPUs and concludes that performance has increased massively in the past few years. If you compare the 9950x with its two Desktop predecessors (released 2 and 4 years before), you see about a 6% increase from the 7950x and a 45% increase from the 5950x. So you should consider upgrading regularly, but potentially not every single generation. I think it makes sense to consider the performance and offer an upgrade when you see a 50% or so cumulative improvement. Everywhere I have worked has upgraded developers every 3-4 years, and it might make sense to upgrade if there is a massive change (like when Macbooks went to M-series).

As for Desktop vs Laptop, that is relevant too. Desktops are typically much faster than Laptops because they are allowed much larger power envelopes, which leads to more cores and higher clock speeds for sustained periods of time. However, there is always a question as to whether your use case will be able to use all 16/32 cores/threads in a 9950X CPU. If not, you may not notice much difference with a smaller processor.

Source for CPU benchmarks: https://www.cpubenchmark.net/compare/6211vs5031vs3862vs5717/...


I generally agree you should buy fast machines, but the difference between my 5950x (bought in mid 2021. I checked) and the latest 9950x is not particularly large on synthetic benchmarks, and the real world difference for a software developer who is often IO bound in their workflow is going to be negligible

If you have a bad machine get a good machine, but you’re not going to get a significant uplift going from a good machine that’s a few years old to the latest shiny


Yup. Occasionally I wonder whether it's time to upgrade (5800X3D / 3090), do a back of envelope calc on cost vs incremental gains and very quickly decide I'm good for now

5950x is such a great platform. I can’t see replacing mine for several years at least.

The combination of the twin Matisse I/O dies (one on the CPU package, one serving as the X570 chipset), Zen 3 chiplets, and overall maturity of the AM4 platform is still unmatched. I'm waiting for AM5 to "get good" before upgrading my 5950X (even though Zen 4 and Zen 5 chiplets are already quite good). I hope the next gen brings a core count bump, better DDR5 stability at higher speeds in 2DPC configurations, and USB4V2.

Certainly agree on the chipset front - I went from an X570 to B650 which was absolutely a downgrade, the most painful being the loss of ACS on the chipset PCIe lanes.

Terrible for VFIO!

I enjoy building PCs, so I've tried to justify upgrading my 5800X to a 9950X3D. But I really absolutely cannot justify it right now. I can play Doom: The Dark Ages at 90fps in 4K. I don't need it!

FYI, going from some Ryzen I had from 6 years ago to a 9950X made a huge impact on game frame rate: choppy to smoother-than-I-can-perceive. And much faster compile times, and code execution if using thread pools. But I think it was a 3000-series Ryzen, not 5000. Totally worth it compared to, say, GPU costs.

Dunno. I've got a 16-thread Ryzen 7 from 2021 and the modern web still doesn't render smoothly. Maybe it's not the hardware?

Right now I am on my ancient cheap laptop with some 4-core Intel and hard drive noises; the only time it has issues with webpages is when I have too many tabs open for its 4 gigs of RAM. My current laptop, a 16-thread Ryzen 7 from about 2021 (X13), has never had an issue, and I have yet to have too many tabs open on it. I think you might be having an OS/browser issue.

As an aside, being on my old laptop with its hard drive, can't believe how slow life was before SSDs. I am enjoying listening to the hard drive work away and I am surprised to realize that I missed it.


As an alternative anecdote, I've got a Ryzen 7 5800X from 2021, and it's still blazingly fast for just about everything I throw at it...

Maybe, but I can’t repro. Do you have a GPU? What browser? What web site? How much RAM do you have, and how much is available? What else is running on your machine? What OS, and is it a work machine with corporate antivirus?

I wish I could. But most software nowadays is still limited by single core speed and that area hasn’t seen relevant growth in years.

„Public whipping for companies who don’t parallelize their code base“ would probably help more. ;)

Anyway, how many seconds does MS Teams need to boot on a top of the line CPU?


I'm forced to use Teams and SharePoint at my university as a student and I hate every single interaction with them. I wish a curse upon their creators, and may their descendants never have a smooth user experience with any software they use.

Besides the ridiculously laggy interface, it has some functional bugs as well, such as things just disappearing for a few days and then popping up again.


You're lucky to not have experienced what came before Teams in most corporate environments.

What do you have in mind? Skype was ok, and before that it was mostly e-mail in companies I had contact with.

SharePoint sucked from its inception though.


Lync and Skype for Business were far faster and more pleasant to use than Teams, in my experience.

My issue with corporate laptops isn't so much the PC's hardware but the antimalware. I'm not even sure what the antimalware does, and I think, at one time or another, I've probably had all the major ones installed, and they've all managed to slow my PC down to frustratingly slow speeds.

But single core performance has been stagnant for ages!

Considering ‘Geekbench 6’ scores, at least.

So if it’s not a task massively benefiting from parallelization, buying used is still the best value for money.


Single core performance has not been stagnant. We're about double where we were in 2015 for a range of workloads. Branch prediction, OoO execution, SIMD, etc. make a huge difference.

The clock speed of a core is important and we are hitting physical limits there, but we're also getting more done with each clock cycle than ever before.


Doubling single-core performance in 10 years amounts to a less than 10% improvement year-over-year. That will feel like "stagnant" if you're on non-vintage hardware. Of course there are improvements elsewhere that partially offset this, but there's no need to upgrade all that often.

> less than 10% improvement year-over-year. That will feel like "stagnant"...

Especially when, in that same time frame, your code editor / mail client / instant messaging / conference call / container management / source code control / password manager software all migrate to Electron...


It's actually just ~7% improvement year over year!
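
Spelled out, since the figure surprises people:

    2^(1/10) ≈ 1.072  ->  ~7% per year
    1.07^10  ≈ 1.97   ->  ~2x over the decade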

I certainly will not die on this hill: my comment was motivated by recently comparing single-core scores on Geekbench 6 from CPUs 10 years apart.

Care to provide some data?


Single core performance has tripled in the last 10 years

This. I just did a comparison between my Early 2015 MacBook Pro and an Early 2025 MacBook Air M4.

Intel Core i5-5287U: single-core max wattage ~7-12W, process node 14nm, GB6 single-core ~950

Apple M4: single-core max wattage ~4-6W, process node 3nm, GB6 single-core ~3600

Intel 14nm = TSMC 10nm > 7nm > 5nm > 3nm

In 10 years, we got ~3.5x single-core performance at ~50% of the wattage, i.e. 7x performance per watt across 3 node-generation improvements.

In terms of multi-core, we got 20x performance per watt.

I guess that is not too bad, depending on how you look at it. Had we compared it to x86 Intel or AMD it would have been worse. I hope the M5 has something new.


I don't think that's true. AMD's ****X3D chips are evidence to the contrary, with lots of benchmarks supporting this.

> Desktop CPUs are about 3x faster than laptop CPUs

Maybe that’s an AMD (or even Intel) thing, but doesn’t hold for Apple silicon.

I wonder if it holds for ARM in general?


Apple doesn’t really make desktop CPUs, though. Just very good oversized mobile ones.

For AMD/Intel laptop, desktop and server CPUs usually are based on different architectures and don’t have that much overlap.


What’s the difference between a M4 max and a “real” desktop processor?

It can’t be used as a space heater?

It’s not that it’s worse than a “real” desktop chip. In a way it’s better you get almost comparable performance with way lower power usage.

Also, the M4 Max has worse MT performance than e.g. the 14900K, which is architecturally ancient in relative terms and also costs a fraction of the price.


Generally PCI-E lanes and memory bandwidth tend to be the big difference between mobile and proper desktop workstation processors.

Core count used to be a big difference, but the ARM processors in the Apple machines certainly meet the lower-end workstation parts now. To exceed them you're spending big, big money to get high core counts in the x86 space.

Proper desktop processors have lots and lots of PCI-E Lanes. The current cream of the crop Threadripper Pro 9000 series have 128 PCI-E 5.0 Lanes. A frankly enormous amount of fast connectivity.

M2 Ultra, the current closest workstation processor in Apple's lineup (at least in a comparable form factor in the Mac Pro) has 32 lanes of PCI-E 4.0 connectivity that's enhanced by being slotted into a PCI-E Switch fabric on their Mac Pro. (this I suspect is actually why there hasn't been a rework of the Mac Pro to use M3 Ultra - that they'll ditch the switch fabric for direct wiring on their next one)

Memory bandwidth is a closer thing to call here - using the Threadripper Pro 9000 series as an example, we have 8 channels of 6400MT/s DDR5 ECC. According to Kingston the bus width of DDR5 is 64b, so that'll get us ((6400 * 64)/8) = 51,200MB/s per channel, or 409.6 GB/s when all 8 channels are loaded.

On the M4 Max the reported bandwidth is 546 GB/s - but I'm not so certain how this is calculated, as the maths doesn't quite stack up from the information I have (8533 MT/s and a bus width of 64b seems to point towards 68,264MB/s per channel; the reported speed doesn't neatly slot into those numbers).

In short the memory bandwidth bonus workstation processors traditionally have is met by the M4 Max, but PCI-E Extensibility is not.

In the Mac world though that's usually not a problem, as you're not able to load up a Mac Pro with a bunch of RTX Pro 6000s and have it be usable in macOS. You can however load your machine with some high-bandwidth NICs or HBAs I suppose (but I've not seen what's available for this platform).


The M4 Max's bus width is 512 bits, not 64.

Aha! That'll definitely get you to 546GB/s.
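
Spelling out the arithmetic with that bus width (same formula as the Threadripper calculation above):

    8533 MT/s x 512 bits / 8 = 546,112 MB/s ≈ 546 GB/s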

The author is talking about multi-core performance rather than single-core. Apple silicon only offers a low number of cores on desktop chips compared to what Intel or AMD offers. Ampere offers chips that are an order of magnitude faster in multi-core, but they are not exactly "desktop" chips. Still, they are a good data point to say it can be true for ARM if the offering is there.

> Apple silicon only offers a low number of cores on desktop chips compared to what Intel or AMD offers.

* Apple: 32 cores (M3 Ultra)

* AMD: 96 cores (Threadripper PRO 9995WX)

* Intel: 60 cores (W‑9 3595X)

I wouldn’t exactly call that low, but it is lower for sure. On the other hand, the stated AMD and Intel CPUs are borderline server grade and wouldn’t be found in a common developer machine.


Yeah i9-14900 and 9950x are better comparisons, at 24 and 16 cores respectively.

Tangent: IMO top tier CPU is a no brainer if you play games, run performance-sensitive software (molecular dynamics or w/e), or compile code.

Look at GPU purchasing. It's full of price games, stock problems, scalpers, 3rd party boards with varying levels of factory overclock, and unreasonable prices. CPU is a comparative cake walk: go to Amazon or w/e, and buy the one with the highest numbers in its name.


For games it's generally not worthwhile, since performance is almost entirely based on the GPU these days.

Almost all build guides will say ‘get midrange CPU X over high-end chip Y and put the savings toward a better GPU’.

Consoles in particular are just a decent GPU with a fairly low-end CPU these days. The Xbox One, with a 1.75GHz 8-core AMD CPU from a couple of generations ago, is still playing all the latest games.


Anecdote: I got a massive performance (FPS) improvement in games after upgrading CPU recently, with no GPU change.

I think currently, that build guide doesn't apply based on what's going on with GPUs. Was valid in the past, and will be valid in the future, I hope!


How old was your previous CPU? Different people have vastly different expectations when it comes to upgrading. I'm certain I can play all of the games that I'm interested in on my 3 year old Ryzen 7600x, and that I'm limited by the 5 year old GPU (which I dread upgrading because of the crunch). Would someone with a 5 year old CPU be well served by upgrading to a 9600x, absolutely. But some people think they have to upgrade their Threadripper every year.

(As far as work goes, I realize this directly contradicts the OP's point, which is the intent. If you know your workflow involves lots of compiling and local compute, absolutely buy a recent Threadripper. I find that most of the time the money spent on extra cores would be better spent on a more modest CPU with more RAM and a faster SSD. And more thoughtful developer tooling that doesn't force me to recompile the entire Rust work tree and its dependencies with every git pull.)


I think it was a Ryzen 3600X, state of the art 6-7 years ago. Replaced with a 9950X. I was surprised by how big of a difference the CPU update had on frame rates. (GPU: 4080)

I also do a lot of rust compiling (Which you hinted at), and molecular dynamics sims leveraging a mix of CUDA/GPU, and thread pools + SIMD.


Makes sense, yeah, a 3600x is far behind the curve now.

Edit: Took a look at AMD's lineup and realized they did something I got conditioned not to expect: they've maintained AM5 socket compatibility for 3 generations in a row. This makes me far more likely to upgrade the CPU!

https://www.amd.com/en/products/processors/chipsets/am5.html

> all AMD Socket AM5 motherboards are compatible with all AMD Socket AM5 processors

I love this. Intel was known to change the socket every year or two basically purely out of spite, or some awful marketing strategy. So many wasted motherboards.


Oh wow. Didn't save me though. I've never been able to drop a new CPU into a motherboard - it's always CPU + RAM + MB time due to the socket consideration you mention.

Even for compilation workloads, you need to benchmark beforehand. Threadrippers have lower boost clocks and (in the higher core count models) lower base frequencies than the high end Ryzen desktop CPUs. Most build systems are not optimized for such high core counts.

Depending on the game there can be a large difference. Ryzen chips with larger caches have a large benefit in single-player games with many units, like Civilization, and in most multiplayer games. It's not so much GHz as being able to keep most of the hot-path code and data you need in cache.

>> For games its generally not worthwhile since the performance is almost entirely based on gpu these days.

It completely depends on the game. The Civilization series, for example, is mostly CPU bound, which is why turns take longer and longer as the games progress.


Factorio and Stellaris are others I'm aware of.

For Factorio it's only an issue when you go way past the end game into the 1000+ hour megabases.

Stellaris is just poorly coded with lots of n^2 algorithms and can run slowly on anything once population and fleets grow a bit.

For Civilization the AI does take turns faster with a higher-end CPU, but IMHO it's also no big deal, since you spend most of your time scrolling the map and taking actions (GPU-bound performance).

I think it’s reasonable to state that the exceptions here are very exceptional.


If Civ/Stellaris devs can't handle something as basic as iteration vs recursion, then they are damn lost.

Also, a language with a GC (not Java) would shine there; it's ideal for a turn-based game.


It’s not quite that simple. Often the most expensive chips trade off raw clock speed for more cores, which can be counterproductive if your game only uses 4 threads.

The 8 core X3D chips beat the 16 core ones on almost all games, so that's not that simple.

Can’t you pin a game to the 8 threads with the extra cache on the 16 core parts to get equivalent performance?

Both Windows and Linux handle asymmetric scheduling and the extra cache cores are considered performance cores.
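
If you want to force it anyway, a rough sketch on Linux (this assumes the V-Cache CCD shows up as logical CPUs 0-15, which you should verify with lscpu or lstopo first; ./game is a placeholder):

    # Pin the process to the first CCD, which is the cache-heavy one on most
    # dual-CCD X3D parts -- check your own topology before relying on this.
    taskset -c 0-15 ./game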

It almost never makes sense to buy the fastest CPU.

My explanation: corporations want to benefit from scale. That means lots of computers with identical specs. Not only does everyone get the same machine, they get the same machine for many years. It's not unthinkable for a spec to stay constant for 5 years. In exchange for this stability, the makers of the machines (Dell, HP) can lower the price significantly. As a corporation you can buy a very powerful machine for something a regular consumer would pay about twice the price for. But that's when the spec is new. As years pass, a machine with the same spec gets to be downright sluggish.

Too bad it's so hard to get a completely local dev environment these days. It hardly matters what CPU I have since all the intensive stuff happens on another computer.

You don't run an IDE with indexing, error checking and code completion?

vscode-server runs the LSP etc in the remote machine. The local machine is really just the UI.

OK, I'm convinced. Can someone tell me what to buy, specifically? Needs to run Ubuntu, support 2 x 4K monitors (3 would be nice), have at least 64GB RAM and fit on my desk. Don't particularly care how good the GPU is / is not.

Here's my starting point: gmktec.com/products/amd-ryzen™-ai-max-395-evo-x2-ai-mini-pc. Anything better?


Beelink GTR9 Pro. It has dual 10G Ethernet interfaces. And get the 128GB RAM version, the RAM is not upgradeable. It isn't quite shipping yet, though.

The absolute best would be a 9000-series Threadripper, but you will easily be pushing $10K+. The mainstream champ is the 9950X, but despite being technically a mobile SoC, the 395 gets you 90% of the real-world performance of a 9950X in a much smaller and more power-efficient computer:

https://www.phoronix.com/review/amd-ryzen-ai-max-arrow-lake/...


And this would be something cheaper https://www.bee-link.com/products/beelink-ser9-ai-9-hx-370?v...

Similar single-core performance, fewer cores, less GPU. Depends what you're doing.


I just bought myself one of these for the same use case https://www.minisforum.uk/products/minisforum-bd770. I did also want to use my existing 3060 GPU, though. It all fit in a relatively small case.

You might be better off buying a mini pc if you’re happy with an integrated GPU. There are plenty of Ryzen mini pcs that end up cheaper than building around an itx motherboard.


The fastest Threadripper on the market is usually a good bet. Worth considering a mini PC on a VESA mount / in a cable tray + a fast machine in another room.

Also, I've got a GMKtec here (a cheaper one playing thin client) and it's going to be scrapped in the near future because the monitor connections keep dropping. Framework makes a 395-based one; that's tempting as a small single machine.


Threadripper is complete overkill for most developers and hella expensive especially at the top end. May also not even be that much faster for many work-loads. The 9950X3D is the "normal top-end" CPU to buy for most people.

Whether ~$10k is infeasibly expensive or a bargain depends strongly on what workloads you're running. Single-threaded stuff? Sure, bad idea. Massively parallel test suites backed by way too much C++, where building it all has wound up on the dev critical path? The big machine is much cheaper than rearchitecting the build structure and porting to a non-daft language.

I'm not very enamoured with distcc style build farms (never seem to be as fast as one hopes and fall over a lot) or ccache (picks up stale components) so tend to make the single dev machine about as fast as one can manage, but getting good results out of caching or distribution would be more cash-efficient.


Yes, of course it depends, which is why I used "most developers" and not "all developers". What it certainly is not is a good default option for most people, like you suggested.

Different class of machines, the Threadripper will be heavier on multicore and less bottlenecked by memory bandwidth, which is nice for some workloads (e.g. running large local AIs that aren't going to fit on GPU). The 9950X and 9950X3D may be preferable for workloads where raw single-threaded compute and fast cache access are more important.

The computer you listed is specifically designed for running local AI inference, because it has an APU with lots of integrated RAM. If that isn't your use case then AMD 9000 series should be better.

Not just local AI inference, plenty of broader workstation workloads will benefit from a modern APU with a lot of VRAM-equivalent.

Huh, that's a really good deal at 1500 USD for the 64GB model, considering the processor it's running. (It's the same one that's in the Framework Desktop that there's been lots of noise about recently - lots of recent reviews on YouTube.)

Get the 128GB model for (currently) 1999 USD and you can play with running big local LLMs too. The 8060 iGPU is roughly equivalent to a mid-level Nvidia laptop GPU, so it's plenty to deal with a normal workload, and some decent gaming or equivalent if needed.


Yeah, I like the look of the Framework but the (relative) price and lead times are putting me off a little.

There are also these which look similar https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen...


From what I've been able to tell from interwebz reviews, the Framework one is better/faster, as the GMKtec is thermally throttled more. Dunno about the Beelink.

9950x. If you game, get that in X3D version, or a lower-numbered version in X3D.

Multimonitor with 4K tends to need fast GPU just for the bandwidth, else dragging large windows around can feel quite slow (found that out running 3 x 4K monitors on a low-end GPU).

If you want 3x 4k monitors, you need to care how good the GPU is.

The new top of the line CPU might only cost $500, but even if you have a Framework laptop with replaceable parts, the new top-of-the-line mainboard is going to cost you north of $1k. If you don't, you're probably talking about a new $2k+ full laptop purchase.

And I don't want a desktop.

But I would agree that, when you are purchasing that new laptop (or mainboard), you get the best that you can afford. Pushing your upgrade cycle out even just one year can save you a lot of money, while still getting the performance you need.


I suppose it depends on the programming tasks you do. If you are working on stuff like recompiling large codebases all day, then yes, a fast CPU and lots of RAM will really help. If you are working more on scripting stuff, then maybe the fastest CPU isn't really needed, rather more RAM. I just bought myself a Ryzen 9950X on a deal from a local Micro Center. I will run HandBrake on ripped files, which this CPU is really good at.

I worked on a Spring Boot application that would take more than a minute to start. Not Spring's fault, IIUC, but nevertheless a focus killer.

A minute? What was taking so long? Spring itself is only a few seconds (as you said that’s not the issue).

That sounds horrible to work on.


If you have many beans that do IO, startup will be slower. For example, just removing SQL/database initialization can shave off a second.
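
One thing worth trying (a sketch, assuming Spring Boot 2.2 or newer) is deferring bean creation until first use, which moves the cost from startup to the first request that touches each bean:

    # application.properties -- create beans lazily on first access
    spring.main.lazy-initialization=true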

But does your work constantly compile the Linux kernel or encode AES-256 at more than 33GB/s?

Oh, this is fun. My company recently did a garage sale on old hardware and I intend to post something similar, compiling across 4 different hardware configurations and 4 different languages. I'm including some M-series Macs in the mix though, because the main motivation behind me doing this is how astonishing even their mobile performance is.

I find it crazy that some people only use a single laptop for their dev work. Meanwhile I have 3 PCs, 5 monitors, keyboards and mice, and still think they are not enough.

There are a lot of jobs that should run in a home server running 24/7 instead of abusing your poor laptop. Remote dedicated servers work, but the latency is killing your productivity, and it is pricey if you want a server with a lot of disk space.


Hah, I find your setup crazy! I don't need or want that much stuff. To me it would get in the way, and increase my maintenance burden.

I also like having things Just So; and keeping windows and browser tabs and terminals etc. all in the same place on several different machines would drive me nuts.

I have one 3:2 aspect ratio 13" laptop, and I do everything on it. My wife had two large external screens for work that she doesn't use anymore, but I never use them. I like the simplicity of staring at one single rectangle, and not having extraneous stuff around me.

> There are a lot of jobs that should run in a home server running 24/7 instead of abusing your poor laptop.

Not sure I buy that. My laptop has 20 cores and 64GB of RAM, and sure, maybe a full build from scratch will peg all those cores, but for incremental builds (the 99% case), single core perf is what really matters, and a home server isn't going to give me meaningful gains over my laptop there.

And my laptop can very easily handle the "strain" of my code editor with whatever developer-assistance features I've set up.

Sure, CI jobs run elsewhere, but I don't need a home server for that; the VPS that hosts the git repository is fine.


I dunno, alt+tab works as fast for me as moving my head to look at another monitor.

And I'm not particularly concerned with "abusing" my laptop. I paid for its chips and all of its cores, I'm not going to not use them...


It is not about the chips or cores. It is about the PCIe SSD that you torture with high temperatures for long periods of time due to a laptop's poor cooling.

Or the battery; it does not like high temperatures either.

You only regret it when it happens. And it will happen.


> and it will happen.

I dunno... I've been doing this a long time and haven't had any of the failures you're talking about. You're aware these machines throttle performance as necessary to prevent themselves from getting too hot?


I was away from my regular desktop dev PC for multiple months recently and only used a crappy laptop for dev work. I got used to it pretty quickly.

This makes me remember so many years ago starting to program on a dual core plastic MacBook.

Also, I’m very impressed by one of my coworkers working on 13 inch laptop only. Extremely smart. A bigger guy so I worry about his posture and RSI on such a small laptop.

TL;DR: I think more screen space does not scale anywhere near linearly with productivity.


My 5 year old Threadripper with 128GB RAM is still going strong. A 9950x would probably be faster (especially for single thread,) but moving to AM5 would be a massive downgrade on IO. Can't really see spending $10K now for a TRX50 or WRX90 build... Maybe when Zen 6 Threadrippers come out.

I'm still using the same CPU I had 15 years ago. It's an AMD Phenom II X6 1090T. The only upgrades I've made are increasing the RAM to 16 GB and replacing the hard drive with an SSD. It continues to meet all my needs.

I bought one too. It's bananas how powerful the R9 9950X is. And it runs great with a €40 Thermalright cooler too - no need for AIO liquid cooling! It's amazing how great AMD CPUs got.

I do not care if my CPU is 10 years old. Everything is instant anyway.

I've seen more and more companies embrace cloud workstations.

It is of course more expensive but that allows them to offer the latest and greatest to their employees without needing all the IT staff to manage a physical installation.

Then your actual physical computer is just a dumb terminal.


> I've seen more and more companies embrace cloud workstations.

In which movie? "Microsoft fried movie"? Cloud sucks big time. Not all engineers are web developers.


With tools like Blaze/Bazel (Google) or Buck2 (Meta) compilations are performed on a massive parallel server farm and the hermetic nature of the builds ensures there are no undocumented dependencies to bite you. These are used for nearly everything at Big Tech, not just webdev.
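
From the developer's side it's mostly a few lines of configuration; a minimal .bazelrc sketch (the endpoint here is hypothetical, and real setups also add auth and instance names):

    # Run build actions on the remote execution cluster and share its cache
    build --remote_executor=grpc://remotebuild.internal.example:8980
    build --remote_cache=grpc://remotebuild.internal.example:8980
    # Skip downloading intermediate outputs the local machine never touches
    build --remote_download_minimal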

It's for example being rolled out at my current employer, which is one of the biggest electronic trading companies in the world, mostly C++ software engineers, and research in Python. While many people still run their IDE on the dumb terminal (VSCode has pretty good SSH integration), people that use vim or the like work fully remotely through ssh.

I've also seen it elsewhere in the same industry. I've seen AWS workspaces, custom setups with licensed proprietary or open-source tech, fully dedicated instances or kubernetes pods.. All managed in a variety of ways but the idea remains the same: you log into a remote machine to do all of your work, and can't do anything without a reliable low-latency connection.


There are big tech companies which are slowly moving their staff (from web/desktop devs to ASIC designers to HPC to finance and HR) to VDI, with the only exception being people who need a local GPU. They issue a lightweight laptop with long battery life as a dumb terminal.

The desktop latency has gotten way better over the years and the VMs have enough network bandwidth to do builds on a shared network drive. I've also found it easier to request hardware upgrades for VDIs if I need more vCPUs or memory, and some places let you dispatch jobs to more powerful hosts without loading up your machine.


The comment wasn’t about web development. Lots of devs use TeamViewer / RDP / VNC / SSH, especially post-covid devs working remotely.

Great, now every operation has 300ms of latency. Kill me

If your ping is that high, you might be doing it wrong. Even Starlink is usually less than half that, usually a lot less. Most Remote Desktop setups I’ve seen are fairly and surprisingly responsive. It’s not quite as nice as a desktop 3 feet away, sure, but that only matters for a few things, and it’s good enough for most work. The tradeoff can be worth it since you can now work from anywhere via laptop and connect to the same machine. No need for multiple setups, most of the machine management is taken care of, upgrades are seamless. For various reasons I haven’t been able to move permanently to a cloud workstation, but TBH I often want to.

Worse when the VPN they also force on you adds 300ms.

All of the big clouds have regions throughout the world so you should be able to find one less than 100ms away fairly easily.

Then realistically in any company you'll need to interact with services and data in one specific location, so maybe it's better to be colocated there instead.


I wonder if everyone on HN has just woken from a 20 year coma.

What's with the post title being completely incongruent with the article title? Moreover, I'm pretty sure this was not the case when it was first posted...

Where are the Apple Silicon chips in this comparison?

You can actually do a lot with a non-congested build server.

But I would never say no to a faster CPU!


Specifically: buy a good desktop computer. I couldn't imagine working on a laptop several hours per day (even with an external screen + keyboard + mouse you're still stuck with subpar performance).

FWIW, my recent hn submission had a really good discussion on this very same topic.

https://news.ycombinator.com/item?id=44985323


I've been struggling with this topic a lot. I feel the slowness every day, and the productivity loss of having slow computers - 30 minutes for something that could take a tenth of that... it's horrible.

It is true, but also funny to think back on how slow computers used to be. Even the run-of-the-mill cheap machines today are like a million times faster than supercomputers from the 70s and 80s. We’ve always had the issue that we have to wait for our computers, even though for desktop personal computers there has been a speedup of like seven or eight orders of magnitude over the last 50 years. It could be better, but that has always been true. The things we ask computers to do grows as fast as the hardware speeds up. Why?

So, in a way, slow computers is always a software problem, not a hardware problem. If we always wrote software to be as performant as possible, and if we only ran things that were within the capability of the machine, we’d never have to wait. But we don’t do that; good optimization takes a lot of developer time, and being willing to wait a few minutes nets me computations that are a couple orders of magnitude larger than what it can do in real time.

To be fair, things have improved on average. Wait times are reduced for most things. Not as fast as hardware has sped up, but it is getting better over time.


I suspect for most a faster SSD is probably the better bet

More generally: it is worth it to pay for a good developer experience. It's not exactly about the CPU. As you compared build times - it is worth it to make a build faster. And, happily, often you don't need a new CPU for this.

On a different note: I just bought a Siena-series EPYC, which is a 48-core Zen 4c chip capped at 200W - an AMD 8434PN at the 1200 EUR price point. My understanding is it's a two- or three-year-old CPU, but I was simply blown away! I contemplated purchasing a 9005-class CPU, but even the 32-core variant would cost more than 3000 EUR... at sommat crazy like 400W. Thank you, my PCIe stuff alone draws a kilowatt; no way I'm putting a 400W CPU on top.

"You should buy a faster CPU" is the post's actual title.

And an evergreen bit of advice. Nothing new to see here, kids, please move along!


I wonder what triggered these massive gains in terms of CPU performance. Any major innovation I might have missed?

This is quite the silly argument.

* "people" generally don't spend their time compiling the Linux kernel, or anything of the sort.

* For most daily uses, current-gen CPUs are only marginally faster than two generations back. Not worth spending a large amount of money every 3 years or so.

* Other aspects of your computer, like memory (capacity mostly) and storage, can also be perf bottlenecks.

* If, as a developer, you're repeatedly compiling a large codebase - what you may really want is a build farm rather than the latest-gen CPU on each developer's individual PC/laptop.


Just because it doesn't match your situation, doesn't make it a silly argument.

Even though I haven't compiled a Linux kernel for over a decade, I still waste a lot of time compiling. On average, each week I have 5-6 half hour compiles, mostly when I'm forced to change base header files in a massive project.

This is CPU bound for sure - I'm typically using just over half my 64GB RAM and my development drives are on RAIDed NVMe.

I'm still on a Ryzen 7 5800X, because that's what my client specified they wanted me to use 3.5 years ago. Even upgrading to the (already 3-year-old) 5950X would be a drop-in replacement and double the core count, so I'd expect about double the performance (although maybe not quite, as there may be increased memory contention). At current prices for that CPU, the upgrade would pay for itself within 1-2 weeks.
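
Rough back-of-envelope behind that claim, with illustrative numbers (the rate and the speedup are assumptions, not measurements):

    5-6 half-hour compiles/week     ->  ~2.5-3 h of waiting
    halve that                      ->  ~1.25-1.5 h/week saved
    at ~$100-150/h of billable time ->  ~$125-225/week
    a ~$300-400 5950X               ->  paid off in roughly 2 weeks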

The reason I don't upgrade is policy - my client specified this exact CPU so that my development environment matches their standard setup.

The build farm argument makes sense in an office environment where the majority of developer machines are mostly idle most of the time. It's completely unsuitable for remote working situations where each developer has a single machine and latency and bandwidth to shared resource is slow.


Why is the client so strict on what you use as a dev machine?

I work in game development. All the developers typically have the same spec machine, chosen at the start of the project to be fairly high end with the expectation that when the project ships it'll be a roughly mid range spec.

My most CPU-intensive task is running the full test suite of a customer's Rails app. I could probably shave off a large percentage of its running time, but it also contains integration tests run with Chrome. What I do to shorten the test time is run only the tests for the files that changed. The boot time of Rails is there anyway.
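
Roughly like this (a sketch, assuming RSpec; note that if nothing under spec/ changed, rspec with no arguments falls back to the whole suite):

    # Run only the spec files that differ from the main branch
    bundle exec rspec $(git diff --name-only main -- spec | grep '_spec\.rb$')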

The CI system is still slower than my laptop. We are not really concerned about it.

I'm waiting for something to fail before I upgrade, because $3000 on a laptop won't make me gain $3000 from my customer.


It's worth paying for reliability. If that also means more speed, fine -- but raw CPU benchmarks rarely matter outside of very specific workloads. What you'll notice day to day is whether the machine crashes, wipes out a project, or forces you into an unexpected week of downtime.

Computers are like lightbulbs, and laptops are the extra fragile kind. They burn out. I've never had one more than five years, and after year three I just assume it could fail at any moment -- whether it's an SSD crash, a swollen battery, or drivers breaking after the next OS update.

If you replace machines every three years, like I do, you're not necessarily paying for performance -- you're really just paying for peace of mind.


Never buy the fastest new, always the 2nd~4th fastest, preferably used.

Depends on what you need, but it's totally worth it if you want to spend less time say on compiling stuff.

Video processing, compression, games, etc. Anything computationally heavy directly benefits from it.


Like so many things it depends on the use-case...

If you are gaming... then high-core-count chips like Epyc CPUs can actually perform worse in desktops, and are a waste of money compared to Ryzen 7/Ryzen 9 X3D CPUs. Better to budget for the best motherboard, RAM, and GPU combo supported by a CPU that ranks well in tests for your specific application. In general, a value AMD GPU can perform well if you just play games, but Nvidia RTX cards are the only option for many CUDA applications.

Check your model numbers, as marketers ruined naming conventions:

https://opendata.blender.org/

https://www.cpubenchmark.net/multithread/

https://www.videocardbenchmark.net/high_end_gpus.html

Best of luck, we have been in the "good-enough" computing age for some time =3


Or, perhaps, make it easier to run your stuff on a big machine over -> there.

It doesn't have to be the cloud, but having a couple of ginormous machines in a rack where the fans can run at jet engine levels seems like a no-brainer.


> the top end CPU, AMD Ryzen 9 9950X

This is an "office" CPU. Workstation CPUs are called Epyc.


Yeah. I would say, do get a better CPU, but do also research a bit deeper and really get a better CPU. Threadrippers are borderline workstation, too, though, esp. the pro SKUs.

Threadrippers are workstation processors and support ECC, Epycs are servers and the 9950X is HEDT (high end desktop).

Standard Ryzen chips also support ECC (minus the monolithic G models).

I run 2x48GB ECC with my 9800x3d.


An even better way to improve the quality of your computer sessions is "just use a Mac." Apple is so far ahead of the performance curve.

They have good performance, especially per watt, for a laptop.

Certainly not ahead of the curve when considering server hardware.


Not just that; they have a decent GPU and the unified memory architecture which allows to directly run many ML models locally with good performance.

Server hardware is not very portable. Reserving a c7i.large is about $0.14/hour; this would equal the cost of an MBP M3 64GB in about two years.

Apple have made a killer development machine, I say this as a person who does not like Apple and macOS.


Multi-monitor support is still flaky and unreliable, you can't boot Linux environments, you can't upgrade M.2 drives yourself, the Magic Mouse still cannot be charged while using it, etc., etc.

On top of that when you look at price vs performance they are way behind.

Apple may have made good strides in single core cpu performance, but they have definitely not made killer development machines imo.


Apple still has quite atrocious performance per $. So it economically makes sense for a top end developer or designer, but perhaps not the entire workforce let alone the non-professional users, students etc.

In my experience, comments like these are generally made by people who don't have much experience outside of using laptops.

Funny thing, we just talked about this in a thread 2 days ago. Comments like this lead me to dismiss anything coming from Apple fanboys.

It's not like objective benchmarks disproving these sort of statements don't exist.



