One lesson I learned from this latest upgrade: DON'T SKIP READING THE DOCUMENTATION BEFORE UPGRADING.
I rebooted after upgrade and found myself dropped in a console-only environment. Turns out the (proprietary) drivers for my video card are not supported anymore. Now I'm on nouveau and it sucks.
My fault for having an Nvidia card, I guess, but an AMD equivalent is on its way.
For this exact reason I enable Timeshift on every computer I run. If an upgrade fails spectacularly like that, recovering and reverting to the old release is as simple as picking a pre-upgrade snapshot from the boot menu.
If you're not on btrfs, I'm sure there are similar tools for your favorite file system. For me, it's been a life saver.
openSUSE has this sort of thing configured by default using btrfs and snapper. It automatically creates copy-on-write snapshots before and after every package-manager transaction. I think this should be the default on most end-user oriented distros.
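For anyone who hasn't used it, the workflow on such a setup looks roughly like this (these are snapper's standard subcommands, but the config name "root" and the snapshot numbers are illustrative, not from the parent):

```shell
# List snapshots; the package-manager hooks create numbered pre/post pairs
sudo snapper -c root list

# See what a given transaction changed, between the pre (42) and post (43) snapshots
sudo snapper -c root status 42..43

# Roll the btrfs root back to the pre-upgrade snapshot, then reboot into it
sudo snapper rollback 42
```

These commands need root and a btrfs filesystem with snapper configured, so treat them as a sketch rather than something to paste blindly.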
I'm a little surprised they use casync and multiple partitions, given that root is on btrfs. The way they have it is more reliable if anything does go bad, so it makes sense, but: given that Steam Decks are on a given snapshot of the btrfs system, it seems like they could just send an incremental update to the next release and try to boot that. Oh ah, that breaks down if, like me, you disable the read-only root.
You can think about it like this: the snapshot creates a bunch of hard links to the existing files. So every file basically has 2 pointers to it.
The upgrade then updates the files in the non-snapshot paths, creating new files while the snapshot keeps pointing at the original file. (That's afaik why it's called a CoW filesystem - copy on write.)
Now, every changed file will be stored "twice", but all unchanged files are still only on the disk once.
As most upgrades still leave the majority of files unchanged, there is effectively little space used for the snapshot.
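The "two pointers" accounting above can be demonstrated with plain hard links in a throwaway directory (btrfs snapshots actually share extents rather than whole files, so this is only the analogy the parent describes, not what btrfs literally does):

```shell
set -e
dir=$(mktemp -d)
echo "original data" > "$dir/file"

# Second name for the same inode: no extra data is stored on disk
ln "$dir/file" "$dir/snapshot"
stat -c '%h' "$dir/file"      # link count: prints 2

# Mimic copy-on-write: replace "file" with a new inode;
# "snapshot" still points at the old data
printf 'updated data\n' > "$dir/file.new"
mv "$dir/file.new" "$dir/file"
cat "$dir/snapshot"           # still prints: original data
```

Only the changed file now exists twice; every unchanged file would still be stored once, which is why snapshots are so cheap.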
Similarly, Trixie contains an updated version of Dovecot that (even though the version number seems to indicate otherwise) has a new configuration format that is not backwards compatible. This is clearly stated in the release notes but may be surprising nevertheless.
I too have developed PTSD from nvidia + debian (even worse, nvidia optimus on a laptop).
To the point where I would not upgrade my system for months at a time, when I was in the midst of important work, because I didn't want to spend a day on tty trying to make the DE work again.
If your card is that old, shouldn't the nouveau drivers be quite full-featured anyway? Regardless, you may also want to run a Bookworm kernel with a full Trixie userspace. It should install cleanly once you add the Bookworm repos, and you'd be fully supported for quite some time thanks to LTS. It would technically be a FrankenDebian, but one that might be quite acceptable nonetheless.
> If your card is that old, shouldn't the nouveau drivers be quite full-featured anyway?
Normally they probably would be, but it looks like my ultra-wide screen is too much for those drivers to handle.
> you may also want to run a Bookworm kernel with a full Trixie userspace.
Good idea indeed. But at this point I'm afraid of borking everything again. I'll just wait for the replacement card I ordered and hope it doesn't make things worse.
Especially on older cards actually! The new hardware, since the 3000 generation or so, moved some "secret sauce" (probably bullshit) into the firmware so that everything that the drivers need to do is something that Nvidia is willing to disclose, so there can be open source drivers that can get the hardware out of low power mode. That lack of "reclocking" crippled performance with nouveau.
I tried, but it fails at compile time and I can't be bothered to try and understand why it doesn't find a stdarg.h that is _clearly_ there. I attempted various solutions I found online for half a day, then just threw in the towel and ordered a 25€ replacement from AMD, which I'm told has better support than nvidia on linux anyway.
Debian Trixie drops 32-bit x86 support. Ubuntu dropped 32-bit support already earlier, which meant that lightweight Lubuntu and Xubuntu don't support it either. It's sad to see old hardware support getting dropped like that. They are still good machines as servers and desktop terminals.
Are there any good Linux distros left with 32-bit x86 support? Do I have to switch to NetBSD?
> They are still good machines as servers and desktop terminals.
On power usage alone, surely a switch to a still extremely old 64-bit machine would be a significant upgrade. For a server that you run continuously, a 20+ year old machine will consume quite a bit.
Indeed. I still keep around a couple of old computers, because they have PCI slots (parallel, not PCIe), unlike any newer computer that I have, and I still use some PCI cards for certain purposes.
However, those computers are not so old as to have 32-bit CPUs; they are only about 10 years old. That is because I was careful at the time to select motherboards that still had PCI slots, in order to be able to retire all older computers.
The only peripherals that truly don't work in more modern boards would be AMR/ACR/CNR, I guess? ISA and plain PCI expansion boxes are reasonably easy to acquire (though EISA might be a problem...).
> It's sad to see old hardware support getting dropped like that.
The problem is that someone needs to do the work to support different hardware architectures. The more exotic the hardware, the more complicated and expensive that work becomes.
People who run these 32-bit machines are unlikely to contribute the work, or the money to pay someone for it, so it is better to drop the support and focus the same developer resources on areas that benefit a larger user base.
the real problem is that Linux is written in such a manner that internal APIs change very often, and code all over the place, including drivers, gets monkeyed with without testing and is often broken, sometimes in very subtle and hard-to-debug ways.
It is an overall design issue. Linux has a huge maintenance burden.
Indeed it is. I am not sure if the BSD people have better support for legacy hardware. Outside the BSDs, there is no competition for open source operating systems, or generally for legacy PC hardware overall, as Windows support was dropped a long time ago.
The kernel APIs never change, and 32-bit support is still within the kernel without any issue at all. How does this implicate drivers?
The rest is user space code. Usually in C. Which has plentiful facilities for managing this. We've been doing it for decades. How is it a sudden burden to continue?
Where are the examples of distributions failing because of this "issue?"
The internal kernel API used by device drivers changes at every version, even at minor versions.
If you maintain an out-of-tree driver, it is extremely likely that it will fail to compile at every new kernel release.
The most common reasons are that some definitions have been moved from one kernel header to another or some members have been added to a structure or deleted from it, or some arguments have been added to a function invocation or deleted from it.
While the changes required to fix a broken device driver are typically small, finding what they are can waste a lot of time. The reason is that I have never seen any of the kernel developers who make these changes that break unrelated device drivers write a migration document instructing driver maintainers how to change the old source code to be compatible with the new interfaces.
Normally there is absolutely no explanation of the purpose of the changes or of what users of the old API should do. Only in rare cases does scanning the kernel mailing lists turn up some useful information. Otherwise you just have to read the kernel source to discover the intentions of whoever changed the API.
That does the job eventually, but it consumes far more time than it should.
Even in-tree drivers for old devices, which may no longer have a current maintainer, will become broken eventually.
I have actually upgraded a really old 32 bit only laptop from bookworm to trixie - it works. Two important (for the desktop environment) packages required SSE3 which the CPU doesn't support, so... I installed package "sse3-support" with environment variable IGNORE_ISA=1 set. The kernel was not upgraded because trixie doesn't contain an i386 kernel. The laptop works surprisingly okay, though of course it's not much fun with its weak performance, low RAM and mechanical disk.
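The override described above amounts to something like this (IGNORE_ISA is the variable the isa-support check packages honor, as the parent used it; this is obviously an unsupported configuration):

```shell
# Skip the CPU capability check so the baseline package installs anyway,
# on a CPU that lacks SSE3 (at your own risk)
sudo IGNORE_ISA=1 apt install sse3-support
```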
32-bit userspace packages are still supported, though with increased hardware requirements compared to Bookworm. You may find that you're able to run a more-or-less complete 32-bit Trixie userspace, while staying on Bookworm wrt. the kernel and perhaps a few critical packages.
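A sketch of that mixed setup: keep the Bookworm entries in your sources alongside Trixie's, and pin the kernel packages to Bookworm via apt_preferences(5). The package globs, priority, and kernel metapackage name here are illustrative assumptions, not from the parent:

```shell
# Pin kernel packages to Bookworm while the rest of the system follows Trixie
sudo tee /etc/apt/preferences.d/bookworm-kernel >/dev/null <<'EOF'
Package: linux-image-* linux-headers-*
Pin: release n=bookworm
Pin-Priority: 900
EOF

sudo apt update
sudo apt install linux-image-686-pae   # 32-bit kernel metapackage from Bookworm
```

With a priority above 500 the Bookworm kernel wins even though Trixie is the newer suite; everything else keeps upgrading normally.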
If 32-bit support gets dropped altogether (which might happen for 'forky' or 'duke') it can probably move to the unofficial Debian Ports infrastructure, provided that people are willing to keep it up-to-date.
I kept my old 32-bit laptop alive for quite a bit using archlinux32, but in the end more and more software started breaking, so that is not really a route I can recommend anymore. I was using the laptop mainly when traveling or on holidays, so not very often, and it felt wasteful to buy a new one. But this year the software breakages really started costing too much time, so I bought a new one. RIP old laptop (2007-2025).
Alpine still supports x86 and other 32bit platforms. It’s also very lightweight, though I’d say it targets a very different userbase than Debian or Ubuntu — the default install is quite minimal and requires more setup.
Does dropping 32-bit support just mean that there are no supported x86-32 OS images, or that 32-bit applications are generally not supported?
If it means that no 32-bit apps are supported, how does Steam handle this? Does it run 32-bit games in a VM? Is the Steam client itself a 64-bit application these days or still stuck on 32-bits?
Just keep in mind that this does not apply to armhf, which is 32-bit and which all old Raspberry Pi boards use.
What's your use case for 32-bit x86 where you still need Debian at its latest version? For the power consumption alone you might be better off switching to a newer low-spec machine.
>It's sad to see old hardware support getting dropped like that. They are still good machines as servers and desktop terminals
How many 32-bit PCs are still actively in use at scale to make that argument? Linux devs are now missing kernel regression bugs on 64-bit Core 2 Duo hardware because not enough people are using them anymore to catch and report these bugs, and those systems are newer and way more capable for daily driving than 32-bit ones. So if nobody uses Core 2 Duo machines anymore, how many people do you think are using 32-bit Pentium 4/Athlon XP era machines to make that argument?
But let's pretend you're right, and assume there are hordes of Pentium 4 users out there refusing to upgrade for some bizarre reason, unhappy that they can't run the latest Linux. Using a Pentium 4 with its 80W TDP as a terminal would be an insane waste of energy when it's less capable than some no-name Android tablet with a 5W ARM SoC, which can even play 1080p Youtube while the Pentium 4 cannot even open a modern JS webpage. Even most of the developing world now has more capable mobile devices in their pockets and has no use for the Pentium 4 machines that have long been landfilled.
And legacy industrial systems still running Pentium 4 HW, are just happy to keep running the same Windows XP/Embedded they came from the factory since those machines are airgapped and don't need to use latest Linux kernel for their purpose.
So sorry, but based on this evidence, your argument is literally complaining for the sake of complaining about a problem that nobody has, outside of retro-computing hobbyists who like using old HW to tinker with new SW as a challenge. It's not a real issue for anyone else. What I don't get is the entitled expectation that the SW industry should keep writing new SW for free to keep it working on your long-outdated 25+ year old CPU just because you, for some reason, refuse to upgrade to more modern HW that can be had for free.
There are good faith arguments to be had about the state of forced obsolescence in the industry with Microsoft, Apple, etc, but this is not one of them.
>Are there any good Linux distros left with 32-bit x86 support? Do I have to switch to NetBSD?
Yes there are, tonnes: AntiX, Devuan, Damn Small Linux, Tiny Core Linux, etc
> 80W TDP as a desktop terminal would be an insane waste of energy when that's less capable than some no-name Android tablet with a 5W ARM SoC which can even play 1080p Youtube while the Pentium 4 cannot
Insane how far hardware got: the pentium 4 engineers probably felt like the smartest people alive and now a pentium 4 looks almost as ridiculous and outdated to us as vacuum tube computers.
What I find truly insane is how we're "wasting" the hardware.
I ran a mailserver for thousands of people on a 486 DX 33Mhz. It had smtp, pop3 and imap. It was more than powerful enough to handle it with ease.
I had a Pentium 3 w/1GB of RAM, and it was a supremely capable laptop.
These days I have a machine from 2018 or 2019, which I upgraded to 32G of RAM, and I added an NVMe drive in addition to the spinning rust earlier this year... because firefox got extremely sluggish on the HDD (more than a minute to start the browser).
Now, it's obvious that an NVMe drive is superior, but it surprises me how incredibly lackadaisical we've gotten with resource usage. It surprises me how little "extra" we get, except ads that require more and more resources. Sure, we've got higher resolution photos, higher resolution videos, and now AI will require vast resources (which of course is cool). At the same time, we don't get that much more utility out of our computers.
>What I find truly insane is how we're "wasting" the hardware.
Only if you ignore the business economic realities of the world we live in. Unless you work at hyperscalers like MS/Google/Meta, where every fraction of a percent of optimization saves millions of dollars, nobody pays SW engineers to optimize SW for old consumer devices, because it's money wasted that you won't recoup; instead you offload it onto the customer to buy better HW. Rinse and repeat.
>I had a Pentium 3 w/1GB of RAM, and it was a supremely capable laptop.
Why isn't it supremely capable anymore? It will still be just as capable if you run the same software from 1999 on it. Or do you expect to run 2025 SW on it?
Chips that old would be on a ~100nm process node, which is ancient. Anything using Flintstone transistors like those isn't going to hold up.
> the pentium 4 engineers probably felt like the smartest people alive
The P4's NetBurst architecture had some very serious issues, so that might not be true. NetBurst assumed that 10GHz clock speeds would soon be a thing, and some of the engineers back then might have had enough insight to guess that this approach was based on an oversimplified view of semiconductor industry trends. The design took a lot of stupid risks, such as the ridiculously long instruction pipeline, and they were always unlikely to pay off.
Did you read the comment I was replying to? They said old 32 bit systems can still be kept around to be used as a terminal. I replied saying that Android tablets can also be used as terminals with the right SW if that's what you're after.
Or if you have specific X86 terminal SW, use a PC with a 64 bit CPU if you want to run modern PC SW. They've been making them since 2003. That's 23 years of used 64bit HW on the market that you can literally get for free at this point.
Or keep using your old 32 bit system to run your old 32 bit terminal SW. Nobody's taking away your ability to run your old 32 bit version of the SW when new 64 bit SW comes out.
Is there any reason to run Debian as a user, as opposed to a sysadmin? I love running Debian on my servers, it's boring and rock-solid, but why should I run it on my PC instead of a derivative distro?
There are many reasons to run Debian as a user, why wouldn't there be? I'm sure there's also many reasons not to. By "derivative distro" I assume you mean for example Ubuntu, or maybe Linux Mint -- personally my love for Debian is that it's _not_ Ubuntu. There's no "firefox installed as snap", it just feels cleaner and snappier while still being very similar to what I'm used to. Debian used to be slightly more difficult to install, but I don't feel that's the case anymore, especially not since they ship non-free firmware on the install media now.
The only reason I'm on Ubuntu is that Gnome on Ubuntu permanently displays a visible dock. In vanilla Gnome on Debian, there is a dock extension, but it's only visible when the full-screen Applications menu is visible. I don't get how people can work with this. How have you addressed this?
Most people I've heard from who want 'Ubuntu, but without the 'bs' end up on Mint, since there you get the benefits of Ubuntu, but without all the stuff nobody actually wants, rather than 'not Ubuntu to begin with'.
I guess if you like Gnome that doesn't really apply, since Gnome isn't easily available on Mint, but Gnome is one of the reasons I didn't want Ubuntu.
My experience is that the only concrete "benefit of Ubuntu" is that ZFS ships pre-compiled as a kernel module instead of having to be compiled on upgrade, other than that I am not sure what benefits Ubuntu provide on desktop over Debian.
Saying that Debian is "not Ubuntu to begin with" is of course technically true, but the similarities are so large that I have a hard time seeing anyone would have much of an issue switching, Ubuntu is and always has been based on Debian.
On servers, my experience is that Ubuntu is, while not "better", far more common, simply because of the fact that they have a more clear and understandable paid support plan, which I guess makes sense. Not that most users bother to pay for it anyway, but at least it's there if you end up with an EOL system you can't decom.
Well... it just works, so it's fine. I remember a time when I was a student where I would change distribution every six months: Fedora, Debian, Archlinux, Gentoo, FreeBSD, etc. but I finally landed on Debian and stayed there as I grew older.
In the stable distribution, packages tend to be a little dated obviously, but at least it is _stable_. And you can go with the _testing_ distribution for more up-to-date packages.
Also, as a sysadmin, I like having it on my computer to develop and test scripts without having to SSH into a dedicated environment (I still have to eventually, but only for the final tests).
My own view: rolling release distros get less in my way than distros like Debian. They allow me to install anything I want, as close as possible to upstream.
Not saying one is better than the other, just remarking that it's interesting to see 'getting in the way' meaning completely opposite things for different people :).
But... Debian is also a rolling-release distro. Just use the "testing" or "unstable" suite. I have been using Debian unstable on my main desktop since 1999, and have had very few issues with it. The testing suite is the one that filters out most bugs found in unstable, and is something you can definitely use as a regular user.
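Tracking testing is mostly a matter of what your sources point at. A minimal sketch, assuming the classic one-line /etc/apt/sources.list format (newer installs may use a deb822 file under /etc/apt/sources.list.d/ instead, which you'd edit the same way):

```shell
# Point APT at the rolling "testing" suite instead of a fixed codename
sudo sed -i 's/\btrixie\b/testing/g' /etc/apt/sources.list
sudo apt update
sudo apt full-upgrade
```

Using the suite name "testing" rather than a codename means you keep rolling forward automatically when the next release cycle starts.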
Can confirm, have been using Debian testing branch on my local server for AI experiments for a year and it works great. Never hit any major issues, always have (reasonably) up to date software.
I've found both to be true at different times in my life. When I was younger and had time to read release notes I found rolling release made my life a lot easier, because my software was always close to the documentation online, always had the latest fixes etc.
But now my schedule only lets me do an update once a month rather than daily, so it feels more likely to introduce breaking changes, and I'd rather just leave it all until a specific moment when I have the time to work through it all; the longer-term support distros help with that, because those big all-at-once upgrades seem to be better documented.
> My own view: rolling release distros gets less in my way than distros like Debian. They allow me to install anything I want, as close as possible to upstream.
Debian comes with backports repositories which allow you to cleanly install newer versions of selected packages, without affecting the rest of the system.
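In practice that looks something like this (the suite name follows the usual codename-backports pattern, and the package name at the end is purely illustrative; check the backports documentation for current details):

```shell
# Enable the backports repo alongside the regular stable sources
echo 'deb http://deb.debian.org/debian trixie-backports main' \
  | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update

# Opt in per package; everything else stays on stable versions
sudo apt install -t trixie-backports somepackage   # package name is a placeholder
```

Backports have a low pin priority by default, so nothing is pulled from there unless you explicitly ask with `-t`.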
One could argue that in that case Rocky Linux (or any EL) would be perfect, but I think that touches on what I like the most about Debian: it has a very good trade-off between stability and including new software.
Well, right now I'd say it would be _consistent_, at least. I run Fedora on my desktop/laptop/remote devboxes and use Debian as either a base container image (for Docker/LXC) or as a server (whenever the option is there), and I must say I like Trixie's little CLI quality of life improvements.
Not really sure I'd swap Fedora's _really good_ driver support for it, but since I'm running Silverblue and most of my "civilian" apps are flatpaks I probably wouldn't notice the difference.
Yes. I've been running Debian unstable as my desktop for almost two decades. Especially nowadays it's usually less hassle, more stable and more up-to-date than the derivatives.
I don't really see a difference between using Debian for workstations and servers.
The only real reason to use Debian is that you think what Debian is trying to do is a good thing. They have a very clear definition of what they consider acceptable software engineering practices, how they think things should work, what's acceptable from free software, and they do a lot of work to ensure what they ship fits that. They value portability to an extreme, have strong opinions about how linking should be done, separation of concerns and what can be considered free software. It's an extremely political distribution, with a lot of patching happening to try to shoehorn things into their vision.
I personally think it's extremely misguided, breaks a lot of good software, is mostly unsafe, a significant drag on the people actually developing what they ship, and basically the embodiment of everything that is wrong with Linux distributions as a concept. Others will tell you they are fighting the good fight for their users' freedom and are the guardians of the kingdom.
How you stand on this will determine if Debian - and its derivatives which bring its flaws with the Debian inheritance - makes sense for you.
I installed it on my dad's laptop and for me it's great, since I can forget about it until the next major version releases.
The unattended updates also ensure he gets the latest security patches without me coming over and running apt every other week.
I would think more than 90% of all computer users (be it in their personal life or profession) do not need the latest versions of anything, but would prefer a stable system with no sudden changes. It's also much less of a burden on those who manage the systems for them.
Some common software gets very annoying or even unusable with age. Try using a version of yt-dlp older than a few months (Debian's is several years old and is completely useless). Or software like Discord which doesn't play well with Debian because it can't keep itself up to date.
I think of yt-dlp, Discord, or Debian (Debian proper, not downstream distros like Ubuntu), the order in which ordinary, non-computer people are likely to use them is, from most to least likely: Discord, then yt-dlp, then Debian.
I agree that there are probably more users of Discord and yt-dlp outside of the computer field than Debian users. However, people who use computers do use an operating system, and typically have/require somebody that they ask for help when they can't fix it. The question was, why Debian on a Desktop. Like other commenters, I had great success moving people to Debian because all they need nowadays is a browser, and it requires a lot less maintenance (by themselves or third parties) than other distributions. Most "ordinary" people just want their system to stay the same, and are not enjoying constantly having to adapt to new UIs.
It is targeted towards humans. The person managing the Debian system is not necessarily its user base. I find that I can easily introduce people to the few concepts they need to understand and then they can use it almost as fluently as their previously never really grasped Windows. And it gets in their way way less often. The experience on any and all operating systems for most users seems to be to have to confirm random dialogs at random points in time, with random words assembled almost like hieroglyphs, to get back to what they wanted to do.
I drive a car. For almost anything besides wiper water, gas and oil in terms of maintenance, I go to a mechanic. OK, yes, in theory I know how to change tires, and I even have the tools to do it myself, but I let a workshop do it for me, purely out of convenience. I couldn't care less about the guts of the car; the only thing I care about is how often it annoys me and basically requires me to bring it to a mechanic. Is the motor "not targeted at me"?
This is how most people I know see and use their computers/tablets/phones.
What a silly, uneducated and trolling point to make. Yes, and "as always" Red Hat Enterprise Linux is dragging its feet compared to Debian Trixie -- why do you think that is? Maybe Fedora, Debian and RHEL are different Linux distributions with different goals and trade-offs?
Yea gotta get rid of the 32bit x86 code that's been around forever.
So we can add support for RISC-V which has a fraction of x86 installed base and still doesn't have an appreciable hardware standard that allows for broad compatibility between chips.
Debian 13 “Trixie”, 412 comments:
https://news.ycombinator.com/item?id=44848782