GDP and GDP per capita are really bad metrics. Both are blind to inequality, which is a very important indicator of quality of life. You don't want to be rich in the middle of abject poverty.
Still odd. The OS should be able to manage memory and balance performance more efficiently than that. There’s no reason to preallocate memory in hardware.
It was often used to supplement available memory in cheaper or more flexible ways. For example, many such products let you attach more RAM than the main bus could otherwise address, or at a lower cost than main memory (due to differences in the interfaces required, the option of battery backup, etc.)
The RAMsan line, for example, started in 2000 with a 64GB DRAM-based SSD with up to fifteen 1Gbit FC interfaces, providing a shared SAN SSD for multiple hosts (very well utilized by some of the beefier clustered SQL databases like Oracle RAC). The company itself had been building high-speed specialized DRAM-based SSDs since 1978.
It makes sense when you can't add that much memory to the system directly, or when directly attached memory would be significantly more expensive. For this you can get away with much slower memory than you would put on the memory bus - all it needs to be is faster than the storage bus you're using (even a DRAM box behind a ~100MB/s 1Gbit FC link beats a seeking disk on random access by orders of magnitude).
The last time I saw one was with a mainframe, which kind of makes sense if adding cheaper third-party memory to the machine would void warranties or breach support contracts. People really depend on vendor support for those machines.
The main cases I've seen with mainframes involved network-attached RAM disks (actually, even the earliest S/360 could share a disk device between two mainframes, so...)
A fast scratch pad that can be shared between multiple machines can be ideal at times.
Makes sense in a batch environment - you can lock the volume, do your thing, and then free it for another task running on a different partition or host.
Still seems like a kludge - The One Right Way to do it would be to add that memory directly to CPU-addressable space rather than across a SCSI (or channel, or whatever) link. Might as well add it to the RAM in the storage server and let it manage the memory optimally (with hints from the host).
There was no locking (at least not necessarily); it was a shared resource that programs on multiple computers could use together (also a major use case for the RAMsan where I worked with them - it was not about being unable to add memory, it was about a common fast quorum and cache shared between multiple maxed-out database servers)
> Still odd. The OS should be able to manage memory and balance performance more efficiently than that. There’s no reason to preallocate memory in hardware.
You are arguing hypotheticals, whereas for decades the world had to deal with practicalities. I recommend you spend a few minutes looking into how to create RAM drives on, say, Windows, and think through how to achieve that when your build workstation has 8GB of RAM and you need a scratchpad of, say, 16GB.
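For contrast, a minimal sketch of the software route, assuming a Linux host with root access and an existing mount point (Windows has no built-in equivalent; you'd need third-party drivers like ImDisk):

    import subprocess

    # Mount a 16GB tmpfs as a software "RAM drive". tmpfs pages live in the
    # page cache and spill to swap under memory pressure, so on an 8GB box
    # half of that scratchpad is no longer RAM-speed - which is exactly the
    # gap the external DRAM boxes filled. Mount point and size are
    # illustrative.
    subprocess.run(
        ["mount", "-t", "tmpfs", "-o", "size=16g", "tmpfs", "/mnt/scratch"],
        check=True,
    )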
I know all that - I was there and I saw products like these in person (although they were in the megabyte range back then). I still remember a 5.25" hard-drive-shaped box with a lead-acid battery and lots of memory boards full of 4164s (IIRC).
These are only for when the OS and the machine itself can't deal with the extra memory and wouldn't know what to do with it, things you buy when you run out of sensible options (such as adding more memory to your machine and/or configuring a RAM disk).
You don't really want that. I'm only keeping my sanity there because my small company runs their CI and testing as a contractor.
They are indeed quite spoiled - and that's not necessarily a good thing. Part of the issue is that our CI was good and fast enough that at some point a lot of the new hires never bothered to figure out how to build the code - so for quite a few of them the workflow is "commit to a branch, push it, wait for CI, repeat". And since they often work on just a single problem, the "wait" is lost time for them, which leads to unhappiness if we are too slow.
The message doesn’t need to be fancy, but it should describe what you did. Being unable to articulate your actions is a thought smell. It’s often seen when the developer is trying stuff until it sticks and needs a commit because the only way to test the fix is in a testing environment - two bad practices.
You can call it weak and gimmicky all you want, but it has entrenched itself deeply in our collective imagination.
I saw many people independently come up with a direct quote from this story when they saw the IBM promotional image of a man standing in front of an IBM Quantum System One machine.
I think humans are weak and prone to leaping on gimmicks and extrapolating from limited information to wherever their imagination takes them.
If shown a dazzling, mysterious, $10m glass cube draped in all that hyperbolic "quantum future wow!!" marketing, ordinarily smart people will leap to anything. In reality, QC doesn't seem to have many practical applications yet.
> The post suggests sparkly marketing hooplah wrapping up meager actuality.
I don't think any of those who quoted this story fell for the marketing hyperbole. We all know these machines are interesting engineering marvels, but far from actually useful at the moment, outside a very narrow set of problems that fit their narrow capabilities. The people who fell for it have probably never heard of this story.
This one hit hard. It turns out Phineas Barnum was right this whole time.