He's (unsurprisingly) making an analogy to the dotcom bubble, which seems to me correct. There was a bubble, many non-viable companies got funded and died, and nevertheless the internet did eventually change everything.
The biggest problem is that the infrastructure left behind by the dotcom boom, which paved the path for the current world (the high-speed fiber), doesn't translate to computer chips. Are you still using Intel chips from 1998? And the chips are such a huge cost: they're being backed by debt, but they depreciate in value exponentially. It's not the same, because so much of the current debt-fueled spending is on an asset with a very short shelf life. I think AI will be huge; I don't doubt the endgame once it matures. But the bubble now, spending huge amounts on these data centers using debt without a path to profitability (and inordinate spending on these chips), is dangerous. You can think AI will be huge and still see how dangerous the current manifestation of the bubble is. A lot of people will get hurt very, very badly. This is going to maim the economy in a generational way.
And a lot of the gains from the Dotcom boom are being paid back in negative value for the average person at this point. We have automated systems that waste our time when we need support, product features that should have a one-time cost being turned into subscriptions, a complete usurping of the ability to distribute software or build compatible replacements, etc.
The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.
If you'd ever been to a third world country, you'd see how this is completely untrue. The dotcom boom has revolutionized the way of life for people in countries like India.
Even for the average person in America, consider the ability to do online so many activities that would otherwise have taken hours (e.g. shopping, research, DMV/government activities). The fact that we see negative consequences of this, like social network polarization or brainrot, doesn't negate the positives that have been brought about.
I think you’re putting too much weight on cost (time, money), and not enough weight on “quality of life”, in your analysis.
For sure, we can shop faster, and (attempt) research and admin faster. But…
Shopping: used to be fun. You’d go with friends or family, discuss the goods together, gossip, bump into people you knew, stop for a sandwich, maybe mix shopping with a cinema or dinner trip. All the while, you’d be aware of other people’s personal space, see their family dynamics. Queuing for event tickets brought you shoulder to shoulder with the crowd before the event began… Today, we do all this at home; strangers (and communities) are separated from us by glass, cables and satellites, rather than by air and shouting distance. I argue that this time saving is reducing our ability to socialise.
Research: this is definitely accelerated, and probably mostly for the better. But… some kinds of research were mingled with the “shopping” socialisation described above.
Admin: the happy path is now faster and functioning bureaucracy is smoother in the digital realm. But, it’s the edge cases which are now more painful. Elderly people struggle with digital tech and prefer face to face. Everyone is more open to more subtle and challenging threats (identity theft, fraud); we all have to learn complex and layered mitigation strategies. Also: digital systems are very fragile: they leak private data, they’re open to wider attack surfaces, they need more training and are harder to intuit without that training; they’re ripe for capture by monopolists (Google, Palantir).
The time and cost savings of all these are not felt by the users, or even the admins of these systems. The savings are felt only by the owners of the systems.
Technology has saved billions of person-hours of individual cost, in travel, in physical work. Yet we’re working longer, using narrower ranges of motion, are less fit, less able to tolerate others’ differences, and the wealth gap is widening.
> I think you’re putting too much weight on cost (time, money), and not enough weight on “quality of life”, in your analysis.
"Quality of life" is a hugely privileged topic to be zooming in on. For the vast majority of people both inside and outside the US, Time and Money are by far the most important factors in their lives.
Setting aside time, is money not downstream from quality of life? Meaning, in a better world one might not need to care as much about money? I believe that time and quality of life are congruent - good quality of life means control over one’s own time.
Two decades ago, in the Bay Area, we used to have a lot of bookstores: specialized shops, chains, children's, grade school, college slugbooks, etc. Places like Fry's had a coffee shop and a bookstore inside. The population grew; the number of bookstores went down to near zero.
It seems the crux is that we needed X people to produce goods, and we had Y demand.
Now we need X*0.75 people to meet Y demand.
However, those savings are partially piped to consumers, and partially piped to owners.
There is only so much marginal propensity to spend that rich people have, so that additional wealth is not resulting in an increase in demand, at least commensurate enough to absorb the 25% who are unemployed or underemployed.
Ideally that money would be getting ploughed back into making new firms, or creating new work, but the work being created requires people with PhDs and a few specific skills, which means that entire fields of people are not in the work force.
However, all that money has to go somewhere, and so asset classes are rising in value, because there is nowhere else for it to go.
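That X to X*0.75 arithmetic, plus the split of the savings between consumers and owners, can be sketched as a toy model. All numbers below are illustrative assumptions (including the pass-through and elasticity figures), not data from the thread:

```python
# Toy model: X workers meet demand Y, then a 25% productivity gain arrives.
# What happens to employment depends on how much of the savings is passed
# to consumers as lower prices. pass_through and elasticity are assumed.

workers_before = 100      # X people needed to meet demand Y
gain = 0.25               # now X * 0.75 people produce the same output
pass_through = 0.5        # fraction of savings passed on as lower prices
elasticity = 1.0          # % demand growth per % price drop (assumed)

price_drop = gain * pass_through            # goods get 12.5% cheaper
new_demand = 1.0 + price_drop * elasticity  # demand grows 12.5%
workers_after = workers_before * (1 - gain) * new_demand

print(f"Workers still employed: {workers_after:.1f} of {workers_before}")
# With pass_through = 1.0 most of the displaced workers get re-absorbed;
# the more of the gain that flows to owners instead, the larger the gap.
```

The point of the sketch is only the direction of the effect: the share of savings that reaches consumers directly controls how much demand, and hence employment, recovers.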
It's actually, "they end up" and the 33% gains you're talking about aren't realized en masse until all the coal miners have black lung. It's really quite the, "dealy" as Homer Simpson would say. See, "Charles Dickens" or, "William Blake" for more. #grease
> partially piped to consumers, and partially piped to owners.
Or, the returns on capital exceed the rate of economic growth (r > g), if you like Piketty's Capital in the Twenty First Century.
One of the central points is about how productivity and growth gains increasingly accrue to capital rather than labor, leading to capital accumulation and asset inflation.
Yep, that’s the source of the point. The effort is in finding a way to make it easy to convey. Communication of an idea is almost as critical as its verification now.
You're in luck! A couple years ago he released an "abridged version" of sorts. A Brief History of Inequality is the name. Much more accessible than the 700 pages of Capital in the 21st Century.
With telecom, we benefited from skipping generations. I got into a telecom management program because, in 2001-ish, I was passed on a village street by a farmer bicycling while talking on his cellphone. Mind you, my family could not afford cellphone call rates at the time.
In fact, the technology was introduced out here assuming corporate/elite users. The market reality became such that telcos were forced, kicking and screaming, to open up networks to everybody. The Telecom Regulatory Authority of India (back then) mandated rural <> urban parity of sorts. This eventually forced telcos to share infrastructure costs (shared towers, etc.). The total call and data volumes are eye-watering, but low-yield (low ARPU). I could go on and on, but it's just batshit crazy.
Now UPI has layered on top of that---once again, benefiting from Reserve Bank of India's mandate for zero-fee transactions, and participating via a formal data interchange protocol and format.
Speaking from India, having lived here all my life, and occasionally travelled abroad (USAmerica, S.E. Asia).
We, as a society and democracy, are also feeling the harsh, harsh hand of "Code is Law", and increasingly centralised control of communication utilities (which the telecoms are). The left hand of darkness comes with a lot of darkness, sadly.
Which brings me to the moniker of "third world".
This place is insane, my friend --- first, second, third, and fourth worlds all smashing into each others' faces all the time. In so many ways, we are more first world here than many western countries. I first visited USAmerica in 2015, and I could almost smell an empire in decline: walking past Twitter headquarters in downtown SF, of all places, avoiding needles and syringes strewn on the sidewalk, and avoiding the completely smashed guy just barely standing there, right in the middle of it all.
That kind of extreme poverty juxtaposed with extreme wealth, and all of the social ills that come along with it, has always been a fixture of the American experience. I don’t think it’s a good barometer of whether the USA is in decline when there have long been pockets of urban decay, massive inequality, drug use, etc. Jump back to any point in American history and you’ll find something similar, if not much, much worse. Even in SF of all places: the wild west gold rush era, or the 1970s… America has always held that contradiction.
Yeah, I sort of recounted a stark memory. That juxtaposition was a bit too much.
However, it wasn't just that, and the feeling has only solidified in three further visits. It isn't rational, very much a nose thing, coming from an ordinary software programmer (definitely not an economist, sociologist, think tank).
AI itself is a manifestation of that too: a huge time waster for a lot of people. Getting randomly generated information that is wrong but sounds right is very frustrating. Start asking AI questions you already know the answer to, and the issues become very obvious.
I know HN and most younger people or people with otherwise political leanings always push narratives pointing at rich people bad but I feel a lot of tech has made our lives easier and better. It's also made it more complicated and worse in some ways. That effect has applied to everyone.
In poor countries, they may not have access to clean running water but it's almost guaranteed they have cell phones. We saw that in a documentary recently. What's good about that? They use cell phones not only to stay in touch but to carry out small business and personal sales. Something that wouldn't have been possible before the Internet age.
> The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.
You are describing platform capture. Be it Google Search, YouTube, TikTok, Meta, X, App Store, Play Store, Amazon, Uber - they have all made themselves intermediaries between public and services, extracting a huge fee. I see it like rent going up in a region until it reaches maximum bearable level, making it almost not worth it to live and work there. They extract value both directions, up and down, like ISPs without net-neutrality.
But AI has a different dynamic: it is not easy to centrally control ranking, filtering and UI with AI agents. You can download an LLM; you can't download a Google or a Meta. Now it is AI agents that have the "ear" of the user base.
It's not like it was good before: we had a generation of people writing slop to grab attention on the web and social networks, from the lowest porn site to CNN. We all got prompted by the Algorithm. Now that Algorithm is being replaced by many AI agents that serve users more directly than before.
>You can download a LLM, can't download a Google or Meta.
You can download a model. That doesn't necessarily mean you can download the best model and all the ancillary systems attached to it by whatever service. Just like you can download a web index but you probably cannot download google's index and certainly can't download their system of crawlers for keeping it up to date.
That's true for the GPUs themselves, but the data centers with their electricity infrastructure and cooling and suchlike won't become obsolete nearly as quickly.
This is a good point, and it would be interesting to see the relative value of this building and housing 'plumbing' overhead vs. the chips themselves.
I guess another example of the same thing is power generation capacity, although this comes online so much more slowly I'm not sure the dynamics would work in the same way.
The data centers built in 1998 don't have nearly enough power or cooling capacity to run today's infrastructure. I'd be surprised if very many of them are even still in use. Cheaper to build new than upgrade.
How come? I'd expect that efficiency gains would lower power and thus cooling demands - are we packing more servers into the same space now or losing those gains elsewhere?
Power limitations are a big deal. I haven't shopped for datacenter cages since Web 2.0, but even back then it was a significant issue. Lots of places couldn't give you more than a few kW per rack. State-of-the-art servers can be 2 kW each, so you start pushing 60 kW per rack. Re-rigging a decades-old data center for that isn't trivial. Remember, you need not just the raw power but cooling, backup generator capacity, enough battery to cover the transition, etc.
It's hugely expensive, which is why the big cloud infrastructure companies have spent so much on optimizing every detail they can.
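A back-of-envelope on those numbers: at 2 kW per server, a densely packed rack lands around 60 kW of IT load before cooling overhead. The rack count and the PUE figure below are my assumptions, not figures from the thread:

```python
# Rough rack-power math using the 2 kW/server figure from the comment above.
# servers_per_rack and pue are illustrative assumptions.

server_power_kw = 2.0    # draw of one state-of-the-art server
servers_per_rack = 30    # dense 1U/2U packing (assumed)
pue = 1.3                # Power Usage Effectiveness: cooling + overhead (assumed)

it_load_kw = server_power_kw * servers_per_rack  # raw IT load per rack
facility_kw = it_load_kw * pue                   # what the building must deliver

print(f"IT load per rack: {it_load_kw:.0f} kW")
print(f"Facility power per rack at PUE {pue}: {facility_kw:.0f} kW")
# A facility provisioned for a few kW per rack is off by more than 10x,
# which is why re-rigging old data centers rarely pencils out.
```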
Yes: blade servers replacing what used to be 2 or 3 rack-mount servers. Both the air exchange and the power requirements are radically different if you want to fill that rack the way it was filled before.
It's just an educated guess, but I expect that power density has gone up quite a bit as a form of optimization. Efficiency gains permit both lower power (mobile) and higher compute (server) parts. How tightly you pack those server parts in is an entirely different matter. How many H100s can you fit on average per 1U of space?
How much more centralized data center capacity do we actually need outside of AI? And how much would we need if we spent slightly more time doing things more efficiently?
This is true. The building's useful life is probably 2-3 times as long as a GPU's, but it's still probably half or a quarter of the depreciation timeline of a carrier fiber line.
Even if the building itself is condemnable, what it took to build it out is still valuable.
To give a different example, right now, some of the most prized sites for renewable energy are former coal plant sites, because they already have big fat transmission lines ready to go. Yesterday's industrial parks are now today's gentrifying urban districts, and so on.
Eh not really. Maybe retro cloud gaming services. But games haven't stopped getting more demanding every year. Not only are the AI GPUs focused on achieving clusters with great compute performance per watt and dollar rather than making singular GPUs with great raster performance; even the GPUs which are powerful enough for current games won't be powerful enough for games in 5 years.
Not to mention that we're still nowhere near solving the broadband coverage problem, especially in less developed countries like the US and most of the third world. If anything, it seems like we're moving towards satellite internet and cellular for areas outside of the urban centers, and those are terrible for latency-sensitive applications like game streaming.
> But games haven't stopped getting more demanding every year.
This is not particularly true.
Even top of the line AAA games make sure they can be played on the current generation consoles which have been around for the last N years. Right now N=5.
Sure you’ll get much better graphics with a high end PC, but those looking for cloud gaming would likely be satisfied with PS5 level graphics which can be pretty good.
If you look at year over year chip improvements in 2025 vs 1998, it's clear that modern hardware just has a longer shelf life than it used to. The difficulties in getting more performance for the same power expenditure are just very different than back in the day.
There's still depreciation, but it's not the same. Also look at other forms of hardware, like RAM, and the bonus electrical capacity being built.
In 1998, 16 MiB of RAM was ~$200; in 2025, 16 GiB of RAM is about $50. A Pentium II at 450 MHz was $600 in 1998. Today, an AMD Ryzen 7 9800X can be had for $500, and that Ryzen is maybe 100 times as powerful as the Pentium II. What's available at each price point has changed, but it's ridiculous how much computing I can get for $150 at Best Buy, and it's also ridiculous how little I can do with that much computing power. Wirth’s law still holds: software is getting slower more rapidly than hardware is getting faster.
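Running the comment's own figures through a quick price-performance calculation (the ~100x CPU speedup is the parent's rough estimate, not a benchmark):

```python
# Price-performance then vs. now, using the figures quoted above.

ram_1998_per_mib = 200 / 16           # $/MiB in 1998
ram_2025_per_mib = 50 / (16 * 1024)   # $/MiB in 2025 (16 GiB = 16384 MiB)
ram_gain = ram_1998_per_mib / ram_2025_per_mib

cpu_speedup = 100                     # rough estimate from the comment
cpu_price_ratio = 600 / 500           # Pentium II price vs. Ryzen 7 9800X price
cpu_perf_per_dollar_gain = cpu_speedup * cpu_price_ratio

print(f"RAM: ~{ram_gain:.0f}x cheaper per MiB")                       # ~4096x
print(f"CPU: ~{cpu_perf_per_dollar_gain:.0f}x more perf per dollar")  # ~120x
```

The asymmetry is the point: memory price per byte fell by a factor of thousands over those 27 years, while CPU performance per dollar (on these rough numbers) improved by only about a hundredfold.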
Honestly I think the most surprising thing about this latest investment boom has been how little debt there is. VC spending and big tech's deep pockets keep banks from being too tangled in all of this, so the fallout will be much more gentle imo.
FLOP/s/$ is still increasing exponentially, even if the specific components don't match Moore's original phrasing.
Markets for electronics have momentum, and estimating that momentum is how chip producers plan investment in manufacturing capacity, and how chip consumers plan for depreciation.
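To get a feel for what "still increasing exponentially" buys, here is the time to a 10x improvement in FLOP/s/$ at a few hypothetical annual growth rates. The rates are illustrative, not measured figures:

```python
import math

# Years to a 10x improvement in FLOP/s per dollar at annual growth rate r:
#   (1 + r) ** years = 10  =>  years = log(10) / log(1 + r)
for r in (0.3, 0.5, 1.0):  # 30%, 50%, 100% per year, all hypothetical
    years = math.log(10) / math.log(1 + r)
    print(f"{r:.0%}/yr -> 10x in {years:.1f} years")
```

Even the gap between 30%/yr and 100%/yr changes the 10x horizon from roughly a decade to roughly three years, which is why the estimate of that momentum matters so much for capacity planning.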
They kind of aren't. If you actually look at "how many dollars am I spending per month on electricity", there's a good chance it's not worth upgrading even if your computer is 10 years old.
Of course this does make some moderate assumptions that it was a solid build in the first place, not a flimsy laptop, not artificially made obsolete/slow, etc. Even then, "install an SSD" and "install more RAM" is most of everything.
Of course, if you are a developer you should avoid doing these things so you won't get encouraged to write crappy programs.
Companies want GW data centers, which are a new thing that will last decades, even if GPUs are consumable and have high failure rates. Also, depending on how far it takes us, it could upgrade the electric grid and make electricity cheaper.
And there will also be software infrastructure which could be durable. There will be improvements to software tooling and the ecosystem. We will have enormous pre-trained foundation models. These model weight artifacts could be copied for free, distilled, or fine tuned for a fraction of the cost.
About 40% of AI infrastructure spending is the physical datacenter itself and the associated energy production. 60% is the chips.
That 40% has a very long shelf life.
Unfortunately, the energy component is almost entirely fossil fuels, so the global warming impact is pretty significant.
At this point, geoengineering is the only thing that can buy us a bit of time to figure... idk, something out, and we can only hope the oceans don't acidify too much in the meantime.
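If the 60/40 split a few comments up is roughly right, a simple straight-line sketch shows why blended depreciation is still dominated by the chips. The useful-life figures here are my assumptions, not numbers from the thread:

```python
# Blended straight-line depreciation of AI capex, using the 60/40 split above.
# Useful lives are illustrative assumptions: ~5 years for accelerators,
# ~25 years for the building, power and cooling plant.

capex = 100.0                          # normalize total spend to 100
chips_share, facility_share = 0.60, 0.40
chip_life, facility_life = 5, 25       # years (assumed)

annual_dep = (capex * chips_share / chip_life
              + capex * facility_share / facility_life)

print(f"Straight-line depreciation: {annual_dep:.1f}% of capex per year")
# ~13.6%/yr, of which 12 points come from the short-lived chips and only
# 1.6 points from the long-lived facility.
```

So even with a facility that outlives several GPU generations, nearly 90% of the annual write-off under these assumptions is the chips.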
Interesting. Do you have any sources for this 60/40 split?
And while I agree that the infrastructure has a long shelf life, it seems to me like an AI bubble burst would greatly depreciate the value of this infrastructure as the demand for it plummets, no?
While, yes, I sure look forward to the flood of cheap graphics cards we will see 5-10 years from now. I don't need the newest card, and I don't mind five-year-old top-of-the-line at discount prices.
They're only replacing GPUs because investors will give "free" money to do so. Once the bubble pops people will realize that GPUs actually last a while.
I think you partially answer yourself there, though. Is the value in the depreciating chips, or in the huge datacenters themselves, with cooling and energy supply at such scale?
The wealth the Dotcom boom left behind wasn't in dial up modems or internet over the telephone, it was in the huge amounts of high speed fiber optic networks that were laid down. I think a lot of that infrastructure is still in use today, fiber optic cables can last 30 years or more.
In the late 90s to 2001? Many people were still using modems at that time. Cable or DSL wasn't even an option for a considerable percentage of the population.
Only 361 million people had internet access worldwide in 2000, a small fraction of the global population. The US accounted for a significant portion of those, 31.1% of all global users, with a domestic penetration rate of 43.1%.
Not just still using; many were flat-out modemless. Lots of guys got their hands on a mouse for the first time only after Windows XP launched, which was after the collapse.
Personally I think people should stop trying to reason from the past.
As tempting as it is, it leads to false outcomes because you are not thinking about how this particular situation is going to impact society and the economy.
It's much harder to reason this way, but isn't that the point? Personally, I don't want to hear or read analogies based on the past; I want to see and read stuff that comes from original thinking.
Doesn't that line of reasoning leave you in danger of being largely ignorant? There's a wonderful quote from Twain: "History doesn't repeat itself, but it often rhymes." There are two critical things I'd highlight in that quote. First, the contrast between repetition and rhyming draws attention to the fact that things are never exactly the same; there's just a gist of similarities. Second, it often, but doesn't always, rhyme: this sure looks like a bubble, but it might not be, and it might be something entirely new. _That all said_, it's important to learn from history, because there are clear echoes of history in events; we, people in general, don't change that fundamentally.
IME the number of times where people have said "this time it's different" and been wrong is a lot higher than the number of times they've said "this time is the same as the last" and been wrong. In fact, it is the increasing prevalence of the idea that "this time it's different" that makes me batten down the hatches and invest somewhere with more stability.
This won’t even come close to maiming the economy, that’s one of the more extreme takes I’ve heard.
AI is already making us wildly more productive. I vibe coded 5 deep ML libraries over the last month or so. This would have taken me maybe years before when I was manually coding as an MLE.
We have clearly hit the stage of exponential improvement, and to not invest basically everything we have in it would be crazy. Anyone who doesn’t see that is missing the bigger picture.
The leap of faith necessary in LLMs to achieve the same feat is so large its very difficult to imagine it happening. Particularly due to the well known constraints on what the technology is capable of.
The whole investment thesis of LLMs is that they will be able to a) be intelligent and b) produce new knowledge. If those two things don't happen, what has been delivered is not commensurate with the risk, relative to the money invested.
Given they're referencing Icarus, they seem to agree with you.
Past bubbles leaving behind something of value is indeed no guarantee the current bubble will do so. For as many times as people post "but dotcom produced Amazon" to HN, people had posted that exact argument about the Blockchain, the NFT, or the "Metaverse" bubbles.
Many AI startups around LLMs are going to crash and burn.
This is because many people have mistaken LLMs for AI, when they’re just a small subset of the technology - and this has driven myopic focus in a lot of development, and has led to naive investors placing bets on golden dog turds.
I disagree on AI as a whole, however - as unlike previous technologies this one can self-ratchet and bootstrap. ML designed chips, ML designed models, and around you go until god pops out the exit chute.
> commentators going on about the wax melting from their parents root cellar while Icarus was soaring.
Icarus drowned in the sea.
Even if you want to put the world into only two lumps, cellar dwellers and Icaruses, it is still a group of living people on one side and, on the other, a floating, semi-submerged pile of dead bodies who are literally only remembered for how stupid their deaths were.
Cisco, Level3 and WorldCom all saw astronomical valuation spikes during the dotcom bubble and all three saw their stock prices and actual business prospects collapse in the aftermath of it.
Perhaps the most famous implosion of all was AOL who merged (sort of) with TimeWarner gaining the lion's share of control through market cap balancing. AOL fell so destructively that it nearly wiped out all the value of the actual hard assets that TW controlled pre-merger.
I would add more metrics to think about. For example, very few people used the Internet in the dotcom era, while now AI use is spread across the entire Internet-using population, which will probably not grow much more. In that case, if the Internet population is the driver and it will not grow significantly, we are just redistributing attention. Assuming "all" of society becomes more productive, we will all be on the same train at relatively the same speed.
The 90s bubble also had massive financial fraud, and the capital it laid down wasn’t at anywhere near 100% utilization when it hit the ground, unlike what we are seeing now.
It’s different enough that it probably isn’t relevant.
> [At dotcom time] There was a bubble, many non-viable companies got funded and died, and nevertheless the internet did eventually change everything.
It did, but not for the better. Quality of life and standard of living both declined while income inequality skyrocketed and that period of time is now known as The Great Divergence.
> He's (unsurprisingly) making an analogy to the dotcom bubble, which seems to me correct.
He's got no downside if he's wrong or doesn't deliver; his analogy amounts to selling you a brand-new bridge in exchange for half of your money... and you're ecstatic about it.
Thank you for acknowledging this. The internet was created around a lot of lofty idealism, and none of that has been realized other than opening up the world's information to a great many. It made society and the global economy worse (in the occidental West; the Chinese middle class might disagree) and has paralleled the destabilization of geopolitics. I am no Luddite, but until we can "get our moral shit together," new technologies are nothing but fuel on the proverbial fire.
Glad to be in agreement. The higher message here is that technology is no substitute for politics; cue the crypto hype, which produced little more than crime and money laundering. Without proper policies, corruption invades every stratum of society.
Then why has my experience with AI started to see such dramatically diminishing returns?
2022-2023: AI changed enough to convert me from skeptic to believer. I started working as an AI Engineer and wanted to be on the front lines.
2023-2024: Again, major changes, especially as far as coding goes. I started building very promising prototypes for companies, and was able to knock out a laundry list of projects that had just been too boring to write.
2024-2025: My day-to-day usage has decreased. The models seem better at fact finding but worse for code. None of those "cool" prototypes, from me or anyone else I knew, seemed able to become more than just that. Many of the cool companies I started learning about in 2022 have started to reduce staff and are running into financial trouble.
The only area where I've been impressed is the relatively niche improvements in open source text/image-to-video models. It's wild that you can make short animated films on a home computer now.
But even there I'm seeing no signs of "exponential improvement".
I vibe coded 5 deep ML libraries this month. I'm an MLE by trade and it would have taken me ages without AI. This wasn't possible even a year ago. I have no idea how anyone thinks the models haven't improved
My experience has been that it was. I was using AI last year to build ML models about as well as I have been this year.
I'm not saying AI isn't useful, just that the progress certainly looks to be sigmoid not exponential in growth. By far the biggest year for improvement was 2022-2023. Early 2022 I didn't think any of the code assistants were useful, by 2023 I was able to use them more reliably. 2024 was another big improvement, but I honestly haven't felt the change (at least not for the better).
Some of the tooling may be better, but that has little to do with exponential progress in AI itself.
Wow, really? The agentic coding work that has come out in the last year is super impressive to me.
And before, it didn’t seem to understand the fundamentals of Torch well, not well enough to do novel work. Now, with Codex on high, it absolutely does, and MLE-bench reflects that.
Very few people predicted LLMs, yet lots of people are now very certain they know what the future of AI holds. I have no idea why so many people have so much faith in their ability to predict the future of technology, when the evidence that they can't is so clear.
It's certainly possible that AI will improve this way, but I'd wager it's extremely unlikely. My sense is that what people are calling AI will later be recognized as obviously steroidal statistical models that could do little else than remix and regurgitate in convincing ways. I guess time will tell which of us is correct.
If those statistical models are helping you do better research, or basically doing most of it better than you can, does it matter? People act like models are implicitly bad because they are statistical, which makes no sense at all.
If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self improvement. And yes it will progress because thats the nature of it doing meaningful work.
> If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self improvement.
That's a lot of vague language. I don't really see any way to respond. I suppose I can say this much: the usefulness of a tool is not proof of the correctness of the predictions we make about it.
> And yes it will progress because thats the nature of it doing meaningful work.
This is a non sequitur. It makes no sense.
And I never said there's anything bad about or wrong with statistical models.
Not even remotely. In LLM land, the progress seems slow the past few years, but a lot has happened under the hood.
Elsewhere in AI however progress has been enormous, and many projects are only now reaching the point where they are starting to have valuable outputs. Take video gen for instance - it simply did not exist outside of research labs a few years ago, and now it’s getting to the point where it’s actually useful - and that’s just a very visible example, never mind the models being applied to everything from plasma physics to kidney disease.
First were the models. Then the APIs. Then the cost efficiencies. Right now the tooling and automated workflows. Next will be a frantic effort to "AI-Everything". A lot of things won't make the cut, but absolutely many tasks, whole jobs, and perhaps entire subsets of industries will flip over.
For example you might say no AI can write a completely tested, secure, fully functional mobile app with one prompt (yet). But look at the advancements in Cline, Claude code, MCPs, code execution environments, and other tooling in just the last 6 months.
The whole monkeys-with-typewriters-Shakespeare thing starts to become viable.
I'm paying for 3 different AI services, and our company and most of my team are also paying for various AI stuff. Sounds like a real industry to me. There are just going to be VC losers, as always, where "losing" usually means getting bought by a bigger company or acquihired instead of 100xing or going public.
My team is doing the same, and yet all of us still aren't sure that we're actually more productive overall.
If anything it seems to me like we've just swapped coding with what is effectively a lot more code review (of whatever the LLM spits out), at the cost of also losing that long term understanding of a block of code that actually comes from writing it yourself (let's not pretend that a reviewer has the same depth of understanding of a piece of code as an author).
If you work in a team then you are likely already not writing most of the code yourself.
There will be a point where AI will consistently write better PRs - you can already start to see it here and there - finding and fixing bugs in existing code, refactoring, writing tests, writing and updating documentation, and prototyping are some examples of areas where it often surpasses human contribution.
All the comments about AI writing code and making PRs remind me a lot of all the promises about self-driving cars. That was more than 10 years ago, and today I still don't know anybody who has a car that drives itself. Will AI write useful PRs one day? Probably. Will it do that before I retire or die of old age? Considering I have been using agents for about a year or so and have seen little to no improvement in that time, I'm afraid the current version of AI has probably already peaked and we'll only see marginal improvement due to diminishing returns.
Yes there is a very real trade off between labour and capital.
In the past the tradeoff has been very straightforward. But this is a unique situation because it involves knowledge, and not just the physicality of the human, in regards to productivity.
Uber didn't invent anything; they drove taxis out of business, then jacked up the prices, squeezed the workers, and now everybody is riding around in regular people's crappy cars for more than a cab used to cost. And now I need a phone and to maintain my relationship with these two crappy companies, rather than waving at the street (which is what I used to do to get a ride).
That was literally what everybody said would happen.
I think Uber invented (or at least made widely available) taxi-by-app. Having scores of cab companies, accessed by telephone call, was supremely user-hostile.
If the cab companies had gotten together on an app, they might have shortcut Uber and all of its many dubious practices. They finally are starting to but it's much too late.
Taxi-by-app existed pre-Uber; the innovation was making the taxi actually show up. Austin had the apps, and I would order one, and the taxis would get distracted on the way to my house and pick up other fares, so I couldn't go where I wanted. They had every chance to not be outcompeted by Uber, but they couldn't stop being taxis. And, here we are.
> the innovation was making the taxi actually show up
Uber regularly doesn’t show up, just playing with “4… 3… 6… minutes left”. I always have to wait half an hour at a certain location, with a message “Just book a few minutes before your trip.”
I’ve had a friend miss his flight because of Uber not showing up!
They didn't. They innovated, practically implemented ideas that resulted in the introduction of new goods and services [1]. This is a meaningful difference.
Uber didn't invent anything. But they did pull ridesharing out of a hat.
I think you're definitely in the minority. For most of the world, Uber/Grab/Bolt have made transportation cheaper, safer, more convenient, and more comfortable.
It completely changed an industry. Just because you don't like the current version doesn't mean it wasn't a major innovation. Uber was just the main player in scaling and marketing a new model across the entire globe, which was a huge and costly endeavor.
LLMs have already changed how software can be written, along with thousands of other business/consumer use cases; these companies are just battling it out and finding the most profitable niches. It will be a major business for a long time, and the technology will mature and plateau pretty quickly. If R&D doesn't scale economically, it will just slow down and existing models will be heavily optimized to be cheaper to run.
The dot com boom resulted in very few real industries and comparing the two is not very useful.
Uber definitely screwed the workers and probably existing taxi companies, but for the users it was a huge W, at least in many parts of the world. Taxi companies are notoriously scammy and it seems to be a very universal experience.
From my experience, this is not the case in many European cities. In those places the main benefit seems to be not having to interact with the cab driver.
You don't talk to your drivers? I've had a lot of genuinely good conversations in my rides. I always at least feel it out and see how much the driver wants to talk but it feels wrong to not say anything
I'm talking about the benefit to whoever is using them. Once you learn it's more expensive, there's little logic to keep on using it unless for the reason stated.
For almost 20 years, Amazon has been the poster child of "a company can be unprofitable for years and still turn out a winner", but of course, not all companies can pivot from being a regular e-commerce company to cloud infrastructure/hosting and become a money machine.
So the question, at least to me, is how these AI companies will find a product or service that makes them profitable. Other than becoming actual monopolies in their current domains.
Uber sold something like $50 billion in equity and debt before it went public, and although they're profitable now, to me it doesn't seem like they have an answer to Waymo coming up fast in the rear-view mirror. I think Uber is still a scam, just one where the earlier investors fleeced the later ones who are never going to see the returns they paid for.
> Why three? Will you ever be in a position where one will do it for you?
I believe LLMs will be niche tools like databases, you pay for the product not 'gpt' vs 'claude'. You choose the right tool for the job.
I have a feeling coding tools will be a separate niche, like Cursor, where which LLM it uses doesn't matter. It's the integration, guard rails, prompt seeding, and general software stuff like autocomplete and managing long todos.
Then I pay for ChatGPT because that's my "personal" chat LLM that knows me and knows how I like curt short responses for my dumb questions.
Finally I pay for https://www.warp.dev/terminal as a terminal, which replaced Kitty terminal on macOS (I don't use it for coding), which is another niche. Cursor could enter that arena, but the VSCode terminal is kinda limited for day-to-day stuff given it's hidden in a larger IDE. Maybe a pure CLI tool will do both better.
there's some lock-in for both chatgpt (the history and natural chat personalization feature is super useful) and with Cursor I'm fully invested in the IDE experience.
The lie is that LLMs are the product itself rather than the endless integration opportunities via APIs and online services.
Seems like a measured approach- my read is him saying it’s probably a bubble in that bad ideas are being funded too, but there are a lot of really good ideas doing well.
Also a nit: there's a typo right in the digest, assuming "suring" should be "during". Does CNBC proofread their content?
Faculty at the college I work for are kicking and screaming about AI and students using AI. They don't want to use new tools so they're just trying to outright ban students from using them.
One smug English faculty member said, "well it's not that hard. You just look for dashes in their writing."
I responded with, "you know you can just tell it not to use those, right?"
The writing is a mandatory part of the learning process. Outsourcing it to tools is skipping and cheating. Nothing new here. It was the same when the tool was your buddy or upwork before LLMs.
Banning graphing calculators on undergrad math exams is not because "they don't want to use new tools", either.
Sure, but there's also a continuum of practice between banning outright and letting students use LLMs to just write for them.
Use it as a peer review. Use it during brainstorming. Use it to clarify ambiguous thoughts. Use it to challenge your arguments and poke holes in your assumptions.
Are the examples you mentioned actually banned, as opposed to not actively used in classes? I'd think that even where LLMs don't belong in the curriculum, self-study in various forms, complementary to the curriculum, wouldn't be any of the teachers' business?
Correct. The faculty at my institution do not want any use of LLM or 'AI' technology. End of sentence. If they learn that a student used an LLM, regardless of how it was used, they send a formal academic discipline complaint to administration. It's a fucking joke.
I'm not saying throw the doors open and let loose. I'm saying that we need to find places where using these tools makes sense, follows a sense of professional ethics, and encourages (rather than replaces) critical thinking.
And the problem with your cited paper is that people who kick and scream the loudest about this at my institution (again, this is just at mine and is in no way indicative of any other institution) are the ones who have not updated their courses since I was in college. I mean that quite literally. I attended the institution I currently work at. Decades later and I could turn in papers that I wrote for their classes my freshman year and pass their classes.
Three of them sent me that same linked article. But instead of seeing the message "we need to think about how to use these things responsibly" they just read "you can do what you've done for years and nothing needs to change."
That "research" article isn't as impactful as the faculty at my institution thought.
I'm all for the thoughtful integration or rejection of these technologies based on sound pedagogical practices rooted in student learning theory. At my institution, and I want to stress n=1, they literally do not want to take time updating lessons, regardless of the reason. LLMs are just a convenient scapegoat right now.
I would argue that it's more unethical to not update your classroom lessons in over 2 decades than it is to use LLMs to supplement learning.
The sad difference that makes a difference is that banning graphing calculators from proctored exams is enforceable in practice.
Appealing to honor is a partial solution at best. Cheating is a problem at West Point, let alone the majority of places with a less disciplined culture. It's sad, but true. The fact that you and I would never cheat on exams simply does not generalize.
edit: good on West Point for actually following up on the cheating. I've witnessed another institution sweeping it under the rug even when properly documented and passed up two or three levels of reporting. As an academic director and thus manager of professors this was infuriating and demoralizing for all concerned.
I don't see any reason at all to expect West Point to have less cheating than anywhere else. Army people don't commit crimes any less than civilians do, either.
A market for lemons means there is an information asymmetry. Sellers know what they have and try to offload their lemons on clueless buyers. I don't think that's the case here.
It is indeed hard to tell how many of the people selling this stuff are True Believers. It's also a bit scary, given how incredibly implausible some of the stuff they're saying is.
My understanding is that the cost of training each next model is very very large, and a half trained model is worthless.
Thus when it is realised that this investment cannot produce the necessary returns, there will simply be no next model. People will continue using the old models, but they will become more and more out of date, and less and less useful, until they are not much more than historical artifacts.
My point is that the threshold for continuing this process (new models) is very big (getting bigger each time?), so the 'pop' will be a step function to zero.
Why do you think the models will become out of date and less useful? Like, compared to what? What external factor makes the models less useful?
If it's just to catch up with newly discovered knowledge or information, then that's not the model; they can just train again with an updated dataset and probably not need to train from scratch.
> What external factor makes the models less useful?
Life. A great example can be seen in AI-generated baseball news articles involving the Athletics organization. AI systems this year have been generating articles that incorrectly state that the Atlanta Braves played in games that were actually played by the Athletics, and the reason is an outdated training model. For the 60 years before 2025, the Athletics played in Oakland, and during that time their acronym was OAK. In 2025, they left Oakland for Sacramento and changed their acronym to ATH. The problem is that AI models are trained on 60 years of data where 1. team acronyms are always based on the city rather than the mascot of the team, and 2. the acronyms are OAK = Athletics, ATL = Atlanta Braves, and ATH = nothing. As a result, an AI model that doesn't have the context "OAK == ATH in the 2025 season" will see ATH in the input data, associate ATH with nothing in its model, and will then erroneously assume ATH is a typo for ATL.
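That "nearest known code wins" failure mode can be sketched in a few lines. This is purely a toy illustration of the described behavior - the mapping table and the fuzzy fallback are hypothetical, not how any real model resolves team names internally:

```python
import difflib

# Acronym table as it stood before the 2025 season: no ATH entry.
pre_2025 = {"OAK": "Oakland Athletics", "ATL": "Atlanta Braves"}

def resolve(code: str, table: dict) -> str:
    """Return the team for a code, falling back to the closest known code."""
    if code in table:
        return table[code]
    # Unknown code: assume it's a typo for the most similar known one.
    close = difflib.get_close_matches(code, table.keys(), n=1)
    return table[close[0]] if close else "unknown"

print(resolve("ATH", pre_2025))  # ATH is closest to ATL -> "Atlanta Braves"
```

With no ATH entry, the fallback matches ATH to ATL and confidently outputs the wrong team, which mirrors the articles described above.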
If they stop getting returns in intelligence, they will switch to returns in efficiency and focus on integration, with new models being trained for new data cutoffs if nothing else. Even today there is at least a 5-10 year integration/adoption period if everything halts tomorrow.
There is no reality in which LLMs go away (shy of being replaced).
> If they stop getting returns in intelligence, they will switch to returns in efficiency
I don't think we can assume that people producing what appear to be addictive services are going to do that, especially when they seem to be addicted themselves.
Many argue that the current batch of models provides a large capability overhang. That is, we are still learning how to get the most out of these models in various applications.
So with every prompt you are expected to wait that long?
I highly doubt general people will be willing to wait, and it also doesn't seem entirely viable if you want to do it locally - less bandwidth, and no caching akin to what a literal search engine can do.
No they don't want that, because it could lead to an uprising against them. They want us provided with the essentials in exchange for being dutiful workers, so we have something to lose.
Everyone should live in pods stacked together, eat insects, not drive our own automobiles around or fly places, we should be able to get our entertainment and everything to keep ourselves happy from their subscription entertainment services. Basically we are to consume as little as possible to barely keep ourselves alive and sane while they sail themselves around to pat one another on the backs at their climate and economic conferences on their billion dollar luxury yachts.
Actually no that would be stupid they don't have the time or patience to sail their yachts around. They have crew for that. They will fly in one of their handful of private jets and have the yacht meet them there.
People do this to themselves and willingly, no spooky evil capitalists behind the curtains are necessary.
People love money and what it can bring to the table, people often hate each other, e.g. within families stifled by peer pressure and the expectations of mentalities formed in another, very different era, and people love discovering new countries and cultures. And so on and on.
Take me, for instance - I love my parents, my childhood was normal, and only later did I compare with others and see how abnormally uncommon such a childhood was. But I very much prefer seeing them only a few times a year, even though we love it when they help with the kids. Some of their opinions are very outdated, their ramblings are often out of touch with reality, they sometimes tend to spoil the kids (even after setting boundaries), and the overall generational gap is absolutely massive. It is a form of freedom. Make that 10x more in much stricter societies where pressure and expectations from parents on kids are massive, and then they wonder why kids stay the heck away from them once they're adults.
And fuck local communities: for every good-hearted neighbor who just wants to socialize and help out and otherwise stays out of one's life, there are easily 5 or 10 who are the epitome of nimbyism, voyeurism, or similar hobbies of people with empty lives, clueless about how the world and people actually work but always with very strong opinions on everything and the will to push those on everybody else.
> And I mean, your idea of local community is all about other people doing free work for you and then tolerating your peculiarities with no reciprocation.
When it seems to you like everyone around you is the problem, you may actually be the problem.
Maybe I phrased it wrongly (not a native speaker) but what you say is incorrect. I am not freeloading on my parents; on the contrary, in my first years all my earnings went into providing them a good home for the rest of their lives, while I lived in tiny rental rooms. You have no idea what sort of poor background I come from; most western kids have no clue. It was a massive improvement in QoL for them that they would never have been able to afford themselves, and they still appreciate it massively. Since we live 1500km from them, they help with the kids literally a few hours per year - not a burden for anybody involved.
But the freedom to choose with whom we spend our time is a thing, no matter how much people like you try to force their righteous values, the only proper true way (TM), on everybody else. I am old enough and have met plenty of folks like that over time, be it religious or other forms; in the end they are the same as your comment.
I suspect that's beyond what the Hackernews crowd can cope with contemplating. Billionaires bad except on that subject in which case billionaires good and working class bad.
At that stage, owners will very likely have enough drones and robots at their command to not need to worry about petty things like flesh and blood uprisings.
All the value will be diverted. I don't know if you noticed, but in the last few years, you'd have come out ahead if you had gambled on the stock, crypto, or commodities markets rather than busting your ass.
> Everyone should live in pods stacked together, eat insects, not drive our own automobiles around or fly places, we should be able to get our entertainment and everything to keep ourselves happy from their subscription entertainment services.
Yes of course capitalists love when economy is bad. Sorry, these dystopic visions do not pass even simplest smell test.
It’s less complicated than that. Externalities are when an individual profits but others pay the price. Think of climate change. And when a small number are so rich they control the government, so there’s nothing to stop them. It’s narrow incentives driving the whole thing.
They love it when the richest people do well. They don't care how anyone else lives. The poorer other people are, the better they feel about winning.
Those you call "capitalists" love monopolies, as long as they are theirs. They love a captured market. They don't care about competition unless it is someone other than them competing to provide for them at a lower price.
As of now, billionaires don't want or need a strong economy in the sense of "the middle class and lower class doing well". They want the "our wealth goes up, we get tax breaks, and if the lower class pays for it, cool" kind of economy.
It is not realistic - we are consuming way more than any of the previous generations. Poverty is at an all-time low and has dropped steeply since the last century. We are curing diseases, people are living longer. What's the pessimism about?
I just thought you could make your point without calling other people cringe for disagreeing. I agree the average now is better than in any century before, while also agreeing there are many interests of the type the comment you replied to illustrates, trying to either reduce personal freedoms or perpetuate a certain order of things. Both things can be true, and they are both realistic viewpoints, especially because there are multitudes of people, all with their own priorities, fighting for their own ideal futures, and a few of those have a crazy amount of power.
Meat and protein consumption peaked around 20 years ago too.
> Poverty is at an all time low and steeply dropped since the last century. We are curing diseases, people are living long. What's the pessimism about?
It's not pessimism, it's reality. The ruling class are demanding we reduce consumption while increasing and flaunting theirs. That's just what is. If you're denying that or think it's pessimism I really don't know what to tell you.
That's because of efficiency gains. Easily explained by the fact that consumption over all other products increased.
>Meat and protein consumption peaked around 20 years ago too.
Meat consumption is not a reliable indicator of anything in developed countries. It is the same in the Netherlands as well. But it increased dramatically in India and developing countries.
>It's not pessimism, it's reality. The ruling class are demanding we reduce consumption while increasing and flaunting theirs. That's just what is. If you're denying that or think it's pessimism I really don't know what to tell you.
What's the proof that we reduced consumption? Without cherrypicking?
Go repeat that to the nearest homeless guy. Wealth inequality is rising rapidly, and around a billion people on this planet still can't eat as well as they should.
Not true of at least several major indicators of consumption vs previous generations, according to the data I posted in the other thread.
> Life expectancy is at an all time high.
> How do you explain this?
The more important question is, how do you believe these things you wrote disprove the comment that the rich and ruling class wants us to reduce our consumption, even if they were true?
Because they do. Up until some time maybe around the end of the cold war, progress and development of countries were measured by (among other things) metrics like energy consumption, meat and protein consumption. The consumption based metrics have basically disappeared and the mantra these days is that we are consuming too much. We should minimize meat, energy consumption. There are many proposals to tax such things directly or indirectly, or even just outright limit the amount of animals that are farmed and so on.
I agree that poverty rate flatlined in USA but world poverty (counting India and China) reduced dramatically in the past 20 years. How do you explain this?
>Not true of at least several major indicators of consumption vs previous generations according to data I posted in other thread.
You posted energy consumption per capita which was due to efficiencies.
>Because they do. Up until some time maybe around the end of the cold war, progress and development of countries were measured by (among other things) metrics like energy consumption, meat and protein consumption. The consumption based metrics have basically disappeared and the mantra these days is that we are consuming too much. We should minimize meat, energy consumption. There are many proposals to tax such things directly or indirectly, or even just outright limit the amount of animals that are farmed and so on.
I don't know why you insist on licking rich people's boots over this. None of these good things came from them.
I'm telling you: around a billion people are still hungry, and that number has been stable for 50 years. In the face of massive, global wealth inequality, this is unacceptable to me.
I have agency and I'm very happy with my lot. I'm not "blaming" anybody for anything. But unlike you I am not in a naive infantile delusion about what the ruling class are and want and work toward.
The difference between an observation and blame is, blame implies moral judgement and the implication that "it ought to be different". It is clear from this post and other posts that this is your agenda.
If you don't claim that it ought to be different, what are you arguing about?
You're the one arguing!! What TF are you arguing about??
Why don't you start by explaining how "not everything is getting worse / some things are getting better" addresses in the slightest what I wrote, or somehow proves that what I wrote is wrong. That would be a good start.
For sure, but most bridges aren't in scenic places and robocops will still be limited in numbers, so no worries - just ask your LLM assistant, in case of doubt, which bridge you won't get evicted from under. Although that would probably be illegal advice, but they can try to help you with therapy should you end up in such a situation. Not the top model of course, but hey, it is a benefit.
Based on extensive academic research on trickle-down economics, in particular looking into the evolution of real wages across different sectors of the population since the 1980s.
See the work of recent Nobel prize laureates in economics. Many argue for redistribution and investment back to the society.
But the past few revolutions benefitted everyone and we are better off. Look at the industrial revolution, the digital revolution. Why do you think it is different this time? If trickle-down economics doesn't work, why is world poverty at an all-time low and consumption at an all-time high?
I really don't see how one can separate the industrial revolution from colonialism, considering we have heads of government in colonial countries on record saying that colonies were a necessary outlet for industrial goods [1].
Once you've established that link, it's hard to explain that "everyone" benefitted from the industrial revolution.
Even disregarding that, the working conditions created by industrialization allowed for situations that can hardly be described as "beneficial" [2][3][4].
What percent of the population in places which experienced the industrial revolution would be better off if they time-travelled back 200 years? 1%? 0.2%?
> in places which experienced the industrial revolution
People experienced the industrial revolution everywhere.
I suspect, when you think "places which experienced the industrial revolution", you think about a small subset of areas where some development happened as a result of that, likely the areas where industrialists lived.
But you would also have to consider other places' experience of industrialization. For instance, Congo under EIC colonial rule did experience industrialization - it was the place where industrial amounts of rubber were harvested to allow plants elsewhere to produce joints, pipes, motor belts, etc. It's not really hard to believe that, had Congo not experienced that, its citizens would almost certainly be better off now.
Does Congo lack electricity, modern medicine, and air conditioning?
If the industrial revolution has made their lives worse, it's a double-whammy because they are forced to suffer almost twice as long, as their life expectancy at birth has approximately doubled since 1870.
Yeah it’s a complicated picture and of course nobody knows, but it would be helpful to split “benefits” into things like:
- net benefits to the average person (considering drawbacks)
- overall relative benefits compared to income groups
- benefits in certain areas of society and topics
I think there’ll be some “benefits for all” in terms of things like medical advances and health technology. There will also be broader benefits in general areas, but as a parent poster said, it’ll benefit equity holders most, and there might be some bad tradeoffs (like we’ll have access to much better information and entertainment, but it may also affect the overall employment rate). It’s a very nuanced picture and it’s probably disingenuous of some tech leaders to say “we’ll all benefit”, but some do believe that will be the future.
What you mean is those with some form of ownership of the technology. If development eventually results in full automation, with the expense of production reduced to zero, money will be irrelevant.
Energy, raw materials, and logistics still remain. I don't think we'll ever get to a place where there isn't some input to a production process that is not infinite and free.
Theoretically possible? Maybe (but still an extremely slim chance).
Practically possible? No. People (and countries) own land. Raw materials for robots comes from land. Energy for robots consumes land. Farming food requires massive inputs beyond just the land and energy (but also needs those).
I don’t imagine we’ll get to a world where my great-great-great^20-grandkids can hold out their hand and have a plate of steak and potatoes (or the then-equivalent) placed into it for free, anytime they want.
The expense of production and on-demand delivery of just a simple plate of steak and baked potato will not ever get to zero. If we can’t even get that simple of thing for free, I don’t believe in a world without the notion of money.
Expand that to even better dining, vacation, and leisure/recreational activities and I think the argument becomes even more solid that some form of rationing/limiting will be in effect and there will be a unit/notation of ration and trade that will be indistinguishable from money.
Believe me, ruling them out is the last thing I'd do. I fully expect them in the next decade or two.
A practically possible path to both: Starship is perfected and mining companies begin operations in space. Vast data centers training spatial ai using virtual simulations perfect it well enough for general robotics to become practicable. Automation is then as follows: robotics manufacturing and maintenance is handled by robots. Mining is performed by robots. General manufacturing performed by robots. Potential manufacturing scales increase by orders of magnitudes. Where are the costs in this scenario that would prevent prices falling to zero? And if prices for all goods and virtually all services* fall to zero, what possible role can money have at that point, other than sitting on a shelf as a memento of a vanished system?
The cost is in transportation (aside from the cost of developing and producing all those automated systems). Where do you expect extra-terrestrial mining to occur and why do you think what's mined there would be used on Earth? The nearest place to mine would be the moon, and it's on the order of 1 million dollars per kg to bring things back. We could potentially drop that, but that's a hell of a base cost just for material transport. What makes you think that's going to be happening soon?
In the next couple of decades? Starship is real, space mining companies are real, NVIDIA Cosmos is real, robotics development is nascent, but real and thrilling. Ordinary market forces will ensure the uptake of robotics.
You're calculating the expense of returning mined resources using past metrics that are superseded altogether in this scenario. For instance miniaturization suddenly won't be necessary for mining companies wishing to send gear to asteroids.
>metrics that are superseded altogether in this scenario. For instance miniaturization suddenly won't be necessary for mining companies wishing to send gear to asteroids
Nothing in the near future is superseding the tyranny of the rocket equation. It'll still be extremely expensive to send equipment to, and retrieve material from, space even if the spacecraft and mining equipment were literally free.
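The "tyranny" here is the Tsiolkovsky rocket equation: delta-v grows only with the logarithm of the mass ratio, so the propellant needed grows exponentially with how far you want to go. A minimal sketch - the Isp and mass figures below are made-up illustrative numbers, not any real vehicle's specs:

```python
import math

def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf), in m/s."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(m0_kg / mf_kg)

# Hypothetical stage: Isp 350 s, 100 t wet mass, 25 t dry mass (4:1 ratio).
dv = delta_v(350, 100_000, 25_000)
print(f"{dv:.0f} m/s")  # ~4758 m/s: burning 75% of your mass as propellant
                        # still gets you only about half of what reaching
                        # low Earth orbit requires (~9400 m/s with losses)
```

The logarithm is why "just bring more fuel" fails: doubling the propellant does not double the delta-v, so round trips to mine and return material stay expensive regardless of who built the hardware.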
You are simply incorrect. [1] Starship will change it all if it succeeds. The space sector is abuzz with the possibility of orbital refueling and the opportunities it will open up, e.g. [2]
It was a reply to the assertion that land restrictions mean the resources for full automation will always be unavailable. I don't agree with his contention, but offered a likely workaround anyway.
The widening of the wealth gap is quite independent of AI being involved - a natural progression which was always happening and continues to happen, until some sort of catastrophe reshuffles the cards. Usually a war or revolution: the poor simply rising up, or a lazy and corrupt ruling class depriving their country of so much of the resources and will to defend itself that some outside power can take it.
There are many places on earth where people live no differently from how we lived tens of thousands of years ago. You can just go there, you know, you can just do things. You are an adult.
Thought experiment - Startrek replicators are real.
This basically means almost everything can be built without human involvement. The guy who owns the replicators is the richest.
The wealth gap is so massive you get revolts (because we're educated, not serfs, right?) So then government needs to step in. Either tax->ubi?, socialize it, or make it a state asset?
If you can make many replicators, money stops making much sense. You probably end up with energy (if these devices take a lot of energy to operate) as the new currency.
My gut says that _somehow_ the middle class will get screwed as always, but I struggle to articulate the way that abundant cheap goods lead to that outcome.
Maybe because the very few that control the replicators will be able to cut people they don’t like out of partaking from them? That’d make some sense.
If replicators were replicatable, that control evaporates quickly. Remember how nervous we all were about LLM censorship, then suddenly a $2000 MacBook Pro could run pretty great open source models that seem a few months behind SOTA?
> If you can make many replicators, money stops making much sense.
There are many, many, many, many positional goods. Beachfront properties, original art, historical artifacts, elite clubs, limited edition luxury goods, top restaurants, etc.
The notion that we'd all live happily and contentedly without money if only we had some more iPhones and other goods produced by replicators strikes me as false.
Remember that Keynes predicted about a century ago that 100 years thence (in other words, now) everyone would just work 10 hours a week at most, and the biggest challenge would be to avoid boredom? He predicted productivity growth accurately enough, but assumed that people would be content with 4x or 5x as much as they had back then while simultaneously working 4x or 5x less. Instead, people opted to work just as much and consume 16x as much.
What does it mean in practice to have energy instead of money as currency?
People would still want to be able to trade with lower friction than lugging batteries around, so don't you just re-invent money on top of it? Or just keep the current money around the whole time?
--
The general limiting factor with the "one person controls the replicators, only they have income" idea is that they would rapidly lose that income because nobody else would have anything to trade them anymore. (If you toss in the AI/robotic dream scenario, they don't even need humans to manage the raw material.) But then does that turn into famine and mass-die-off, or Star Trek utopia?
> What does it mean in practice to have energy instead of money as currency?
Something like Bitcoin. When progress in miner efficiency stalls, any kWh of energy not used for something else will be used to make some amount of bitcoin. If you have energy, you can make BTC. If you have BTC, you can give it to someone in exchange for their energy, so that they give you their energy instead of using it to mine bitcoin themselves.
It sounds terrible when you approach it from the point of view of money; of course you can do money more efficiently. But if you approach it from the side of energy, it's a way to organically tie a value to any energy produced, even the energy produced at times when production vastly exceeds demand. And that's going to be most of the energy produced, since we need to develop renewables capacity and can't really wait for the storage technologies that lag horribly behind, so that we could match supply to demand.
This is a way to make all energy valuable, providing an incentive to build renewables even when 90% of the energy they produce will find no traditional buyer.
Only if you assume people's major motivation is wanting what they don't have, as opposed to wanting a little more to survive. History shows the opposite.
> If you can make many replicators, money stops making much sense. You probably end up with energy (if these devices take a lot of energy to operate) as the new currency.
If you can make many replicators, you certainly won't be providing them to anyone else. You'd be using them to ensure that money starts funneling into your revenue stream, and use that as a cash cow to pursue other projects.
What are they replicating? Patented things, copyrighted things? Or groceries? Do they want to replicate things? In Star Trek, they travel light, wear uniforms, and have few personal possessions, because they're on a ship, in the navy. That's why everything has to be digital and everybody stuffs their life inside a phonecorder and drinks synthale. When he's back on earth, Picard has a horse. I could be wrong but I don't think he replicated it.
> Remember how nervous we all were about LLM censorship
You're taking the wrong lesson from that observation. Models that people actually use are just as censored now as they ever were. What changed was that the hysterical anti-censorship babies realized that it's not that big of a problem, at least acutely.
> I struggle to articulate the way that abundant cheap goods lead to that outcome.
It has nothing to do with how cheap the goods are
The problem is that at some point people won't be able to afford literally anything because all, and I mean literally all, of the wealth will be hyper concentrated in a super small percentage of the population
Simple hypothesis: the top 50 richest Americans now hold 5 percent of US wealth. Even if you ignore corruption, lobbying, and any ill intent, you can conclude that these top 50 individuals have better ways of getting returns on money than the rest of the population. Even if their delta of return is only 5%, we can assume that within the next 50 years there is a high probability that these individuals will own 30-50% of all wealth. I strongly believe AI will accelerate that further.
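The compounding claim above can be sketched numerically. The 5% initial share and 5-percentage-point return advantage come from the comment itself; the 3% baseline return for everyone else is my own illustrative assumption:

```python
def wealth_share(initial_share: float, excess_return: float,
                 base_return: float, years: int) -> float:
    """Evolve the wealth share of a group that earns `excess_return` more
    per year than everyone else, renormalizing to total wealth each year."""
    share = initial_share
    for _ in range(years):
        rich = share * (1 + base_return + excess_return)
        rest = (1 - share) * (1 + base_return)
        share = rich / (rich + rest)
    return share

# 5% initial share, 5pp excess return over a 3% baseline, 50 years
print(f"{wealth_share(0.05, 0.05, 0.03, 50):.0%}")  # → 36%
```

Under these assumptions the group's share compounds from 5% to roughly a third of all wealth, which is consistent with the 30-50% range claimed above; the result is sensitive to the assumed return gap, which is exactly what the reply below disputes.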
But all your numbers (except maybe the top-5% one) are completely made up. Strong beliefs don't prevent one from being completely wrong. Neville Chamberlain had a strong belief that he had ensured peace; Einstein had a strong belief that quantum theory's "spooky action at a distance" was incorrect. Both were wrong. Fifty years is a long time, and anything could happen. The last fifty years had the fall of Communism, the EU, China going from an impoverished countryside to a superpower, video phones in our pocket, social media upending communication and mental health, renewable energy displacing coal, Trump, etc.
That's the key. The poor are useful to business so long as they are a source of money and power. What happens if the time comes that the poor have nothing the rich want?
The poor will always have the only thing the rich want. Labor. Without labor Trump cannot slather the white house in gold. Without labor Zuck cannot smoke meats. Without labor Musk cannot troll people on X.
The rich do not want anything you have, they want you. Body and soul.
Not in our lifetimes and certainly not in our form of society. Robots will only drive down the price of labor. People will always be able to supply labor at costs below the price of materials for robots.
" People will always be able to supply labor at costs below the price of materials for robots."
Why? What critical materials do robots need that will always be more expensive than raising a human?
Also, from the point of view of "the rich", the benefit of a robot is that it will (stupidly) do as commanded, unlike a human. Robots don't have a family they want to take care of first.
I mean, eventually like 1 person is going to have more wealth than 60% of the nation combined. At that point, why even bother trying to earn customers or appeal to the lower half when you can instead curry favor with that 1 person.
If you scratch under the hood of UBI, it's a mechanism to keep revolutions at bay. The balance of tax the ultra wealthy vs giving people enough to "live comfortably" is always the job of governance.
> If you scratch under the hood of UBI, it's a mechanism to keep revolutions at bay.
It's also putting money in the hands of the consumers so the rich can compete between themselves at how much each can scoop back up.
Something like feeding animals in the forest to compete with your friends at who can hunt better.
When the poor have nothing, you have to shift to taking money from the other rich, but they are clever, so it's easier to take a little bit from the hands of all the rich equally, give that money to the poor, and reduce the new problem to the old one: how to extract as much as you can from the poor.
The first sentence is definitely true. But UBI is a nerd/socialist fantasy. It would never work and will never happen. All these sci-fi fever dreams of what will happen if AI collapses white collar jobs come from people who don't know how the world or people actually work outside of their daydreams. People aren't going to just be like "ok, well I guess it's time for bread and water and Soviet-style tenement housing, all this progress in living standards was great while it lasted." And other people are talking about using batteries for money or something. People need to touch grass.
We're nearly there. The humans then become the capital/resource to be acquired, not money.
That's why every country is somehow chasing that elusive "population growth". It creates more "things" to own, whether that be money by virtue of more people creating more money through economic activity or simply more people to claim as "yours" (for the elites/leaders).
Or use any of the wonders of military technology invented in the last 200 years to take back society. These fuckers have lived in prosperity for so long that they don't even think it is possible. But so many people throughout history thought they were untouchable until the masses decided they had enough of their shit. Kings with professional armies, knights in full plate, men with cannons, men with guns, machine guns, bombs, planes, tanks, helicopters: and yet, at the end of the day, when enough random people are pissed off, the mass of people with nothing to lose are the ones who "win" in the end, with the powerful dead or hiding.
When unrest happens the military sides with the ones that give better hopes of keeping the stream of money that funds the army flowing.
It sides with the poor only if the powerful (gov) are hopelessly inept at gathering money. If there's a chance that current civilian power can reform and keep collecting the money from the people and funding the army then the army sides with them and help quell the rebellion.
Those who hold kinetic power will never side with poor against the rich.
I'm not talking about the military siding with the people; I'm talking about the military's side not being enough to prevent the people from rising up and taking down whoever is in power. Militaries require logistics; random people do not. A military needs a government or people to follow for direction; masses of angry people do not.
Every time a government or military force has decided it was unstoppable or untouchable, history has proven it wrong. Hell, we spent two decades in Iraq and Afghanistan with the most powerful military and military tech the world has ever known, backed by the strongest economy in the world, against guys with 60+ year old bolt actions and guns filed out by hand in caves, living in mostly desert landscapes, and we still ended up abandoning it because it was too costly. How would the military fare any better against the best-armed population in the world, with direct access to their supplying economy and logistics networks?
Yeah, sure, masses of people aren't making aircraft carriers, but you don't need aircraft carriers to win a war at home. We have modern engineering and chemistry textbooks in every library across the US that will tell you how military technology works and its flaws, what technology you can utilize, find, or make yourself, and a supply of nearly any material someone could possibly want or need sitting in scrapyards across the nation.
Economies can work without currencies. It's a little inconvenient, but bartering/trading goods for services was common in the depression when nobody had any cash.
Printing money is lucrative for the printer, so any time it gets even a little bit useful and feasible for other parties, somebody will start printing.
No I had the speculative ponzi front of mind when making that comment.
Governments love crypto because it lets you seize lots of money from criminals across borders. And it is legal gambling where you can tax the winnings without reimbursing the losers (unless they can offset it but most probably can not)
> Okay, but why would we die on the vine? Wouldn’t we just… make a parallel economy without the AGI? The world works today without AGI.
Because you need things they want. Like why would they spare the electricity to heat your home, when it could go to "better" use powering a few dozen GPUs serving a billionaire? Why would they spare the land for you to grow food, when they could use it to build ziggurats dedicated to their power (or whatever else is their whim)?
The market sends the scarce resources to those with the most money.
What is the fantasy AGI supposed to be that's so great for billionaires to have? A human baby is an <s>A</s>GI, it won't tell you the Ultimate Answer, or even a penultimate one, because it has no way to know.
No, seriously, what? You think an AGI is going to be a willing slave and endowed with special knowledge? Where does either part of that come from?
Thought experiments in science work because there are falsifiable scientific theories that make definite predictions about the world than can be tested.
Ultimately labour goes and works on something else instead, and the availability of freed-up labour makes that possible. New industries and markets develop as a result. A huge number of people will be left behind, but people will focus on things that were a lower priority before.
I have bad news for you, we've run out of sectors to pretend labor could be funneled towards. Manufacturing and agriculture are highly automated, service industry is full tf up, and nobody can afford more construction.
What about medical, elder care, fitness, leisure. Even service industries that focus on a more human connection. Or jobs focused on nature, the environment etc.
And I don't think this would be an easy process, or something that could or would be managed. But it is probably already happening.
Whether the average person is actually better off after the late-90s internet is probably a harder question than it might seem.
The long tail may be closer to what I want, but the quality is also generally lower. YouTube just doesn’t support a team of talented writers, Amazon is mostly filled with junk, etc.
Social media and gig work is a mixed bag. Junk e-mail etc may not be a big deal, but those kinds of downsides do erode the net benefit.
Are you being objective or just romanticizing the past?
Just to use your example: YouTube is filled with talented writers and storytellers, who would have never been able to share their content in the past. *And* the traditional media complex is richer than ever.
I don’t think average quality matters. Just what you want to consume.
If anything, I’d be more open to the opposite argument. Media is so much richer and more engaging that it actually makes our lives worse. The quality of the drugs is too high!
Media is so much richer and more engaging that it actually makes our lives worse. The quality of the drugs is too high!
I am not sure it's the quality; it's more that it's optimized for dopamine hits. Heroin is highly addictive, but I think few people would argue it's a quality drug.
Recently there was a TV item that was filmed (in NL) just before the broad adoption of mobile phones (not smartphones). People looked so much more relaxed and more oriented towards others. I am happy that until I was 18 or so mobile phones were not really a thing, and that smartphones were not really a thing until I was 25-27. I was an early adopter of smartphones, but I don't think we realized how addictive and destructive social media + smartphones would become.
The early internet was very cool though. Lots of info to be found. A high percentage of users had their own web page. A lot of it was pretty whacky/playful. Addictive timelines etc. had not been invented yet.
Does the answer to "is the average person better off" have a lot to do with "how many TV shows are out there"...
or does it have to do with:
- how often their boss bugs them after hours
- how much their boss uses technology to keep an eye on them, their friends, their political views
- how often random strangers might get mad at them and SWAT them, make false claims to their employers, etc
- how often their neighbors are radicalized into shooting up a school
- how hard they find it to talk to a real person to resolve an issue with a company or government service vs being stuck on hold because of downsizing real support staff relative to population size, or with an ai chatbot?
I was trying to be objective which is why I didn’t try to compare individual shows.
Thus average production quality seems like a useful metric. There’s currently a handful of “traditional media/streaming” shows with absolutely crazy budgets today and if you happen to like them then that’s great. However, if you don’t things quickly fall off a cliff in terms of production quality.
The same is true of YouTube. The quality of 50,000 one-man operations is irrelevant if you happen to like MrBeast, but if you don't like MrBeast, budgets drop off fast. A reasonable argument is that you and everyone else may prefer a specific YouTube cooking show over Baking with Julia or another 90s show with a much higher budget, but there were several options to choose from.
Thus, purely objectively, even if 90s TV had lower maximum budgets, the floor being relatively high is worth taking into consideration.
It's worth noting that due to advances in technology, it is possible to deliver the same show for less money and time.
The average "how to cook on a food network" show was, ultimately, one person in the kitchen of a large home cooking for the camera, produced once a week. There are plenty of people delivering that style of cooking show with high production quality today. Obviously it's not the same because some things are less deliverable with smaller or one-person teams (Miss Piggy is not going to visit some Youtube show the way she visited Martha Stewart) but there are people making this content ranging from big shops like NYT Cooking to smaller outfits like Binging with Babish, Glen and Friends Cooking, etc. and there are even outfits like this dedicated to more niche topics like Tasting History or Emmymade.
> Many YouTube channels make great use of Zoom calls for example. It’s still generally a compromise vs an actual face to face conversation.
A lot of today's news footage with experts is also shot not in studios but over online calls. Actually flying somebody out on location is pretty uncommon; and I would say that, with the rise of filmed podcasting, podcasters are more likely to have people on set than television news is.
Daily TV News is limited by travel times. If some story breaks finding the right person and getting them on an airplane and then into a studio can be impractical.
> it is possible to deliver the same show for less money and time
Do we, though?
I recently learned about the controversial scene "Baby, It's Cold Outside".
Ignoring the content of the scene for a moment, the quality of the choreography stood out to me as something you would never see in a movie today. Certainly not in one take.
I would say that has more to do with the decline in musicals involving small numbers of people doing choreography, and the current movie system de-prioritizing dance as an important skill. The highest grossing musical movie happened in 2024 with Wicked, and the second half of that movie is probably going to do the same thing next year.
Youtube has some marginal value, but I'm not sure "storytellers" bring a materially positive impact (and I reject the "richer" aspect outright). We had libraries in the 90s and they didn't force you to watch ads.
That’s my point. We still have libraries! And most have online lending programs, so you can access way more ad-free books than you ever could have in 90s. How is this not richer?
show business and things like it are famously pyramidal in shape. there are decades' worth of people who couldn't make it in previous generations in Los Angeles and New York.
i think what is relatively new is the unaffordability crisis making it so doing such pursuits and not being that successful is no longer a way to make a living on its own.
I wish this argument would die. We're asking for a better future among futures, not a better future compared to the past.
It's like buying a car, receiving a bike, and then being told, "A bike is great because you don't have to walk anymore." If you feel like that's unfair and the response misses the point, that's how that lands. I don't know who, when hearing that, feels better. It feels out of touch and dismissive.
I think in the near term social media can actually have a stabilising effect just because of how much it paralyzes people. It takes emotion and redirects it into something with little real-world effect, whilst most actual power is exercised offline. This sometimes breaks down and the crazy escapes from the internet and wreaks havoc, but mostly the geriatrics are left alone to pull the levers of power in relative peace.
I question this narrative. While social media is certainly having a negative impact, most democratic societies have relatively short life spans historically speaking. There’s also a tendency for economies to falter, for irrationality to increase, for birth rates to drop, and so on. It would seem that the same trends (roughly) occur in every democracy as it starts to fail.
Yes. The meta problem is the trend toward “I dont’t like X, and I don’t like Y, therefore X is causing Y” thinking.
It’s always been the reactionary’s argument: immigrants cause crime, inter-racial marriage causes poverty, etc, etc.
The real collapse we’re seeing is the liberal / progressive adoption of these fallacies. Social media causes fascism (nevermind the absence of social media in previous collapses into fascism), etc.
Many of the people rightfully dismayed by trends are unwittingly contributing to the changes they dislike.
The correct answer to “I think social media leads to fascism” is not “let’s ban social media”. The correct answer is “let’s study the problem and see what science says”. Abandoning that is giving up.
Undeniably better off in every single way. Minimum is that the price of long distance phone calls is now zero, let alone video calls. Being able to speak to family and see them nonstop is incredible.
Do people actually visit family in person less often? And to what degree do they augment rarer in-person family visits with phone calls and/or video calls?
I can believe a lot of friend get togethers IRL have been replaced with video calls. There's a tradeoff. I have a group of older friends and we still get together in-person but Zoom calls are a nice adder.
I'm in a few organizations where we also find Zoom a nice alternative to people schlepping somewhere for an hour meeting that mostly works as well online--and we still have a few physical get-togethers over the course of the year.
Young people are forced to move to wherever job opportunities are due to how hyper optimized the economy has become, so I would say most people spend way less time with family than 30 years ago
It's complicated and you can look at a lot of studies. People have always moved where opportunities existed--including across oceans pre-aircraft. Yes, when there were company towns, people moved less.
More precisely, Americans are more diagnosed with depression. In the five years prior to COVID it's shown as rising by 3.3 percentage points. Is that surprising? The trend isn't likely to be flat. Also, is this happiness? Also, does a graph starting in 2015 relate to the question about the 90s? Also, do you expect happiness to be driven by having new things, or for people to adjust their expectations and remain constantly unhappy? Isn't disruption the main cause of unhappiness?
I wonder what event started in 2020 that might have caused major social and economic disruption, and could have had that effect. Maybe something that drastically limited peoples social mobility and lead to mass isolation.
Expand the thinking to include impact on developing countries, the poor, minority groups who have few people like themselves in their local area, etc.
I’ll grant that for comparatively wealthy, privileged people who were always going to have an easy time (which frankly include me), the internet has been a mixed bag.
But for the kids growing up in comparatively poor countries, who can now access all of the world’s information, entertainment, and economy.. I think it’s a pretty clear win.
I expect AI will be similar: perhaps not a huge boon to the best off, but a substantial improvement for most people in the world. Even if we can sit back and say “oh, but they also get misinformation and lower quality YouTube content”
Would you rather be a 22 year old starting in life in 2025 or 1995? Unless you pick one of the few countries that underwent a drastic change of regime in that time, the answer’s pretty clear to me.
Given my skillset at age 22? Yeah, I'll take 1995. I was old enough to grow up hearing how great the world was going to be if I learned computer programming just to enter the job force at the start of the dotcom bubble burst. 1995 would have been a major upgrade.
Also, knocking that almost decade off my birthday would assure that I spent most of my adult life with the luxury of thinking that energy didn't have negative externalities that were being forced on later generations.
We had Chomsky-esque "any major world power is kind of fascist if you think about it" instead of literal talk by politicians about putting people in camps if they don't like your diet or country of origin.
TV was pretty bad I guess but music was great and I read more back then.
There was a lot of huffing and puffing about gang violence. I grew up on the street the local gang named themselves after and it only marginally touched my life at all.
Housing was dirt cheap, food was dirt cheap, gas was dirt cheap. There was undeveloped land everywhere around the city I live in and it gave a general sense of potential.
Yes, the US was in a particularly prosperous and exciting period compared to much of the rest of the world in 1995. If you’re, say, Chinese, chances are you find life in 2025 much more appealing
Overall, life is better in 2025 for the vast majority of humans. Life expectancy, child mortality, health (despite the obesity epidemic, which is a result of an abundance that has eliminated hunger and food insecurity from large swathes of the globe), purchasing power, access to technology and entertainment, etc, etc…
That some people in the US are feeling disillusioned because housing has become more unaffordable (partly because of regulations and technological advancements that have improved their quality and safety) and that they don’t have the same incredible economic trajectory as the preceding generations, especially since WWII, doesn’t negate that. A run like that can’t last forever, especially since it to a large extent depends on having a relative advantage over the rest of the world - at some point, they’ll start to catch up
Bezos didn't define "society", but knowing Devil is what Devil does, we can infer:
1. Amazon files the most petitions for H1-B work visas after Indian IT shops.
2. Amazon opposed minimum wage increase to $15/hr until 2018!
3. Amazon not only fires union organizers, it's claiming National Labor Relations Board is unconstitutional!
It is all society as long as they have access, and they do. Even if the big labs get more closed off, open source is right there and won’t die.
AI increases everyone’s knowledge and ultimately productivity. It’s on every person to learn to leverage it. The dynamics don’t need to change; we just move faster and smarter.
> AI increases everyone’s knowledge and ultimately productivity. It’s on every person to learn to leverage it. The dynamics don’t need to change; we just move faster and smarter
This is incomplete in key ways: it only increases knowledge if people practice information literacy and validate AI claims, which we know is an unevenly-distributed skill. Similarly, by making it easier to create disinformation and pollute public sources of information, it can make people less knowledgeable at the same time they believe they are more informed. Neither of those problems are new, of course, but they’re moving from artisanal to industrial scale.
Another area where this is begging questions is around resource allocation. The best AI models and integrations cost money and the ability to leverage them requires you to have an opportunity to acquire skills and use them to make a living. The more successfully businesses are able to remove or deprofessionalize jobs, the smaller the pool will be of people who can afford to build skills, compete with those businesses, or contribute to open source software. Twenty years ago, professional translators made a modest white collar income; when AI ate those jobs, the workers didn’t “learn to leverage” AI, they had to find new jobs in different fields and anyone who didn’t have the financial reserves to do that might’ve ended up in a retail job questioning whether it’s even possible to re-enter the professional class. That’s great for people like Bezos until nobody can afford to buy things, but it’s worse for society since it accelerates the process of centralizing money and power.
Open source in particular seems likely to struggle here: with programmers facing financial downturns, fewer people have time to contribute and if AI is being trained on your code, you’re increasingly going to ask whether it’s in your best interests to literally train your replacement.
> This is incomplete in key ways: it only increases knowledge if people practice information literacy and validate AI claims, which we know is an unevenly-distributed skill. Similarly, by making it easier to create disinformation and pollute public sources of information, it can make people less knowledgeable at the same time they believe they are more informed. Neither of those problems are new, of course, but they’re moving from artisanal to industrial scale.
Totally agree with this
>The more successfully businesses are able to remove or deprofessionalize jobs, the smaller the pool will be of people who can afford to build skills, compete with those businesses, or contribute to open source software
I'm mixed on this; ultimately it's the responsibility of individuals to adapt. AI makes people way more capable than they have ever been. It's on them to make something of it.
> but it’s worse for society since it accelerates the process of centralizing money and power.
I'm not sure this is true, it enables individuals like they never have been before. Yes there are the model infrastructure providers, but they are in a race to the bottom
Once upon a time, society was all of us, but Society were the folks that held coming-out parties and gossiped about whose J-class yacht was likely to defend the America's Cup.
Society with a capital S are the beneficiaries of the bubble.
Counter prediction. AI is going to reduce the (relative) wealth of the tech companies.
AWS and Facebook have extremely low running costs per VPS or Ad sold. That IMO is one of the major reasons tech has received its enormously high valuation.
There is nuance to that, but average investors are dumb and don't care.
Add in a relatively high fixed-cost commodity into the accounting, and intuitively the pitch of "global market domination at ever lower costs" will be a much harder sell. Especially if there is a bubble pop that hurts them.
The fact that Bezos is saying this is precisely why the commenter is asking this. He clearly stands to benefit massively from the bubble. Statements like this are meant to encourage buy-in from others to maximize his exit. Presumably "rich" refers to those, like Bezos, who already have incredibly disproportionate wealth and power compared to the majority of people in the US. I'm honestly not sure what the thrust of your comment even is.
That's a very relevant question. And as your question implies, we all know which society the billionaires talk about. But AI is just a technology like any other. It does have the potential to bring great benefits to humanity if developed with that intent. It's the corruptive influence of the billionaire and autocrat greed that turns all technologies against us.
When I say benefits to humanity, I don't mean the AI slop, deepfakes and laziness enablers that we have today. There are niche applications of AI that already show great potential: developing new medicines and new treatments for dangerous diseases, solving long-standing mathematical problems, creating new physics theories. And who knows? Perhaps even creating viable solutions for the climate crisis that we are in. They don't receive as much attention as they deserve, because that's not where the profit lies in AI. Solving real problems requires us to forgo profits in the short term. That's why we can't leave this completely up to the billionaires. They will just use it to transfer even more wealth from the poor and middle classes to themselves.
What are the actual benefits? Where are all these medicines that humans couldn’t develop on their own? Have we not been able to develop medicine? What theorems are meaningful and impactful that humans can’t prove without AI? I don’t know what a solution to the climate crisis is but what would it even say that humans wouldn’t have realistically thought of?
You're most likely correct in thinking 'we would get there eventually'. But in the case of medicine, would you like to make that case to those who don't have the time to wait for 'eventually' - or who'll spend their lives in misery?
It's a matter of prompt engineering: you have to be a really good engineer to pick the correct words in order to get the cure for cancer from ChatGPT, or the actual Krabby Patty recipe.
May I ask why people immediately imagine AI slop whenever anybody mentions LLMs? This is exactly what I meant. Those companies ruined their reputation. LLM/AI applications extend well beyond chat and drawing bots.
> What theorems are meaningful and impactful that humans can’t prove without AI?
I'm not a mathematician, so I cannot give a definitive answer. But I've read that some proofs these days fill an entire book. There is no way anybody is creating those without machine validation and assistance. AI is the next step in that, just as programming support is advancing from complex tools to copilots. I know that overuse of copilots is causing some developers' skills to degrade. But there are also experienced developers who have found ways to use them optimally to significantly increase their speed without filling the code base with AI slop. The same will arguably happen with mathematics.
The point ultimately is, I don't have definitive answers to any of the questions you ask. I'm not a domain expert in any of those fields and I can't see the future. But none of that is relevant here. What's relevant is to understand how LLMs and AI in general can be leveraged to augment your performance in any profession. The exact method may vary by domain. But the general tool use will be similar. Think of it like "How can a computer help me do accounting, cook a meal, predict weather, get me an xray or pay my bills?" It's as generic as that.
I have a PhD in mathematics and I assure you I am not happy that AI is going to make doing mathematics a waste of time. Go read Gowers's essay on it from the 90s. He is spot on.
I would have loved to engage in a conversation, if only to learn something new. But something in the way you framed your reply tells me that that's not what you have in mind. Instead, here's what Dr. Terrence Tao thinks about the same subject [1]. Honestly, I can relate to what he says.
I'm not someone who likes or promotes LLMs due to the utterly unethical acts that the big corporations committed to make profits with them. However, people often forget that LLMs are a technology that was developed by people who practice Mathematics and Computer Science. That was also PhD level work. The fact that LLMs got such a bad reputation has nothing to do with those wonderful ideas, but was a result of the greed of those who are obsessed with endless profits. LLMs aren't just about vacuuming up the IP on the internet, dumping kilotonnes of CO2 into the atmosphere or endless streams of AI slop and low effort fakes.
Human minds process logic and the universe in extraordinary ways. But it's still very limited in the set of tools it uses to achieve that. That's where LLMs and AI in general raise the tantalizing possibility of perceiving and interpreting domains under Mathematics and Physics in ways that no living being has ever done or even imagined. Perhaps its training data won't be stolen text or art. It could be the petabytes of scientific data locked up in storage because nobody knows what to do with it yet. And instead of displacing us, it's likely to complement and augment us. That's where the brilliance of mathematicians and scientists are going to be needed. Nobody knows for sure. But how will one know if you close the doors to that possibility?
I admire Dr. Tao for keeping his mind open to anything new at his age. I wish I had as much curiosity as him.
(Terence Tao is his name.) Yes, he takes a rather measured view on AI, but I think for myself, not in terms of what X great person thinks. He is smarter than I am (you probably are not even aware how amazing he is, frankly, and I only say that to convey my immense admiration), and more successful by a million times, and is a millionaire with a tenure track job, and he's basically a fields medalist among fields medalists. The effect of AI on his life is very little compared to the effect of AI on mine. I am always impressed by Terence Tao, but there's basically no life lesson the average mathematician can glean from him. He is truly astounding (to be fair, there are a few other astonishing people in mathematics).
The truth is that with a few more innovations, even Terence Tao will have little to add to an AI's problem solving ability. I will personally enjoy having mathematics explained to me by the AI, but it will be in relative poverty and material insecurity caused by the AI.
A recent AI data point occurred this past weekend, with many coming together to answer a MathOverflow post, because Terence Tao answered it with some tedious parts done by AI. https://mathstodon.xyz/@tao/115325229364313459
Yes, Facebook is a benefit. Among other things, it gave me React which much of the modern web is built on, and React Native, PyTorch, GraphQL, Cassandra, Presto, and RocksDB just to name a few.
The question is, what are billions of people doing on Facebook if it's harmful? I don't know. My daycare sends me updates, my barbershop tells me when they're closing and I used it to sell my fridge.
This whole irrational Facebook hate is ridiculously overblown. It's an app, and compared to things like TikTok, which is essentially a Chinese psy-op, it's really a great product.
I don’t want to spoil anything for you, but ethanol is actually a very reactive molecule — and in some ways, it acts similarly to opioids like heroin. It, among other things, stimulates endogenous opioid pathways, leading to the release of β-endorphins and activation of mu-opioid receptors. So, alcohol works indirectly, heroin directly – but both enhance opioid signalling. If you’re curious, this study explains it really well:
Food activates it within normal biological limits; alcohol and heroin artificially push the same system far beyond the normal range, forcing the brain to compensate by downregulating receptors or reducing endogenous opioid production. So it's totally legit to compare alcohol to heroin.
I think it's incredibly naive and arrogant to tell billions of people who use a product of their own free will "ackchyually its really bad and you should stop".
Almost everyone would give you a response similar to mine. They use it because it's an easy way to plan events since so many people are on it, or because a small business can easily create a website, sell something, or just kill some time on the can.
Would you say the same about cigarettes? Billions of people used to smoke as well; it was (and still is) quite popular. Is it arrogant and patronising to tell them: this is unhealthy for you, and affects society in harmful ways?
> My daycare sends me updates, my barbershop tells me when they're closing and I used it to sell my fridge.
To consider the other side of this, read "The age of surveillance capitalism" by Shoshana Zuboff (really read it though, not chatgpt the summary :).
All the benefits you mentioned are real. But, at what cost and could we have reaped the same benefits without surrendering all agency to those who can't be held accountable?
What are the costs? Seems like a huge benefit to me considering the alternative would be... I don't know. No updates? Maybe some shitty custom app that would 100% for sure have worse data security and privacy rights than something like Facebook?
Everyone's talking vaguely about the costs but no one actually makes a concrete case, where I made a concrete case of the benefits.
Facebook and many of these other VC companies have worked by building a moat through network effects by burning money to build something free and awesome. Then once you HAVE the network effect then it becomes hard to leave. Your history is on there, people know you through it, your friends and family are there; are you really going to leave? That’s when Facebook starts turning the screws. Ads. Manipulative algorithms. Polarizing recommendation algorithms. Social isolation. Making deals with dictatorships. Censorship of the worst crimes humanity can commit against itself (genocide).
Why? They are making money through all of it. It’s called rent extraction. You OWN something valuable. You no longer have to produce something of value. You can just charge people money for what you own. Rent. It’s various forms of rent. Sucking out money and souls into it. One of countless ways we’re leeched on by these companies and their billionaire owners.
Do the benefits outweigh the harms? Facebook and the VC playbook is boiling a frog and we are the frog.
It’s fast because there are already gobs of people on the internet because of all the other products that came before. Facebook didn’t grow as fast because there weren’t as many people online then. Gmail didn’t grow as fast because there weren’t as many people online then.
I don't understand this argument. Speaking as a kid who grew up middle-class as an 80's teen obsessed with (the then still new) computers, a non-rich person has access to more salient power today than ever in history, and largely for no or low cost. There are free AI's available that can diagnose illnesses for people in remote areas, non-Western nations, etc. and which can translate (and/or summarize) anything to anything with high quality. Etc. etc. AI will help anyone with an idea execute on it.
The only people you have to worry about are not the non-rich, but people without any motivation. The difference of course is that the framing you're using makes it easy to blame "The System", while a motivation-based framing at least leaves people somewhat responsible.
Wealth may get you a seat closer to the table, but everyone is already invited into the room.
The problem is if the system leads to demotivating people more than motivating them on average, which risks a negative feedback loop where people demotivate each other further and so on
What is demotivating people is negativist-framing-obsessed doomer assholes like you dooming and glooming all the possible negatives and absolutely none of the positives. There's no actual unmanageable bad things occurring, and a ton of upside occurring.
People are literally quitting CS majors because of this BS. Hopefully only the people who aren't meant to do it in the first place, but anyway.
You have an incorrect reading of history and economy. Basically none of the wealth and comfort we (regular people) enjoy were "gifted" or "left over" willingly by the owner class. Everything had to be fought for: minimum wage, reasonable weekly hours, safe workplaces, child labor, retirement, healthcare...
Now, ask yourself, what happens when workers lose the only leverage they have against the owner class: their labor? A capitalist economy can only function if workers are able to sell their labor for wages to the owner class, creating a sort of equilibrium between capital and work.
Once AI is able to replace a significant part of workers, 99% of humans on Earth become redundant in the eyes of the owner class, and even a threat to their future prosperity. And they own everything, the police and army included.
> Everything had to be fought for: minimum wage, reasonable weekly hours, safe workplaces, child labor, retirement, healthcare...
Fair enough. True.
> Once AI is able to replace a significant part of workers
But it won't do that. It's going to shift people around a lot, like literally every other technological development in the history of mankind, sure. But there's literally no evidence that it's going to do what you're claiming, which means you're arguing against a spooky strawman. It's not like people are going to just sit around doing nothing and going homeless, dude. Ideas (and activity that ends up being economically-tangible) will fill the vacuum.
I bought a Mac IIci computer in 1990 from my savings working throughout high school, for my freshman year of college. It cost over $8k, which in today's dollars is over $20k.
So imagine my lack of sympathy when people complain about things being literally free (as long as you don't mind signing away your social media profile data)
All the smartest people I know in finance are preparing for AI to be a huge bust that wipes out startups. To them, everyone in tech is just a pawn in their money-moving finance games. They view Silicon Valley at the moment as salivating over what their plays are to make money off the coming hype-cycle implosion. The best finance folks make the most money when the tide goes out… and they’re making moves now to be ready.
Especially startups in AI are at risk, because their specific niche projects can easily be outcompeted when BigTech comes with more generalized AI models.
Broader impact, but while the big players will take a hit the new wave of startups stands to take the brunt of the impact. VCs and startups will suffer the most.
At the end of the day, though, it’s how the system is designed. It’s the needed forest fire that wipes out overgrowth and destroys all but the strongest trees.
My experience has gone the other way from the OOP's: anecdotally, I have had VCs ask me to review AI companies to tell them what they do so they can invest. The VC said VCs don't really understand what they're investing in and just want to get in on anything AI due to FOMO.
The company I reviewed didn't seem like a great investment, but I don't even think that matters right now.
To be clear, when I said “finance folks” I wasn’t really referring to VCs. I’m talking more family office types that manage big pools of money that you don’t know about. The super wealthy class that literally has more money than the King but would be horrified if you knew their name. Old money types. They’re well aware of the “dumb VC” vibe that just throws money after hype. The finance folks I’m talking about are the type that eat failed VCs for lunch.
In my experience they invariably conflate LLMs with AI and can’t/won’t have the difference explained.
This is the blind spot that will cause many to lose their shirts, and is also why people are wrong about AI being a bubble. LLMs are a bubble within an overall healthy growth market.
Machine learning is a perfectly valid and useful field, traditional ML is super useful and can produce very powerful tools. LLMs are fancy word predictors that have no concept of truth.
But LLMs look and feel like they're almost "real" AI, because they talk in words instead of probabilities, and so people who can't discern the distance between an LLM and AGI assume that AGI is right around the corner.
If you believe AGI is right around the corner and skip over the bit where any mass application of AGI to replace workers for cheaper is just slavery but with extra steps, then of course it makes sense to pour money into any AI business.
And energy is the air in the economy at large. The Eliza effect is not the only bias disposing people to believe AGI is right around the corner. There are deeper assumptions many cling to.
Yeah, but net worth is weird because for most people, it just measures age. When you're young, you have nothing in your 401k and you have a brand new mortgage, so you're worth around $0. Negative if you have any student loans.
When you're in your 50s or 60s, the mortgage is repaid, and if nothing blew up, you probably also have a million or two in your 401k, so at that point, it's actually not that hard for a person who had a decent career in the SF Bay Area to be worth $4M+. And many FAANG retirees will probably flirt with $10M+ if they don't spend too much.
Entry-level total comp for SWEs in the SF Bay Area is probably $250k. A senior dev at a public tech company can easily clear $400k. Really senior engineers at FAANG routinely clear $1M.
For senior engineers with some fiscal discipline, it's great, although it's more great to work remotely.
On the more junior end, that $250k goes about as far as $100k in a less expensive region. Keep in mind that the average 1-bedroom rent is $3k+/mo, the average home for a family with kids is $2M, the average PG&E bill for a house is probably $400+, any repairs or remodels will cost you 2x as much as in most other places... and state income taxes are 10% on top of federal.
Most CoL estimates put the cost of living in San Jose at 2x the national average. And that's San Jose.
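The arithmetic behind that comparison can be sketched quickly (tax rate and cost-of-living multiplier are rough assumptions pulled from these comments, not exact tax math):

```python
# Rough sketch of why $250k in the Bay Area "goes about as far as $100k"
# in a less expensive region. All numbers are illustrative assumptions.
gross = 250_000
effective_tax = 0.34   # rough federal rate plus ~10% CA state income tax
col_multiplier = 2.0   # CoL estimates: San Jose at ~2x the national average

take_home = round(gross * (1 - effective_tax))
national_equivalent = take_home / col_multiplier  # purchasing-power feel

print(take_home, national_equivalent)  # 165000 82500.0
```

Under these assumptions the $250k salary buys roughly what an $80-100k salary buys at national-average prices, consistent with the comparison above.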
Adding to that, I love how his analysis is completely detached from the consequences this burst will impose on the working man. As it did in the dotcom bubble.
Of course they will, the ultra wealthy are too big to fail. In a bubble like this, they just invest in pretty much everything and take the losses on the 99% of failures to get the x1000 multiples on the 1% of successes. While the rest of us take the hit.
"During bubbles, every experiment or idea gets funded, the good ideas and the bad ideas. And investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. ... But that doesn't mean anything that is happening isn't real."
Remind me again why we need investors to fund bad ideas? The whole premise of western capitalism is that investors can better align with the needs of the society and the current technological reality.
If the investors aren't the gurus we make them out to be, we might as well make do with a planning committee. We could actually end up with more diversified research.
"Under socialism, a lot of experimental ideas get funded, the good ideas and the bad ideas. And the planning committee have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. ... But that doesn't mean anything that is happening isn't real."
Committees' investments tend to be not very diversified and often too risk-averse because of how blame placing works. E.g., most of the Soviets' cloning of chips (and many other products) wasn't due to lack of engineering skill - it was just less risk for the bureaucrats running the show. R&D for an original chip is risky, and timing it to the next November the seventh is likely not to work. Cloning is a guaranteed success.
The whole point of capitalism is that one is entitled to the consequences of their own stupidity or the lack thereof. Investors are more willing to take risks because their losses are bounded - they are risking only as much as they are willing to, rather than their status in an organization. Of course, once all investors end up investing in the same bubble, there is no real advantage over a committee.
> Remind me again why we need investors to fund bad ideas?
Early stage investors generally fund a portfolio of multiple ideas, where each idea faces great uncertainty - some investments will do tremendously well, some won't. Investors don't need every investment to do well due to the asymmetry of outcomes (a bad investment can at worst go down 100%; a good investment can go up 10,000%, paying off many bad investments).
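That asymmetry can be sketched with a toy portfolio calculation (hypothetical numbers, not real fund data):

```python
# Toy power-law portfolio: 10 seed checks of $1M each,
# nine go to zero, one returns 100x its check.
checks = [1_000_000] * 10
multiples = [0] * 9 + [100]  # return multiple on each check

invested = sum(checks)
returned = sum(c * m for c, m in zip(checks, multiples))

print(invested)  # 10000000 in
print(returned)  # 100000000 out: the single 100x win pays for all nine losses
```

Under these assumed numbers the fund 10x's overall despite a 90% failure rate, which is why an early-stage investor can rationally fund ideas that each look likely to fail.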
> The whole premise of western capitalism is that investors can better align with the needs of the society and the current technological reality.
This is not the premise of capitalism, it's the justification for it - it's generally believed that capitalism leads to better outcomes over time than communism, but that doesn't mean capitalism has zero wastage or results in zero bad decisions.
There is a very important difference: real investors risk their own money, the money they saved over their entire lives, when making decisions.
Under socialism bureaucrats risk someone else's money.
We are not in a purely capitalistic society; we also have states and central banks, with central planning spending over half the money in Europe and the USA, and more than half in Asia.
As a European myself, I see public money being wasted by incompetent people and filling the pockets of politicians, especially Marxist ones. For example, the money Spain received after COVID filled so many socialist pockets, and Spain has not reported back to Europe how it was spent (it was spent on companies of their own friends and family).
Even then, the people the money came from voluntarily chose those institutions. If you don't like risky investments, there are lower risk institutions or products you can put your money in. It's still ultimately the will of the individual what they do with their money, and the consequences of bad choices are still mostly contained to the individuals who make them. But when it's done with tax money, everyone's dragged into it and has no personal choice in the matter. Even worse, when it's tax money without democracy, even if the majority of people don't like how it's used, they even collectively have no choice in the matter.
Do I have this right that there have been no or at least very few pure AI IPO's during this cycle (I can't actually think of a single one)? So it's dissimilar to dotcom in that regard because during that time countless dotcoms went public with sky-high valuations and then failed. A bunch of reputable companies also went or were already public during that time and those saw huge valuation drops so that's more analogous to what could happen in the public market (NVDA, for instance, could pull a Cisco and drop "catastrophically," but survive just fine).
That would cause a lot of pain for those shareholders, but would that be somewhat contained given the public "AI" companies for the most part have strong businesses outside of AI? Or are these market caps at this point so large for some of these AI public companies that anything that happens to them will cause some kind of contagion? And then the follow up is if the private AI companies collapse en masse is that market now also so big that it would cause contagion beyond venture capital and their investors (fully aware that pensions and the such are material investors in VC funds, but they're diversified so even though they'd see those losses maybe the broader market would keep them from taking major hits).
Not giving an opinion here, though my knee jerk is to think we're due for a massive drop, but I've literally been saying that for so long that I'm starting to (stupidly) think this time is different (which typically is when all hell breaks loose of course).
There aren't very many IPOs in general. There were about 8000 publicly traded companies in the US in Jan 2000. Today there are about 3950. A lot of the AI related IPOs have been the infra like CoreWeave and Nebius.
However, it is different from the internet bubble partially for the reason you describe.
There have been a few IPOs, but they perhaps happened earlier in the cycle, or companies are pivoting into AI. I'm thinking companies like Palantir, which was always AI, or Salesforce which is making a big AI pivot.
Most of the funding is not coming from public markets. There is so much private capital available that it isn't necessary. I believe the bubble is in VC, which some would think is fine because it protects public markets from the crash, but I'm not sure that is correct.
When the VC money stops flowing into AI, I think it will send a shockwave through the public markets. The huge valuations of companies like OpenAI, Anthropic, etc will be repriced, which will probably force a re-pricing of public darlings like Palantir, Microsoft, NVIDIA.
If VC funds aren't buying NVIDIA chips and building data centers, everyone will feel the need to re-price.
It may be true that OAI et al. are raising money in private markets, but does that matter? They are still just raising money, and eventually returns need to show up. You cannot escape that. If you cannot do that, nobody will eventually supply the funds to keep operating.
The big advantage of staying private is controlling the narrative.
Because the comment was specifically pointing out that this doesn't seem like a bubble because there aren't many IPOs or public pure play AI companies.
Historically, the worst busts following the bursting of an asset price bubble, in terms of real economic impact, have been from debt fueled bubbles (Great Depression, Global Financial Crisis). You can read Hyman Minsky and Irving Fisher for a detailed analysis of why, but it mainly comes down to the fact that the financial obligations remain once prices and expectations have reset.
Then you have the busts that follow public equity fueled bubbles (Dotcom crash). Nowhere near as bad as the former, but still a moderate impact on the economy due to the widely dispersed nature of the equity holdings and the resulting wealth effect.
What we have now is more of a narrowly held private equity bubble (acknowledging that there's still an impact through the SP500 given widespread index investing). If OpenAI, Anthropic, Perplexity, and a bunch of AI startups go bust, who loses money and what impact does it have on the rest of the economy?
IPOs aren't what they once were. The burden of being a public company has increased (SOX and related public company costs are $5-10M/year), so companies are far more likely to stay private. That has created a positive feedback cycle as the private funding ecosystem has become increasingly robust, which is why you see so many $100B+ private companies.
Also keep in mind that the biggest companies during that bubble had peak market caps of ~500B and then lost ~90%, so 400-500B in losses each and total internet related losses of a couple trillion. If NVDA lost 90%, it would be down 4 trillion dollars, or twice that total just by itself.
AI company valuations collapsing would have meaningful impacts on the broader market. Big pension/mutual funds are important sources of capital across every sector, and if they're taking big losses on NVDA, GOOG, and a portfolio of privates, it will have a chilling effect on their other activity.
The costs are a weak argument. The stronger argument for why they aren't going public any time soon is that OAI in particular is a corporate governance nightmare, in which the way they transmit information about their firm and financials would have to completely change.
There's also plenty of money washing around in private markets, so there's no need to go public. Staying private is an advantage.
There is no way they could raise that much money from public markets.
Also, there's no need to invent new tech names anymore. Marketing can add "AI" to the company name, or (as they say) change the wording from "Loading..." to "Thinking..."
I'd love to see those "gigantic benefits". Currently, the only positive thing about LLMs I hear regularly is that it made people more productive. They don't get paid more (of course), they're just more productive.
I'm not arguing it's a bubble. I'm arguing it's not going to be a "gigantic benefit" for society.
> Will also be good for consumers in the long-term: much faster pace of drug discovery and new tech generally.
Medicine and tech that knowledge workers won't be able to pay for without a job. I also don't think "new tech" is necessarily a good thing societally.
The calculus is easy, really: AI makes 100% of my income worth less but only decreases the cost of a fraction of my expenditures. AI is bad for anyone that works for a living. That's pretty much all of society except for the top x%.
People in the comments seem to have forgotten about, or never lived through, the dot com bubble. Amazon’s shares fell from $113 to $6 in 2000.
So yes, the internet survived and some companies did well after, but a lot of people were hurt badly.
I see that my Echos want me to enable the new Alexa. I confess a great deal of trepidation on these. Fairly confident that this is not going to go smoothly.
And again I'm baffled at how they would set such goodwill and functionality on fire.
This shouldn't be a surprise - capitalism always overshoots. Anything worth investing in will generally receive too much investment, because very few people can tell where the line is.
And that's what causes bubbles but at this point it should be clear that AI will make a substantial impact - at least as great as the internet, likely larger
You made the point for me. That 100bn doubled every 2-3 years. It wasn’t a bubble, but it absolutely looked like one. This will be a bitter lesson too.
Gotta say I'm pessimistic about the future of AI, at least until it's adopted by the public sector, schools and societies. Right now it's just enhancing what we already have. More "efficient" meetings with auto-note-taking so we can have more meetings. More productivity so we can make fewer people do more, increase the workload and make them burn out faster. Better and more sophisticated scammers. More sophisticated propaganda by various legal and illegal actors. Cheaper and more grotesque-looking vidz and music. More software slop. Not to mention all the "cool" improvements coming our way in the form of smart rockets, bombs and drones, and of course I drool over all the smart improvements in surveillance capitalism 2.0. They call it a revolution, but for now it's just a catalyst for more of all the things we "love".
If you're asking yourself why the bubble hasn't burst when everyone is calling it a bubble, it's because no one wants to stop dancing until the music stops. If you told an investor the market will collapse tomorrow with 100% certainty, they would invest today as if there were a 0% chance of it happening.
Yesterday someone's uncle tried the same thing and now his bank account's drained because the 2% that wasn't working was the 2% that prevented his password store from being posted online.
See how that works? A few nerds think it's great while everyone else gets screwed by it.
It's very hard to find anything other than half saying AI is in a bubble that will pop any day now, and the other half declaring AGI by 2029, when a new revolution will begin. If you follow the hard science and not the money, you can see we're somewhere in between these two takes. AI datacenters requiring new power plants are unsustainable for long-term growth. Meanwhile, we have LLMs accepting bullshit tasks and completing them. That is very hard to ignore.
> “That is what is going to happen here too. This is real, the benefits to society from AI are going to be gigantic.”
As an owner of a web host that probably sees advantage in increased bot traffic, this statement is just more "just wait, AI will be gigantic any minute now, keep investing in it for me so my investments stay valuable".
I think most level-headed people can see this is a giant bubble which will eventually burst like the dot-com crash. And AI is a technology that's hard to understand for non-technical (and even some technical) investors.
But of course, every company needs to slap AI on their product now just to be seen as a viable product.
Personally, I look forward to seeing the bubble burst and being left with a more rational view of AI and what it can (and can not) do.
I too am waiting for the bubble to burst. Particularly because I think it's doing real harm to the industry.
Every company seems to be putting all their eggs in the AI basket. And that is causing basic usability and feature work to be neglected. Nobody cares because they are betting that AI agents will replace all that. But it won't and meanwhile everything else about these products will stagnate.
It's a disastrous strategy, and when it comes crashing down and the layoffs start, every CEO will get a pass on leading this failure because they were just doing what everyone else was doing.
OpenAI has a reasonable path to reducing their costs by 10-100x over the next 5 years if they stop improving the models. That would make them an extremely profitable company, with their only real risk being "local AI". However, customers have wanted their data in the cloud for years; local inference would likely just mean 0-cost tokens for OpenAI.
The challenge is the rest of the industry funding dead companies with billions of dollars on the off chance they replicate OpenAI’s success.
I don't see how this works though. OpenAI doesn't exist in a vacuum, it has competitors, and the first company to stop improving their model will get obliterated by the others. It seems like they are doomed to keep retraining right up until the VC funding runs out, at which point they go bankrupt.
Some other company, that doesn't have a giant pile of debt will then pick up the pieces and make some money though. Once we dig out of the resulting market crash.
The winner-takes-all thesis would be that, like TSMC, the capex of competing in this field keeps growing until only one vendor can both raise sufficient capital to compete and effectively execute with that capital. OpenAI doesn't need to be the first to stop raising money and go profitable; they need to be the last vendor to go profits-first.
The problem is OAI has very fierce competition - folks who are willing to absorb losses to put them out of business.
Uber and Amazon are really bad examples. Who was Amazon's competition? Nobody. By the time anyone woke up and took them seriously, it was too late.
Uber only had to contend with Lyft and a few other less funded firms. Less funded being a really important thing to consider. Not to mention the easy access to immense amounts of funding Uber had.
OpenAI is trying to launch a hardware product with Jony Ive, an ads company, an AI-slop-native version of TikTok and several other "businesses". They look well on their way to becoming more of a Yahoo! than a Cisco or VMWare.
Yeah they are all over the place and this should be a huge red flag.
Some marginal investors know this, but they are okay because the music is still playing - when they think it's time to leave, the bubble will pop.
People seem to forget that it's not about whether or not it's actually a bubble; it's really about when certain people, who set these stock prices and valuations, decide it's time to exit and take their profit.
The difference now is that this is all (or mostly) idle cash being invested. The massive warchests built up by FAANG over the last decade are finally being deployed meaningfully rather than sitting in bonds or buying back stock. Much different scenario than companies with non-viable business models going IPO on a wish and a dream.
It's a question of how far the contagion goes - the Dotcom bubble trashed the NASDAQ and rumbled the world a bit, but the "big stocks" weren't directly involved or affected much; the subprime mortgage bubble shook the very foundations of the banking world.
Will the contagion be limited to a few companies, or will the S&P crumble under the bubble's collapse?
The part that’s hard to understand is how the sunk costs of hundreds of billions in capex gets repaid by all those people in cafes paying hundreds of dollars per month to use those LLMs.
People have no idea how much concern there was around whether FB would ever be able to monetize social media. That company went public at $38, and nearly closed below that on IPO day.
AI is more useful than social media. This is not financial advice, but I lean more toward not a bubble.
Not hard to use. I meant hard to understand what the limitations are for non-tech users. E.g. people who think AGI is just around the corner because we now have stochastic parrots.
Every bubble looks obvious in hindsight. The dot-com crash left behind Amazon and Google. The crypto crash left behind Coinbase and a few real revenue generating companies. If this is the AI bubble, then the survivors are going to look very obvious in a decade, we just don’t know which ones yet.
Crypto isn't bullshit (well, most of it is), but it still has utility for millions of people around the world, specifically USDT and USDC, which have proven a good way to move your assets without too much regulation.
So it is regulatory arbitrage. I think many of the critics never contested that crypto is good for doing crime. The critics just also think that either those crimes should continue to be prevented via the financial system or the financial system should be deregulated for all without a crypto backdoor.
A Money Market Fund gives you interest if you are able to access it.
This is kind of a pattern:
1. There is some regulation that is inefficient (e.g. taxi medallions, KYC, copyright protection ...)
2. New technology comes about which allows startups to claim that they have invented a new area that should be regulated differently
3. Turns out (2) is not true and new technology can easily be mapped to existing regulation but it would look bad for the regulator to take away the punchbowl
4. There is some down-turn (bubble pops) and the regulator takes away the punchbowl OR investors have accumulated so much money/power that they corrupt the government to have new rules for their businesses
Gmail's beta started in 2004, and Google Maps launched in 2005. Google's IPO was in 2004, so Web 2.0 products played no part in its enormous early growth.
> The term "Web 2.0" was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article "Fragmented Future" [...] her "2.0" designation refers to the next version of the Web that does not directly relate to the term's current use.
> The term Web 2.0 did not resurface until 2002.
Google's first big Web 2.0 products were GMail (beta launched in 2004, just before Google's IPO) and Google Maps (2005).
What is going on with AI right now could be a bubble just like there was the dotcom bubble. But it isn't like the internet went away after the dotcom burst. The largest companies in existence today are internet companies or have products that wouldn't make sense without the internet.
Sure, many of these "thin prompt wrapper around the OpenAI API" product "businesses" will all be gone within a few years. But AI? That is going to be here indefinitely.
I have to say I'm a little disgusted by these statements. LLMs are useful for many problems, but is there really a conceivable path to them making progress in fighting the countless cancers tormenting humanity?
My framing device: if you had LLMs in the 1500s, how would that help Copernicus determine the orbits of the planets? Maybe through dumb chance, but creating a well reasoned model of the universe required new observations and the ability to interpret the data from a different point of view.
All the 1500s and earlier data that such an LLM would have to have been trained on would lead to an LLM that wouldn’t ever suggest a heliocentric solar system. That LLM might even say he was heretical or refuse to give an answer to anything that led to it saying that the earth wasn’t the centre of the universe. So no help at all.
Interesting framing. Although I assume all the observations had been done already. It was more about being bold enough to investigate a line of thought that wasn't obvious or popular at the time and proving it convincingly.
They already had many "explanations" and models for why the planets were seemingly moving back and forth in the sky during the year. Their models were more complicated than necessary simply because they didn't want to consider the different premise.
The technology - what it is being used for vs. what is invested - does not match up at all. This is what happened in the dot-com bubble. There was a whole bunch of innovation that needed to happen to bring a delightful UX that would bring swathes of people onto the internet.
So far this is true of LLMs. Could this change? Sure. Will it change meaningfully? Personally I don't believe so.
The internet at its core was all about hooking up computers so that they could transform from mere computational beasts into communication machines. There was a tremendous amount of potential that was very, very real. It just so happens that if computers can communicate, we can do a whole bunch of stuff - as is going on today.
What are LLMs? Can someone please explain in a succinct way...? I'm yet to see something super crystal clear.
Things like recommendations, ads, and search will always be around because they were money printers before VCs found out about AI and they will continue to be long after.
The dotcom bubble was not about "the internet" itself. The Internet was fine and pretty much already proven as a very useful communication tool. It was about businesses that made absolutely no sense getting extremely high valuations just because they operated - however vaguely - over the internet.
Generative AI has never reached the level of usability of the Internet itself, and likely never will.
By society, he means the oligarchy will get more power and control.
Yeah, sure, some side benefits for people. AI is still a nuclear weapon against labor in the capital-labor (haves vs. have-nots) struggle, and will start pushing wealth inequality to Egyptian-pharaoh levels.
The only good news for plebeians is that virtual-reality entertainment means you just need a little closet to live in.
Overall, this just leads to further demographic decline, which as an environmental Malthusian I would welcome in the initial stages to get us down from our current level, but I also suspect it would turn into an economic downward spiral, especially with AI, where the oligarchs have such total authoritarian control and monopoly on resources that humanity basically stops having kids at all.
"AI is in a bubble but billionaires will get 'gigantic' benefits"
I see no benefit to anyone unless you can live off your stock portfolio and can easily ride through periods where your portfolio can suffer a 50% loss.
Honestly, during the dotcom bubble at least workers were getting paid and jobs were abundant. Things didn't start getting bad for workers until it popped. We're supposed to be in the 'positive' part of the AI bubble and people already seem desperate and out of hope.
Everyone not directly involved seems to want AI to pop. I'm not sure if that says anything about its longevity. Not very fun to have a bubble that feels bad on both sides.
He may be right, everything points to that conclusion. My main issue is - why the fuck do we care what Bezos thinks on this matter? His ML efforts all lag behind the competition and he’s certainly not an expert in the field of deep learning. Why?