It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.
A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype.She skillfully navigated the question in a way that won my respect.
I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.
I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."
We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.
This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless, there's a lot you can do with AI already, but a lot of use cases that are obvious not only in retrospect will only be possible once it matures.
Some people even figured it out in the 80's. Sears founded and ran Prodigy, a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.
Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.
Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.
On the topic of how Sears used to be high-tech: back in 1981, when IBM introduced the IBM PC, it was the first time that they needed to sell computers through retail. So they partnered with Sears, along with the Computerland chain of computer stores, since Sears was considered a reasonable place for a businessperson to buy a computer. To plan this, meetings were held at the Sears Tower, which was the world's tallest building at the time.
My favorite anecdote about Sears is from Starbucks current HQs - the HQs used to be a warehouse for Sears. Before renovation the first floor walls next to the elevators used to have Sears' "commitment to customers" (or something like that).
To me it read like it was written by Amazon decades earlier. Something about how Sears promises that customers will be 100% satisfied with the purchase, and if for whatever reason that is not the case customers can return the purchase back to Sears and Sears will pay for the return transportation charges.
Craftsman tools have almost felt like a life-hack sometimes; their no-questions-asked warranties were just incredible.
My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.
I haven't tested these warranties since Craftsman was sold to Black and Decker, but when it was still owned by Sears I almost exclusively bought Craftsman tools as a result of their wonderful warranties.
FWIW, I bought a Craftsman 1/4" drive ratchet/socket set at a Lowes Home Improvement store last year, and when I got it home and started messing with it, the ratchet jammed up immediately (before even being used on any actual fastener). I drove back over there the next day and the lady at the service desk took a quick look, said "go get another one off the shelf and come back here." I did, and by the time I got back she'd finished whatever paperwork needed to be done, handed me some $whatever and said "have a nice day."
Maybe not quite as hassle free as in years past, but I found the experience acceptable enough.
> My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.
This is covered by consumer protection laws in some places. 4 years on a spade would be pushing it, but I’d try with a good one.
Here in New Zealand it’s called ‘The Consumer Guarantees Act’. We pay more at purchase time, but we do get something for it.
Lots of tools have lifetime warranties. Harbor Freight's swap process is probably fastest, these days, for folks with one nearby. Tekton's process is also painless, but slower: Send them a photo of the broken tool, and they deliver a new tool to your door.
But I'm not old enough to remember a time when lifetime warranties were unusual. In my lifetimes, a warranty on handtools has always seemed more common than not outside of the bottom-most cheese-grade stuff.
I mean: The Lowes house-brand diagonal cutters I bought for my first real job had a lifetime warranty.
And before my time of being aware of the world, JC Penney sold tools with lifetime warranties.
(I remember being at the mall with my dad when he took a JC Penney-branded screwdriver back to JC Penney -- probably 35 years ago.
He got some pushback from people who insisted that they had never sold tools, and then from people who insisted that they never had warranties, and then he finally found the fellow old person who had worked there long enough to know what to do. Without any hesitation at all, she told us to walk over to Sears, buy a similar Craftsman screwdriver, and come back with a receipt.
Prodigy predates ISPs. Before the web had matured a little about 1993 the internet was too technically challenging to interest most consumers except maybe for email, and Prodigy was formed in 1984 -- and although it offered email, it was walled-garden email: a user could not exchange email with the internet till the mid-1990s at which time Prodigy might have become an ISP for a few years before going out of business.
They weren't wrong. Its core business in what is still a viable-enough sector collapsed. Or if it were truly well-managed, running an ISP and a retailer should have been enough insight to be Amazon.
It wasn't possible for them to be well managed at the time it mattered. Sears was loaded with debt by private equity ghouls; same story for almost all defunct brick and mortar businesses; Amazon was a factor, but private equity is what actually destroyed them.
And, knowing Jeff Bezos' private equity origins, one could be forgiven for entertaining the thought that none of this was an accident. Just don't be an idiot and, you know, give voice to that thought or anything.
Are you suggesting that Jeff Bezos somehow convinced all his PE buddies to tank Sears (and their own loans to it) in order for him to build Amazon with less competition? Because, well, no offense, but that seems like a remarkably naive understanding of capital markets and individual motivations. Especially when it's well documented how Eddie Lampert's libertarian beliefs caused him to run it into the ground.
> We're clearly seeing what AI will eventually be able to do
Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.
For example: the "there are three 'b's in blueberry" problem was caused by so much training data in response to "there are two r's in strawberry". it's a systemic issue. no amount of data will solve it because LLMs will -never- be sentient.
Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.
I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.
Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.
Exactly. Books are still being translated by human translators.
I have a text on my computer, the first couple of paragraphs from the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites, and invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and the translation contains 3 or 4 translation blunders.
GPT-5 output for example:
Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem.
Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart.
Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted.
They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them.
Each bore a respectable, bourgeois name from more carefree days:
Welgelegen Buitenrust Nooitgedacht Rustenburg
Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary.
what I find funny is that you can ask the LLMs to write a python program to count the b's in blueberry and it works fine.
If you want the llms to "calculate" something, we should ask it to use a calculator, or if we want it to look up a fact, we should get it to find it for us in a database of facts.
Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips and Moore’s law is already dead.
So newer chips will not be exponentially better but will be more of incremental improvements, so unless the price of electricity comes down exponentially we might never see AGI at a price point that’s cheaper than hiring a human.
Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.
The reason why internet, smartphones and computers have seen exponential growth from the 90s is due to underlying increase in computing power. I personally used a 50Mhz 486 in the 90s and now use a 8c/16t 5Ghz CPU. I highly doubt if we will see the same form of increase in the next 40 years
> Scaling AI will require an exponential increase in compute and processing power,
I think there is something more happening with AI scaling; I think the scaling factor per user is a lot higher and a lot more expensive. Compare to the big initial internet companies. You added one server you could handle thousands more users; incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue it's hard to break even even with an actual paid subscription.
Scaling AI will require an exponential increase in compute and processing power,
A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that of course, but I take the existence of the human brain as an existence proof that some kind of machine can provide human level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPU's.
If we suppose that ANNs are more or less accurate models of real neural networks, the reason why they're so inefficient is not algorithmic, but purely architectural. They're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental simultaneously because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms, if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that since it's dead silicon it could not be changed and iterated on.
> If we suppose that ANNs are more or less accurate models of real neural networks
i believe the problem is we don't understand actual neurons let alone actual networks of neurons to even know if any model is accurate or not. The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do.
If you account for the fact that biological neurons operate at a much lower frequency than silicon processors, then the raw performance gets much closer. From what I can find, neuron membrane time constant is around 10ms [1], meaning 10 billion neurons could have 1 trillion activations per second, which is in the realm of modern hardware.
People mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS when doing a calculation that resembles this one.
the reason why they're so inefficient is not algorithmic, but purely architectural.
I would agree with that, with the caveat that in my mind "the architecture" and "the algorithm" are sort of bound up with each other. That is, one implies the other -- to some extent.
And yes, fair point that building dedicated hardware might just be part of the solution to making something that runs much more efficiently.
The only other thing I would add, is that - relative to what I said in the post above - when I talk about "algorithmic advances" I'm looking at everything as potentially being on the table - including maybe something different from ANN's altogether.
> If we suppose that ANNs are more or less accurate models of real neural networks [..]
IANNs were inspired by biological neural structures and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insights into how much it can help will come from this sort of comparison.
Could you explain your claim that ANNs are nothing like real neural networks beyond their initial inspiration (if you'll accept my paraphrasing). I've seen it a few times on HN, and I'm not sure what people mean by it.
By my very limited understanding of neural biology, neurons activate according to inputs that are mostly activations of other neurons. A dot product of weights and inputs (i.e. one part of matrix multiplication) together with a threshold-like function doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropogation, but I'd like to know whether we understand enough to say for sure that they don't.
Having AI agents learn to see, navigate and complete tasks in a 3d environment. I feel like it had more potential than LLMs to become an AGI (if that is possible).
They haven't touched it in a long time though. But Genie 3 makes me think they haven't completely dropped it.
We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.
> The groundwork has been laid, and it's not too hard to see the shape of things to come.
The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.
As someone who was a customer of Netflix from the dialup to broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s where the tech for widespread broadband was just fundamentally not available.
It's a logical fallacy that just because some technology experienced some period of exponential growth, all technology will always experience constant exponential growth.
There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.
We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.
Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.
The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.
"Progress" moves in fits and starts. It is the furthest thing from inevitable.
I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.
Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.
I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.
For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload.
You're looking at individual generations. These tools aren't for casual users expecting to 1-shot things.
The value is in having a director, editor, VFX compositor pick and choose from amongst the outputs. Each generation is a single take or simulation, and you're going to do hundreds or thousands. You sift through that and explore the latent space, and that's where you find your 5-person Pixar.
Human curated AI is an exoskeleton that enables small teams to replace huge studios.
Is there any example of an AI generated film like this that is actually coherent? I've seen a couple short ones that are basically just vibe based non-linear things.
Some of the festival winners purposely stay away from talking since AI voices and lipsync are terrible, eg. "Poof" by the infamous "Pizza Later" (who is responsible for "Pepperoni Hug Spot") :
Fwiw LLMs are also revolutionary. There's currently more anti-AI hype than AI hype imho. As in there's literally people claiming it's completely useless and not going to change a thing. Which is crazy.
That’s an anecdote about intensity, not volume. The extremes on both sides are indeed very extreme (no value, replacing most white collar jobs next year).
IME the volume is overwhelming on the pro-LLM side.
Yeah the conversation on both extremes feels almost religious at times. The pro LLM hype feels more disconcerting sometimes because there are literally billions if not trillions of dollars riding on this thing, so people like Sam Altman have a strong incentive to hype the shit out of it.
You're right, and I also think LLMs have an impact.
The issue is the way the market is investing they are looking for massive growth, in the multiples.
That growth can't really come from trading cost. It has to come from creating new demand for new things.
I think that's what not happened yet.
Are diffusion models increasing the demand for video and image content? Is it having customers spend more on shows, games, and so on? Is it going to lead to the creation of a whole new consumption medium ?
This ad was purposefully playing off the fact that it was AI though, it was a large amount of short bizarre things like two old women selling Fresh Manatee out of the back of a truck. You couldn't replace a regular ad with this.
> Kalshi's Jack Such declined to disclose Accetturo's fee for creating the ad. But, he added, "the actual cost of prompting the AI — what is being used in lieu of studios, directors, actors, etc. — was under $2,000."
So in other words, if you ignore the costs of paying people to create the ad, it barely costs anything. A true accounting miracle!
How about harvesting your whale blubber to power your oil lamp at night?
The nature of work changes all the time.
If an ad can be made with one person, that's it. We're done. There's no going back to hiring teams of 50 people.
It's stupid to say we must hire teams of 50 to make an advertisement just because. There's no reason for that. It's busy work. The job is to make the ad, not to give 50 people meaningless busy work.
And you know what? The economy is going to grow to accommodate this. Every single business is now going to need animated ads. The market for video is going to grow larger than we've ever before imagined, and in ways we still haven't predicted.
Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before.
You're going to have silly videos for corporate functions. Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis. Whatever. There'll be a market for everything, and 100,000 times as many creators with actual autonomy.
In some number of years, there is going to be so much more content being produced. More content in single months than in all human history up to this point. Content that caters to the very long tail.
And you know what that means?
Jobs out the wazoo.
More jobs than ever before.
They're just going to look different and people will be doing more.
It's quite incredible how fast the generative media stuff is moving.
The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 (text+image to video; fully local if you have the VRAM) is feels unimaginable from last year when OpenAI released Sora (closed/hosted).
There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.
> Progress in AI has always been a step function.
There's decisively no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.
There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter in search for the next one. This one is no different and propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, throwing more training data at them is no longer feasible (we came short of it even before the well got poisoned). Now the competitors are stuck at the same level, within percent points of one another, with the difference explained by fine-tuning techniques and not by technical prowess. Unless a cool new technique come yesterday to dislodge LLMs, we are in for a new winter.
What is your definition of "evidence" here? The evidence, in my view, are physical (as in, available computing power) and algorithmic limitations.
We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask -- well what is the evidence they won't, but it's a silly question -- the evidence is our knowledge of how things work.
Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen.
I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.
What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.
I'm starting to agree with this viewpoint. As the technology seems to solidify to roughly what we can do now, the aspirations are going to have to get cut back until there's a couple more breakthroughs.
> We're clearly seeing what AI will eventually be able to do
I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.
Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.
> A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure)
If you had actually invested in AI pure players and Nvidia, the shovel seller, a couple years ago and were selling today, you would have made a pretty penny.
The hard thing with potential bubbles is not entirely avoiding them, it’s being there early enough and not being left at the end holding the bag.
Financial advisors usually work on wholistic plans not short term ones. It isn't about timing markets its about a steady hand that doesn't panic and makes sure you don't get caught with your pants down when you need cash.
Are you bearish on the shovel seller? Is now the time to sell out? I'm still +40% on nvda - quite late to the game but people still seem to be buying the shovels.
I have absolutely no clue whatsoever. I have zero insider information. For all I know, the bubble could pop tomorrow or we might be at the beginning of a shift of a similar magnitude to the industrial revolution. If I could reliably tell, I wouldn’t tell you anyway. I would be getting rich.
I’m just amused by people who think they are financially more clever by taking conservative positions. At that point, just buy ETF. That’s even more diversification that buying Microsoft.
It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and becoming one of the world's richest in the process).
People also underestimate the value of maximizing opportunities for luck. If we think of luck as random external chance that we can't control, then what can we control? Doing things that increase your exposure to opportunities without spreading yourself too thin is the key. Easier said than done to strike that balance, but getting out there and trying a lot of things is a viable strategy even if only a few of them pay off. The trick is deciding how long to stick with something that doesn't appear to be working out.
The ascents of the era all feel like examples of anti-markets, of having gotten yourself into an intermediary position where you control both side's access.
Ability vastly increases your luck surface area. A single poker hand has a lot of luck, and even a game, but over long periods, ability starts to strongly differentiate peoples' results.
This might be true for a normal definition of success, but not lottery-winner style success like Facebook. If you look at Microsoft, Netflix, Apple, Amazon, Google, and so on, the founders all have few or zero previous attempts at starting a business. My theory is that this leads them to pursue risky behavior that more experienced leaders wouldn't try, and because they were in the right place at the right time, that earned them the largest rewards.
When you are still one of the top 3 richest people in the world after your mistake, that is not a "failure" in the way normal people experience it. That is just passing the time.
This is just cope for people with a massive string of failed attempts and no successes.
Daddy's giving you another $50,000 because he loves you, not because he expects your seventh business (blockchain for yoga studio class bookings) is going to go any better than the last six.
Past a certain point, skill doesn't contribute to the magnitude of success and it becomes all luck. There are plenty of smart people on earth, but there can only be 1 founder of facebook.
Once you go deep enough into a personal passion project like that, you run a serious risk of flunking out of school. For most people that feels like a big deal. And for those of us with fewer alternatives in life, it's usually enough to keep us on the straight and narrow path.
People from wealthy backgrounds often have less fear of failure, which is a big reason why success disproportionately favors that clique. But frankly, most people in that position are more likely to abuse it or ignore it than to take advantage of it. For people like Zuckerberg and Dell and Gates, the easiest thing to do would have been to slack off, chill out, play their expected role and coast through life... just like most of their peers did.
Metaverse and this AI turnaround are characterized by the LACK of perseverance, though. They remind me of the time I bought a guitar and played it for three months.
When you put the guitar down after three months it's one thing, but when you reverse course on an entire line of development in a way that might affect hundreds or thousands of employees it's a failure of integrity.
Or you can just have rich parents and do nothing, and still be considered successful. What you say only applies to people who start from zero, and even then I'd call luck the dominant factor (based on observing my skillful and hardworking but not really successful friends).
Another key component is knowing the right people or the network you're in. I've known a few people that lacked 2 of those 3 things and yet somehow succeeded. Simply because of the people they knew.
No. Nothing of that scale. I was replying to OP's take on the 3 factors that lead to success in general. I was simply pointing out a 4th factor that plays a big role.
When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense.
Giving 1.5 million salary is nothing for these people.
It shouldn’t be mind boggling. They see revolutionary technology that has potential to change the world and is changing the world already. Making a gamble like that is worth it because losing is trivial compared to the upside of success.
You are where you are and not where they are because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing.
Obviously mark is where he is also because of luck. But he’s not an idiot and clearly it’s not all luck.
>But how is it worth for meta, since they won't really monetize it.
Meta needs growth as there main platform is slowing down. To move forward they need to gamble on potential growth. VR was a gamble. They bombed that one. This is another gamble.
They're not stupid. Like all the risks you're aware of, they're also aware of. They were aware of the risks for VR too. They need to find a new high growth niche. Gambling on something with even a 40% chance of exploding into success is a good bet for them given there massive resources.
It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie.
I'll differ from the siblingposters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; while the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about.
Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.
Gee, what makes it grow so big though? The power of human ambition?
And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.
To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept beca corresponding to a mental image of a human or group of humans. That's some sort of default framing, such as can only be provided to boggle the mind; considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person" but he's not the one with the impostor syndrome.
For ours are human minds, optimized to view things in term of person-terms and Dunbar-counts; even the Invisible Hand of the market is hand-shaped. But last time I checked my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead
I would posit that for an entity like Facebook, to perform an action that does not look completely ridiculous from the viewpoint of an individual observer, is the equivalent an anatomical impossibility. It did evolve after all from American college students
See also: "Beyond Power / Knowledge", Graeber 2006.
why is there so much of this on HN? I'm on a few social networks, but this is the only one where I find this kind of quasi-spiritual, stream of consciousness, word length steadily increasing, pseudo-technical, word salad diatribes?
It's very unique to this site and these type of comments all have an eerily similar vibe.
This is pretty common on HN but not unique to it. Lots of rationalist adjacent content (like stuff on LessWrong, replies to Scott Alexander's substack, etc) has it also. Here I think it comes from users that try to intellectualize their not-very-intellectual, stream of consciousness style thoughts, as if using technical jargon to convey your feelings makes them more rational and less emotional.
Unfortunately this kind of talk really gets under my skin and has made me have to limit my time on this site because it's only gotten more prevalent as the site has gotten more popular. I'm just baffled that so much content on this forum is people who seem to think their feelings-oriented reactions are in fact rational truths.
The answer is fairly straightforward. It's fraud, and lots of it.
A honest businessman wouldn't put their company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.
A honest businessman would never have gotten Facebook this valuable because so much of the value is derived from ad-fraud that Facebook is both party to and knows about.
A honest businessman would never have gotten Facebook this big because it's growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping".
Bear in mind that these are all bad as they're unsustainable. The AI bubble will burst
and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If it takes too long for the bubble to burst, if zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wisen up to how many of their impressions are bots, they might collapse entirely.
The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.
As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders.
He will say whatever he wants and because the returns have been pretty decent so far, people will just take his word for it. There's not enough class A shares to actually force his hand to do anything he doesn't want to do.
Zuckerberg started as a sex pest and got not an iota better.
But we could, as a society, stop rewarding him for this shit. He'd be an irrelevant fool if we had appropriate regulations around the most severe of his misdeeds.
And since we live in the era of the real golden rule (i.e "he who has the gold makes the rules), there's no chance that we'll ever get the chance to catch the ship. Mark lives in his own world, because we gave him a quarter trillion dollars and never so much as slapped him on the wrist.
I used to work in adtech. I don't have any direct information but, I assume this relates to the persistent rumours that Facebook inflates impressions and turns a blind eye to bot activity.
>It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.
Everything zuck has done since the "dawn of AI" has been to intentionally subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are seeing emergently that AI is also threatening social networks -- you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?
I believe exactly 0 percent of the decision to make Llama open-source and free was done altruistically as much as it was simply to try and push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is also strategically intended to devalue AI incumbents. AI is very much an existential threat to Meta.
Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.
The line was to buy Amazon as it was undervalued a la IBM or Apple based on its cloud computing capabilities relative to the future (projected) needs of AI.
I think we will see the opposite. If we made no progress with LLMs we'd still have huge advancements and growth opportunities enhancing the workflows and tuning them to domain specific tasks.
I agree with this, but how will these companies make money? Short of a breakthrough, the consumer isn't ready to pay for it, and even if they were, open source models just catch up.
My feelings are that most of the "huge advancements" are not going to benefit the people selling AI.
I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.
Yeah, I've always found it a bit puzzling how companies like OpenAI/Anthropic have such high valuations. Like what is the actual business model? You can sell inference-as-a-service of course but given that there are a half-dozen SOTA frontier models and the compute cost of inference is still very high it just seems like there is no margin in it. Nvidia captures so much value on the compute infrastructure and competition pushes prices down for inference and what is left?
The people who make money serving in users will be the one with the best integrations. Those are harder to do, require business relationships, and are massively differentiating.
You'll probably have a player that sells privacy as well.
I think you could both be right at the same time. We will see a large number of VC funded AI startup companies and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.
Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other along with all other non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. Every single software vendor can't innovate and integrate AI features faster than AI companies themselves can build better tooling to automate that company's tools for them. It reeks of the 90's when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.
I don't see how this works, as the costs of running inference is so much higher than the revenues earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where GPT-5 and Claude 4.1 cost-quality models are SOTA.
With gpt5 I’m not sure this is true. Certainly openAI is still losing money but if they stopped research and just focused on productionizing inference use cases I think they’d be profitable.
But would they be profitable enough? They've taken on more than $50 billion of investment.
I think it's relatively easy for Meta to plow billions into AI. Last quarter their revenue was something like $15 billion. Open AI will be lucky to generate that over the next year.
Or, this knowingly could not be sustained. So they scooped up all the talent they wanted before anybody could react, all at once, with big carrots. And then hit pause button to let all that new talent figure out the next step.
As someone using LLMs daily, it's always interesting to read something about AI being a bubble or just hype. I think you're going to miss the train, I am personally convinced this is the technology of our lifetime.
You are welcome to share how AI has transformed a revenue generating role. Personally, I have never seen a durable example of it, despite my excitement with the tech.
In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.
What you've described is reasonable and a clear takeaway is that AI is a timesaving tool you should learn.
Where i share concern with the parent is the claims that AI is useless which isn't coming from your post at all but i have definitely seen instances of it in the programmer community still to this day. As in the parents concern that some programmers are missing the train is unfortunately completely warranted.
I went through the parents, looking for a claim somewhere that AI was "useless." I couldn't find it.
Yes there are lots of skeptics amongst programmers when it comes to AI. I was one myself (and still am depending on what we're talking about). My skepticism was rooted in the fact that AI is trained on human-generated output. Most human written code is not very good, and so AI is going to produce not very good code by design because that's what it was trained on.
Then you add to that the context problem. AI is not very good at understanding your business goals, or the nuanced intricacies of your problem domain.
All of this pointed to the fact, very early on, that AI would not be a good tool to replace programmers. And THAT'S the crux of why so many programmers pushed back. Because the hype was claiming that automation was coming for engineering jobs.
I have started to use LLMs regularly for a variety of tasks. Including some with engineering. But I always end up spending a lot of time refactoring what LLMs produce for me, code-wise. And much of the time I find that I"m still learning what the LLMs can do for me that truly saves me time, vs what would have been faster to just write myself in the first place.
LLMs are not useless. But if only 20% of a programmer's time is actually spent writing code on average then even if you can net a 50% increase in coding productivity... you're only netting a 10% overall productivity optimization for an engineer BEST CASE SCENARIO.
And that's not "useless" but compared to the hype and bullshit coming out of the mouths of CEOs, it's as good as useless. It's as good as the MIT study finding that only 5% of generative AI projects have netted ANY measurable returns for the business.
I know a company that replaced their sales call center with an AI calling bot instead. The bot got better sales and higher feedback scores from customers.
I'll say it again since I've said it a million times, it can be useful and a bubble. The logic of investors before the last market crash was something like "houses are useful, so no amount of hype around the housing market could be a bubble"
Or, quite similarly, the internet bubble of the large ‘90s
Very obviously the internet is useful, and has radically changed our lives. Also obviously, most of the high stock valuations of the ‘90s didn’t pan out.
How are you using it? The execs and investors believe the road to profit is by getting rid of your role in the process. Do you think that’d be possible?
If you really think this, `baby` is an apt name! Internet, Smartphones, and social media will all be more impactful than LLMs could possibly be... but hey, if you're like 18 y/o then sure, maybe LLMs is the biggest.
Also disagree with missing the train, these tools are so easy to use a monkey (not even a smart one like an ape, more like a Howler) can effectively use them. Add in that the tooling landscape is changing rapidly; ex: everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in??? It's bothersome)
The hard parts are running LLMs locally (what quant do I use? K/V quant? Tradeoffs? Llama.cpp or ollama or vllm? What model? How much context can I cram in my vram? What if I do CPU inference? Fine tuning? etc..) and creating/training them.
> It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.
If AI is going to be integral to society going forward, how is it shortsighted?
> She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure).
So you prefer a 2x gain rather than 10X gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT the past few years. Also a "financial investment person"? The anecdote feels made up.
> She skillfully navigated the question in a way that won my respect.
She won your respect by giving you advice that led to far less returns than you could have gotten otherwise?
> I personally believe that a lot of investment money is going to evaporate before the market resets.
But you believe investing in MSFT was a better AI play than going with the "hype" even when objective facts show otherwise. Why should any care what you think about AI, investments and the market when you clearly know nothing about it?
The financials from the link to not specifically call out Depreciation Expense. But Operating Income should take into account Depreciation Expense.
The financials have a line below Net Income Line called "Reconciled depreciation" with about $16.7 billion. I do not know what that means (maybe this is how they get to the EBITDA metric) but maybe this is the metric you are looking for.
These changes in direction (spending billions, freezing hiring) over just a few months show that these people are as clueless as to what's going to happen with AI, as everyone else. They just have the billions and therefore dictate where money goes, but that's it.
I really do wonder if any of those rock star $100m++ hires managed to get a 9-figure sign-on bonus, or if the majority have year(s) long performance clauses.
Imagine being paid generational wealth, and then the house of cards comes crashing down a couple of months later.
I'm sure everyone is doing just fine financially, but I think it's common knowledge that these kind of comp packages are usually a mix of equity and cash earned out over multiple years with bonuses contingent on milestones, etc. The eye-popping top-line number is insane but it's also unlikely to be fully realized.
The comment I was responding to was implying that it would be better for the collective if Meta was not paying these exorbitant salaries. You said “it [paying high salaries] is a great way to kneecap collective growth and development.”
In other words, you’re suggesting that _not_ paying high salaries would be good for collective growth and development.
And if Meta is currently willing to pay these salaries, but didn’t for some reason, that would be the definition of wage suppression.
Oh ya? If I am willing to pay my cleaner $350, but she only charges and accepted an offer of $200, I am engaging in the definition of wage suppression?
Supposedly, all people that join meta are on the same contract. They also supposedly all have the same RSU vesting schedules as well.
That means that these "rockstars" will get a big sign on bonus (but its payable back inside 12 months if they leave) then ~$2m every 3 months in shares
I have never heard of anyone getting a sign on bonus that was unconditional. When I have had signing bonuses they were owed back prorated if my employment ended for any reason in the first year.
I was a startup where someone got an unconditional signing bonus. It wasn't deliberate, they just kept it simple because it was a startup and they thought they trusted the guy because he was an old friend of the CEO.
The guy immediately took leave to get some medical procedure done with a recovery time, then when he returned he quit for another job. He barely worked, collected a big signing bonus, used the company's insurance plan for a very expensive procedure, and then disappeared.
From that point forward, signing bonuses had the standard conditions attached.
Are most people that money hungry? I wouldn't expect someone like Zuckerberg to understand, but if I ever got to more than a couple million dollars, I'm never doing anything else for the sake of making more money again.
This is a very weird take. Lots of people want to actively work on things that are interesting to them or impactful to the world. Places like Meta potentially give the opportunity to work on the most impactful and interesting things, potentially in human history.
Setting that aside, even if the work was boring, I would jump at the chance to earn $100M for several years of white collar, cushy work, purely for the impact I could have on the world with that money.
If we're actually headed for a "house of cards" AI crash in a couple months, that actually makes their arrangement with Meta likely more valuable, not less. Meta is a much more diversified company than the AI companies that these folks were poached from. Meta stock will likely be more resilient than AI-company stock in the event of an AI bubble bursting. Moreover, they were offered so much of it that even if it were to crash 50%, they'd still be sitting on $50M-$100M+ of stock.
Is it imminent? Reading the article, the only thing that's actually changed is that the CEO has stopped hand-picking AI hires and has placed that responsibility on Alexandr Wang instead. The rest is just fluff to turn it into an article. The tech sector being down is happening in concert with the non-tech sector sliding too.
Mission accomplished: who'd tell disrupting your competition poaching their talent and erasing value (giving it away for free) would make people realize there is no long term value in the core technology itself.
Don't get me wrong, we are moving to commoditization, as any new tech it'd be transparent to our lifestyle and a lot of money will be done as an industry, but it'd be hard to compete as a core business competence w/o cheating (and by cheating I mean your FANG company already has a competitive advantage)
Whoa that's actually a brilliant strategy: accelerate the hype first by offering 100M comp packages, then stop hiring and strategically drop a few "yeah bubble's gonna pop soon" rumours. Great way to fuck with your competition, especially if you're meta and you're not in the lead yourself
But if Meta believe it's a bubble then why not let the competition continue to waste their money pumping it up? How does popping it early benefit Meta?
A trillion dollars of value disappearing in 2 days. We've still got our NFT metaverse shipping waybill project going on somewhere in the org chart, right? Phew!
Since "AI bubble" has become part of the discourse, people are watching for any signs of trouble. Up to this point, we have seen lots of AI hype. Now, more than in the past, we are going to see extra attention paid to "man bites dog" stories about how AI investment is going to collapse.
So it's not clickbait, even though the headline does not reflect the contents of the article, because you believe the headline is plausible?
I think AI is a bubble, but there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble. Sounds like they are spending even more money per researcher.
Make a mistake once, it’s misjudgment. Repeat it, it’s incompetence?
Meta nearly doubled its headcount in 2020 and 2021, assuming the pandemic growth would continue. However, Zuckerberg later admitted this was a mistake.
Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips and Moore’s law is already dead.
So newer chips will not be exponentially better but will be more of incremental improvements, so unless the price of electricity comes down exponentially we might never see AGI at a price point that’s cheaper than hiring a human.
Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.
The reason why internet, smartphones and computers have seen exponential growth from the 90s is due to underlying increase in computing power. I personally used a 50Mhz 486 in the 90s and now use a 8c/16t 5Ghz CPU. I highly doubt if we will see the same form of increase in the next 40 years
We are either limited by compute, available training data, or algorithms. You seem to believe we are limited by compute. I've seen other people argue that we are limited by training data. It is my totally inexpert belief that we are substantially limited by algorithms at this point.
I think algorithms is a unique limit because it changes how much data or compute you need. For instance, we probably have the algorithms we need to brute force solving more problems today, but they require infeasible compute or data. We can almost
certainly train a new 10T parameter mixture of experts that continues to make progress in benchmarks, but it will cost so much to train and be completely undeployable with today’s chips, data, and algorithms.
So I think the truth is likely we are both compute limited and we need better algorithms.
There are a few "hints" that suggest to me algorithms will bear a lot more fruit than compute (in terms of flops):
1) there already exist very efficient algorithms for rigorous problems that LLMs perform terribly at!
2) learning is too slow and is largely offline
3) "llms aren't world models"
We are a few months into our $bigco AI push and we are already getting token constrained. I believe we really will need massive datacenter rollouts in order to get to the ubiquity everyone says will happen.
cool fun concepts/technology fucked by the worlds most boring people who only have desire to dominate markets and attention.. god forbid anything happen slowly/gradually without it being about them
Fucked? Have you tried the latest Quest3 experience? It would be nowhere near this if it was not for Meta and other big corps.
Second, did you see the amount of fun content on the store? It's insane. People who are commenting on the Quest have obviously never even opened the app store there.
> 1000 people can't get a woman to have a child faster than 1 person.
I always get slightly miffed about business comparisons to gestation: getting 9 women pregnant won't get you a child in 1 month.
Sure, if you want one child. But that's not what business is often doing, now is it?
The target is never "one child". The target is "10 children", or "100 children" or "1000 children".
You are definitely going to overrun your ETA if your target is 100 children in 9 months using only 100 women.
IOW, this is a facile comparison not worthy of consideration.[1]
> So it depends on the type of problem you're trying to solve.
This[1] is not the type of problem where the analogy applies.
=====================================
[1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
>> Sure, if you want one child. But that's not what business is often doing, now is it?
Your designing one thing. You're building one plant. Yes, you'll make and sell millions of widgets in the end but the system that produces them? Just one.
Engineering teams do become less efficient above some size.
You might well be making 100 AI babies, and seeing which one turns out to be the genius.
We shouldn’t assume that the best way to do research is just through careful, linear planning and design. Sometimes you need to run a hundred experiments before figuring out which one will work. Smart and well-designed experiments, yes, but brute force + decent theory can often solve problems faster than just good theory alone.
The analogy is a good analogy. It is used to demonstrate that a larger workforce doesn’t always automatically give you better results, and that there is a set of problems that are clear to identify a priori where that applies. For some problems, quality is more important than quantity, and you structure your org respectively. See sports teams, for example.
In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.
> In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.
I am going to repeat the footnote in my comment:
>> [1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
IOW, if you're looking for specifically for quality, you can't bet everything on one horse.
In re Wendy’s, it depends on whether you have a standard plan for building the Wendy’s and know what skills you need to hire for. If you just hire 10,000 random construction workers and send them out with instructions to “build 100 Wendy’s”, you are not going to succeed.
What I don't get is that they are gunning for the people that brought us the innovations we are working with right now. How often does it happen that someone really strikes gold a second time in research at such a high level? It's not a sport.
Even if they do not strike gold the second time, there can still be a multitude of reasons:
1. The innovators will know a lot about the details, limitations and potential improvements concerning the thing they invented.
2. Having a big name in your research team will attract other people to work with you.
3. I assume the people who discovered something still have a higher chance to discover something big compared to "average" researchers.
4. That person will not be hired by your competition.
5. Having a lot of very publicly extremely highly paid people will make people assume anyone working on AI there is highly paid, if not quite as extreme. What most people who make a lot of money spend it on is wealth signalling, and now they can get a form of that without the company having to pay them as much.
You're falling victim to the Gambler's Fallacy - it's like saying "the coin just flipped heads, so I choose tails, it's unlikely this coin flips heads twice in a row".
Realistically they have to draw from a small pool of people with expertise in the field. It is unlikely _anyone_ they hire will "strike gold", but past success doesn't make future success _less_ likely. At a minimum I would assume past success is uncorrelated with future success, and at best there's a weak positive correlation because of reputation, social factors, etc.
Who else would you hire? With a topic as complex as this, it seems most likely that the people who have been working at the bleeding edge for years will be able to continue to innovate. At the very least, they are a much safer bet than some unproven randos.
Exactly this - people that understood the field well enough to add new knowledge to it has to be a pretty decent signal for a research-level engineer.
At the research level it’s not just about being smart enough, or being a good programmer, or even completely understanding the field - it’s also about having an intuitive understanding of the field where you can self pursue research directions that are novel enough and yield results. Hard to prove that without having done it before.
Reworded from [1]: Earlier this year Meta tried to acquire Safe Superintelligence. Sutskever rebuffed Meta’s efforts, as well as the company’s attempt to hire him
> Didn't he say their goal is AGI and they will not produce any products until then.
Did he specify what AGI is? xD
> I admire that, in this era where CEOs tend to HYPE!! To increase funding (looking at a particular AI company...)
I think he was probably hyping too, it's just that he appealed to a different audience. IIRC they had a really plain website, which, I think, they thought "hackers" would like.
They didn't just invest they made it core to their identity with the name change and it just fell so so flat because the claims were nonsense hype for crypto pumps. We already had stuff like VR Chat (still going pretty strong) it just wasn't corporate and sanitized for sale and mass monetization.
They're still on it though. The new headset prototypes with high FOV sound amazing, and they are iterating on many designs.
They're already doing something like ~$500M/year in Meta Quest app sales. Granted not huge yet after their 30% cut, but sales should keep increasing as the headsets get better.
Metaverse (especially) or AI might make more sense if you could actually see your friend's posts (and vice versa), if the feed made sense (which it hasn't for years now) and if you could message people who you aren't friends with yet without it getting lost in some 'other' folder you won't discover until 3 years from now (Gmail has a Spam folder problem... but the difference is you can see you have messages there and you can at least check it out for yourself)
What I'm trying to say is make your product the barest minimum usable first maybe? (Also, don't act like, as Jason Calacanis has called it, a marauder, like copying everything from everyone all the time. What he's done with Snapchat is absolutely tasteless and in the case of spying on them - which he's done - very likely criminal)
Its almost like nobody asked for the dramatic push of ai, and it was all created by billionaires trying to become even richer at the cost of people's health and the environment.
How did he run out of money so fast? Think Zuck is one of those guys who get sucked into hype cycles and no one around him will tell him so. Even investors.
I don't want to come across as a shill, but I think superintelligence is being used here because the end result is murky and ill-defined at this point.
I think the concept is like: "a tool that has the utility of a 'personal assistant' so much so that you wouldn't have to hire one of those." (Not so much that the "superintelligence" will mimicry a human personal assistant).
“ In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?”
People probably said the same thing about “what if someone doesn’t want to carry a phone with them everywhere”. If it’s useful enough the culture will change (which, I unequivocally think they won’t be, but I digress)
Last night I had a technical conversation with ChatGPT that was so full of wild hallucinations at every step, it left me wondering if the main draw of "AI" is better thought of as entertainment. And whether using it for even just rough discovery actually serves as a black hole for the motivation to get things done.
I'm actually a little shocked that AI hasn't been integrated into games more deeply at this point.
Between whisper and lightweight tuned models, it wouldn't be super hard to have onboard AI models that you can interact with in much more meaningful ways that we have traditionally interacted with NPCs.
When I meet an NPC castle guard, it would be awesome if they had an LLM behind it that was instructed to not allow me to pass unless I mention my Norse heritage or whatever.
To me AI is a like phone business. A few companies (Apple,Samsung) will manage to score a homerun and the rest will be destined to offer commoditized products.
I somewhat disagree here. Meta is a huge company with multiple products. Experimenting with AI and trying to capitalize on what's bound to be a larger user market, is a valid company angle to take.
It might not pan out, but it's worth trying from a pure business point of view.
Meta's business model is to capture attention - largely with "content" - so they can charge lots of money to sprinkle ads amongst that content.
I can see a lot of utility for Meta to get deeply involved in the unlimited custom content generating machine. They have a lot of data about what sort of content gets people to spend more time with them. They now have the ability to endlessly create exactly what it is that keeps you most engaged and looking at ads.
Frankly, content businesses that get their revenue from ads are one of the most easily monetizable ways to use the outputs of AI.
Yes, it will pollute the internet to the point of making almost all information untrustable, but think of how much money can be extracted along the way!
The whole point is novelty/authenticity/scarcity though, if you just have a machine that generates infinite infinitely cute cat videos then people will cease to be interested in cat videos. And its not like they pay content creators anyway.
It's Spain sinking their own economy by importing tons of silver.
Yeah, it truly IS transformative for industries, no denying anymore at this point. What we have will remain even after a pop. But I think AI was special in how there were massive improvements the more compute you threw at it for years. But then we ran out of training material and suddenly things got much harder. It’s this ramping up of investments to spearhead transformative tech and suddenly someone turns off the tap that makes this so conflicted. I think.
Personally, I think it's both! It's a bubble, but it's also going to be something that slowly but steadily transforms the world in the next 10-20 years.
We might see another AI winter first, is my assumption. I believe that LLMs are fundamentally the wrong approach to AGI, and that bubble is going to burst until we have a better methodology for AGI.
Unfortunately, the major players seem focused on 'getting to AGI pretention through LLM'.
Dot-com was the same way... the Internet did end up having the potential everyone thought it would, businesses just didn't handle the influx of investment well.
... people said the same thing about the "metaverse" just a few years ago. "You know people are gonna live their entire lives in there! It's gonna change everything!" And 99% of people who heard that laughed and said "what are you smoking?" And I say the same thing when I hear people talk about "the AI machine god!"
The bubble narrative is coming from the outside. More likely is that the /acquisition/ of Scale has led to an abundance of talent that is being underutilised. If you give managers the option to hire, they will. Freezing hiring while reorganising is a sane strategy regardless of how well you are or are not doing.
Perhaps. But more like, there's a new boss who wants to understand the biz before doing any action. I've done this personally at a much smaller scale of course.
You could just keep having interviews yet never actually hire anyone based on the talent pool is wide but shallow. It results in the same as a freeze, but without the negative connotation to the company while shifting it to the workforce
wow there's really _zero_ sense of mutual respect in this industry isn't there. it's all just "let's make a buck by being total assholes to everyone around us".
Maybe they are poisoning the well to slow their competitors? Get the funding you need secured for the data centers and the hiring, hire everyone you need and then put out signals that there is another AI winter.
Zuckerberg's leadership style feels very reactionary and arrogant, defined by flailing around for the new fad and new hyped thing, scrapping everything that when the current obsession doesn't work out and then sticking head in the sand about abandoned projects and ignoring subsequent whiplash.
Remember when he pivoted the entire company to the meta-verse and it was all about avatars with no legs? And how proud they trumpeted when the avatars were "now with legs!!" but still looked so pathetic to everyone not in his bubble. Then for a while it was all about Meta glasses and he was spamming those goofy cringe glasses no one wants in all his instagram posts- seriously if you check out his insta he wears them constantly.
Then this spring/summer it was all about AI and stealing rockstar ai coders from competitors and pouring endless money into flirty chatbots for lonely seniors. Now we have some bad press from that and realizing that isn't the panacea we thought it was so we're in the the phase where this is languishing so in about 6 months we'll abandon this and roll out a new obsession that will be endlessly hyped.
Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta. Wish they would just focus on increasing user functionality and enjoyment and trying to resolve the privacy issues, disinformation, ethical failures, social harm and political polarization caused by his continued poor management.
> Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta.
> Zuckerberg's leadership style feels very reactionary and arrogant, defined by flailing around for the new fad and new hyped thing, scrapping everything that when the current obsession doesn't work out and then sticking head in the sand about abandoned projects and ignoring subsequent whiplash.
Maybe he's like this because the first few times he tried it, it worked.
Insta threatening the empire? Buy Insta, no one really complains.
Snapchat threatening Insta? Knock off their feature and put it in Insta. Snap almost died.
The first couple times Zuckerberg threw elbows he got what he wanted and no one stopped him. That probably influenced his current mindset, maybe he thinks he's God and all tech industry trends revolve around his company.
By Amara's Law and Gartner Hype cycle every technological breakthrough looks like a bubble. Investors and technologist should already know that. I don't know why they're acting like altcoins in 2021.
1 breakthrough per 99 bubbles would make anyone cautious. The rule should be to assume a bubble is happening by default until proven otherwise by time.
That's actually how you create a death spiral for your company. You have to assume 'growth' and not 'death'. 'life' over 'lost'. 'flourishing' over 'withering'. That you're strong enough to survive.
That's not playing into a bubble, that's creating a product for a market. You could also argue the Apple Vision is a misplay, or at least premature.
They've also arrogantly gone against consumer direction time and time again (PowerPC, Lightning Ports, no headphone jack, no replaceable battery, etc.)
And finally, sometimes their vision simply doesn't shake out (AirPower)
Oh, yeah — Apple Vision is a complete joke. I'm an Apple apologist to a degree though so I can rationalize all their missteps. I won't deny though they have had many though.
IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill. There’s probably a proper term for this.
I think that meta is bad for the world and that zuck has made a lot of huge mistakes but calling him a one hit wonder doesn't sit right with me.
Facebook made the transition to mobile faster than other competitors and successfully kept G+ from becoming competition.
The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Zuck hired Sheryl Sandburg and successfully turned a website with a ton of users into an ad-revenue machine. Plenty of other companies struggled to convert large user bases into dollars.
This obviously wasn't all based on him. He had other people around him working on this stuff and it isn't right to attribute all company success to the CEO. The metaverse play was obviously a legendary bust. But "he just got lucky" feels more like Myspace Tom than Zuckerberg in my mind.
No one else is adding the context of where things were at the time in tech...
> The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Facebook's API was incredibly open and accessible at the time and Instagram was overtaking users' news feeds. Zuckerberg wasn't happy that an external entity was growing so fast and onboarding users so easily that it was driving more content to news feeds than built-in tools. Buying Instagram was a defensive move, especially since the API became quite closed-off since then.
Your other points are largely valid, though. Another comment called the WhatsApp purchase "inspired", but I feel that also lacks context. Facebook bought a mobile VPN service used predominantly by younger smartphone users, Onavo(?), and realized the amount of traffic WhatsApp generated by analyzing the logs. Given the insight and growth they were monitoring, they likely anticipated that WhatsApp could usurp them if it added social features. Once again, a defensive purchase.
I don't think we can really call the instagram purchase purely defense. They didn't buy it and then slowly kill it. They bought it and turned it into a product of comparable size to their flagship with sustained large investment.
I hate pretty much everything about Facebook but Zuckerberg has been wildly successful as CEO of a publicly traded company. The market clearly has confidence in his leadership ability, he effectively has had sole executive control of Facebook since it started and it's done very well for like 20 years now.
>has been wildly successful as CEO of a publicly traded company.
That has a lot to do with the fact that it's a business centric company. His acumen has been in user growth, monetization of ads, acquisitions and so on. He's very similar to Altman.
The problems start when you try to venture into hard technological topics, like the Metaverse fiasco, where you have to have a sober and engineering oriented understanding of the practical limits of technology, like Carmack who left Meta pretty frustrated. You can't just bullshit infinitely when the tech and not the sales matter.
Contrast it with Gates who had a serious programming background, he never promised even a fraction of the cringe worthy stuff you hear from some CEOs nowadays because he would have known it's nonsense. Or take Apple, infinitely more sane on the AI topic because it isn't just a "more users, more growth, stonks go up" company.
Buying competitors is not insane or a weird business practice. He was probably advised to do so by the competent people under him
And what did he do to keep G+ from becoming a valid competitor? It killed itself. I signed up but there was no network effect and it kind of sucked. Google had a way of shutting down all their product attempts too
If you read Internal Tech Emails (on X), you’ll see that he was the driving force behind the key acquisitions (successes as well as failures such as Snap).
I am also not saying that zuck is a prescient genius who is more capable than other CEOs. I am just saying that it doesn't seem correct to me to say that he is "a textbook case of somebody who got lucky once."
He's really not. Facebook is an extremely well run organization. There's a lot to dislike about working there, and there's a lot to dislike about what they do, but you cannot deny they have been unbelievably successful at it. He really is good at his job, and part of that has been making bold bets and aggressively cutting unsuccessful bets.
Facebook can be well run without that being due to Zuck.
There are literally books that make this argument from insider perspectives (which doesn't mean it's true, but it is possible, and does happen regularly).
A basketball team can be great even if their coach sucks.
You can't attribute everything to the person at the top.
That is true but in Meta’s case, it is tightly managed by him. I remember a decade ago a friend was a mid-level manager and would have exec reviews to Zuck, who could absorb information very quickly and redirect feedback to align with his product strategy.
He is a very hands CEO, not one who is relying on experts to run things for him.
In contrast, I’ve heard that Elon has a very good senior management team and they sort of know how to show him shiny things that he can say he’s very hands on about while they focus on what they need to do.
He created the company, if it is well run it was thanks to him hiring the right people. Regardless how you slice it he is a big reason it didn't fail, most companies like that fails when they scale up and hire a lot of people but facebook didn't, hiring the right people is not luck.
I can’t tell if you’re being tongue in cheek or not, so I’ll respond as if you mean this.
It’s easy to cherry pick a few bets that flopped for every mega tech company: Amazon has them, Google has them, remember Windows Phone? etc.
I see the failures as a feature, not a bug - the guy is one of the only founder CEOs to have ever built a $2T company (trillion with a T). I imagine part of that is being willing to make big bets.
And it also seems like no individual product failure has endangered their company’s footing at all.
While I’m not a Meta or Zuck fan myself, using a relatively small product flop as an indication a $2T tech mega corp isn’t well run seems… either myopic or disingenuous.
Parent comment says "aggressively cutting unsuccessful bets" and Oculus is nothing like that.
Oculus Quest are decent products, but a complete flop compared to their investment and Zuck's vision of the metaverse. Remember they even renamed the company? You could say they're on betting on the long run, but I just don't see that happening in 5 or even 10 years.
As an owner of Quest 2 and 3, I'd love to be proven wrong though. I just don't see any evidence of this would change any time soon.
The VR venture can also be seen as a huge investment in hard tech and competency around issues such as location tracking and display tech for creating AI-integrated smartglasses, which many believe is the next gen AI interface. Even if the current headsets or form factor do not pay off, I think having this knowledge coud be very valuable soon.
I don’t think their “flops” of Oculus or Metaverse have endangered their company in any material way, judging by their stock’s performance and the absurd cash generating machine they have.
Even if they aren’t great products or just wither into nothing, I don’t think we will be see a HBS case study in 20 years saying, “Meta could have been a really successful company, but were it for their failure in these two product lines”
Absolutely, not everything they do will succeed but that's okay too, right? At this point their core products are used by 1 in 2 humans on earth. They need to get people to have more kids to expand their user base. They're gonna throw shit at the wall and not everything will stick, and they'll ship stuff that's not quite done, but they do have to keep trying; I can't bring myself to call that "failure."
I agree, but that does not make Oculus a commercially successful and viable product. They are still bleeding cash on it, and VR is not going mainstream any time soon.
But they were less “skill” and more “surveillance”. He had very good usage statistics (which he shouldn’t have had) of these apps through Onavo - a popular VPN app Facebook bought for the purpose of spying on what users are doing outside Facebook.
WhatsApp is certainly worth less today than what they paid for it plus the extra funding it has required over time. Let alone producing anything close to ROI. Has lost them more money than the metaverse stuff.
Insta was a huge hit for sure but since then Meta Capital allocation has been a disaster including a lot of badly timed buybacks
> IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill.
It is no secret that the person who turned Facebook into a money-printing machine is/was Sheryl Sandberg.
Thus, the evidence is clear that Mark Zuckerberg had the right idea at the right time (the question is whether this was because of his skills or because he got lucky), but turning his good idea(s) into a successful business was done by other people (lead by Sheryl Sandberg).
And isn’t the job of a good CEO to put the right people in the right seats? So if he found a superstar COO that took the company into the stratosphere and made them all gazillionaires…
Wouldn’t that indicate, at least a little bit, a great management move by Zuck?
You're probably going to get comments like "Social networking existed before. You can't steal it". Well, on top of derailing someone else's execution of said non-stole idea (or something) which makes you a jerk, in the case of those he 'stole'/stole from, for starters maybe it was existing code (I don't know if that was ever proven), but maybe it was also the Winklevosses idea of using .edu email addresses, and possibly other concepts
Do I think he stole it? Dunno. (Though Aaron Greenspan did log his houseSYSTEM server requests, which seems pretty damning) But given what he's done since (Whatsapp, copying every Snapchat feature)? I'd say the likelihood is non-zero
It is at least a little suspicious that one week he's hiring like crazy, then next week, right after Sam Altman states that we are in an AI bubble, Zuckerberg turns around and now fears the bubble.
Maybe he's just gambling that Altman is right, saving his money for now and will be able to pick up AI researcher and developers at a massive discount next year. Meta doesn't have much of a presence in the space market right now, and they have other businesses, so waiting a year or two might not matter.
Ehh. You don’t get FB to where it is by being incompetent. Maybe he is not the right leader for today. Maybe. But you have to be right way, way more often than not to create a FB and get it to where it is. To operate from where it started to where it is just isn’t an accident or Dunning-Kruger.
The term you're looking for is "billionaire". The amount of serendipity in these guys' lives is truly baffling, and only becomes more apparent the more you dig. It makes sense when you realize their fame is all survivorship bias. Afer all, there must be someone at the tail end of the bell curve.
Well that’s the incompetent piece. Setting out to write giant historical employment contracts without a plan is not something competent people do. And seemingly it’s not that they over extended a bit either since reports claimed the time availability of the contracts was extremely limited; under 30min in some cases.
Perhaps it was this: Lets hit the market fast, scoop up all the talent we can before anybody can react, then stop.
I don't think there is anybody that would expect they would 'continue' offering 250million packages. They would need to stop eventually. They just did it fast, all at once, and now stopped.
That's not really how that works in the corporate/big tech world. It's not as though Meta set out and said "Ok we're going to hire exactly 150 AI engineers and that will be our team and then we'll immediately freeze our recruiting efforts".
Maybe this time the top posters on HN should stop criticizing one of the top performing founder CEOs of the last 20 years who built an insane business, made many calls that were called stupid at the time (WhatsApp), and many that were actually stupid decisions.
Like do people here really think making some bad decisions is incompetence?
If you do, your perfectionism is probably something you need to think about.
Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company. Oh and please revisit your comment in these timeframes
Sure, but society is full of fools. Plenty of people say social media is the primary way they get news. Social media platforms are super spreaders of lies and propaganda.
I don't think it's about perfect predictions. It's more about going all in on Metaverse and then on AI and backtracking on both. As a manager you need to use your resources wisely, even if they're as big as what Meta has at its disposal.
The other thing - Peter's principle is that people rise until they hit a level where they can't perform anymore. Zuck is up there as high as you can go, maybe no one is really ready to operate at that level? It seems both him and Elon made a lot of bad decisions lately. It doesn't erase their previous good decisions, but possibly some self-reflection is warranted?
> Like do people here really think making some bad decisions is incompetence?
> If you do, your perfectionism is probably something you need to think about.
> Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company.
It's the effect of believing (and being sold) meritocracy, if you are making literal billions of dollars for your work then some will think it should be spotless.
Not saying I think that way but it's probably what a lot of people consider, being paid that much signals that your work should be absolutely exceptional, big failures just show they are also normal flawed people so perhaps they shouldn't be worth million times more than other normal flawed people.
He’s earned almost all his money through owning part of a company that millions of shareholders think is worth trillions, and does in fact generate a lot of profits.
A committee didn’t decide Zuckerberg is paid $30bn.
And id say his work is pretty exceptional. If it wasn’t then his company wouldn’t be growing. And he’d probably be pressured into resigning as CEO
I just did a phone screen with Meta, and the interviewer asked for Euclidean distance between two points; they definitely have some nerds in the building.
K closest points using Euclidean distance and a heap, is not 8th grade math, although any 8th grade math problem can be transformed into a difficult "adult" question. Sums are elementary, asking to find a window of prefix sums that add up to something is still addition, but a little more tricky
People saying it is a high school maths problem! I'd like to see you provide a general method for accurately measuring the distance between two arbitrary points in space...
I suppose the trick is to have an ipad running GPT-voice-mode off to the side, next to your monitor. Instruct it to answer every question it overhears. This way you'll ace all of the "humiliation ritual" questions.
there's a youtube channel made by a meta engineer, he said to memorize the top 75 LeetCode Meta questions and approaches. He doesn't say fluff like "recognize patterns. My interviewer was 3.88/4 GPA masters Comp Sci guy from Penn, I asked for feedback he said always be studying its useful if you want a career...
it wasn't just euclidean distance of course, it was this leetcode problem k closest points to origin https://leetcode.com/problems/k-closest-points-to-origin/des..., I thought if I needed a heap I would have to implement it myself didn't know I can use a library
its not a nearest neighbor problem that is incorrect, they expect candidates to have the heap solution on the first go, you have 10-15 minutes to answer, no time to optimize, cheaters get blacklisted, welcome to the new reality
Finding the k points closest to the origin (or any other point) is obviously the k-nearest neighbors problem. What algorithm and data structure you use does not change that.
edit: If you want to use a heap, the general solution is to define an appropriate cost function; e.g., the p-norm distance to a reference point. Use a union type with the distance (for the heap's comparisons) and the point itself.
true, I am thinking, Node and neighbors, this is a heap problem, it actually does matter what algorithm you use, I learn that the hard way today, trying to implement quickselect vs using a heap library (I didn't know you could do that) is much easier, don't make the same mistake!
The foundation, like every LeetCode problem, is a basic high school math problem, when the foundation of the problem is trigonometry, way harder than stack, arrays, linked list, bfs, dfs...
They're all bleeding money so yes it's inevitable.
It's always the same thing, uber, food delivery, escooter, &c. they bait you with cheap trials and stay cheap until the investors money run out, and once you're reliant on them they jack up the prices as high as they can.
Clickbait title and article. There was a large reorg of genai/msl and several other teams, so things have been shuffled around and they likely don't want to hire into the org while this is finalizing.
A freeze like this is common and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused, and lean org, and they have been making several strategic hires for months at this point. All this says is that zuck thinks the org is in a good spot to start executing.
From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to some supposed realization that "its all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.
The article got me thinking that there's some sort of bottle neck that makes scaling astronomical or the value just not really there.
1. Buy up top talent from other's working in this space
2. See what they produce over say, 6mo. to a year
3. Hire a corpus of regular ICs to see what _they_ produce
4. Open source the model to see if any programmer at all can produce something novel with a pretty robust model.
Observe that nothing amazing has really come out (besides a pattern-recognizing machine that placates the user to coerce them into using more tokens for more prompts), and potentially call it on hiring for a bubble.
> Observe that nothing amazing has really come out
I wouldn't say so. The problem is rather that some actually successful applications of such AI models are not what companies like Meta want to be associated with. Think into directions like AI boyfriend/girlfriend (a very active scene, and common usage of locally hosted LLMs), or roleplaying (in a very broad sense). For such applications, it matters a lot less if in some boundary cases the LLM produces strange results.
If you want to get an impression of such scenes, google "character.ai" (roleplaying), or for AI boyfriend/girlfriend have a look at https://old.reddit.com/r/MyBoyfriendIsAI/
Similar to my view of AI, there is a huge bubble in current AI. Current AI is nothing more than a second-hand information processing model, with inherent cognitive biases, lagging behind environmental changes, and other limitations or shortcomings.
I feel like the giant 100 mil /1 billion salaries could have been better spent just hiring a ton of math, computer science, data science graduates and just forming an an AI skunkworks out of them.
Also throw in a ton of graduates from other fields/sciences, arts, psychology, biology, law , finance, or whatever else you can imagine to help create data and red team their fields.
Hiring people with creative writing and musical skills to give it more samples of creative writing and song writing, summarization etc
And people that are good at teaching and breaking complex problems into easier to understand chunks for different age brackets.
Their userbase is big but it's not the same as ChatGTP's, they won't get the same tasks to learn from users that chatgpt does.
LLMs are not the way to AGI and it's becoming clearer to even the most fanatic evangelists. It's not without reason GPT-5 was only a minor incremental update. I am convinced we have reached peak LLM.
There's no way a system of statistical predictions by itself can ever develop anything close to reasoning or intelligence. I think maybe there might be some potential there if we combined LLMs with formal reasoning systems - make the LLM nothing more than a fancy human language <-> formal logic translator, but even then, that translation layer will be inherently unreliable due to the nature of LLMs.
We're finally reaching the point where it's cost-prohibitive to sweep this fact under the rug with scaling out data centers and refreshing version numbers to clear contexts.
> Mark Zuckerberg has blocked recruitment of artificial intelligence staff at Meta, slamming the brakes on a multibillion-dollar hiring spree amid fears of an AI bubble.
> amid fears of an AI bubble
Who told the telegraph that these two things are related? Is it just another case of wishful thinking?
Nothing would give me a nicer feeling of schadenfreude than to see Meta, Google, and these other frothing-at-the-mouth AI hucksters take a bath on their bets.
Can we try to not turn HN into this? I come to this forum to find domain experts with interesting commentary, instead of emotionally charged low effort food fights.
Is it just me or does it feel like billionaires of that ilk can never go broke no matter how bad their decisions are?
The complete shift to the metaverse, the complete shift to LLMs and fat AI glasses, the bullheaded “let’s suck all talents out of the atmosphere” phase and now let’s freeze all hiring. In a handful of years.
And yet, billionaires will remain billionaires. As if there are no consequences for these guys.
Meanwhile I feel another bubble burst coming that will hang everyone else high and dry.
the top100 richest people on the globe can do a lot more stupid stuff and still walk away to a comfortable retirement, whereas the bottom 10-20-.. percent doesn't have this luxury.
not to mention that these rich guys are playing with the money of even richer companies with waaay too much "free cash flow"
It'll be somewhere in between. A lot of capital will be burned, quite a few marginal jobs will be replaced, and AI will run into the wall of not enough new/good training material because all the future creators will be spoiled by using AI.
Even that came after "AI is going to make itself smarter so fast that it's inevitably going to kill us all and must be regulated" talk ended. Remember when that was the big issue?
I've seen a few people convince themselves they were building AGI trying to do that, though it looked more like the psychotic ramblings of someone entering a manic episode committed to github. And so far none of their pet projects have taken over the world yet.
It's actually kind of reminds me of all those people who snap thinking they've solved P=NP and start spamming their "proofs" everywhere.
Makes sense. Previously the hype was so all-encompassing that CEOs could simply rely on an implicit public perception that it was coming for our jerbs. Once they have to start explicitly saying that line themselves, it's because that perception is fading.
> Sam is the main one driving the hype, that's rich...
It's also funny that he's been accusing those who accept better job offers as mercenaries. It does sound like the statements try to modulate competition both in the AI race and in acquiring the talent driving it.
GPT-5 was a massive disappointment to people expecting LLMs to accelerate to the singularity. Unless Google comes out with something amazing in the next Gemini, all the people betting on AI firms owning the singularity will be rethinking their bets.
But then, he's purposely comparing it to the .com bubble - that bubble had some underlying merit. He could compare it to NFTs, the metaverse, the South Sea Company. It wouldn't make sense for him to say it's not a bubble when it's patently clear, so he picks his bubble.
Facebook, Twitter, and some others made it out of the social media bubble. Some "gig" apps survived the gig bubble. Some crypto apps survived peak crypto hype
Not everyone has to lose which he's presumably banking on
It could be that, beyond the AI bubble, there may be a broader understanding of economic conditions that Meta likely has. Corporate spending cuts often follow such insights.
Note: I was too young to fully understand the dot com bubble, but I still remember a few things.
The difference I see is that, conversely to websites like pets.com, AI gave the masses something tangible and transformative with the promise it could get even better. Along with these promises, CEOs also hinted at a transformative impact "comparable to Electricity or the internet itself".
Given the pace of innovation in the last few years I guess a lot of people became firm believers and once you have zealots it takes time for them to change their mind. And these people surely influence the public into thinking that we are not, in fact, in a bubble.
Additionally, the companies that went bust in early 2000s never had such lofty goals/promises to match their lofty market valuations and in lieu of that current high market valuations/investments are somewhat flying under the radar.
> The difference I see is that, conversely to websites like pets.com, AI gave the masses something tangible and transformative with the promise it could get even better.
The promise is being offered, that's for sure. The product will never get there, LLMs by design will simply never be intelligent.
They seem to have been banking on the assumption that human intelligence truly is nothing more than predicting the next word based on what was just said/thought. That assumption sounds wrong on the face of it and they seem to be proving it wrong with LLMs.
However, even friends/colleagues that like me are in the AI field (I am more into the "ML" side of things) always mention that while it is true that predicting the next token is a poor approximation of intelligence, emergent behaviors can't be discounted. I don't know enough to have an opinion on that, but for sure it keeps people/companies buying GPUs.
> but for sure it keeps people/companies buying GPUs.
That's a tricky metric to use as an indicator though. Companies, and more importantly their investors, are pouring mountains of cash in the industry based on the hope of what AI may be in the future rather than what it is today. There are multiple incentives that could drive the market for GPUs, only a portion of those have to do with today's LLM outputs.
It was an example. Pets.com was just the flagship (at least in my mind), but during the dot com bubble there were many many more such sites that had an inflated market value. I mean, if it was just one site that crashed then it wouldn't be called a bubble.
From the Big Short:
Lawrence Fields: "Actually, no one can see a bubble. That's what makes it a bubble."
Michael Burry: "That's dumb, Lawrence. There are always markers."
Ah Michael Burry, the man who has predicted 18 of our last 2 bubbles. Classic broken clock being right, and in a way, perfectly validates the "no one can see a bubble" claim!
If Burry could actually see a bubble/crash, he wouldn't be wrong about them 95%+ of the time... (He actually missed the covid crash as well, which is pretty shocking considering his reputation and claims!)
Ultimately, hindsight is 20/20 and understanding whether or not "the markers" will lead to a major economic event or not is impossible, just like timing the market and picking stocks. At scale, it's impossible.
I feel 18 out of 2 isn't a good enough statistic to say he is "just right twice a day".
What was the cost of the 16 missed predictions? Presumably he is up over all!
Also doesn't even tell us his false positive rate. If, just for example, there were 1 million opportunities for him to call a bubble, and he called 18 and then there were only 2, this makes him look much better at predicting bubbles.
If you think that predicting economic crash every single year since 2012 and being wrong (Except for 2020, when he did not predict crash and there was one), is good data, by all means, continue to trust the Boy Who Cried Crash.
This sets up the other quote from the movie:
Michael Burry: “I may be early but I’m not wrong”. Investor guy: “It’s the same thing! It's the same thing, Mike!”
Smart contracts sometimes fail because they are executed too literally. Fixing that needs something like judges, but automated - so AI! It will be perfect. /s
I don’t think it’s entirely a bubble. Definitely this is revolutionary technology on the scale of going to the moon. It will fundamentally change humanity.
But while the technology is revolutionary the ideas and capability behind building these things aren’t that complicated.
Paying a guy millions doesn’t mean shit. So what mark zuckerberg was doing was dumb.
Of all the examples of things that actually had an impact I would pick this one last... Steam engine, internet, personal computers, radios, GPS, &c. but going to the moon ? The thing we did a few times and stopped doing once we won the ussr vs usa dick contest ?
Impact is irrelevant. We aren’t sure about the impact of AI yet. But the technology is revolutionary. Thus for the example I picked something thats revolutionary but the impact is not as clear.
Good call in this case specifically, but lord this is some kind of directionless leadership despite well thought out concerns over the true economic impact of LLMs and other generative AI tech.
Useful, amazing tech but only for specific niches and not as generalist application that will end and transform the world as we know it.
I find it refreshing to browse r/betteroffline these days after 2 years of being bombarded with grifting LinkedIn lunatics everywhere you look.
The most likely explanation I can think of are drugs.
Offering 1B dollar salaries and then backtracking, it's like when that addict friend calls you with a super cool idea at 11pm and then 5 days later they regret it.
Also rejecting a 1B salary? Drugs, it isn't unheard of in Silicon Valley.