Google, Meta, Microsoft, and Amazon will get through easily. They don't have excessive debt and can afford to lose their investments in AI; their valuations will take a hit. Nvidia will lose revenue and profits, and its stock will go down by 60% or more, but it will also survive.
Oracle will likely fail. It funded its AI pivot with debt: its debt-to-revenue ratio is 1.77, its debt-to-equity (D/E) ratio is 520, and it has a free-cash-flow problem (rough math on those ratios below).
OpenAI, Anthropic, and others will be bought for cents on the dollar.
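As a rough sanity check on the Oracle ratios above, here is a minimal sketch of what they imply; the revenue figure and the reading of 520 as a percentage are my assumptions for illustration, not numbers from a filing:

    # Back-of-envelope check of the leverage ratios quoted above.
    # The revenue figure is an assumed ballpark, not a number from a filing.
    annual_revenue_usd = 57e9        # assumption: rough annual revenue
    debt_to_revenue = 1.77           # ratio quoted in the comment
    debt_to_equity_pct = 520         # D/E quoted in the comment, read as a percentage

    implied_debt = debt_to_revenue * annual_revenue_usd
    implied_equity = implied_debt / (debt_to_equity_pct / 100)

    print(f"implied total debt:  ${implied_debt / 1e9:.0f}B")    # ~ $101B
    print(f"implied book equity: ${implied_equity / 1e9:.0f}B")  # ~ $19B
    # Debt several times the equity cushion is why a hiccup in revenue or
    # free cash flow hits so hard once cheap refinancing is gone.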
OpenAI is an existential threat to all of big tech, including Meta, Google, Microsoft, and Apple. Hence, they're all spending lavishly right now to avoid getting left behind.
Meta --> GenAI content creation can disrupt Instagram. ChatGPT likely has more data on a person than Instagram does by now for ads, and ChatGPT already has 800 million weekly active users.
Google --> Cash cow search is under threat from ChatGPT.
Microsoft --> Productivity/work is fundamentally changed with GenAI.
Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
I'm betting that OpenAI will emerge bigger than current big tech in ~5 years or less.
> Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
Yeah... no, they can't. I don't agree with any of your "disruptions," but this one is just comically incorrect. There was a post on HN somewhat recently showing a computer simulated with LLMs, and it was unusable.
OpenAI has no technical moat: others can do what they do, generate the same content, and all have access to the same data.
OpenAI does not expect to be cash-flow positive until 2029. If no new capital comes in, it can't continue.
OpenAI can't survive any kind of price competition.
They consistently have the best or second-best models.
They have infrastructure that serves 800 million weekly active users.
Investors are lining up to give them money. When they IPO, they'll easily be worth over $1 trillion.
There's price competition right now and they're still surviving. If price competition intensifies, they're the most likely to survive it.
> Investors are lining up to give them money. When they IPO, they'll easily be worth over $1 trillion.
Your premise is that there is no bubble. We are talking about what happens when the bubble bursts; if investor money never dries up, there is no burst in the first place.
For AI, I think we are at the 1995 point of the dotcom bubble.
Google, Meta, Microsoft and Amazon might get through easily as companies. I don't think all G/M/M/A staff will get through easily.
Microsoft is in a pickle. They put AI lipstick on top of decades of unfixed tech debt and their relationship with their userbase isn't great. Their engineering culture is clearly not healthy. For their size and financial resources, their position in the market right now is very delicate.
I think that's the impression you get if you focus on Microsoft as an OS vendor. It's not that anymore; that's why their OS has been bad for years now. Their main business is B2B: cloud services and Azure. I think they are pretty safe from OpenAI. Plus, they have invested big in OpenAI as well.
Windows is hard to replace in large organizations. Are there actually any real AI competitors to that stack? Well, Google, maybe. The whole Windows + Office + AD + Exchange (and now Azure) stack is unlikely to go away any time soon, however badly they screw it up.
I don't think so.
They are one of the few companies actually making money with AI, as they have intelligently leveraged Office 365's position inside companies to sell Copilot. Their AI investment plans are, well, plans, which could be scaled down easily. The worst-case scenario for them is their investment in OpenAI becoming worthless.
It would hurt, but it's hardly life-threatening. Their revenue driver is clearly their position at the heart of enterprise IT, and they are pretty much untouchable there.
I cry for Elon, that precious jewel of a human being.
Tesla (P/E: 273, PEG: 16.3): the car business alone, without the robots and robotaxis, accounts for less than 15% of Tesla's valuation at best. When the AI hype dies, the selloff starts, and negative sentiment hits, we're looking at a below-$200B-market-cap company (rough math below).
It will hurt Elon mentally. He will need a hug.
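A minimal back-of-envelope version of that sub-$200B claim; the market cap is a round number I'm assuming for illustration, not a live quote:

    # Back-of-envelope version of the Tesla argument above.
    # The market cap is an assumed round figure, not a live quote.
    market_cap_usd = 1.3e12      # assumption: current Tesla market cap
    car_business_share = 0.15    # the comment's upper bound for the car business

    car_business_value = market_cap_usd * car_business_share
    print(f"value attributable to the car business: ${car_business_value / 1e9:.0f}B")  # ~ $195B
    # Strip the AI/robotaxi premium and, under these assumptions, what's left
    # to support the price is just under the $200B figure in the comment.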
Never bet against TSLA. Elon will just start selling tickets to the Mars colony.
You will be able to rent a whole Meta datacenter with thousands of NVIDIA B200s for $5/hour. AWS will become unprofitable due to the abundance of capacity...
Sounds like "We're too big to fail. If we go down, everyone goes down. It is your choice."
But unlike the '08 crisis, we're getting a heads-up to bring out the lube.
I hope it goes down. It's really not a very powerful threat.
Is it really a bubble about to burst when literally everyone is talking about AI being in a bubble and maybe bursting soon?
To me, we're clearly not at peak AI exuberance. AI agents are just getting started and are getting so freaking good. Just the other day, I used Vercel's v0 to build a small-business website for a relative in 10 minutes. It looked fantastic and was very mobile-friendly. I fed the website to ChatGPT 5.1 and asked it to improve the marketing text, then fed those improvements back to v0. Finished in 15 minutes. In the past it would have taken me at least a week to do a design, code it, test it for desktop and mobile, and write the copy.
The way AI has disrupted software building in three short years is astonishing. Yes, code is uniquely suited to LLM training thanks to open-source code and documentation, but as other industries catch up on LLM training data, they will change profoundly too.
The economics of it is the problem, not the power of the LLMs.
It's not that the AI models or products don't work.
It's how much money is being poured into it, how much of that money is just changing hands between the big players, and how the revenue stacks up against the valuations.
Well, do you have a model for this? Or are you just regurgitating the mass-media line that it's a bubble?
If hyperscalers keep buying GPUs and Chinese companies keep saying they don't have enough GPUs, especially advanced ones, why should we believe someone saying it's a bubble based on "feel"?
This is a very biased example. Also, it is possible only because right now the tools you've used are heavily subsidised by investors' money. A LOT of it. Nobody questions the utility of what you just mentioned, but nobody stops to ask whether this would be viable if you had to pay the actual cost of these models, nor what it means for 99.9% of all the other jobs that AI companies claim can be automated but in reality are not even close to being displaced by their technology.
Why is it biased?
So what if it's subsidized and companies are in a market-share grab? Is it going to cost $40 instead of the $20 I paid? Big deal. It still beats the hell out of the $2k-$3k it would have cost before, plus weeks of waiting time.
100x cheaper, 1000x faster delivery. Furthermore, v0 and ChatGPT together surely did much better than the average web designer and copywriter would have.
Lastly, OpenAI has already stated a few times that they are "very profitable" on inference. There was an analysis posted on HN showing that inference for open-source models like DeepSeek is also profitable on a per-token basis.
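For what a "profitable on inference" claim even means, here is a minimal per-token sketch; every input (GPU rental price, throughput, API price) is an assumption for illustration, and none of it comes from OpenAI or the HN analysis:

    # Toy per-token inference economics. All inputs are assumptions for
    # illustration; none of them come from OpenAI or the linked analysis.
    gpu_hour_cost_usd = 2.50          # assumption: hourly rental, one H100-class GPU
    tokens_per_gpu_per_sec = 2_500    # assumption: aggregate output throughput with batching
    api_price_per_mtok_usd = 10.00    # assumption: blended API price per million output tokens

    tokens_per_gpu_per_hour = tokens_per_gpu_per_sec * 3600
    cost_per_mtok = gpu_hour_cost_usd / tokens_per_gpu_per_hour * 1e6

    print(f"hardware cost per million tokens: ${cost_per_mtok:.2f}")                          # ~ $0.28
    print(f"gross margin per million tokens:  ${api_price_per_mtok_usd - cost_per_mtok:.2f}")  # ~ $9.72
    # Positive per-token margin under these assumptions, but it ignores training,
    # idle capacity, free-tier traffic, staff, and everything else those same
    # revenues have to cover -- which is the thread's actual point.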
We don't know what AI should cost, but if you look at the numbers, 2x more expensive is much too low an estimate.
Think about the pricing. OpenAI anchored everyone's prices at free and/or roughly the cost of a Netflix subscription, which in turn was originally pinned to the cost of a cable TV subscription. These prices were made up to sound good to Sam Altman's friends; they weren't chosen based on sane business modelling.
Then everyone had to follow. So Anthropic launched Claude Code at the same price point before realizing that was deadly, and overnight the price went up by an order of magnitude, from $20 to $200/month, and even that doesn't seem to be enough.
If the numbers leaked to Ed Zitron are true, then they aren't profitable on inference. But even if they were, so what? It's a meaningless statement, just another way of saying they're still under-pricing their models. Inference and model licensing are their only revenue streams! That has to cover everything: training, staff costs, data licensing, lawsuits, support, office costs, etc.
Maybe OpenAI can launch an ad network soon. That's their only hope of salvation, but it's risky: if they botch it, users might just migrate to Grok, Gemini, or Claude.
> So Anthropic launched Claude Code at the same price point before realizing that was deadly, and overnight the price went up by an order of magnitude, from $20 to $200/month, and even that doesn't seem to be enough.
Maybe it was because demand was so high that they didn't have enough GPUs to serve it? Hence the insane GPU demand?
I've wondered if it makes sense to buy Intel along with Cerebras in order to use Intel's newest nodes, while they're still under development, to fab the Cerebras wafer-scale inference chips, which are more tolerant of defects. Overall that seems like the cheapest way to perform inference - if you have $100B.
LLMs are particularly good at web development (granted, that's a big market), probably because a lot of their training material is exactly that.
If it's subsidized, it's a problem, because we're not talking about Uber trying to disrupt a clearly flawed system of transportation.
We're talking about companies whose entire promise is an industrial revolution of a scale we've never seen before. That is the level of the bet.
The claim that they did much better than the average professional is also just your own take, presented as if it were self-evident.
Also, your example has fundamentally no value. You mentioned a marginal use case that doesn't scale. Personal websites are quicker to make because you can accept whatever the AI spews your way; you have basically infinite flexibility, and the only constraints are "getting it done" and "looking ok/good".
That is not how larger businesses work, at all. So there is a massive issue of scalability here.
Finally, OpenAI "states" a lot of things, and a lot of them have been proven to be flat-out lies, because the company is led by a man who has been shown to be a pathological narcissistic liar many times over.
Yet you keep drinking the Kool-Aid, including about inference. There are, by the way, reports that, data in hand, argue quite convincingly that "being profitable on inference" is math gymnastics, not the financial reality of OpenAI.
The vast majority of highly valuable tech companies of the last 35 years subsidized their products or services in the beginning as they grew. Why should OpenAI be any different? In particular, the per-token economics are already profitable.
I think you are missing the fundamental point here. The question is not really whether AI has some value. That much is obvious, and the example you give, increasing developer productivity, is a good one.
The question is: is the value generated by AI aligned with the value currently priced into AI companies' valuations? That's what's more difficult to assess.
The gap between fundamental financial data and valuations is very large. The risk is a brutal reassessment of these prices. That's what people call a bubble bursting, and it doesn't mean the underlying technology has no value. The internet bubble burst, yet the internet is probably the most significant innovation of the past twenty years.
Well, it all started with OpenAI's usual SV-style "growth hacking" (price dumping as a SaaS): "gather users now, monetize later", which only works if you attain a virtual monopoly over a segment of the market, i.e. dominance with the competition not really competing with you.
The problem is that no one attained that position, price expectations are now set, and the wishful thinking about cutting the cost of running the models by orders of magnitude hasn't panned out.
Is AI useful? Of course.
Are the real costs of it justified? In most cases, no.
> The question is: is the value generated by AI aligned with the value currently priced into AI companies' valuations? That's what's more difficult to assess.
I agree it is difficult to assess. Right now, competitive pressure is causing the big players to go all in or risk getting left behind.
That said, I don't think the bubble is done growing, nor do I think it is about to burst.
I personally think we are at the 1995 stage of the dotcom-bubble equivalent. When it bursts, the AI market will still be much bigger than it is in November 2025.
Let him cook
> Is it really a bubble about to burst when literally everyone is talking about AI being in a bubble and maybe bursting soon?
Yes, it is even one of the necessary components. Everybody is twitchy and afraid of the pop, but the immediate returns are too tempting, so they keep their money in. The bubble pops when something happens and they all start panicking at the same time. They all need to be sufficiently stressed for that mass run to happen.
So after the bubble pops, do you think the AI market will still be bigger than it is in November 2025?
In other words, do you think we're at the 1995 point of the dotcom bubble, or the 2000 point?
I'm okay with being a "victim" of RAM and NVMe prices returning to pre-skyrocket levels.
And of GPUs being available again at normal prices.
Obviously, at least in the US, the AI bubble is the only thing keeping the economy afloat. If it weren't for the bubble, the US would be in a recession.
Not sure how the situation is in Europe and Asia, but I would guess about the same.
Now they are talking like the greedy Wall Street bankers back in the subprime crisis.
Also, it seems like those greedy Wall Street bankers might have another subprime-style crisis on their hands at the same time... I wonder which one will be saved again...
In other words - "We will be firing many of you when the bubble bursts."
It’s not a matter of if, it’s a matter of when.
Most of Big Tech, especially Google, is doing just fine, making on the order of $100B a quarter.
Startups and other unprofitable companies, however...
After COVID they were still making a killing, but they axed 12k people anyway. So if someone starts doing layoffs and the market reacts well, profitable companies will do layoffs too.
Except, yes, they will.
Not immune, maybe, but pretty well off if they didn't buy in.