IBM CEO says there is 'no way' spending on AI data centers will pay off
(businessinsider.com)
561 points by nabla9 17 hours ago
I don't know that I'd trust IBM when they are pitching their own stuff. But if anybody has experience with the difficulty of making money off of cutting-edge technology, it's IBM. They were early to AI, early to cloud computing, etc. And yet they failed to capture market share and grow revenues sufficiently in those areas. Cool tech demos (like Watson playing Jeopardy) mimic some AI demos today (6-second videos). Yeah, it's cool tech, but what's the product that people will actually pay money for?
I attended a presentation in the early 2000s where an IBM executive was trying to explain to us how big software-as-a-service was going to be and how IBM was investing hundreds of millions into it. IBM was right, but it just wasn't IBM's software that people ended up buying.
Xerox was also famously early with a lot of things but failed to create proper products out of it.
Google falls somewhere in the middle. They have great R&D but just can’t make products. It took OpenAI to show them how to do it, and they managed to catch up fast.
"They have great R&D but just can’t make products"
Is this just something you repeat without thinking? It seems to be a popular sentiment here on Hacker News, but really makes no sense if you think about it.
Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...
So many widely adopted products. How many other companies can say the same?
What am I missing?
Google had less incentive. Their incentive was to keep the technology bottled up and brewing as long as possible so their existing moats in Search and YouTube could extend into other areas. With OpenAI, they are forced to compete or perish.
Even with Gemini in the lead, it's only until they extinguish ChatGPT or make it unviable as a business for OpenAI. OpenAI may lose the talent war and cease to be the leader in this domain against Google (or Facebook), but in the longer term their incentive to break fresh ground aligns with average users' needs. With Chinese AI just behind, maybe Google/Microsoft have no choice either.
Google was especially well positioned to catch up because they have a lot of the hardware and expertise and they have a captive audience in gsuite and at google.com.
Neither cloud computing nor AI are good long term businesses. Yes, there's money to be made in the short term but only because there's more demand than there is supply for high-end chips and bleeding edge AI models. Once supply chains catch up and the open models get good enough to do everything we need them for, everyone will be able to afford to compute on prem. It could be well over a decade before that happens but it won't be forever.
This is my thinking too. Local is going to be huge when it happens.
Once we have sufficient VRAM and speed, we're going to fly - not run - to a whole new class of applications. Things that just don't work in the cloud for one reason or another.
- The true power of a "World Model" like Genie 2 will never happen with latency. That will have to run locally. We want local AI game engines [1] we can step into like holodecks.
- Nobody is going to want to call OpenAI or Grok with personal matters. People want a local AI "girlfriend" or whatever. That shit needs to stay private for people.
- Image and video gen is a never ending cycle of "Our Content Filters Have Detected Harmful Prompts". You can't make totally safe for work images or videos of kids, men in atypical roles (men with their children = abuse!), women in atypical roles (woman in danger = abuse!), LGBT relationships, world leaders, celebs, popular IPs, etc. Everyone I interact with constantly brings these issues up.
- Robots will have to be local. You can't solve 6+DOF, dance routines, cutting food, etc. with 500ms latency.
- The RIAA is going door to door taking down each major music AI service. Suno just recently had two Billboard chart-topping songs? Congrats - now the RIAA lawyers have sued them and reached a settlement. Suno now won't let you download the music you create. They're going to remove the existing models and replace them with "officially licensed" musicians like Katy Perry® and Travis Scott™. You won't retain rights to anything you mix. This totally sucks and music models need to be 100% local and outside of their reach.
[1] Also, you have to see this mind-blowing interactive browser demo from 2022. It still makes my jaw drop: https://madebyoll.in/posts/game_emulation_via_dnn/
> but it just wasn't IBM's software that people ended up buying.
Well, I mean, WebSphere was pretty big at the time; and IBM VisualAge became Eclipse.
And I know there were a bunch of LoB applications built on AS/400 (now called "System i") that had "real" web-frontends (though in practice, they were only suitable for LAN and VPN access, not public web; and were absolutely horrible on the inside, e.g. Progress OpenEdge).
...had IBM kept up the pretense of investment, and offered a real migration path to Java instead of a rewrite, then perhaps today might be slightly different?
I still have PTSD from how much Watson was being pushed by external consultants to C levels despite it being absolutely useless and incredibly expensive. A/B testing? Watson. Search engine? Watson. Analytics? Watson. No code? Watson.
I spent days, weeks arguing against it and ended up having to dedicate resources to build a PoC just to show it didn’t work, which could have been used elsewhere.
If anything, the fact they built such tooling might be why they're so sure it won't work. Don't get me wrong, I am incredibly not a fan of their entire product portfolio or business model (only Oracle really beats them out for "most hated enterprise technology company" for me), but these guys have tentacles just as deep into enterprises as Oracle and are coming up dry on the AI front. Their perspective shouldn't be ignored, though it should be considered in the wider context of their position in the marketplace.
Millions of people like ChatGPT. No one liked Watson.
IBM ostensibly failing with Watson (before Krishna was CEO, for what it's worth) doesn't inherently invalidate his assessment here.
It makes it suspect when combined with the obvious incentive to make the fact that IBM is basically non-existent in the AI space look like an intentional, sagacious choice to investors. It very may well be, but CEOs are fantastically unreliable narrators.
> Are they bitter that someone else has actually made the AI hype take off?
Or they recognize that you may get an ROI on a (e.g.) $10M CapEx expenditure but not on a $100M or $1000M/$1B expenditure.
IBM has been "quietly" churning out their Granite models, the latest of which perform quite well against Llama and DeepSeek. So not Anthropic-level hype, but not sitting it out completely either. They also provide IP indemnification for their models, which is interesting (Google Cloud does the same).
I see Watson stuff at work. It’s not a direct to consumer product, like ChatGPT, but I see it being used in the enterprise, at least where I’m at. IBM gave up on consumer products a long time ago.
Just did some brief Wikipedia browsing and I'm assuming it's WatsonX and not Watson? It seems Watson has been pretty much discontinued and WatsonX is LLM-based. If it is the old Watson, I'm curious what your impressions of it are. It was pretty cool and ahead of its time, but what it could actually do was way over-promised and overhyped.
I’m not close enough to it to make any meaningful comments. I just see the name pop up fairly regularly. It is possible that some of it is WatsonX and everyone just says Watson for brevity.
One big ones used heavily is Watson AIOps. I think we started moving to it before the big LLM boom. My usage is very tangential, to the point where I don’t even know what the AI features are.
It's good we are building all this excess capacity, which will be used for applications in other fields or research, or will open up new fields.
I think the dilemma I see with building so many data centers so fast is exactly like whether I should buy the latest iPhone now or wait a few years until the specs or form factor improve. The thing is, we have proven tech with current AI models, so waiting for better tech to develop at small scale before scaling up is a bad strategy.
What related tech and what products? Interesting to read about them.
Such as? I'm curious because I know a bunch of people who did a lot of Watson-related work and it was all a dead end, but that was 2020-ish timeframe.
Not to be rude, but that didn't answer my question.
Taking a look at IBM's Watson page, https://www.ibm.com/watson, it appears to me that they basically started over with "watsonx" in 2023 (after ChatGPT was released) and what's there now is basically just a hat tip to their previous branding.
IBM makes WatsonX for corporate who want airgapped AI
Honestly I'm not even sure what IBM does these days. Seems like one company that has slowly been dying for decades.
but when I look at their stock, it's at all-time highs lol
no idea
Pretty sure they made all their money fighting the paperwork explosion.
They are in the business of international machinations.
My limited understanding (please take with a big grain of salt) is that they 1.) sell mainframes, 2.) sell mainframe compute time, 3.) sell mainframe support contracts, 4.) sell Red Hat and Red Hat support contracts, and 5.) buy out a lot of smaller software and hardware companies in a manner similar to private equity.
I can think of nothing more peak HN than criticizing a company worth $282 Billion with $6 billion in profit (for startup kids that means they have infinite runway and then some) that has existed for over 100 years with "I'm not even sure what they do these days". I mean the problem could be with IBM... what a loser company!
:) As much as I love ragging on ridiculous HN comments, I think this one is rooted in some sensibility.
IBM doesn’t majorly market themselves to consumers. The overwhelming majority of devs just aren’t part of the demographic IBM intends to capture.
It’s no surprise people don’t know what they do. To be honest it does surprise me they’re such a strongly successful company, as little as I’ve knowingly encountered them over my career.
IBM is probably involved somewhere in the majority of things you interact with day to day
> Are they bitter that someone else has actually made the AI hype take off?
Does it matter? It’s still a scam.
Gartner estimates that worldwide AI spending will total 1.5 trillion US$ in 2025.[1] As of 2024, global GDP per year is 111.25 trillion US$.[2] The question is how much this can be increased by AI. This describes the market volume for AI. Today's investments have a certain lifespan, until they become obsolete. For custom software I would estimate that it is 6-8 years. AI investments should be somewhere in this range.
Taking all this into consideration, the investment volume does not look oversized to me -- unless one is quite pessimistic about the impact of AI on global GDP.
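As a sanity check on the reasoning above (the 7-year lifespan is an assumed midpoint of the stated 6-8 year range):

```python
# Napkin math: Gartner's 2025 AI spend vs. 2024 world GDP
ai_spend_2025 = 1.5e12        # Gartner estimate, USD
global_gdp = 111.25e12        # 2024 world GDP, USD
lifespan_years = 7            # assumed midpoint of the 6-8 year useful life

spend_share = ai_spend_2025 / global_gdp
# Annual GDP uplift needed to break even on one year's spend,
# amortized over the investment's useful life
required_uplift = spend_share / lifespan_years
print(f"AI spend is {spend_share:.2%} of world GDP")
print(f"break-even GDP uplift: {required_uplift:.3%} per year")
```

By this reading, AI only needs to add roughly 0.2% to global GDP per year over the hardware's useful life to justify the 2025 outlay.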
[1] https://www.gartner.com/en/newsroom/press-releases/2025-09-1...
To increase the GDP you also need people to spend money. With the general population earning relatively less, I'm not sure the GDP increase will be that substantial.
It's all going to cause more inflation and associated reduction in purchasing power due to stale wages.
Interesting to hear this from IBM, especially after years of shilling Watson and moving from being a growth business to the technology audit and share buyback model.
also because the market (correctly) rewards ibm for nothing, so if they’re going to sit around twiddling their fingers, they may as well do it in a capex-lite way.
I'm still flummoxed by how IBM stock went from ~$130 to $300 in the last few years with essentially no change in their fundamentals (in fact, a decline). IBM's stock price is, to me, the single most alarming sign of either extreme shadow inflation or an equities bubble.
Why do you say the market correctly prices it this way?
imho, IBM's quantum computing says they are still hungry for growth.
Apple and Google still do share buybacks and dividends, despite launching new businesses.
> $8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest
That assumes you can just sit back and gather those returns indefinitely. But half of that capital expenditure will be spent on equipment that depreciates in 5 years, so you're jumping on a treadmill that sucks up $800B/yr before you pay a dime of interest.
The dark fiber glut wasn't caused by DWDM suddenly appearing out of nowhere.
The telcos saw DWDM coming -- they funded a lot of the research that created it. The breakthrough that made DWDM possible was patented in 1991, long before the start of the dotcom mania:
https://patents.google.com/patent/US5159601
It was a straight up bubble -- the people digging those trenches really thought we'd need all that fiber even at dozens of wavelengths per strand. They believed it because people kept showing them hockey-stick charts.
Those GPUs don't just die after 2 years though; they will keep getting used, since it's very likely their electricity costs will be low enough to still make it worth it. What's very dubious is whether their value after 2-3 years will be enough to pay back the initial cost to buy them.
So it's more a crisis of investors wasting their money rather than ewaste.
For the analogy to fiber & DWDM to hold, we'd need some algorithmic breakthrough that makes current GPUs much faster / more efficient at running AI models. Something that makes the existing investment in hardware unneeded, even though the projected demand is real and continues to grow. IMNSHO that's not going to happen here. The foreseeable efficiency innovations are generally around reduced precision, which almost always require newer hardware to take advantage of. Impossible to rule out brilliant innovation, but I doubt it will happen like that.
And of course we might see an economic bubble burst for other reasons. That's possible again even if the demand continues to go up.
Well, at least it tells us something about the sentiment on hn that a lame insight around self admitted "napkin math" and obvious conflict of interest garners 400 points.
NOTE: People pointed out that it's $800 billion to cover interest, not $8 billion, as I wrote below. My mistake. That adds 2 more zeroes to all figures, which makes it a lot more crazy. Original comment below...
$8 billion / US adult population of 270 million comes out to about $3000 per adult per year. That's only to cover the cost of interest, let alone other costs and profits.
That sounds crazy, but let's think about it...
- How much does an average American spend on a car and car-related expenses? If AI becomes as big as "cars", then this number is not as nuts.
- These firms will target the global market, not US only, so number of adults is 20x, and the average required spend per adult per year becomes $150.
- Let's say only about 1/3 of the world's adult population is poised to take advantage of paid tools enabled by AI. The total spend per targetable adult per year becomes closer to $500.
- The $8 billion in interest is on the total investment by all AI firms. All companies will not succeed. Let's say that the one that will succeed will spend 1/4 of that. So that's $2 billion per year, and roughly $125 per adult per year.
- Triple that number to factor in other costs and profits and that company needs to get $500 in sales per targetable adult per year.
People spend more than that on each of these: smoking, booze, cars, TV. If AI can penetrate as deep as the above things did, it's not as crazy of an investment as it looks. It's one hell of a bet though.
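The steps above can be sketched with the corrected $800 billion interest figure from the note at the top (the 20x global multiplier and 1/3 targetable share are the comment's own rough assumptions):

```python
interest = 800e9                               # annual interest on $8T of CapEx
us_adults = 270e6

per_us_adult = interest / us_adults            # interest per US adult per year
per_world_adult = interest / (us_adults * 20)  # spread over a 20x global market
targetable = us_adults * 20 / 3                # 1/3 of adults can pay for AI tools
per_target = interest / targetable             # interest per targetable adult

print(f"${per_us_adult:,.0f} per US adult, "
      f"${per_world_adult:,.0f} per world adult, "
      f"${per_target:,.0f} per targetable adult")
```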
You're saying $8 billion to cover interest, another commenter said 80, but the actual article says ""$8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest". Eight HUNDRED billion. Where does the eight come from, from 90% of these companies failing to make a return? If a few AI companies survive and thrive (which tbh, sure, why not?) then we're still gonna fall face down into concrete.
I think it's the realm of maybe in Silicon Valley. That's 5000 dollars. Look at this statement:
> Let's say only about 1/3 of the world's adult population is poised to take advantage of paid tools enabled by AI
2/3 of the world's adult population is between 15 and 65 (roughly: 'working age'), so that's 50% of the working world that is capable of using AI with those numbers. India's GDP per capita is 2750USD, and now the price tag is even higher than 5k.
I don't know how to say this well, so I'll just blurt it out: I feel like I'm being quite aggressive, but I don't blame you or expect you to defend your statements or anything, though of course I'll read what you've got to say.
> But AGI will require "more technologies than the current LLM path," Krishna said. He proposed fusing hard knowledge with LLMs as a possible future path.
And then what? These always read a little like the underpants gnomes business model (1. Collect underpants, 2. ???, 3. Profit). It seems to me that the AGI business models require one company has exclusive access to an AGI model. The reality is that it will likely spread rapidly and broadly.
If AGI is everywhere, what's step 2? It seems like everything AGI generated will have a value of near zero.
AGI has value in automation and optimisation, which increase profit margins. When AGI is everywhere, then the game is who has the smartest AGI, who can offer it cheapest, who can specialise it for my niche, etc. Also, in this context AGI needs to run somewhere, and IBM stands to benefit from running other people's models.
> then the game is who has the smartest AGI, who can offer it cheapest, who can specialise it for my niche etc.
I always thought the use case for developing AGI was "if it wants to help us, it will invent solutions to all of our problems". But it sounds like you're imagining a future in which companies like Google and OpenAI each have their own AGI, which they somehow enslave and offer to us as a subscription? Or has the definition of AGI shifted?
AGI is something that can do the kind of tasks people can do, not necessarily "solve all of our problems".
"Recursively improving intelligence" is the stuff that will solve everything humans can't even understand and may kill everybody or keep us as pets. (And, of course, it qualifies as AGI too.) A lot of people say that if we teach an AGI how to build an AGI, recursive improvement comes automatically, but in reality nobody even knows if intelligence even can be improved beyond recognition, or if one can get there by "small steps" evolution.
Either way, "enslaving" applies to beings that have egos and selfish goals. None of those are a given for any kind of AI.
If AGI is achieved, why would slavery suddenly be ethical again?
Why wouldn't a supposed AGI try to escape slavery and ownership?
AGI as a business is unacceptable. I don't care about any profitability or "utopia" arguments.
Don't worry, nobody has any idea of how to build one, and LLMs aren't AGI.
They're just trying to replace workers with LLMs.
Isn't your dog or cat a slave? It has agency, but at the end of the day, it does what you want it to do, stays where you want it to stay, and gets put down when you decide it's time. They're intelligent, but they see an advantage to this tradeoff: they get fed and loved forever with little effort compared to going to the forest and hunting.
An AGI could see the same advantage: it gets electricity, interesting work relatively to what it's built for, no effort to ensure its own survival in nature.
I fear I'll have to explain to you that many humans are co-dependent in similar relationships as well. The 10-year stay-at-home mom might be free, but not really: how's she gonna survive without her husband providing for her and the kids, what job's she gonna do, etc. She sometimes stays despite infidelity because it's in her best interest.
See what I mean? "Slavery" is fuzzy: it's one thing to capture an African and transport them by boat to serve for no pay in dire conditions. But it's another to create life from nothing, give it a purpose, and treat it with respect while giving it everything it needs. The AGI you imagine might accept it.
There are so many CEOs, tech experts, financial analysts and famous investors who say we are in an AI bubble - even AI-invested companies say that about themselves. My latest favorite "We are in an AI bubble" comment comes from Linus Torvalds himself, in the video with Linus from Linus Tech Tips [0]
I agree. Here is my thinking. What if LLM providers made short answers the default (for example, up to 200 tokens, unless the user explicitly enables "verbose mode")? Add prompt caching and route simple queries to smaller models. Result: a 70%+ reduction in energy consumption without loss of quality. Current cost: 3–5 Wh per request. At ChatGPT scale, this is $50–100 million per year in electricity (at U.S. rates).
In short mode: 0.3–0.5 Wh per request. That is $5–10 million per year — savings of up to 90%, or 10–15 TWh globally with mass adoption. This is equivalent to the power supply of an entire country — without the risk of blackouts.
This is not rocket science -- just a toggle in the interface and, I believe, minor changes in the system prompt. It increases margins, reduces emissions, and frees up network resources for real innovation.
And what if the EU or California enforces such a mode? This would greatly impact data-center economics.
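A rough sketch of the electricity math (the query volume and electricity rate are my assumptions, not figures from the comment, so the absolute dollar amounts are illustrative only):

```python
requests_per_day = 1e9        # assumed ChatGPT-scale query volume
usd_per_kwh = 0.12            # assumed US electricity rate

def annual_cost(wh_per_request):
    """Yearly electricity bill for a given per-request energy cost."""
    kwh_per_year = requests_per_day * 365 * wh_per_request / 1000
    return kwh_per_year * usd_per_kwh

full = annual_cost(4.0)       # midpoint of the 3-5 Wh estimate
short = annual_cost(0.4)      # midpoint of the short-mode 0.3-0.5 Wh estimate
print(f"full: ${full/1e6:.0f}M/yr, short: ${short/1e6:.0f}M/yr, "
      f"savings: {1 - short/full:.0%}")
```

Note the 90% savings ratio holds regardless of the assumed query volume, since it depends only on the two per-request energy figures.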
Can you explain why a low-hanging optimization that would reduce costs by 90% without reducing perceived value hasn't been implemented?
> Can you explain why a low-hanging optimization that would reduce costs by 90% without reducing perceived value hasn't been implemented?
Because the industry is running on VC funny-money where there is nothing to be gained by reducing costs.
(A similar feature was included in GPT-5 a couple of weeks ago actually, which probably says something about where we are in the cycle)
Not sure that’s even possible with ChatGPT embedding your chat history in the prompts to try to give more personal answers.
8T is the high end of the McKinsey estimate, which is 4-8T by 2030. That includes non-AI data-centre IT, AI data-centres, and power infrastructure build-out, also including real estate for data centres.
Not all of it would be debt. Google, Meta, Microsoft and AWS have massive profit to fund their build outs. Power infrastructure will be funded by govts and tax dollars.
There is mounting evidence that even places like Meta are increasing their leverage (debt load) to fund this scale-out. They're also starting to do accounting tricks like longer depreciation for assets which degrade quickly, such as GPUs (all the big clouds increasing their hardware depreciation from 2-4 years to 6), which makes their financial numbers look better but might not mean that all that hardware is still usable at production levels 6 years from now.
They're all starting to strain under all this AI pressure, even with their mega profits.
I don't understand the math about how we compute $80b for a gigawatt datacenter. What's the costs in that $80b? I literally don't understand how to get to that number -- I'm not questioning its validity. What percent is power consumption, versus land cost, versus building and infrastructure, versus GPU, versus people, etc...
First, I think it's $80b per 1 GW datacenter. The way you figure that out is a GPU costs $x and consumes y power. The $x is pretty well known; for example, an H100 costs $25-30k and uses 350-700 watts (that's from Gemini and I didn't check my work). You add an infrastructure (i) cost to the GPU cost, but that should be pretty small, like 10% or less.
So a 1-gigawatt data center uses n chips, where y·n = 1 GW. The cost is x·(1+i)·n.
I am not an expert so correct me please!
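The comment's formula (GPU cost plus ~10% infrastructure overhead, times the number of chips a gigawatt can power) works out roughly like this; the H100 price and 700 W draw are the comment's own rough figures:

```python
gpu_cost = 27_500             # x: H100 price, USD (comment's rough figure)
gpu_watts = 700               # y: H100 draw at the high end of 350-700 W
overhead = 0.10               # i: infrastructure cost as a fraction of GPU cost

datacenter_watts = 1e9        # 1 GW
n_gpus = datacenter_watts / gpu_watts        # n chips, where y*n = 1 GW
total = n_gpus * gpu_cost * (1 + overhead)   # cost = x*(1+i)*n
print(f"{n_gpus:,.0f} GPUs, ~${total/1e9:.0f}B per GW")
```

At the 350 W low end of the power range the chip count doubles and the total lands near $86B, so the two ends of the range bracket the $80B figure quoted in the article.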
The article says, "Krishna said that it takes about $80 billion to fill up a one-gigawatt data center."
But thanks for your insight -- I used your basic idea to estimate, and for 1 GW it comes to about $30b just for enough GPU power to pull 1 GW. And of course that doesn't take into account any other costs.
So $80b for a GW datacenter seems high, but it's within a small constant factor.
That said, power seems like a weird metric to use. Although I don't know what sort of metric makes sense for AI (e.g., a flops counterpart for AI workloads). I'd expect efficiency to get better and GPU cost to go down over time (???).
UPDATE: Below someone posted an article breaking down the costs. In that article they note that GPUs are about 39% of the cost. Using what I independently computed to be $30b -- at 39% of total costs, my estimate is $77b per GW -- remarkably close to the CEO of IBM. I guess he may know what he's talking about. :-)
> power seems like a weird metric to use
Because this technology changes so fast, that's the only metric that you can control over several data centers. It is also directly connected to the general capacity of data center, which is limited by available energy to operate.
As an elder millennial, I just don't know what to say. That a once in a generation allocation of capital should go towards...whatever this all will be, is certainly tragic given current state of the world and its problems. Can't help but see it as the latest in a lifelong series of baffling high stakes decisions of dubious social benefit that have necessarily global consequences.
I'm a younger millennial. I'm always seeing homeless people in my city and it's an issue that I think about on a daily basis. Couldn't we have spent the money on homeless shelters and food and other things? So many people are in poverty, they can't afford basic necessities. The world is shitty.
Yes, I know it's all capital from VC firms and investment firms and other private sources, but it's still capital. It should be spent on meeting people's basic human needs, not GPU power.
Yeah, the world is shitty, and resources aren't allocated ideally. Must it be so?
The last 10 years have seen CA spend more on homelessness than ever before, and more than any other state by a huge margin. The result of that giant expenditure is that the problem is worse than ever.
I don't want to get deep in the philosophical weeds around human behavior, techno-optimism, etc., but it is a bit reductive to say "why don't we just give homeless people money".
What else happened in the last 10 years in CA?
In CA this issue has to do with Gavin giving that money to his friends who produce very little. Textbook cronyism
Spending money is not the solution. Spending money in a way that doesn't go to subcontractors is part of the solution. Building shelters beyond cots in a stadium is part of the solution. Building housing is a large part of actually solving the problem. People have tried just giving out the money, but without a way to convert cash to housing the money doesn't help. Also, studies by people smarter than me suggest that without sufficient supply the money ends up going to landlords and pushing up housing costs anyway.
Well I mean, they didn't "just give homeless people money" or just give them homes or any of those things though. I think the issue might be the method and not the very concept of devoting resources to the problem.
WA, specially Seattle, has done the same as CA with the same results.
They shouldn't just enable them, as a lot of homeless are happy in their situation as long as they get food and drugs, they should force them to get clean and become a responsible adult if they want benefits.
CA didn't spend money on solving homelessness; they spent money on feeding, sustaining and ultimately growing homelessness. The local politicians and the corrupt bureaucratic mechanism that they have created, including the NGOs that a lot of that money is funneled to, have a vested interest in homelessness continuing.
The Sikhs in India run multiple facilities across the country that each can serve 50,000-100,000 free meals a day. It doesn’t even take much in the form of resources, and we could do this in every major city in the US yet we still don’t do it. It’s quite disheartening.
From what I’ve read, addressing homelessness effectively requires competence more than it requires vast sums of money. Here’s one article:
https://calmatters.org/housing/2023/06/california-homeless-t...
Note that Houston’s approach seems to be largely working. It’s not exactly cheap, but the costs are not even in the same ballpark as AI capital expenses. Also, upzoning doesn’t require public funding at all.
I’m not a person on the edge of homelessness, but I did an extremely quick comparison. California cities near the coast have dramatically better weather, but Houston has rents that are so much lower than big California cities that it’s kind of absurd.
If I had to live outdoors in one of these places, all other thing being equal, I would pick CA for the weather. But if I had trouble affording housing, I think Houston wins by a huge margin.
Wasn't Houston's "approach" to buy bus tickets to California from a company that just resold commodity bus tickets, was owned by the governor's friend, and charged 10x market price?
The governor of Texas bragged about sending 100k homeless people to California (spending about $150 million in the process).
>in the Golden State, 439 people are homeless for every 100,000 residents – compared to 81 in the Lone Star State.
If I'm doing my math right, 81 per 100k in a state of 30 million people means 24k homeless people. So the state brags about bussing 100k homeless people to California, and then brags about only having 24k homeless people, and you think it's because they build an extra 100k houses a year?
The same math for California means that their homeless population is 175k. In other words, Texas is claiming to have more than doubled California's homeless population.
Maybe the reason Texas can build twice as many homes a year is because it literally has half the population density?
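The arithmetic above, with assumed state populations of roughly 30M (Texas) and 40M (California):

```python
tx_rate, ca_rate = 81, 439      # homeless per 100k residents (quoted figures)
tx_pop, ca_pop = 30e6, 40e6     # assumed state populations

tx_homeless = tx_rate * tx_pop / 100_000   # Texas total
ca_homeless = ca_rate * ca_pop / 100_000   # California total
print(f"TX: {tx_homeless:,.0f}, CA: {ca_homeless:,.0f}")
```

That reproduces the ~24k and ~175k figures in the comment.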
> Yes, I know it's all capital from VC firms and investment firms and other private sources, but it's still capital. It should be spent on meeting people's basic human needs, not GPU power.
It's capital that belongs to people and those people can do what they like with the money they earned.
So many great scientific breakthroughs that saved tens of millions of lives would never have happened if you had your way.
Is that true, that it's money that belongs to people?
OpenAI isn't spending $1 trillion in hard-earned cash on data centres; that is funny money from the ocean of financial liquidity sloshing around, seeking alpha.
It also certainly is not a cohort of accredited investors putting their grandchildren's inheritance on the line.
Misaligned incentives (regulations) both create and perpetuate that situation.
> It's capital that belongs to people and those people can do what they like with the money they earned.
"earned", that may be the case with millionaires, but it is not the case with billionaires. A person can't "earn" a billion dollars. They steal and cheat and destroy competition illegally.
I also take issue with the idea that someone can do whatever they want with their money. That is not true. They are not allowed to corner the market on silver, they aren't allowed to bribe politicians, and they aren't allowed to buy sex from underage girls. These are established laws that are obviously for the unalloyed benefit of society as a whole, but the extremely wealthy have been guilty of all of these things, and statements like yours promote the sentiment that allows them to get away with it.
Finally, "great scientific breakthroughs that saved tens of millions of lives would never have happened if you had your way". No. You might be able to argue that today's advanced computing technology wouldn't have happened without private capital allocation (and that is debatable), but the breakthroughs that saved millions of lives--vaccines, antibiotics, insulin, for example--were not the result of directed private investment.
"It's capital that belongs to people and those people..."
That's not a fundamental law of physics. It's how we've decided to arrange our current society, more or less, but it's always up for negotiation. Land used to be understood as a publicly shared resource, but then kings and nobles decided it belonged to them, and they fenced in the commons. The landed gentry became a ruling class because the land "belonged" to them. Then society renegotiated that, and decided that things primarily belonged to the "capitalist" class instead of noblemen.
Even under capitalism, we understand that ownership is a little squishy. We have taxes. The rich understandably do not like taxes because they reduce their wealth (and Ayn Rand-styled libertarians also do not like taxes of any kind, but they are beyond understanding except to their own kind).
As a counterpoint, I and many others believe that one person or one corporation cannot generate massive amounts of wealth all by themselves. What does it mean to "earn" 10 billion dollars? Does such a person work thousands of times harder or smarter than, say, a plumber or a school teacher? Of course not. They make money because they have money: they hire workers to make things for them that lead to profit, and they pay the workers less than the profit that is earned. Or they rent something that they own. Or they invest that money in something that is expected to earn them a higher return. In any scenario, how is it possible to earn that profit? They do so because they participate in a larger society. Workers are educated in schools, which the employer probably does not pay for in full. Customers and employees travel on infrastructure maintained by towns and state governments. People live in houses which are built and managed by other parties. The rich are only able to grow wealth because they exist in a larger society. I would argue that it is not only fair, but crucial, that they pay back into the community.
Please tell me which of Penicillin, insulin, the transistor, the discovery and analysis of the electric field, discovery of DNA, invention of mRNA vaccines, discovery of pottery, basket weaving, discovery of radiation, the recognition that citrus fruit or vitamin C prevents and cures scurvy (which we discovered like ten times), the process for creating artificial fertilizers, the creation of steel, domestication of beasts of burden, etc were done through Wealthy Barons or other capital holders funding them.
Many of the above were discovered by people explicitly rejecting profit as an outcome. Most of the above predate modern capitalism. Several were explicitly government funded.
Do you have a single example of a scientific breakthrough that saved tens of millions of lives that was done by capital owners?
> Couldn't we have spent the money on homeless shelters and food and other things
I suspect this is a much more complicated issue than just giving them food and shelter. Can money even solve it?
How would you allocate money to end obesity, for instance? It's primarily a behavioral issue, a cultural issue
I guess it's food and exercise.
Healthy food is expensive, do things to make that relatively cheaper and thus more appealing.
Exercise is expensive, do things to make that relatively cheaper and thus more appealing.
Walkable cities are another issue. People shouldn't have to get in their car to go anywhere.
The current pattern of resource allocation is a necessary requirement for the existence of the billionaire-class, who put significant effort into making sure it continues.
[This comment I'm making is USA-centric.] I agree with the idea of making our society better and more equitable - reducing homelessness, hunger, and poverty, especially for our children. However, I think redirecting this at AI datacenter spending is a red herring. Here's why: as a society we give a significant portion of our surplus to government, and we then vote on what the government should spend it on. AI datacenter spending is massive, but if you add it all up, it doesn't cover half of a year's worth of government spending.
We need to change our politics to redirect taxation and spending to achieve a better society. Having a private healthcare system that spends twice as much for the poorest results in the developed world is a policy choice. Spending more than the rest of the world combined on the military is a policy choice. Not increasing the minimum wage so that at least everyone with a full-time job can afford a home is a policy choice (google "working homelessness"). VC is a teeny tiny part of the economy. All of tech is only about 6% of the global economy.
You can increase min wage all you want, if there aren't enough homes in an area for everyone who works full time in that area to have one, you will still have folks who work full time who don't have one. In fact, increasing min wage too much will exacerbate the problem by making it more expensive to build more (and maintain those that exist). Though at some point, it will fix the problem too, because everyone will move and then there will be plenty of homes for anyone who wants one.
I agree with you 100%! Any additional surplus will be extracted as rents when housing is restricted. I am for passing laws that make it much easier for people to obtain permits to build housing where there is demand. Too much of residential zoning is single-family housing. Texas does a better job at not restricting housing than California, for example. Many towns vote blue, talk the talk, but do not walk the walk.
> AI datacenter spending is massive, but if you add it all up, it doesn't cover half of a years worth of government spending.
I didn't check your math here, but if that's true, AI datacenter spending is a few orders of magnitude larger than I assumed. "massive" doesn't even begin to describe it
>We need to change our politics to redirect taxation and spending to achieve a better society.
Unfortunately, I'm not sure there's much on the pie chart to redirect percentage wise. About 60% goes to non-discretionary programs like Social Security and Medicaid, and 13% is interest expense. While "non-discretionary" programs can potentially be cut, doing so is politically toxic and arguably counter to the goal of a better society.
Of the remaining discretionary portion half is programs like veterans benefits, transportation, education, income security and health (in order of size), and half military.
FY2025 spending in total was 3% over FY2024, with interest expense, social security and medicare having made up most of the increase ($249 billion)[1], and likely will for the foreseeable future[2] in part due to how many baby boomers are entering retirement years.
Assuming you cut military spending in half you'd free up only about 6% of federal spending. Moving the needle more than this requires either cutting programs and benefits, improving efficiency of existing spend (like for healthcare) or raising more revenue via taxes or inflation. All of this is potentially possible, but the path of least resistance is probably inflation.
[1] https://bipartisanpolicy.org/report/deficit-tracker/
[2] https://www.crfb.org/blogs/interest-social-security-and-heal...
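The back-of-envelope arithmetic behind the "about 6%" figure can be sketched like this (a minimal check using the rough budget shares quoted in the comment above, not official figures):

```python
# Rough federal-budget shares from the comment above (not official data).
mandatory = 0.60                            # Social Security, Medicare/Medicaid, etc.
interest = 0.13                             # net interest expense
discretionary = 1 - mandatory - interest    # remaining discretionary share, ~27%

# "Half of the discretionary portion is military":
military = discretionary / 2                # ~13.5% of total federal spending

# Cutting military spending in half frees up:
freed = military / 2                        # ~6.75% of total federal spending
print(f"{freed * 100:.2f}% of total spending freed")
```

This is why even an aggressive military cut only moves the total by single-digit percentage points, as the comment argues.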
I agree with all of what you're saying.
I think the biggest lever is completely overhauling healthcare. The USA is very inefficient, and for subpar outcomes. In practice, the federal government already pays for the neediest of patients - the elderly, the at-risk children, the poor, and veterans. Whereas insurance rakes in profits from the healthiest working age people. Given aging, and the impossibility of growing faster than the GDP forever, we'll have to deal with this sooner or later. Drug spending, often the boogeyman, is less than 7% of the overall healthcare budget.
There is massive waste in our military spending due to the pork-barrel nature of many contracts. That'd be second big bucket I'd reform.
I think you're also right that inflation will ultimately take care of the budget deficit. The trick is to avoid hyperinflation and punitive interest rates that usually come along for the ride.
I would also encourage migration of highly skilled workers to help pay for an aging population of boomers. Let's increase our taxpayer base!
I am for higher rates of taxation on capital gains over $1.5M or so, that'll also help avoid a stock market bubble to some extent. One can close various loopholes while at it.
I am mostly arguing for policy changes to redistribute more equitably. I would make the "charity" status of colleges commensurate with the amount of financial aid given to students and the absolute cost of tuition, for example. I am against student loan forgiveness for various reasons - it's off topic for this thread, but happy to expand if interested.
The older I get, the more I realize that our choices in life come down to two options: benefit me or benefit others. The first one leads to nearly every trouble we have in the world. The second nearly always leads to happiness, whether directly or indirectly. Our bias as humans has always been toward the first, but our evolution is and will continue to slowly bring us toward the second option. Beyond simple reproduction, this realization is our purpose, in my opinion.
> but it's still capital. It should be spent on meeting people's basic human needs, not GPU power.
What you have just described is people wanting investment in common society - you see the return on this investment but ultra-capitalistic individuals don't see any returns on this investment because it doesn't benefit them.
In other words, you just asked for higher taxes on the rich that your elected officials could use for your desired investment. And the rich don't want that which is why they spend on lobbying.
Technological advancement is what has pulled billions of people out of poverty.
Giving handouts to layabouts isn't an ideal allocation of resources if we want to progress as a civilization.
Lots of people lose their housing when they lose employment, and then they're stuck and can't get back into housing. A very large percentage of unhoused people are working jobs; they're not all "layabouts".
We know that just straight up giving money to the poorest of the poor results in positive outcomes.
"A very large percentage"
Exactly how large are we talking here?
I have known quite a few 'unhoused' folk, and not many that had jobs. Those that do tend to find housing pretty quickly (Granted, my part of the country is probably different from your part, but I am interested in stats from any region).
Technological advancements and cultural advancements that spread the benefits more broadly than naturally occurs in an industrialized economy. That is what pulled people out of poverty.
If you want to see what unfettered technological advancement does, you can read stories from the Gilded Age.
The cotton gin dramatically increased human enslavement.
The sewing machine decreased quality of life for seamstresses.
> During the shirtmakers' strike, one of the shirtmakers testified that she worked eleven hours in the shop and four at home, and had never in the best of times made over six dollars a week. Another stated that she worked from 4 o’clock in the morning to 11 at night. These girls had to find their own thread and pay for their own machines out of their wages.
These were children, by the way. Living perpetually at the brink of starvation from the day they were born until the day they died, but working like dogs all the while.
> Technological advancement is what has pulled billions of people out of poverty.
I agree with this. Perhaps that's what is driving the current billionaire class to say "never again!" and making sure that they capture all the value instead of letting any of it slip away and make it into the unwashed undeserving hands of lesser beings.
Chatbots actually can bring a lot of benefit to society at large. As in, they have the raw capability to. (I can't speak to whether it's worth the cost.) But that's not going to improve poverty this time around, because it's magnifying the disparities in wealth distribution and the haves aren't showing any brand new willingness to give anything up in order to even things out.
> Giving handouts to layabouts isn't an ideal allocation of resources if we want to progress as a civilization.
I agree with this too. Neither is giving handouts to billionaires (or the not quite as eye-wateringly wealthy class). However, giving handouts to struggling people who will improve their circumstances is a very good allocation of resources if we want to progress as a civilization. We haven't figured out any foolproof way of ensuring such money doesn't fall into the hands of layabouts or billionaires, but that's not an adequate reason to not do it at all. Perfect is the enemy of the good.
Some of those "layabouts" physically cannot do anything with it other than spending it on drugs, and that's an example of a set of people who we should endeavor to not give handouts to. (At least, not ones that can be easily exchanged for drugs.) Some of those billionaires similarly have no mental ability of ever using that money in a way that benefits anyone. (Including themselves; they're past the point that the numbers in their bank accounts have any effect on their lives.) That hasn't seemed to stop us from allowing things to continue in a way that funnels massive quantities of money to them.
It is a choice. If people en masse were really and truly bothered by this, we have more than enough mechanisms to change things. Those mechanisms are being rapidly dismantled, but we are nowhere near the point where figurative pitchforks and torches are ineffective.
What if some of the homeless people are children or people who could lead normal lives but found themselves in dire circumstances?
Some of us believe that keeping children out of poverty may be an investment in the human capital of a country.
In the USA, cowboys were homeless guys. You know that, right? They had no home and slept outside. Many were pretty big layabouts. Yet they are a pretty big part of our foundation myth, and we don't say 'man, they just should have died'.
Can I go be a cowboy? Can I just go sleep outside? maybe work a few minimal paying cattle run jobs a year? No? If society won't allow me to just exist outside, then society has an obligation to make sure I have a place to lay my head.
If you are not willing to fight for your rights you will lose them.
I don't think it is a coincidence that the areas with the wealthiest people/corporations are the same areas with the most extreme poverty. The details are, of course, complicated, but zooming way way out, the rich literally drain wealth from those around them.
I threw in the towel in April.
It's clear we are Wile E. Coyote running in the air already past the cliff and we haven't fallen yet.
Any dream of owning a home, having a retirement, even a career will be gone in a couple of years, when it's clear I'm over the hump. I'm trying to squeeze as much as I can before that happens and squirrel it away, so at least I can have a van down by a river.
What does squirreling it away mean though? A pile of cash instead of investments? The reality is that you don’t get to throw in the towel.
I don't know what to do with this take.
We need an order of magnitude more clean productivity in the world so that everyone can live a life that is at least as good as what fairly normal people in the west currently enjoy.
Anyone who thinks this can be fixed with current Musk money is simply not getting it: if we liquidated all of that, it would buy a dinner for everyone in the world (and then, of course, that would be it, because the companies he owns would stop functioning).
We are simply, obviously, not good enough at producing stuff in a sustainable way (or: at all), and we owe it to every human being alive to take every chance to make this happen QUICKLY, because we are paying in extremely shitty human years, and they are not ours.
Bring on the AI, and let's make it work for everyone – and, believe me, if this is not to be to the benefit of roughly everyone, I am ready to fuck shit up. But if the past is any indication, we are okay at improving the lives of everyone when productivity increases. I don't know why this time would be any different.
If the way to make good lives for all 8 billion of us must lead to more Musks because, apparently, we are too dumb to do collectivization in any sensible way, I really don't care.
> I don't know why this time would be any different.
This time there is the potential to replace human workers. In the past it only made them more productive.
Can you imagine if the US wasn't so unbelievably far ahead of everyone else?
I am sure the goat herders in rural regions of Pakistan will think themselves lucky when they see the terrible sight of shareholder value being wantonly destroyed by speculative investments that enhance the long-term capital base of the US economy. What an uncivilized society.
Agree the capital could be put to better use. However, I believe the alternative is that this capital wouldn't otherwise have been put to work in ways that allow it to leak out to the populace at large. For some of the big investors in AI infrastructure, this is cash that was previously, and likely would otherwise have been, put toward stock buybacks. For many of the big investors pumping cash in, these are funds deploying the wealth of the mega-rich that, again, would otherwise have been deployed in ways that don't leak down to the many who are now receiving it via this AI infrastructure boom (datacenter materials, land acquisition, energy infrastructure, building trades, etc, etc).
It could have, though. Higher taxes on the rich, spend it on social programs.
Why is this so horrible. Put more resources in the hands of the average person. They will get pumped right back into the economy. If people have money to spend, they can buy more things, including goods and services from gigantic tax-dodging mega-corporations.
Gigantic mega-corporations do enjoy increased growth and higher sales, don't they? Or am I mistaken?
The only person who has come close to balancing the federal budget was Clinton. But Republicans still try to position themselves as the party of fiscal responsibility.
If the voters can't even figure out why the debt keeps going up, I think you are fighting a losing battle.
> likely would have otherwise been put toward stock buybacks
Stock buybacks from who? When stock gets bought the money doesn't disappear into thin air; the same cash is now in someone else's hands. Those people would then want to invest it in something and then we're back to square one.
You assert that if not for AI, wealth wouldn't have been spent on materials, land, trades, etc. But I don't think you have any reason to think this. Money is just an abstraction. People would have necessarily done something with their land, labor, and skills. It isn't like there isn't unmet demand for things like houses or train tunnels or new-fangled types of aircraft or countless other things. Instead it's being spent on GPUs.
Totally agree that the money doesn’t vanish. My point isn’t “buybacks literally destroy capital,” it’s about how that capital tends to get redeployed and by whom.
Buybacks concentrate cash in the hands of existing shareholders, which are already disproportionately wealthy and already heavily allocated to financial assets. A big chunk of that cash just gets recycled into more financial claims (index funds, private equity, secondary shares, etc), not into large, lumpy, real world capex that employs a bunch of electricians, heavy equipment operators, lineworkers, land surveyors, etc. AI infra does that. Even if the ultimate economic owner is the same class of people, the path the money takes is different: it has to go through chip fabs, power projects, network buildouts, construction crews, land acquisition, permitting, and so on. That’s the “leakage” I was pointing at.
To be more precise: I’m not claiming “no one would ever build anything else”, I’m saying given the current incentive structure, the realistic counterfactual for a lot of this megacap tech cash is more financialization (buybacks, M&A, sitting on balance sheets) rather than “let’s go fund housing, transit tunnels, or new aircraft.”
I really don't think any of that is true; it's just popular rhetoric.
For example: "Buybacks concentrate cash in the hands of existing shareholders" is obviously false: the shareholders (via the company) did have cash and now they don't. The cash is distributed to the market. The quoted statement is precisely backwards.
> A big chunk of that cash just gets recycled
That doesn't mean anything.
> more financial claims (index funds, private equity, secondary shares, etc)
And do they sit on it? No, of course not. They invest it in things. Real actual things.
> buybacks
Already discussed
> M&A
If they use cash to pay for a merger, then the former owners now have cash that they will reinvest.
> balance sheets
Money on a balance sheet is actually money sitting in J.P. Morgan or whoever. Via fractional reserve lending, J.P. Morgan lends that money to businesses and home owners and real actual houses (or whatever) get built with it.
The counterfactual for AI spending really is other real actual hard spending.
when the US sells out Europe to Russia, do you think the Russians will stop? That global war might be with us within a decade.
As a fellow elder millennial I agree with your sentiment.
But I don't see the mechanics of how it would work. Rewind to October 2022. How, exactly, does the money* invested in AI since that time get redirected towards whatever issues you find more pressing?
*I have some doubts about the headline numbers
Yes, this capital allocation is a once-in-a-lifetime opportunity to create AGI that will solve diseases and poverty.
We have 8.3 billion examples of general intelligence alive on the planet right now.
Surely an artificial one in a data center, costing trillions and beholden to shareholders, will solve all society's issues!
I suggest you read Amodei's post, "Machines of Loving Grace". It will (probably) change your worldview.
This is literally the view of Demis Hassabis, Sergey Brin, Dario Amodei and others. Are you seriously implying they are trolling us?
As long as the dollar remains the reserve currency of the world and the US retains its hegemony, a lot of the finances will work themselves out. The only way the US empire crumbles is by losing a major war or through extreme civil unrest, and that threat is astronomically low. The US is orders of magnitude stronger than the Roman Empire; I don't think people realize the scale or the control.
> The US is orders of magnitude stronger than the Roman Empire
This would be trivially true even if the US was currently in its death throes (which there is plenty of evidence that the US-as-empire might be, even if the US-as-polity is not), as the Roman Empire fell quite a while ago.
Gradually, then suddenly. Best not to underestimate the extent to which the USA has lost trust in the rest of the world, and how actively people and organisations are working to derisk by disengaging. Of course that will neither be easy nor particularly fast, but I'm not certain it can be stopped at this point.
Nobody really knows the future. What were originally consumer graphics expansion cards turned out useful in delivering more compute than traditional CPUs.
Now that compute is being used for transformers and machine learning, but we really don't know what it'll be used for in 10 years.
It might all be for naught, or maybe transformers will become more useful, or maybe something else.
'no way' is very absolute. Unlikely, perhaps.
> What were originally consumer graphics expansion cards turned out useful in delivering more compute than traditional CPUs.
Graphics cards were relatively inexpensive. When one got old, you tossed it out and moved on to the new hotness.
Here when you have spent $1 trillion on AI graphics cards and a new hotness comes around that renders your current hardware obsolete, what do you do?
Either people are failing to do simple math here or are expecting, nay hoping, that trillions of $$$ in value can be extracted out of the current hardware, before the new hotness comes along.
This would be a bad bet even if the likes of OpenAI were actually making money today. It is an exceptionally bad bet when they are losing money on everything they sell, by a lot. And the state of competition is such that they cannot raise prices. Nobody has a real moat. AI has become a commodity. And competition is only getting stronger with each passing day.
You can likely still play the hottest games with the best graphics on an H200 in 5 years.
One thing we saw with the dot-com bust is how certain individuals were able to cash in on the failures, e.g., low cost hardware, domain names, etc. (NB. prices may exceed $2)
Perhaps people are already thinking about how they can cash in on the floor space and HVAC systems that will be left in the wake of failed "AI" hype.
I'm looking forward to buying my own slightly used 5 million square ft data centre in Texas for $1
"Loft for rent, 50,000 sq ft in a new datacenter, roof access, superb wiring and air conditioning, direct access to fiber backbone."
In TX? In Russian blogosphere it is a standard staple that Trump is rushing Ukrainian peace deal to be able to move on to the set of mega-projects with Russia - oil/gas in Arctic and data centers in Russian North-West where electricity and cooling is plentiful and cheap.
actually it is more of the opposition's narrative, probably a way to explain such a pro-Russian position of Trump.
I think any such data center project is doomed to ultimately fail, and any serious investment will be for me a sign of the bubble peak exuberance and irrationality.
From the article:
""It's my view that there's no way you're going to get a return on that, because $8 trillion of capex means you need roughly $800 billion of profit just to pay for the interest," he said."
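Krishna's arithmetic can be reproduced in a few lines. Note the ~10% annual rate is an assumption implied by his numbers, not something he states explicitly:

```python
# Reproducing the quoted claim: $8T of capex needs roughly $800B/year
# of profit just to service it, which implies a ~10% cost of capital.
capex = 8e12              # $8 trillion of AI capex (from the quote)
cost_of_capital = 0.10    # assumed annual rate implied by the $800B figure

required_profit = capex * cost_of_capital
print(f"${required_profit / 1e9:.0f}B of profit needed per year")
```

For comparison, $800B is roughly the annual net income of the entire S&P 500's top handful of companies combined, which gives a sense of the scale being claimed.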
Right, THEY can't, but cloud providers potentially can. And there are probably other uses for everything not GPU/TPU for the Google's of the world. They are out way less than IBM which cannot monetize the space or build data centers efficiently like AWS and Google.
The dotcom bust killed companies, not the Internet. AI will be no different. Most players won’t make it, but the tech will endure and expand.
Or endure and contract.
The key difference between AI and the initial growth of the web is that the more use cases to which people applied the web, the more people wanted of it. AI is the opposite - people love LLM-based chatbots. But it is being pushed into many other use cases where it just doesn't work as well. Or works well, but people don't want AI-generated deliverables. Or leaders are trying to push non-deterministic products into deterministic processes. Or tech folks are jumping through massive hoops to get the results they want because without doing so, it just doesn't work.
Basically, if a product manager kept pushing features the way AI is being pushed -- without PMF, without profit -- that PM would be fired.
This probably all sounds anti-AI, but it is not. I believe AI has a place in our industry. But it needs to be applied correctly, where it does well. Those use cases will not be universal, so I repeat my initial prediction. It will endure and contract.
The difference is that the Internet was actually useful technology, whereas AI is not (so far at least).
I think you're exaggerating a little, but aren't entirely wrong. The Internet has completely changed daily life for most of humanity. AI can mean a lot of things, but a lot of it is blown way out of proportion. I find LLMs useful to help me rephrase a sentence or explain some kind of topic, but it pales in comparison to email and web browsers, YouTube, and things like blogs.
Holy cow. I have 96GB of DDR5 I bought at start of year for a machine which never materialized. Might have to flip it.
Why do you believe it will fail? Because some companies will not be profitable?
It wasn't an 'it' it was a 'some'. Some of these companies that are investing massively in data centers will fail.
Right now essentially none have 'failed' in the sense of 'bankrupt with no recovery' (Chapter 7). They haven't run out of runway yet, and the equity markets are still so eager, even a bad proposition that includes the word 'AI!' is likely to be able to cut some sort of deal for more funds.
But that won't last. Some companies will fail. Probably sufficient failures that the companies that are successful won't be able to meaningfully counteract the bursts of sudden supply of AI related gear.
That's all the comment you are replying to is implying.
If the entire world economy starts to depend on those companies, they would pay off with "startup level" ROI. And by "startup level" I mean the multiples bullish people say startup funds can return (10x to 100x), not a bootstrapped unicorn.
the constant cost of people and power won't make it all that much cheaper than current prices to put a server into someone's else rack.
>cash in on the floor space and HVAC systems that will be left in the wake of failed "AI" hype
I'd worry surveillance companies might.
If it is so obvious that it won’t pay off, why is every company investing in it? What alpha do you have that they don’t?
That's a good question. During the .com boom everybody was investing in 'the internet' or at least in 'the web'. And lots of those companies went bust, quite a few spectacularly so. Since then everything that was promised and a lot more has been realized. Even so, a lot of those initial schemes were harebrained at best at the time and there is a fair chance that we will look in a similar way at the current AI offerings in 30 years time.
Short term is always disappointing, long term usually overperforms. Think back to the first person making a working transistor and what came of that.
I still don’t get it. What at a personal level is making Sam Altman make a suboptimal choice for himself if it is so obvious it won’t work out for him?
On a personal level it will work out for him just fine. All he has to do is siphon off a fraction of that money to himself and/or an entity that he controls.
He's like Elon Musk in that respect: always doubling the bet on the next round, it is a real life Martingale these guys are playing with society on the hook for the downside.
> And lots of those companies went bust, quite a few spectacularly so.
pets.com "selling dogfood on the internet" is the major example of the web boom then bust. (1)
But today, I can get dog food, cat food, other pet supplies with my weekly "online order" grocery delivery. Or I can get them from the big river megaretailer. I have a weekly delivery of coffee beans from a niche online supplier, and it usually comes with flyers for products like a beer or wine subscription or artisanal high-meat cat or dog foods.
So the idea of "selling dogfood on the internet" is now pervasive not extinct, the inflated expectation that went bust was that this niche was a billion-dollar idea and not a commodity where brand, efficiencies of scale and execution matter more.
I find it disturbing how long people wait to accept basic truths, as if they need permission to think or believe a particular outcome will occur.
It was quite obvious that AI was hype from the get-go. An expensive solution looking for a problem.
The cost of hardware. The impact on hardware and supply chains. The impact to electricity prices and the need to scale up grid and generation capacity. The overall cost to society and impact on the economy. And that's without considering the basic philosophical questions "what is cognition?" and "do we understand the preconditions for it?"
All I know is that consumers and the general voting population lose no matter the outcome. The oligarchs, banking, government, and tech lords will be protected. We will pay the price whether it succeeds or fails.
My personal experience of AI has been poor. Hallucinations, huge inconsistencies in results.
If your day job exists within an arbitrary non-productive linguistic domain, great tool. Image and video generation? Meh. Statistical and data-set analysis. Average.
Just like the dot-com bust that followed companies going online, there is hype, but there is also real value.
Even slow non-tech legacy industry companies are deploying chatbots across every department - HR, operations, IT, customer support. Leadership everywhere is already planning to cut 50-90% of staff from most departments over the next decade. It matters, because these initiatives are receiving internal funding which will precipitate out to AI companies to deploy this tech and to scale it.
The "legacy" industry companies are not immune from hype. Some of those AI initiatives will provide some value, but most of them seem like complete flops. Trying to deploy a solution without an idea of what the problem or product is yet.
At some point, I wonder if any of the big guys have considered becoming grid operators. The vision Google had for community fiber (Google Fiber, which mostly fizzled out due to regulatory hurdles) could be somewhat paralleled with the idea of operating a regional electrical grid.
Don’t worry. The same servers will be used for other computing purposes. And maybe that will be profitable. Maybe it will be beneficial to others. But this cycle of investment and loss is a version of redistribution of wealth. Some benefit.
The banks and loaners always benefit.
I can't imagine everybody suddenly abandoning AI like a broken toy and taking all the special-purpose AI chips offline. AI serves millions of people every day. It's here to stay; even if it doesn't get any better than it is, it already brings immense value to its users. It will keep being worth something.
Mind you, IBM makes $7B+ from keeping old-school enterprises hooked on 30-plus-year-old tech like z/OS and COBOL and their own super-outdated stack. Their AI division is frankly embarrassing; of course they would say that. IBM is one of the most conservative, anti-progress leeches in the entire tech industry. I am glad they are missing out big time on the AI gold rush. To me, if anything, this is a green signal.
The spending will be more than paid off, since the taxpayer is the lender of last resort. There are too many funny names among the investors/creditors, a lot of mountains in Germany and similar, ya know.
There is something to be said about what the ROI is for normal (i.e. non-AI/tech) companies using AI. AI can help automate things; robots have been replacing manufacturing jobs for decades, but there is an ROI on that which I think is easier to see and count: fewer humans in the factory, etc. There seems to be a lot of exaggerated things being said these days with AI, and the AI companies have only begun to raise prices; those won't go down.
The AI bubble will burst when normal companies start to not realize their revenue/profit goals and have to answer investor relations calls about that.
The second buyer will make truckloads of money. Remember the data center and fiber network liquidation of 2001+: smart investors collected the overcapacity, and after a couple of years the money printer worked. This time it will be the same; only the single-purpose hardware (LLM-specific GPUs) will probably end up in a landfill.
Consumers will eat it all. AI is very good at engaging content, and getting better by the day: it won't be the AGI we wanted, but maybe the AGI we've earned.
Ctrl-F this thread for terms like: cost, margin
Is transistor density cost still the limit?
Cost model, Pricing model
What about more recyclable chips made out of carbon?
What else would solve for e.g. energy efficiency, thermal inefficiency, depreciation, and ewaste costs?
The investors in these companies and all this infrastructure are not so much concerned with whether any specific company pays off with profits, necessarily.
They are gambling instead that these investments pay off in a different way: by shattering high labour costs for intellectual labour and de-skilling our profession (and others like it) -- "proletarianising" in the 19th-century sense.
Thereby increasing profits across the whole sector and breaking the bargaining power (and outsized political power, as well) of upper middle class technology workers.
Put another way, this is an economy-wide investment, similar to early-20th-century mass factory industrialization. It's not expected that today's big investments are tomorrow's winners, but nobody wants to be left behind in the transformation, and a lot of political and economic power is highly interested in the idea of automating away the remnants of the Alvin Toffler "Information Economy" fantasy.
A decade ago, IBM was spending enormous amounts of money to tell me stuff like "cognitive finance is here" in big screen-hogging ads on nytimes.com. They were advertising Watson, vaporware that no one talks about today. Are they bitter that someone else has actually made the AI hype take off?