Comment by tyingq 17 hours ago
If I had the cash, I could sell dollar bills for 50 cents and do a $7b run rate :)
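For the arithmetic behind the joke: hitting a $7B run rate at those unit economics means selling 14 billion bills a year and losing $7B doing it. A throwaway Python sketch (the numbers are just the joke's premise):

    # Unit economics of selling dollar bills at 50 cents each.
    price_per_bill = 0.50       # revenue per bill sold
    cost_per_bill = 1.00        # face value handed over
    target_run_rate = 7e9       # $7B annual revenue

    bills_per_year = target_run_rate / price_per_bill
    annual_loss = bills_per_year * (cost_per_bill - price_per_bill)

    print(f"Bills sold per year: {bills_per_year:,.0f}")   # 14,000,000,000
    print(f"Annual loss: ${annual_loss:,.0f}")              # $7,000,000,000

Revenue of $7B, loss of $7B: run rate alone says nothing about unit economics.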
Those are estimates. Notice they didn’t assume 0% or a million %. They chose numbers that are a plausible approximation of the true unknown values, also known as an estimate.
This is a pretty silly thing to say. Investment banks suffer zero reputational damage when their analysts get this sort of thing wrong. They don’t even have to care about accuracy, because there will never be a way to check this number, even if anyone wanted to go back and grade their assumptions, which never happens anyway.
Fair enough. I was looking for a shortcut way of saying "I find this guess credible", see also: https://news.ycombinator.com/item?id=46126597
Calling this unmotivated assumption an "estimate" is just plain lying, though, regardless of the faith you have in the source of the assumption.
They had pretty drastic price cuts on Opus 4.5. It's possible they're now selling inference at a loss to gain market share, or at least that their margins are much lower. Dario claims that all their previous models were profitable (even after accounting for research costs), but it's unclear whether there's a path to keeping their previous margins while growing revenue as fast as or faster than their costs (each model has been substantially more expensive than its predecessor).
I've been wondering about this generally... Are the per-request API prices I'm paying set at a profit or a loss? My billing would suggest they are not making a profit on the monthly fees (unless there are a bunch of enterprise accounts in group deals going mostly unused; I think I'm on one of those).
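Nobody outside Anthropic knows the real serving cost, but the back-of-envelope is simple: compare the list price per million tokens with an assumed cost to serve a million tokens. A quick Python sketch where both numbers are made-up placeholders, not real figures:

    # Per-token margin check. Both numbers are hypothetical placeholders;
    # actual prices vary by model and serving costs are not public.
    list_price_per_mtok = 15.00     # illustrative $ per 1M output tokens
    assumed_cost_per_mtok = 6.00    # HYPOTHETICAL serving cost, $ per 1M tokens

    margin = (list_price_per_mtok - assumed_cost_per_mtok) / list_price_per_mtok
    print(f"Implied gross margin on inference: {margin:.0%}")   # 60%

At placeholder numbers like these, metered API usage is profitable per token; flat monthly plans are where heavy users can flip the sign.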
The leaders of Anthropic, OpenAI and DeepMind all hope to create models that are much more powerful than the ones they have now.
A large portion of the many tens of billions of dollars they have at their disposal (OpenAI alone raised 40 billion in April) is probably going toward this ambition—basically a huge science experiment. For example, when an AI lab offers an individual researcher a $250 million pay package, it can only be because they hope that the researcher can help them with something very ambitious: there's no need to pay that much for a single employee to help them reduce the costs of serving the paying customers they have now.
The point is that you can be right that Anthropic is making money on the marginal new user of Claude, but Anthropic's investors might still get soaked if the huge science experiment does not bear fruit.
> their investors might still take a bath if the very-ambitious aspect of their operations do not bear fruit
Not really. If the technology stalls where it is, AI still captures a sizable chunk of the dollars previously paid to coders, transcribers, translators and the like.
Maybe spell it out for those of us who are not too clever: what is the bet? Why is it different? It would be pretty great to have a clear articulation of this!
The bet, (I would have thought) obviously, is that AI will be a huge part of humanity’s future, and that Anthropic will be able to get a big piece of that pie.
This is (I would have thought) obviously different from selling dollars for $0.50, which is a plan with zero probability of profit.
Edit: perhaps the question was meant to be about how Bun fits in? But the context of this sub-thread has veered to achieving $7 billion in revenue.
The question is/was about how they intend to obtain that big piece of pie, what that looks like.
Somehow? I've been keeping an eye on my inbox, waiting to get a karma vesting plan from HN, for ages. What's this talk of somehow?
You have Anthropic confused with something like Lovable.
Anthropic's unit margins are fine; many Lovable-like businesses are not.
If that was genuinely happening here - Anthropic were selling inference for less than the power and data center costs needed to serve those tokens - it would indeed be a very bad sign for their health.
I don't think they're doing that.
Estimates I've seen have their inference margin at ~60% - there's one from Morgan Stanley in this article, for example: https://www.businessinsider.com/amazon-anthropic-billions-cl...
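Taking that ~60% figure at face value, here's what it would imply for the flat monthly plans discussed upthread. A hedged sketch where the margin and the $20 fee are both just assumptions, not any particular plan:

    # If inference margin is ~60%, serving cost is ~40% of list price,
    # so a flat plan breaks even when a user's list-priced consumption
    # reaches fee / cost_fraction. All inputs are assumptions.
    inference_margin = 0.60      # the Morgan Stanley-style estimate above
    cost_fraction = 1 - inference_margin
    monthly_fee = 20.00          # illustrative subscription price

    breakeven_usage = monthly_fee / cost_fraction
    print(f"Plan breaks even at ${breakeven_usage:.0f}/month of list-priced usage")  # $50

Which would be consistent with the observation upthread: heavy subscription users can look unprofitable even while the metered API carries a healthy margin.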