ojosilva a day ago

61 replies

They did not. Anthropic is protecting its huge asset: the Claude Code value chain, which has proven itself a winner among devs (me included, after trying everything under the sun in 2025). If anything, Anthropic's mistake is that they are incapable of monetizing their great models in the chat market, where ChatGPT reigns: i.e. Anthropic did not invest in image generation, Google did, and Gemini now has a shot at that market.

Apparently nobody gets the Anthropic move: they are only good at coding, and that's a very thin layer. Opencode and other tools are in a position to collect inputs and outputs that could later be used to train their own models. Not necessarily being done now, but they could; Cursor did it. Opencode also makes it all easily swappable: just eval something by popping in another API key and see whether Codex or GLM can replicate the CC solution. Oh, it does! So let's cancel Claude and save big bucks!

Even though CC the agent supports external providers (via the ANTHROPIC_BASE_URL env var), they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc.). The move totally makes sense, like it or not.
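
For reference, that env-var swap looks roughly like this (the URL is a made-up placeholder; the endpoint has to speak an Anthropic-compatible API):

    export ANTHROPIC_BASE_URL="https://glm-proxy.example.com"   # hypothetical compatible endpoint
    claude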

bloppe 21 hours ago

> Also Opencode makes it all easily swappable

It's all easily swappable without OpenCode. Just symlink CLAUDE.md -> AGENTS.md and run `codex` instead of `claude`.
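
Roughly, assuming AGENTS.md is the copy you actually maintain:

    mv CLAUDE.md AGENTS.md       # keep a single source of truth
    ln -s AGENTS.md CLAUDE.md    # Claude Code still finds "its" file
    codex                        # run Codex instead of `claude`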

> they are working hard on making it impossible for other models to support their every increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc).

Every feature you listed has an open-source MCP server implementation, which means every agent that supports MCP already has all those features. MCP is so epic because it has already nailed the commodification coffin firmly shut. Besides, Anthropic has way less funding than OAI or Google. They wouldn't win the moat-building race even if there were one.

That said, the conventional wisdom is that lowering switching costs benefits the underdogs, because the incumbents have more market share to lose.

  • pacoWebConsult an hour ago

    Models each have their own, often competing, quirks in how they use AGENTS.md and CLAUDE.md. It's very likely a CLAUDE.md written for use with Claude Code uses prompting techniques that result in worse output if taken directly and used with Codex. For example, Anthropic recommends putting must-follow info in statements like "MUST run tests after writing code" and other all-caps directives, whereas people have found that the same language with GPT-5.2 results in weaker instruction following and more timid responses than if the AGENTS.md were written without them.
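
    To illustrate, with a made-up rule (purely a sketch of the phrasing difference, not a recommendation):

        CLAUDE.md:  "You MUST run the tests after every change."
        AGENTS.md:  "Run the tests after making changes."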

  • msephton 3 hours ago

    You don't even need to symlink. Just put @AGENTS.md in your CLAUDE.md

  • submeta 8 hours ago

    > symlink CLAUDE.md -> AGENTS.md and run `codex` instead of `claude`.

    This is simple and beautiful. Thank you for sharing it :)

nikcub 21 hours ago

> ie. Anthropic did not invest in image generation, Google did and Gemini has a shot at the market now.

They're after the enterprise market - where office / workspace + app + directory integration, security, safety, compliance etc. are more important. 80% of their revenue is from enterprise - less churn, much higher revenue per W/token, better margins, better $/user.

Microsoft adopting the Anthropic models into copilot and Azure - despite being a large and early OpenAI investor - is a much bigger win than yet another image model used to make memes for users who balk at spending $20 per month.

Same with the office connector - which is only available to enterprises[0] (further speaking to where their focus is). There hasn't yet been a "claude code" moment for office productivity, but Anthropic are the closest to it.

[0] This may be a mistake, as Claude Code's adoption happened from the ground up

  • leokennis 7 hours ago

    > They're after the enterprise market

    I am curious how big of a chance they have. I could imagine many enterprises that are already (almost by default) Microsoft customers (Windows, Office, Entra etc.) will just default to Copilot (and maybe Azure) to keep everything neatly integrated.

    So an enterprise that already uses everything Microsoft would need to be very dedicated to go through the trouble of using Claude as their AI just because it is slightly better for coding.

    I have a feeling I am missing something here though, I would be happy for anyone to educate me!

    • Rastonbury 6 hours ago

      I think at the current price point, Office Copilot (which I don't use; I've only read reviews) is basically an email writer/summarizer/meeting-notes tool.

      It can't hold a candle to Opus 4.5, which can now create and modify financial models from PDFs, augmented with web search and the Excel skill (GPT-5.2 can do this too). That said, the market IS smaller.

  • ozim 17 hours ago

    People underestimate enterprise market.

    Usually you can see it when someone nags about “call us” pricing that is targeted at enterprise. People that nag about it are most likely not the customers someone wants to cater to.

    • projektfu 17 hours ago

      When I was a software developer, I mostly griped about this when I wanted to experiment to see if I would even ask my larger enterprise if they would be interested in looking into it. I always felt like companies were killing a useful marketing stream from the enterprise's own employees. I think Tailscale has really nailed it, though. They give away the store to casual users, but make it so that a business will want to talk to sales to get all the features they need with better pricing per user. Small businesses can survive quite well on the free plan.

    • Dylan16807 12 hours ago

      I'm sure everyone "wants to" land a many million dollar deal with a big company that has mild demands, but that doesn't mean those naggers are bad customers. Bad customers have much more annoying and unreasonable demands than a pricing sheet.

      • ozim 4 hours ago

        I don’t think anyone lands contracts with “mild demands”.

        Most of the time you want to cut off 'non-customers' as soon as possible, and not leave 'big fish' without a direct contact person who can explain things. People just clicking around on their own will make assumptions that then need to be addressed; a direct contact means nobody wastes time on that.

lvl155 16 hours ago

This is really not the point. Anthropic isn’t cutting off third-party access. You can use their models via the API all you want. Why are people conflating this issue? Anthropic doesn’t owe it to anyone to offer their “unlimited” Pro tiers outside of Claude Code. It’s not hard to build your own Opencode and use API keys. A CLI interface by itself is not a moat.

  • noosphr 15 hours ago

    People should take this as a lesson on how much we are being subsidized right now.

    Claude Code runs into usage limits for everyone at every tier. The API is too expensive to use and it's _still_ subsidized.

    I keep repeating myself but no one seems to listen: quadratic attention means LLMs will always cost astronomically more than you expect after running the pilot project.

    Going from 10k loc to 100k loc isn't a 10x increase, it's a 99x increase. Going from 10k loc to 1m loc isn't a 100x increase, it's a 9999x increase. This is fundamental to how transformers work and is the _best case scenario_. In practice things are worse.
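
    Back of the envelope, treating cost as proportional to (context length)^2 and LoC as a rough stand-in for tokens (a big simplification, but it shows the shape):

        (100k / 10k)^2 =   100x the baseline cost  ->  a   99x increase
        (1M   / 10k)^2 = 10000x the baseline cost  ->  a 9999x increase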

    • the_gipsy 15 hours ago

      I don't see LLMs ingesting the LoCs. I see CC finding and grepping and reading file contents piecewise, precisely because it is too expensive to ingest a whole project.

      So what you say is not true: cost does not directly correlate with LoC.

    • anonym29 14 hours ago

      >Claude code runs into use limitations for everyone at every tier

      What do you mean by this? I know plenty of people who never hit the upgraded Opus 4.5 limits anymore even on the $100 plan, even those who used to hit the limits on the $200 plan w/ Opus 4 and Opus 4.1.

      >The API is too expensive to use and it's _still_ subsidized.

      What do you mean by saying the API is subsidized? Anthropic is a private company that isn't required to (and doesn't) report detailed public financial statements. The company operating at a loss doesn't mean all inference is operating at a loss; it means the company is spending an enormous amount of money on R&D. The fact that the net loss is shrinking over time tells us that inference is producing a growing net profit.

      In this business, there is an enormous up-front cost to train a model. That model then goes on to generate initially large, but gradually diminishing, revenue until it is deprecated. So at any given snapshot in time, while large ongoing R&D expenditure on the next model likely keeps the company's overall net profit negative, it's entirely possible that several, if not many or even most, of the previously trained models have fully recouped their training costs in inference revenue.

      It's fairly obvious that the monthly subscriptions are subsidized to gain market share, the same way Uber rides were early on, but what indication do you have that the PAYG API is being subsidized? How would total losses have shrunk from $5.6B in 2024 to just $3B in 2025 while ARR grew from ~$1B to ~$7B over the same period (one where usage of the platform dramatically expanded) if PAYG API inference wasn't running at a net profit for the company?

      >quadratic attention means LLMs will always cost astronomically more than you expect after running the pilot project

      This is only true as long as O(n²) quadratic attention remains the prevailing paradigm. As Qwen3-Next and Nemotron 3 Nano have shown with hybrid linear attention plus sparse quadratic layers and a hybrid Mamba SSM, not all modern, performant LLMs need to rely on strictly O(n²) quadratic attention. Sure, these aren't frontier models competitive with Opus 4.5 or Gemini 3 Pro or GPT 5.2 xhigh, but they aren't experimental tiny toy models like RWKV or Falcon Mamba that serve as little more than PoCs for alternative architectures, either. Qwen3-Next and Nemotron 3 Nano are solid players in their respective local weight classes.

      • cmrdporcupine 11 hours ago

        Nemotron 3 is amazing. 60 tokens/s on my 128GB Nvidia GB10, and it actually emits some pretty reasonable "smart" content for its size.

    • DSingularity 8 hours ago

      Good architecture (e.g. separation of concerns) means you won’t need to expose 1M LoC to the LLM all at once.

jrsj a day ago

It might make sense from Anthropic's perspective, but as a user of these tools I think it would be a huge mistake to build your workflow around Claude Code when they are pushing vendor lock-in this aggressively.

Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres

  • Terretta a day ago

    As a user of Claude Code via API (the expensive way), Anthropic's "huge mistake" is capping monthly spend (billed in advance and pay-as-you-go, some $500 - $1500 at a time, by credit card) at just $5,000 a month.

    It's a supposedly professional tool with a value proposition that requires being in your work flow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?

    An error message says contact support. They then point you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you are forbidden to give them more than Max-like $200/dev/month for the usage-based API that's "so expensive".

    They are literally saying "please don't give us any more money this month, thanks".

    • johnpaulkiser 20 hours ago

      This sounds like a stop loss? Are they losing money per token even through the API?

      • Terretta 15 hours ago

        Sure does.

        I imagine a combination of stop loss and market share. If larger shops use up compute, you can't capture as many customers by headcount.

        // There was a figure around o3, an astonishing model punching far above the weights (ahem) of models that came after, that suggested the thinkiest mode cost on the order of $3500 per deep research run. Perhaps OpenAI can afford that, while Anthropic can't.

      • bodge5000 4 hours ago

        That leads to the obvious question: is the API next on the chopping block? Or would they just increase the API pricing to a point where A) they are making a profit off it and B) nobody would use the API just for a different client?

        • theshrike79 3 hours ago

          I'm pretty sure everyone is pricing their APIs to break even, maybe at a profit if people use caching properly (like GPT-5 can, if you mark the prompts properly).

      • notahacker 20 hours ago

        Sounds plausible they're not really making any. Arbitrary and inflexible pricing policies aren't unusual, but it sounds easy enough for a new rapidly-growing company to let the account managers decide which companies they might have a chance of upselling 150 seat enterprise licenses to and just bill overage for everyone else...

  • ojosilva a day ago

    Their target is the Enterprise anyway. So they are apparently willing to enrage their non-CC user base over vendor lock-in.

    But this is not the equivalent of Oracle over Postgres, as those are different technology stacks that each implement an independent relational database. Here we're talking about Opencode, which depends on Claude models to work "as a better Claude" (according to the enraged users in the webs). Of course, one can still use OC with a bazillion other models, but Anthropic is saying that if you want the Claude Code experience, you gotta use the CC agent, period.

    Now put yourself in the Anthropic support person's shoes, and suppose you have to answer an issue from a Claude Max user who is mad that OC is throwing errors when calling a tool during a vibe session, probably because the multi-million dollar Sonnet model is telling OC to do something it can't, because it's not the Claude agent. Claude models are fine-tuned for their agent! If the support person replies "OC is an unsupported agent for Claude Code Max" you get an enraged customer anyway, so you might as well cut the whole thing off at the root.

  • adw 18 hours ago

    Switching tools is _very easy_.

  • solumunus a day ago

    I’ve done that and unless I’m missing something it seems like it would be trivial for me to switch to an alternative.

    • jrsj a day ago

      If you’ve only got a CLAUDE.md and sub-agent definitions in markdown, it is pretty easy to do at the moment, although more of their feature set is moving in a direction that doesn’t have 1:1 equivalents in other tools.

      The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.

zitterbewegung 21 hours ago

I'd rather have a product that is only good at one single thing than mid at everything else, especially when the developer experience is much more consistent for me than using Gemini or ChatGPT, to the point that I only keep ChatGPT for productivity reasons and for sometimes making better prompts for Claude (when I don't use Claude to make a better prompt). Since Anthropic is discounting token usage for Claude Code, they should have made that more explicit, and the same for the API key (hindsight is 20/20): either block third-party apps from the start, or have you make another API key that carries no discount, though even that could have pissed off developers.

  • ndespres 21 hours ago

    You’re asking two different LLMs to help you talk more better to another LLM?

    • djvdq 19 hours ago

      This sounds like way too much for me.

      I wonder when they will add another level and talk to an LLM about how to talk to another LLM about how to talk to another LLM.

Majromax a day ago

> Anthropic is protecting its huge asset: the Claude Code value chain

Why is that their “huge asset”? The gist of this complaint is that Opencode et al replace everything but the LLM, so it seems like the LLM is the true “huge asset.”

If Claude Code is being offered at or near operational breakeven, I don’t see the advantage of lock-in. If it’s being offered at a subsidy, then it’s a hint that Claude Code itself is medium-term unsustainable.

“Training data” is a partial but not full explanation of the gap, since it’s not obviously clear to me how Anthropic can learn from Claude Code sessions but not OpenCode sessions.

  • cowl 7 hours ago

    If developers are using Claude Code, with its quirks, Anthropic controls the backend LLM. If developers are using OpenCode, it's easy for them to try different LLMs and maybe substitute one (temporarily or permanently). In the enterprise market, once they choose a tool they tend to stay with it even if it is not the best; the cost and timeframe of changing are too high. If developers could swap LLMs freely in their own tool, that is a big missed opportunity for Anthropic. Not a user-friendly move, but the norm in enterprise.

    Right now, most enterprises are experimenting with different LLMs, and once they choose they will be locked in for a long time. If they can't choose because their coding agent doesn't let them, they will be locked to that instead.

  • dchftcs a day ago

    Anthropic and OpenAI are essentially betting that a somewhat small difference in accuracy translates to a huge advantage, and continuing to be the one that's slightly but consistently better than others is the only way they can justify investments in them at all. It's natural to then consider that an agent trained to use a specific tool will be better at using that tool. If Claude continues to be slightly better than other models at coding, and Claude Code continues to be slightly better than OpenCode, combined it can be difficult to beat them even at a cheaper price. Right now, even though Kimi K2 and the likes are cheaper with OpenCode and perform decently, I spend more than 10x the amount on Claude Code.

    • Majromax 17 hours ago

      In that case though, why the lock-in? If the combination really does have better performance than competitors’ offerings, then Anthropic should encourage an open ecosystem, confident in winning the comparison.

gpm 21 hours ago

The problem is that the second you stop subsidizing Claude Code and start making money on it, the incentive to use it over opencode disappears. If opencode is the better tool - and that's the reason people are using their claude subscription with it instead of claude code - people will end up switching to it.

Maybe they can hope to murder opencode in the meantime with predatory pricing and build an advantage that they don't currently have. It seems unlikely though - the fact that they're currently behind proves the barrier to building this sort of tool isn't that high, and there's lots of developers who build their own tooling for fun that you can't really starve out of doing that.

I'm not convinced that attempting to murder opencode is a mistake - if you're losing you might as well try desperate tactics. I think the attempt is a pretty clear signal that Anthropic is losing, though.

  • shepherdjerred 19 hours ago

    It’s possible that tokens become cheap enough that they don’t need to raise prices to make a profit. The latest opus is 3x less expensive than the previous.

    • gpm 19 hours ago

      Then the competitors drop prices though. The current justification for claude code is just that it's an order of magnitude (or more) cheaper per token than comparable alternatives. That's a terrible business model to be stuck in.

      • shepherdjerred 19 hours ago

        If everyone is dropping prices in this scenario then I don’t see how the user eventually gets squeezed.

        I mean I guess they could do a bait and switch (drop prices so low that Anthropic goes bankrupt, then raise prices), but that's possible in literally any industry, and seems unlikely given the current number of competitors.

        • gpm 19 hours ago

          Terrible for Anthropic I mean, not the user.

Palmik a day ago

I am pretty sure most people get Anthropic's move. I also think "getting it" is perfectly compatible with being unhappy about it and voicing that opinion online.

  • F7F7F7 a day ago

    OP is responding to an article that largely frames Anthropic as clueless.

    • shawnz 21 hours ago

      I don't think it is intending to frame the move as clueless, but rather short-sighted. It could very well be a good move for them in the short term.

socketcluster 17 hours ago

Agreed. The system is ALL about who controls the customer relationship.

If Anthropic ended up in a position where they had to beg various client providers to be integrated (properly), had to compete with other LLMs on the same clients, and could be swapped out at a moment's notice, they would just become a commodity and lose all leverage. They don't want to end up in that situation. They do need to control the delivery of the product end-to-end to ensure that they control the customer relationship and the quality.

This is also going to be KEY in terms of democratizing the AI industry for small startups, because this model of ai-outside-tools-inside provides an alternative to tools-outside-ai-inside platforms like Lovable, Base44, and Replit, which don't leave as much flexibility for swapping out tooling.

themafia 20 hours ago

> Anthropic's mistake is that they are incapable of monetizing their great models in the chat market

The types of people who would use this tool are precisely the types of people who don't pay for licenses or tools. They're in a race to the bottom and they don't even know it.

> and that's a very thin layer

I don't think Anthropic understands the market they just made massive investments in.

serf 12 hours ago

>They did not. Anthropic is protecting its huge asset: the Claude Code value chain

that's just it, it has been proven over and over again with alternatives that CC isn't the moat that Anthropic seems to think it is. This is made evident by the fact that they're pouring R&D into DE/WM automation while CC has all the same issues it has had for months/years -- it's as if they think CC is complete.

if anything MCP was a bigger moat than CC.

also: I don't get the opencode reference. Yes, it's nice -- but codex and gemini-cli are largely compatible with CC-generated codebases.

There will be some initial bumpiness as you tell the agent to append the claude.md file to all agent reads (or better yet just merge it into the agents file), but that's about as rough as it'll get.

irthomasthomas a day ago

They’re betting that the stickiness of today’s regular users is more valuable than the market research and training data they were receiving from those nerdy, rule-breaking users.

apstls 16 hours ago

> they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc.). The move totally makes sense, like it or not.

I don't understand, why would other models not be able to support any, or some, or even a particular single one of these? I don't even see most of these as relevant to the model itself, but rather the harness/agentic framework around it. You could argue these require a base degree of model competence for following instructions, tool calling, etc, but these things are assumed for any SOTA model today, we are well past this. Almost all of these things, if not all, are already available in other CLI + IDE-based agentic coding tools.

8note 18 hours ago

i think they're trading future customer acquisition and model quality for the current claude code userbase which they might also lose from this choice.

the reason i got the subscription wasn't to use claude code. when i subscribed you couldn't even use it for claude code. i got it because i figured i could use those tokens for anything, and as i figured out useful stuff, i could split it off onto api calls.

now that exploration of "what can i do with claude" will need to happen elsewhere, and the results of a working thing will want to stay with the model that it's working on.

sergiotapia 21 hours ago

The model is the best.

The CLI tool is terrible compared to opencode.

That is the unfortunate reality: we are now having claude code foisted on us. :( I wish they would just fork opencode.

  • stefan_ 19 hours ago

    It's crazy how bad the interface is. I'm generally a fan of the model performance, but there is not a day where their CLI will not flash random parts of scrollback or have a second of input lag just typing in the initial prompt (how is that even possible? you are not doing anything?). If this is their "premier tool", no vending machine business can save them.

behnamoh 17 hours ago

> making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc.)

I use CC as my harness but switch between third party models thanks to ccs. If Anthropic decided to stop me from using third party models in CC, I wouldn't just go "oh well, let's buy another $200/mo Claude subscription now". No. I'd be like: "Ok, I invested in CC—hooks/skills/whatever—but now let's ask CC to port them all to OpenCode and continue my work there".

aaroninsf 14 hours ago

> Anthropic did not invest in image generation

I'd be pretty happy if Anthropic acquired Midjourney