giancarlostoro 16 hours ago

16 replies

Not just AWS; it looks like Anthropic uses it heavily as well. I assume they get plenty of handholding from Amazon, though. I'm surprised any cloud provider does not invest drastically more into their SDK and tooling; nobody will use your cloud if they literally cannot.

cmiles8 16 hours ago

Well, AWS says Anthropic uses it, but Anthropic isn’t exactly jumping up and down telling everyone how awesome it is, which tells you everything you need to know.

If Anthropic walked out on stage today and said how amazing it was and how they’re using it, the announcement would have a lot more weight. Instead… crickets from Anthropic in the keynote.

  • cobolcomesback 15 hours ago

    AWS has built 20 data centers in Indiana full of half a million Trainium chips explicitly for Anthropic, and Anthropic is using them heavily. The press announcement Anthropic just made about Google TPUs is essentially the same one they made a year ago about Trainium. Hell, even in the Google TPU press release they explicitly mention that they are still using Trainium as well.

  • hustwindmaple 10 hours ago

    I met an AWS engineer a couple of weeks ago, and he said Trainium is actually being used for Anthropic model inference, not training. Inferentia is basically defective Trainium chips that nobody wants to use.

  • teruakohatu 16 hours ago

    > Anthropic isn’t exactly jumping up and down telling everyone how awesome it is, which tells you everything you need to know.

    You can’t really read into that. They are unlikely to let their competitors know if they have a slight performance/$ edge by going with AWS tech.

    • cmiles8 16 hours ago

      With GCP announcing that they built Gemini 3 on TPUs, the opposite is true. Anthropic is under pressure to show they don’t need expensive GPUs. They’d be catching up at this point, not leaking some secret sauce. There’s no reason for them not to boast on stage today unless there’s nothing to boast about.

      • 0x457 15 hours ago

        Yes, but Google benefits from people using its TPUs, while Anthropic gains nothing unless AWS throws money at them for saying it.

IshKebab 14 hours ago

> I'm surprised any cloud provider does not invest drastically more into their SDK and tooling

I used to work for an AI startup. This is where Nvidia's moat is: the tens of thousands of man-hours that have gone into making the entire AI ecosystem work well with Nvidia hardware and not much else.

It's not that they haven't thought of this; it's just that they don't want to hire another 1k engineers to do it.
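
To make that concrete, here's a hypothetical sketch (the PyTorch calls are real, the helper is made up) of the kind of CUDA-first assumption baked into the ecosystem:

    import torch

    def pick_device() -> torch.device:
        # CUDA is the implicit default across most of the stack;
        # anything else is an afterthought or an explicit opt-in.
        if torch.cuda.is_available():
            return torch.device("cuda")
        return torch.device("cpu")

    model = torch.nn.Linear(128, 10).to(pick_device())

Every non-Nvidia accelerator vendor has to chase down call sites like this across libraries, tutorials, and internal tooling, or ship a backend that makes them all work transparently. That's the thousand-engineer problem.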

logicchains 14 hours ago

> I'm surprised any cloud provider does not invest drastically more into their SDK and tooling; nobody will use your cloud if they literally cannot.

Building an efficient compiler from high-level ML code to a TPU is actually quite a difficult software engineering feat, and it's not clear that Amazon has the kind of engineering talent needed to build something like that. Not like Google, which has developed multiple compilers and language runtimes.
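
For a sense of what that compiler has to do, here's a minimal sketch assuming JAX (the high-level front end Google pairs with its XLA compiler; the function and shapes here are made up). The user writes plain numpy-style Python, and the compiler stack has to lower it to efficient device-specific kernels:

    import jax
    import jax.numpy as jnp

    def predict(w, b, x):
        # High-level description of one dense layer plus a nonlinearity;
        # nothing here says how it should be tiled onto an accelerator.
        return jax.nn.relu(x @ w + b)

    w = jnp.ones((128, 64))
    b = jnp.zeros((64,))
    x = jnp.ones((8, 128))

    # jax.jit hands the traced computation to XLA, which emits
    # device-specific code (TPU, GPU, or CPU) from the same source.
    print(jax.jit(predict).lower(w, b, x).as_text())

Getting from that lowered IR to kernels that actually keep a particular chip busy (fusion, layouts, memory scheduling) is where the years of compiler work go, and that's the part Amazon has to replicate for Trainium.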
