Comment by ofirpress 3 days ago

[SWE-bench co-author here] It seems like they run this test on a subset of 50 tasks, and that they only run the test once per day. So a lot of the movement in accuracy could be attributed to that. I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score. Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded.
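
To put rough numbers on that: even if the underlying model never changed, a 50-task subset scored once a day will wander quite a bit from binomial noise alone. A quick sketch (the 70% pass rate is just an illustrative assumption, not Claude's actual score):

    import random

    random.seed(0)
    TRUE_PASS_RATE = 0.70  # illustrative assumption only

    def daily_score(n_tasks, n_runs=1):
        # fraction of passed tasks across n_runs independent runs of n_tasks each
        samples = n_tasks * n_runs
        passed = sum(random.random() < TRUE_PASS_RATE for _ in range(samples))
        return passed / samples

    for n_tasks, n_runs in [(50, 1), (300, 5)]:
        scores = [daily_score(n_tasks, n_runs) for _ in range(30)]  # 30 "days"
        print(f"{n_tasks} tasks x {n_runs} run(s)/day: "
              f"min={min(scores):.2f} max={max(scores):.2f} "
              f"spread={max(scores) - min(scores):.2f}")

The 50-task setup typically shows double-digit swings in accuracy from sampling noise alone, versus a few points for the pooled 300 x 5 runs.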

Davidzheng 3 days ago

but degradation from servers being overloaded is exactly the type of degradation this SHOULD measure, no? Unless it's only intended to catch them quietly distilling models (which they claim not to do? idk for certain)

  • botacode 3 days ago

    Load just makes LLMs behave less deterministically and likely degrade. See: https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

    They don't have to be malicious operators in this case. It just happens.

    • bgirard 3 days ago

      > malicious

      It doesn't have to be malicious. If my workflow is to send a prompt once and hopefully accept the result, then degradation matters a lot. If degradation is causing me to silently get worse code output on some of my commits it matters to me.

      I care about -expected- performance when picking which model to use, not optimal benchmark performance.

      • Aurornis 3 days ago

        Non-determinism isn’t the same as degradation.

        The non-determinism means that even with a temperature of 0.0, you can’t expect the outputs to be the same across API calls.

        In practice, people tend to anchor on the best results they've experienced and view anything else as degradation, when it may just be randomness in either direction from the prompts. When you're getting good results you assume that's normal; when things feel off you think something abnormal is happening. Rerun the exact same prompts and context with temperature 0 and you might get a different result.
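
        You can check this yourself with something like the sketch below (assumes the official Anthropic Python SDK and an API key in the environment; the model name and prompt are placeholders):

            import anthropic

            client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

            prompt = "Write a Python function that parses an ISO 8601 date."  # placeholder
            outputs = set()

            for _ in range(5):
                resp = client.messages.create(
                    model="claude-sonnet-4-5",  # placeholder model name
                    max_tokens=512,
                    temperature=0.0,            # "deterministic" sampling
                    messages=[{"role": "user", "content": prompt}],
                )
                outputs.add(resp.content[0].text)

            # With fully deterministic serving this would always be 1 distinct output;
            # in practice you may well see more than 1.
            print(f"{len(outputs)} distinct outputs out of 5 identical requests")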

      • novaleaf 3 days ago

        this is about variance of daily statistics, so I think the suggestions are entirely appropriate in this context.

    • strongpigeon 3 days ago

      The question I have after reading this paper (which was really insightful) is: do the models really get worse under load, or do they just show higher variance? The latter seems like what we should expect, but absent load data we can't really know.

    • altcognito 3 days ago

      Explain this though. The code is deterministic, even if it relies on pseudo-random number generation. It doesn't just happen; someone has to make a conscious decision to force a different code path (or model) if the system is loaded.

      • minimaltom 3 days ago

        It's not deterministic. Any individual floating-point mul/add is deterministic, but in a GPU these all happen in parallel and the accumulation happens in whatever order they complete.

        When you add A then B then C, you get a different answer than C then A then B, because of floating-point rounding, approximation error, subnormals, etc.
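
        A tiny illustration with plain Python floats (same effect, just without the GPU):

            a, b, c = 1e16, -1e16, 1.0

            print((a + b) + c)  # 1.0
            print(a + (b + c))  # 0.0 -- b + c rounds back to -1e16, so the 1.0 is lost

            xs = [1e16, 1.0, -1e16, 1.0]
            print(sum(xs))          # 1.0
            print(sum(sorted(xs)))  # 0.0 -- same numbers, different order, different sum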

      • jmalicki 3 days ago

        For all practical purposes, any code reliant on the output of a PRNG is non-deterministic in all but the most pedantic senses... and if the temperature isn't set to 0, the LLM is sampling from a distribution anyway.

        If you're going to call a PRNG deterministic then the outcome of a complicated concurrent system with no guaranteed ordering is going to be deterministic too!

      • pertymcpert 3 days ago

        Floating point math isn't associative for operations that are associative in normal math.

      • make3 3 days ago

        There are a million ways to make LLM inference more efficient at some cost to output quality: using a smaller model, using quantized models, using speculative decoding with a more permissive rejection threshold, etc.

      • FL33TW00D 3 days ago

        It takes a different code path for efficiency.

        e.g.:

        if (batch_size > 1024): kernel_x else: kernel_y
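
        Spelled out a bit more (hypothetical dispatch; the two "kernels" below are just two reduction orders, which is already enough to change the numbers):

            import numpy as np

            rng = np.random.default_rng(0)
            activations = rng.standard_normal(1_000_000).astype(np.float32)

            def kernel_y(v):
                # low-batch path: one straight reduction over the whole vector
                return float(np.sum(v))

            def kernel_x(v):
                # high-batch path: split-style reduction, partial sums then a final sum
                partials = [np.sum(chunk) for chunk in np.array_split(v, 128)]
                return float(np.sum(partials))

            def reduce_logits(v, batch_size):
                return kernel_x(v) if batch_size > 1024 else kernel_y(v)

            print(reduce_logits(activations, batch_size=8))     # off-peak path
            print(reduce_logits(activations, batch_size=4096))  # peak-load path
            # The two results typically differ in the last few digits, which is all it
            # takes for a greedy token choice to flip now and then.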

    • stefan_ 3 days ago

      The primary (non-malicious, non-stupid) explanation given here is batching. But if you look at large-scale inference, I think you'd find the batch sizes run on any given rig are fairly static: for any given part of the model run individually there's a sweet spot between memory consumption and GPU utilization, and GPUs generally do badly at job parallelism.

      I think the more likely explanation again lies with the extremely heterogeneous compute platforms they run on.

      • bonoboTP 2 days ago

        Why do you think batching has anything to do with the model getting dumber? Do you know what batching means?

      • hatmanstack 3 days ago

        That's why I'd love to get stats on the load/hardware/location of where my inference is running. Looking at you, Trainium.

    • make3 3 days ago

      It's very clearly a cost tradeoff that they control and that should be measured.

  • samusiam 2 days ago

    I'd argue that it depends how that degradation manifests whether you want to include it or not.

    Consider two scenarios: (1) degradation leads to the model being routed behind the scenes to a different server, with subtly different performance characteristics, all unbeknownst to the user; (2) degradation leads to the model refusing a request and returning an "overloaded" message.

    In the first case, absolutely you want to include that because that's the kind of lack of transparency about performance that you'd want signal on. In the second case, an automated test harness might fail, but in the real world the user will just wait and retry when the server is under less load. Maybe you don't include that because it's actually misleading to say that performance (in terms of the model's intelligence, which is how the benchmark will be interpreted) is worse.

  • megabless123 3 days ago

    noob question: why would increased demand result in decreased intelligence?

    • exitb 3 days ago

      An operator at load capacity can either refuse requests, or move the knobs (quantization, thinking time) so requests process faster. Both of those things make customers unhappy, but only one is obvious.
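
      Purely hypothetical sketch of what that second option can look like on the operator's side (not based on any knowledge of what Anthropic actually does):

          from dataclasses import dataclass

          @dataclass
          class ServingConfig:
              quantization: str         # e.g. "bf16", "fp8", "int8"
              max_thinking_tokens: int

          def config_for_load(gpu_utilization: float) -> ServingConfig:
              # Refusing requests is visible to users; quietly turning these knobs isn't.
              if gpu_utilization < 0.7:
                  return ServingConfig("bf16", max_thinking_tokens=32_000)
              if gpu_utilization < 0.9:
                  return ServingConfig("fp8", max_thinking_tokens=16_000)
              return ServingConfig("fp8", max_thinking_tokens=4_000)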

      • codeflo 3 days ago

        This is intentional? I think delivering lower quality than what was advertised and benchmarked is borderline fraud, but YMMV.

      • sh3rl0ck 3 days ago

        I'd wager that lower tok/s vs lower quality of output would be two very different knobs to turn.

    • awestroke 3 days ago

      I've seen some issues with garbage tokens during high load (output that seemed to come from a completely different session, mentioned code I've never seen before, repeated lines over and over). I suspect Anthropic has some threading bugs or race conditions in their caching/inference code that only show up under very heavy load.

    • vidarh 3 days ago

      It would happen if they quietly decide to serve up more aggressively distilled / quantised / smaller models when under load.

      • [removed] 3 days ago
        [deleted]
      • chrisjj 3 days ago

        They advertise the Opus 4.5 model. Secretly substituting a cheaper one to save costs would be fraud.

    • Wheaties466 3 days ago

      from what I understand this can come from the batching of requests.

  • cmrdporcupine 3 days ago

    I've personally witnessed large variability in behaviour even within a given session -- which makes sense as there's nothing stopping Anthropic from shuttling your context/session around load balanced through many different servers, some of which might be quantized heavily to manage load and others not at all.

    I don't know if they do this or not, but the nature of the API is such that you could absolutely load balance this way. The context sent at each point is not, I believe, "sticky" to any server.

    TLDR you could get a "stupid" response and then a "smart" response within a single session because of heterogeneous quantization / model behaviour in the cluster.

    • epolanski 3 days ago

      I've defended opus in the last weeks but the degradation is tangible. It feels like it degraded by a generation tbh.

mohsen1 3 days ago

Hope you don't mind the unrelated question:

How do you pay for those SWE-bench runs?

I am trying to run a benchmark, but it's too expensive to do enough runs to get a fair comparison.

https://mafia-arena.com

  • ofirpress 3 days ago

    Benchmarks can get costly to run. You can reach out to frontier model creators to try to get free credits, but usually they'll only agree to that once your benchmark is pretty popular.

    • Dolores12 3 days ago

      so basically they know requests using your API key should be treated with care?

    • epolanski 3 days ago

      The last thing a proper benchmark should do is reveal its own API key.

      • plagiarist 3 days ago

        IMO it should go through a third party running the LLM anyway. Otherwise the evaluated company could notice they're receiving the same requests daily and discover the benchmarking that way.

      • sejje 3 days ago

        That's a good thought I hadn't had, actually.

    • mohsen1 3 days ago

      yes I reached out to them but as you say it's a chicken-and-egg problem.

      Thanks!

nikcub 3 days ago

> I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score.

assume this is because of model costs. anthropic could either throw some credits their way (would be worthwhile to dispel the 80 reddit posts a day about degrading models and quantization) or OP could throw up a donation / tip link

  • simsla 3 days ago

    Probably, but with a sample size that small they should be taking the uncertainty into account, because I wouldn't be surprised if a lot of this variation falls within expected noise.

    E.g. report binomial proportion confidence intervals alongside the point estimates.
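
    A quick sketch of what that looks like for a 50-task run vs. a 300-task run (stdlib only, Wald interval for simplicity):

        import math

        def binomial_ci(passed, n, z=1.96):
            # normal-approximation (Wald) 95% interval for a pass rate
            p = passed / n
            half = z * math.sqrt(p * (1 - p) / n)
            return p - half, p + half

        print(binomial_ci(35, 50))    # 50 tasks:  ~(0.57, 0.83)
        print(binomial_ci(210, 300))  # 300 tasks: ~(0.65, 0.75)

    A roughly 13-point half-width on the 50-task run means a lot of the visible movement is compatible with no change at all.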

  • phist_mcgee 3 days ago

    Then you'd get people claiming that the benchmarks were 'paid for' by anthropic

    • nikcub 3 days ago

      one thing you learn from being on the internet is that you're never going to satisfy everybody

seunosewa 3 days ago

The degradation may vary more within a single day than it does from day to day at the same time.

  • GoatInGrey 3 days ago

    Sure, but it's still useful insight to see how it performs over time. Of course, cynically, Anthropic could game the benchmark by routing this benchmark's specific prompts to an unadulterated instance of the model.

rootnod3 3 days ago

Sorry what?

"You can't measure my Cloud Service's performance correctly if my servers are overloaded"?

"Oh, you just measured me at bad times each day. On only 50 different queries."

So, what does that mean? I have to pick specific times during the day for Claude to code better?

Does Claude Code have office hours basically?

  • johnsmith1840 3 days ago

    This has been happening for years. There's a great paper from Microsoft on DeepSpeed inference.

    Basically, the paper showed methods for handling heavy traffic load by changing model requirements or routing to different models. That was a while ago, and I'm sure it's massively more advanced now.

    Also why some of AI's best work for me is early morning and weekends! So yes, the best time to code with modern LLM stacks is when nobody else is. It's also possibly why we go through phases of "they neutered the model" some time after a new release.

  • kuboble 3 days ago

    I wonder if my great experience with Claude is partly due to the fact that my working hours don't overlap with the US west coast.

  • swyx 3 days ago

    chill out, ofir does not work for anthropic. he's just saying there's inherent variability in LLMs and you need to at least 30x the samples that OP is doing in order to draw any statistically significant conclusions.

  • copilot_king 3 days ago

    [flagged]

    • rootnod3 3 days ago

      Verily, my vichyssoise of verbiage veers most verbose, so let me run that thing out of tokens fast.

bhk 3 days ago

According to Anthropic: "We never reduce model quality due to demand, time of day, or server load."

https://www.anthropic.com/engineering/a-postmortem-of-three-...

chrisjj 3 days ago

> Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded.

Are you suggesting result accuracy varies with server load?

dana321 3 days ago

"Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded"

Aha, so the models do degrade under load.

cedws 3 days ago

Agreed, this benchmark would be much more useful run multiple times a day. That could reveal degradation in line with load patterns.

  • bredren 3 days ago

    For CC, I suspect it also needs to test and label separate runs against subscription, public API, and Bedrock-served models?

    It’s a terrific idea to provide this. ~Isitdownorisitjustme for LLMs would be the parakeet in the coalmine that could at least inform the multitude of discussion threads about suspected dips in performance (beyond HN).

    What we could also use is similar stuff for Codex, and eventually Gemini.

    Really, the providers themselves should be running these tests and publishing the data.

    Availability status alone is no longer sufficient to gauge service delivery, because the service is non-deterministic by nature.

  • swyx 3 days ago

    i recall another project here on HN maybe 4-6 months ago that would run tests 4x a day or something. not sure how to find it again

sjtgraham 2 days ago

Why should users care about Anthropic's servers being overloaded?