bgirard 3 days ago

> malicious

It doesn't have to be malicious. If my workflow is to send a prompt once and hopefully accept the result, then degradation matters a lot. If degradation is causing me to silently get worse code output on some of my commits it matters to me.

I care about -expected- performance when picking which model to use, not optimal benchmark performance.

  • Aurornis 3 days ago

    Non-determinism isn’t the same as degradation.

    The non-determinism means that even with a temperature of 0.0, you can’t expect the outputs to be the same across API calls.

    In practice, people tend to index to the best results they’ve experienced and view anything else as degradation, when it may just be randomness in either direction from the prompts. When you’re getting good results you assume it’s normal. When things feel off you think something abnormal is happening. Rerun the exact same prompts and context with temperature 0 and you might get a different result.
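
    A quick way to check this for yourself is to repeat one call at temperature 0 and count the distinct outputs. A rough sketch using the Anthropic Python SDK (the model name and prompt are placeholders):

      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

      def run_once(prompt):
          msg = client.messages.create(
              model="claude-opus-4-1",   # placeholder model name
              max_tokens=512,
              temperature=0.0,           # "deterministic" sampling
              messages=[{"role": "user", "content": prompt}],
          )
          return msg.content[0].text

      outputs = {run_once("Write a function that parses ISO 8601 dates.") for _ in range(10)}
      print(len(outputs))  # the point above: this can be > 1 even at temperature 0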

    • bonoboTP 3 days ago

      This has nothing to do with overloading. The suspicion is that when there is too much demand (or they just want to save costs), Anthropic sometimes uses a less capable (quantized, distilled, etc) version of the model. People want to measure this so there is concrete evidence instead of hunches and feelings.

      To say that this measurement is bad because the server might just be overloaded completely misses the point. The point is to see if the model sometimes silently performs worse. If I get a response from "Opus", I want a response from Opus. Or at least I want to be told that I'm getting slightly-dumber-Opus this hour because the server load is too high.

    • F7F7F7 3 days ago

      “Just drink the water, it’s all water.”

  • novaleaf 3 days ago

    this is about variance of daily statistics, so I think the suggestions are entirely appropriate in this context.

strongpigeon 3 days ago

The question I have now after reading this paper (which was really insightful) is: do the models really get worse under load, or do they just have higher variance? The latter seems like what we should expect, rather than outright degradation, but absent load data we can't really know.

altcognito 3 days ago

Explain this though. The code is deterministic, even if it relies on pseudo-random number generation. It doesn't just happen; someone has to make a conscious decision to force a different code path (or model) when the system is loaded.

  • minimaltom 3 days ago

    It's not deterministic. Any individual floating-point mul/add is deterministic, but on a GPU these are all happening in parallel and the accumulation happens in whatever order they complete.

    When you add A then B then C, you get a different answer than C then A then B, because of floating-point rounding, approximation error, subnormals, etc.
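
    A tiny Python illustration of the non-associativity (plain IEEE 754 doubles, nothing GPU-specific):

      a, b, c = 1e16, -1e16, 1.0
      print((a + b) + c)  # 1.0
      print(a + (b + c))  # 0.0, because b + c rounds back to -1e16

    Each individual rounding error is tiny, but a forward pass does enormous reductions like this, so a near-tie between two candidate tokens can occasionally flip depending on summation order.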

    • bonoboTP 3 days ago

      It can be made deterministic. It's not trivial and it can slow things down a bit (not much), but there are environment variables you can set to make your GPU computations bitwise reproducible. I have done this when training models with PyTorch.
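
      For anyone curious, these are roughly the knobs involved (a sketch for PyTorch/CUDA; exact requirements vary by version and which ops you use):

        import os
        import torch

        # cuBLAS needs this set before any matmuls run so its GEMMs are reproducible
        os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

        torch.manual_seed(0)                      # seed the CPU and CUDA RNGs
        torch.use_deterministic_algorithms(True)  # error out on ops with no deterministic implementation
        torch.backends.cudnn.benchmark = False    # don't autotune onto different kernels run-to-run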

      • minimaltom 3 days ago

        There are settings to make it reproducible but they incur a non-negligible drop in performance.

        Unsurprising given they amount to explicit synchronization to make the order of operations deterministic.

  • jmalicki 3 days ago

    For all practical purposes, any code reliant on the output of a PRNG is non-deterministic in all but the most pedantic sense... And if the temperature isn't set to 0, LLMs are sampling from a distribution.

    If you're going to call a PRNG deterministic then the outcome of a complicated concurrent system with no guaranteed ordering is going to be deterministic too!

    • gmueckl 3 days ago

      No, this isn't right. There are totally legitimate use cases for PRNGs as sources of random number sequences following a certain probability distribution, where freezing the seed to get reproducibility is actually required.

      • jmalicki 3 days ago

        And for a complicated concurrent system you can also replay the exact timings and orderings as well!

        • gmueckl 2 days ago

          That's completely different from PRNGs. I don't understand why you think those things belong together.

    • bonoboTP 3 days ago

      How is this related to overloading? The nondeterminism should not be a function of overloading. It should just time out or reply more slowly. It will only be dumber if it gets rerouted to a dumber, faster model, e.g. a quantized one.

    • joquarky 2 days ago

      Temperature can't be literally zero, or it creates a divide-by-zero error.

      When people say zero, it is shorthand for “as deterministic as this system allows”, but it's still not completely deterministic.

      • forgotTheLast 2 days ago

        Zero temp just uses argmax, which is what softmax sampling approaches in the limit as T goes to zero anyway. So it could very well be deterministic.
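
        In sketch form (a hypothetical sampler, not any particular implementation), the zero case is special-cased rather than divided through:

          import numpy as np

          def sample_token(logits, temperature):
              if temperature == 0.0:
                  return int(np.argmax(logits))    # greedy: no division, no RNG involved
              z = (logits - logits.max()) / temperature
              probs = np.exp(z) / np.exp(z).sum()  # softmax with temperature
              return int(np.random.choice(len(logits), p=probs))

        So argmax itself is deterministic; any remaining wiggle room comes from which logits the argmax sees, per the reduction-order point upthread.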

  • pertymcpert 3 days ago

    Floating-point math isn't associative, even for operations that are associative in ordinary math.

    • measurablefunc 3 days ago

      That would just add up to statistical noise instead of 10% degradation over a week.

      • kevin_thibedeau 3 days ago

        Catastrophic error accumulation can produce more profound effects than noise.

        • measurablefunc 3 days ago

          Just to make sure I got this right. They serve millions of requests a day & somehow catastrophic error accumulation is what is causing the 10% degradation & no one at Anthropic is noticing it. Is that the theory?

  • make3 3 days ago

    There are a million algorithms to make LLM inference more efficient as a tradeoff against output quality: using a smaller model, using quantized models, using speculative decoding with a more permissive rejection threshold, and so on.
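
    As a toy illustration of why, say, a quantized variant produces slightly different logits, here is weight-only int8 quantization in numpy (a sketch, not any production scheme):

      import numpy as np

      rng = np.random.default_rng(0)
      w = rng.standard_normal(4096).astype(np.float32)  # one row of a weight matrix
      x = rng.standard_normal(4096).astype(np.float32)  # an activation vector

      scale = np.abs(w).max() / 127.0                   # per-tensor int8 scale
      w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
      w_dq = w_q.astype(np.float32) * scale             # what actually gets multiplied

      print(float(x @ w), float(x @ w_dq))              # the dot product drifts slightly

    Each logit only moves a little, but aggregated over a whole model that is the kind of shift a benchmark can pick up while individual responses still look plausible.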

  • FL33TW00D 3 days ago

    It takes a different code path for efficiency, e.g.:

      if batch_size > 1024:
          kernel_x()
      else:
          kernel_y()

stefan_ 3 days ago

The primary (non-malicious, non-stupid) explanation given here is batching. But I think you would find, looking at large-scale inference, that the batch sizes being run on any given rig are fairly static - there is a sweet spot between memory consumption and GPU utilization for any given model part run individually, and generally GPUs do badly at job parallelism.

I think the more likely explanation is, again, the extremely heterogeneous compute platforms they run on.

  • bonoboTP 2 days ago

    Why do you think batching has anything to do with the model getting dumber? Do you know what batching means?

    • stefan_ 2 days ago

      Well if you were to read the link you might just find out! Today is your chance to be less dumb than the model!

      • bonoboTP 2 days ago

        I checked the link; it never says that the model's predictions get lower quality due to batching, just that they become nondeterministic. I don't understand why people conflate these things. Also, it's unlikely that they use smaller batch sizes when load is lower. They likely just spin GPU servers up and down based on demand, or, more likely, reallocate servers and GPUs between different roles and tasks.

  • hatmanstack 3 days ago

    That's why I'd love to get stats on the load/hardware/location of where my inference is running. Looking at you, Trainium.

make3 3 days ago

It's very clearly a cost tradeoff that they control and that should be measured.