Comment by codeflo 3 days ago
This is intentional? I think delivering lower quality than what was advertised and benchmarked is borderline fraud, but YMMV.
> And according to Google, they always delete data if requested.
However, the request form is on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.
I guess I just don't know how to square that with my actual experiences then.
I've seen sporadic drops in reasoning skills that made me feel like it was January 2025, not 2026 ... inconsistent.
LLMs sample the next token from a conditional probability distribution; the hope is that dumb sequences are less probable, but they will still happen naturally.
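Roughly, the sampling step looks like this; a minimal sketch with a made-up vocabulary and logits, not any real model's numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and logits for illustration only.
vocab = ["the", "a", "yes", "no"]
logits = np.array([3.1, 2.8, 0.4, 0.2])

def sample_next(logits, temperature=1.0):
    # Softmax over temperature-scaled logits gives the conditional
    # distribution p(next token | context). Sampling from it means
    # low-probability ("dumb") continuations still get picked sometimes.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

print(sample_next(logits))                    # usually "the" or "a"
print(sample_next(logits, temperature=2.0))   # flatter distribution, more surprises
```

Raising the temperature flattens the distribution, so the unlikely tokens get sampled more often; even at temperature 1.0 they occasionally come up.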
Funny how those probabilities consistently degrade at 2pm UK time when all the Americans come online...
It's more like the choice between "the" and "a" than "yes" and "no".
I wouldn't doubt that these companies would deliberately degrade performance to manage load, but it's also true that humans are notoriously terrible at identifying random distributions, even with something as simple as a coin flip. It's very possible that what you view as degradation is just "bad RNG".
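For instance, a quick simulation of the coin-flip point; a toy sketch showing how streaky genuinely random sequences are:

```python
import random

random.seed(42)

def longest_run(flips):
    # Length of the longest run of identical consecutive outcomes.
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

flips = [random.choice("HT") for _ in range(100)]
print("".join(flips))
print("longest streak:", longest_run(flips))
# In 100 fair flips, a streak of 6 or more identical outcomes is
# likely -- exactly the kind of run people read as a "pattern".
```

The same clustering effect applies to model outputs: a run of bad responses can look like deliberate degradation even when each one was sampled independently.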
yep stochastic fantastic
these things are by definition hard to reason about
i’d wait any amount of time lol.
at least i would KNOW it’s overloaded and i should use a different model, try again later, or just skip AI assistance for the task altogether.
They don't advertise a certain quality. You take what they have or leave it.
If there's no way to check, then how can you claim it's fraud? :)
Per Anthropic's RCA linked in the OP's post about the September 2025 issues:
“… To state it plainly: We never reduce model quality due to demand, time of day, or server load. …”
So according to Anthropic, they are not tweaking quality settings due to demand.