Comment by astrange
No, Anthropic and OpenAI genuinely believe what they're saying. Even if you think companies only care about their shareholders, that argument doesn't apply here, because they don't have that corporate structure - they're public benefit corporations (PBCs).
There doesn't seem to be any reason to believe the rest of this critique either; sure, those are potential problems, but what do any of them have to do with whether a system has a transformer model in it? A recording of a human mind would have the same issues.
> It has no way to evaluate if a particular sequence of tokens is likely to be accurate, because it only selects them based on the probability of appearing in a similar sequence, based on the training data.
This in particular is obviously incorrect if you think about it, because the critique proves too much: if it were true, the system wouldn't be able to produce coherent sentences either. Producing coherent sentences is actually the same problem as producing true sentences.
(It's also not true in practice, because the models can be grounded via web search and coding tools.)
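To make that concrete, here's a toy sketch of next-token sampling (the vocabulary, logits, and numbers are invented for illustration; this isn't any real model's API). The point is that grammaticality and factuality are encoded in the same learned distribution - there's no separate mechanism for one versus the other.

```python
import numpy as np

# Hypothetical next-token scores for the prompt "The capital of France is ..."
# Real models score ~100k candidate tokens; three are enough to show the idea.
vocab = ["Paris", "London", "banana"]
logits = np.array([4.0, 1.5, -3.0])  # invented numbers, not from a real model

def sample_next(logits, temperature=1.0, rng=np.random.default_rng(0)):
    """Softmax over the logits, then draw one token index."""
    z = logits / temperature
    z -= z.max()                          # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(probs), p=probs)

# The distribution that makes "Paris" overwhelmingly likely is the same one
# that makes "banana" unlikely: factual and grammatical regularities are both
# just statistics of the training data.
print(vocab[sample_next(logits)])         # almost always prints "Paris"
```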
> if it were true, the system wouldn't be able to produce coherent sentences either. Producing coherent sentences is actually the same problem as producing true sentences
It is... not at all the same? Like the original comment said, you can write perfectly coherent statements that are simply wrong. Just look at Elon's ridiculously hamfisted attempts at editing Grok's system prompts.
Also, a lot of information on the web is just wrong or out of date, and coding tools only get you so far.