Comment by 0xDEAFBEAD 2 days ago
That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.
To put it another way: why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy applying the word "knowing" to them?
Not to be snarky, but “as far as I can tell” is the rub, isn’t it?
LLMs are better at matching patterns than we are in some cases. That’s why we made them!
> But one could also argue that humans are pattern-matchers!
No, one could not, unless they were being disingenuous.