Comment by 0xDEAFBEAD

Comment by 0xDEAFBEAD 2 days ago

3 replies

That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.

Put it another way: Why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy if the word "knowing" was applied to them?

devmor 2 days ago

Not to be snarky but “as far as I can tell” is the rub isn’t it?

LLMs are better at matching patterns than we are in some cases. That’s why we made them!

> But one could also argue that humans are pattern-matchers!

No, one could not unless they were being disingenuous.

  • mewpmewp2 2 days ago

    What about animals knowing? E.g. dog knows how to X or its name. Are these things fine to say?

  • 0xDEAFBEAD a day ago

    >Not to be snarky but “as far as I can tell” is the rub isn’t it?

    From skimming the paper, I don't believe they're doing in-context learning, which would be the obvious interpretation of "pattern matching". That's what I meant to communicate.

    >No, one could not unless they were being disingenuous.

    I think it is just about as disingenuous as labeling LLMs as pattern-matchers. I don't see why you would consider the one claim to be disingenuous, but not the other.