Comment by stevenhuang 4 days ago

If you think about it, those criticisms extend to human thinking too. We aren't infallible in all situations either.

It's only when we can interact with the environment to test our hypotheses that we refine what we know and update our priors appropriately.

If we let LLMs do that as well, by allowing them to run code, interact with documentation and the internet, and double-check things they're not sure of, it's not out of the question that they'll eventually be able to understand their limitations more reliably.
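Roughly, a verify-before-answering loop along these lines (llm() and check_claim() are hypothetical stand-ins, and the checker just evaluates an arithmetic claim to keep the example concrete):

```python
# Rough sketch of a verify-before-answering loop. llm() is a hypothetical
# stand-in for a real model call; the checker "runs code" by evaluating the
# arithmetic claim and feeding any contradiction back to the model.

def llm(prompt: str) -> str:
    # Hypothetical model: answers wrongly at first, corrects itself once it
    # is shown evidence that contradicts its previous answer.
    return "2 + 2 = 4" if "actually evaluates" in prompt else "2 + 2 = 5"

def check_claim(claim: str) -> tuple[bool, str]:
    lhs, rhs = claim.split("=")
    result = eval(lhs)  # toy only; never eval untrusted model output for real
    return result == int(rhs), f"{lhs.strip()} actually evaluates to {result}"

def answer_with_verification(question: str, max_rounds: int = 3) -> str:
    answer = llm(question)
    for _ in range(max_rounds):
        ok, evidence = check_claim(answer)
        if ok:
            return answer
        # The check failed: hand the evidence back and let the model revise.
        answer = llm(f"{question}\nYour last answer conflicted with a check: {evidence}")
    return f"Could not verify after {max_rounds} rounds: {answer}"

print(answer_with_verification("What is 2 + 2?"))
```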

Hugsun 7 hours ago

As they are currently constructed, I would say that it is out of the question.

Humans usually know (at least roughly) the source of anything they know, as there will be a memory or a known event associated with that knowledge.

LLMs have no analogous way to determine the source of their knowledge. They might know that all of it comes from their training, but they have no way of knowing what was included in the training data and what wasn't.

This could maybe be achieved with fancier RAG systems, or with online training abilities. I think an essential piece is the ability to know the source of a piece of information. When LLMs can reliably do that, and apply that knowledge, they'll be much more useful. Hopefully somebody can achieve this.
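As a rough sketch of what I mean on the RAG side (toy corpus, toy scoring, all names made up): every retrieved chunk keeps a source field, and the prompt forces the model to either cite a source or say it has none.

```python
# Minimal sketch of source-attributed retrieval (toy corpus, toy scoring).
# The point: every chunk carries a "source" field, so the model can either
# cite where a claim came from or admit that nothing relevant was retrieved.

from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str  # e.g. a URL or document title

CORPUS = [
    Chunk("Example fact about topic A.", "docs/topic_a.md"),
    Chunk("Example fact about topic B.", "https://example.com/topic-b"),
]

def retrieve(query: str, corpus: list[Chunk], k: int = 2) -> list[Chunk]:
    # Toy lexical-overlap score; a real system would use embeddings.
    def score(chunk: Chunk) -> int:
        return sum(word in chunk.text.lower() for word in query.lower().split())
    ranked = sorted(corpus, key=score, reverse=True)
    return [c for c in ranked[:k] if score(c) > 0]

def build_prompt(query: str, chunks: list[Chunk]) -> str:
    if not chunks:
        # Nothing retrieved: tell the model to say so instead of guessing.
        return f"No sources were found. Say you don't know.\n\nQuestion: {query}"
    context = "\n".join(f"[{c.source}] {c.text}" for c in chunks)
    return ("Answer using ONLY the sources below, citing the bracketed labels.\n\n"
            f"{context}\n\nQuestion: {query}")

print(build_prompt("Tell me about topic A", retrieve("topic A", CORPUS)))
```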