Comment by Hugsun

As LLMs are currently constructed, I would say it is out of the question.

Humans usually know (at least roughly) the source of anything they know, as there will be a memory or a known event associated with that knowledge.

LLMs have no analogous way to determine the source of their knowledge. They might know that all of it comes from their training data, but they have no way of knowing what was included in that training and what wasn't.

This could maybe be achieved with fancier RAG systems, or with online-training abilities; a rough sketch of the RAG version is below. I think the essential piece is the ability to know the source of a given piece of information. When LLMs can reliably do that, and apply that knowledge, they'll be much more useful. Hopefully somebody achieves this.
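
To make the RAG idea concrete, here's a minimal sketch of source-attributed retrieval. Everything in it is hypothetical (the corpus, the toy overlap scoring, the names); a real system would use embeddings and a vector store, but the point is the same: each retrieved passage carries its source, so the model's answer can cite where a claim came from rather than guessing.

    # Hypothetical sketch: retrieval that keeps source metadata attached.
    from collections import Counter

    CORPUS = [
        {"text": "The Eiffel Tower was completed in 1889.",
         "source": "wiki/Eiffel_Tower"},
        {"text": "Python was first released in 1991.",
         "source": "wiki/Python_(programming_language)"},
    ]

    def score(query: str, text: str) -> int:
        """Crude token-overlap score; stands in for embedding similarity."""
        q, t = Counter(query.lower().split()), Counter(text.lower().split())
        return sum((q & t).values())

    def retrieve(query: str, k: int = 1) -> list[dict]:
        """Return the top-k passages *with* their sources still attached."""
        ranked = sorted(CORPUS, key=lambda d: score(query, d["text"]),
                        reverse=True)
        return ranked[:k]

    for hit in retrieve("when was the Eiffel Tower built"):
        # The LLM's prompt would include both fields, so the generated
        # answer can point at a source instead of relying on opaque
        # parametric knowledge.
        print(f"{hit['text']}  [source: {hit['source']}]")

This doesn't give the model knowledge of what's inside its weights, but it does give it a verifiable source for whatever it retrieves, which is the property the comment above says is missing.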