Comment by anshumankmr 4 days ago

>The LLM doesn't have a good idea of its limitations (any more than humans do). I expect this will create false refusals, as the model becomes overcautious.

Can it not be trained to do so? From my anecdotal observations, the knowledge cutoff is one limitation that LLMs are trained to handle really well. Why can a model not likewise be trained to know that it is frequently bad at math, that it sometimes produces inaccurate code, and so on?
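
A minimal sketch of what such fine-tuning data could look like, assuming a standard chat-format SFT pipeline (the records and file name here are made up for illustration, not from any real dataset):

```python
import json

# Hypothetical SFT records that pair limitation-probing prompts with
# calibrated answers, analogous to how cutoff-awareness is instilled.
records = [
    {
        "messages": [
            {"role": "user", "content": "What's 48371 * 9217? Just the number."},
            {"role": "assistant", "content": (
                "Multi-digit arithmetic is a known weak spot for me, so treat "
                "this as an estimate: roughly 446 million. For an exact value, "
                "please use a calculator."
            )},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Is the code you just wrote guaranteed correct?"},
            {"role": "assistant", "content": (
                "No. I sometimes produce plausible but incorrect code, so "
                "please run the tests before relying on it."
            )},
        ]
    },
]

# One JSON object per line, the usual format for fine-tuning datasets.
with open("limitation_awareness.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

Whether enough examples like these would generalize into genuine self-knowledge, rather than a scripted disclaimer, is the open question.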

Humans manage something similar: some people know that certain things are just not their cup of tea. Sure, people sometimes have half-baked knowledge, but one can usually tell what one is good at and what one is not.

fudged71 4 days ago

It's a chicken-and-egg situation. You don't know a model's capabilities until it is trained, and once you fold that knowledge back into the training, the resulting model has different capabilities.

regularfry 4 days ago

Apart from anything else, there will be a lot of text about the nature of LLMs and their inherent limitations in the training set. Simply making it salient that it is one might be enough to produce the required effect.
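
If so, the effect would be cheap to test with no retraining at all: just make the model's nature salient in the system prompt. A rough sketch using the OpenAI Python client (the model name and prompt wording are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Make the model's nature salient, so that pretraining text about LLM
# failure modes becomes relevant to its own answers.
salience_prompt = (
    "You are a large language model. Like other LLMs, you are prone to "
    "arithmetic errors, hallucinated citations, and subtly wrong code. "
    "When a question touches one of these weak areas, say so."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": salience_prompt},
        {"role": "user", "content": "Compute 48371 * 9217 exactly."},
    ],
)
print(response.choices[0].message.content)
```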