Comment by johnsmith1840 3 days ago

I did an interesting study showing that LLMs actually "hide" internal data.

They don't just "forget"; that information can come back at a later time if you continue to train.

So basically, any time a model is trained you need to check its entire memory, not just a small part.
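
A minimal sketch of what "checking the entire memory" could look like in practice (this is not the commenter's actual study; it assumes a Hugging Face causal LM, and the model name and probe pairs are placeholders):

```python
# Probe a model on a broad set of facts before and after continued training,
# to check whether knowledge that looked "forgotten" resurfaces later.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder checkpoint, not the model from the study

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical probe set: (prompt, expected completion) pairs that cover the
# model's prior knowledge broadly, not just the domain you fine-tuned on.
probes = [
    ("The capital of France is", " Paris"),
    ("Water boils at", " 100 degrees Celsius"),
]

def probe_logprob(model, prompt, target):
    """Log-probability the model assigns to `target` following `prompt`."""
    ids = tokenizer(prompt + target, return_tensors="pt").input_ids
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so position i scores token i+1, then keep only the target tokens.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    target_ids = ids[0, 1:]
    scores = log_probs[range(len(target_ids)), target_ids][prompt_len - 1:]
    return scores.sum().item()

# Run the same probes before fine-tuning, after fine-tuning, and again after
# further training. "Hidden" knowledge would show up as scores that drop and
# then recover even though the later training data never covered it.
baseline = {prompt: probe_logprob(model, prompt, target) for prompt, target in probes}
print(baseline)
```

The point of scoring a wide probe set at every checkpoint, rather than only the fine-tuning task, is that it is the only way to notice knowledge that quietly degrades and then reappears.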