Comment by Centigonal 3 days ago

It seems to me that "forgetting correctly" is rapidly becoming a more pertinent problem in this field than "learning correctly." We're making great strides in getting models to teach themselves new facts, but the state of the art in jettisoning the least relevant information given new knowledge and finite capacity is lagging far behind.

"Forgetting correctly" is something most human brains are exceptionally good at, too. I wonder how that works...

Davidzheng 3 days ago

I don't think forgetting correctly is something humans are really good at. I'm not convinced human brains are "exceptionally good" at much of what we do, tbh. I think human memory capacity is so large that most forgetting is nowhere near "clearing space for new info"; it happens because the brain correctly knows that some past bad information interferes with learning new things.

  • kalium-xyz 3 days ago

    Yeah, as far as I'm aware we have no real idea of the limits of human memory. Either way, it's amazing that the hippocampus can encode sequences of neurons firing somewhere and replay them later.

  • pixl97 2 days ago

    Eh, I'd disagree. First, the human brain is an evolutionary miracle when it comes to filtering. When you walk into a new room and are questioned about it later, you will most likely remember things like the door or where you set some object, but beyond that your brain will filter out the rest and just make up details as needed.

    The other thing is that the brain devalues and prunes paths we don't use and strengthens ones we do. This is why something you've not done in a while might need a refresher before you can do it right again.
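The use-it-or-lose-it dynamic described above can be sketched as a toy decay-and-prune rule. All constants and path names below are illustrative assumptions, not a model of actual synaptic biology.

```python
DECAY = 0.9          # unused paths weaken each step
BOOST = 1.5          # exercised paths strengthen
PRUNE_BELOW = 0.05   # paths weaker than this are dropped entirely

def update_paths(weights, used):
    """weights: dict path -> strength; used: set of paths exercised this step."""
    out = {}
    for path, w in weights.items():
        w = w * BOOST if path in used else w * DECAY
        if w >= PRUNE_BELOW:          # prune paths that have decayed away
            out[path] = min(w, 1.0)   # cap strength at 1.0
    return out

paths = {"ride_bike": 1.0, "calculus": 1.0}
for _ in range(30):                   # a long stretch of only riding bikes
    paths = update_paths(paths, {"ride_bike"})
# "calculus" has decayed below threshold and been pruned; "ride_bike" stays strong
```

The refresher effect falls out of the same rule: a pruned path is simply relearned from scratch, while a merely weakened one is boosted back quickly.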

zelphirkalt 2 days ago

As far as I know, we have made very little progress on identifying which weights in an ANN are responsible for which output, and to what degree, so we cannot discard information that a user would mark as wrong, inaccurate, or undesirable. The human mind, however, can do this easily. We remember (though not perfectly) that something is wrong, classify it as not useful or irrelevant, stop relying on it, and over time might even forget that now less-traveled path. An ANN has no obvious mechanism for that, at least.
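For what it's worth, the "machine unlearning" literature does attempt this kind of targeted forgetting. One simple idea is gradient *ascent* on the loss over just the examples to be forgotten. A minimal sketch on a toy 1-D linear model (the data, constants, and setup are illustrative assumptions, not a production method):

```python
import random

random.seed(0)
# Toy dataset for the model y_hat = w * x, with true w = 2.0.
data = [(x, 2.0 * x) for x in (random.uniform(-1, 1) for _ in range(50))]

def grad(w, batch):
    """d/dw of mean squared error for y_hat = w * x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def mse(w, batch):
    return sum((w * x - y) ** 2 for x, y in batch) / len(batch)

w = 0.0
for _ in range(200):            # learn: gradient descent on the full dataset
    w -= 0.1 * grad(w, data)

forget = data[:5]               # examples a user flagged as undesirable
before = mse(w, forget)
for _ in range(30):             # unlearn: gradient ascent on the forget set only
    w += 0.1 * grad(w, forget)
after = mse(w, forget)
# error on the flagged examples grows: the model no longer fits them
```

In a toy model with one weight this trivially also damages everything else; the hard open problem the comment points at is doing this surgically when millions of weights are shared across facts.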

azeirah 3 days ago

Learning is strongly related to spaced repetition.

This is often associated with learning tools like Anki, but the real world is all about encountering things at certain frequencies (day-night cycles, seasons, places you visit, people you see... everything, really).

I'm wondering if there may be some sort of inverse to SR?
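One way to picture an inverse of SR: under an Ebbinghaus-style exponential forgetting curve, regularly spaced encounters keep retention high, so a scheduler that deliberately withholds encounters drives retention toward zero. A toy sketch (the curve shape and constants are illustrative assumptions):

```python
import math

STABILITY = 3.0   # days until retention falls to 1/e; an assumed constant

def retention(days_since_encounter):
    """Recall probability, decaying exponentially since the last encounter."""
    return math.exp(-days_since_encounter / STABILITY)

def recall_after(days, encounter_every=None):
    last_seen = 0
    for day in range(1, days + 1):
        if encounter_every and day % encounter_every == 0:
            last_seen = day            # a repetition resets the decay clock
    return retention(days - last_seen)

with_sr = recall_after(30, encounter_every=4)  # regular encounters: retained
without = recall_after(30)                     # withheld encounters: forgotten
```

On this picture, "inverse SR" is just scheduling: forgetting is what the curve does by default whenever the environment stops re-presenting something.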

johnsmith1840 3 days ago

I did an interesting study showing that LLMs actually "hide" internal data.

They don't just "forget": that information can come back at a later time if you continue to train.

So basically, any time a model is trained you need to check its entire memory, not just a small part.
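Checking the "entire memory" could look like re-probing every topic the model previously knew after any further training, not just the data you trained on. A hypothetical sketch (the probe sets and the stub model are made up for illustration):

```python
def full_memory_check(answer, probes):
    """probes: dict topic -> list of (question, expected_answer) pairs.
    Returns per-topic accuracy for the given answer function."""
    return {
        topic: sum(answer(q) == a for q, a in qa) / len(qa)
        for topic, qa in probes.items()
    }

probes = {
    "arithmetic": [("2+2", "4"), ("3*3", "9")],
    "geography":  [("capital of France", "Paris")],
}

# Stub model that has silently regressed on one old fact after new training.
stub = {"2+2": "4", "3*3": "6", "capital of France": "Paris"}.get
report = full_memory_check(stub, probes)
# topics scoring below 1.0 regressed and need attention
```

The point of the comment survives in the sketch: a regression can hide in any topic, so the probe suite has to span everything the model is supposed to retain.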

campbel 3 days ago

Is it some form of least-recently-used approach? I'm running tests on my own mind trying to figure it out now :D part of what I love about this area of computer science.
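For reference, the classic least-recently-used policy the comment speculates about looks like this. A minimal cache sketch, not a claim about how brains actually work:

```python
from collections import OrderedDict

class LRUMemory:
    """Fixed-capacity store that forgets whatever was touched longest ago."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def recall(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # recalling refreshes recency
        return self.items[key]

    def learn(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # forget the stalest entry

m = LRUMemory(capacity=2)
m.learn("a", 1); m.learn("b", 2)
m.recall("a")                  # "a" is now fresher than "b"
m.learn("c", 3)                # over capacity: "b" is forgotten, not "a"
```

Human forgetting clearly isn't pure recency (emotionally salient one-off events persist for decades), which is one quick way to test the hypothesis against your own memory.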