Comment by tlack 5 hours ago

Do you find you really need that level of “resolution” with memories?

On our [1] chatbots we use one long memories text field per chatbot <-> user relationship.

Each bot response cycle suggests a new memory to add as part of its prompt (along with the message etc)

Then we feed that new memory plus the existing memories text to a separate “memory archivist” LLM prompt cycle, which is tasked with adding the new memory and re-summarizing the whole thing, yielding a replacement for the stored memories.

Maybe overly simplistic, but easy to manage and pretty inexpensive. The archiving part is async and fast. The LLM seems pretty good at sussing out what’s important and what isn’t.
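For what it's worth, the cycle described above could be sketched roughly like this (`call_llm` is a stand-in for whatever completion API you use; the prompt wording and function names are illustrative, not the actual Graydient implementation):

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM completion call (assumption).

    This stub just echoes back the text after the last "New memory:"
    marker so the example is runnable without an API key.
    """
    return prompt.rsplit("New memory:", 1)[-1].strip()


def archive_memory(existing_memories: str, new_memory: str) -> str:
    """The async "memory archivist" step: fold the suggested memory into
    the stored memories text and return the replacement summary."""
    prompt = (
        "You maintain a running summary of what a chatbot knows about a user.\n"
        "Merge the new memory into the existing memories, keeping what is\n"
        "important and dropping what is not. Return the full updated summary.\n\n"
        f"Existing memories:\n{existing_memories}\n\n"
        f"New memory:\n{new_memory}"
    )
    return call_llm(prompt)


# One bot response cycle: the reply prompt also yields a suggested memory,
# which the archivist merges into the single per-relationship text field.
memories = "User prefers concise answers."
memories = archive_memory(memories, "User is learning Rust.")
```

The stored `memories` string is then prepended to the next response prompt, so the per-user context stays bounded regardless of conversation length.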

[1] https://Graydient.ai

gkorland 2 hours ago

If it's your personal assistant and has been helping you for months, that means it will pretty quickly start forgetting the details and only have a vague view of you and your preferences. So instead of being your personal assistant, it practically clusters your personality and gives you generic help with no reliance on real historical data.

codekisser 3 hours ago

I have already tried what you're doing, and it didn't perform well enough for me. I've been developing this project for two years now; its memory isn't going to fit in a single prompt.

I imagine that your AI chatbots aren't as cheap or performant as they could be, given your potentially enormous prompts. Technical details aside, just like when talking to real people, it feels nice when they recall minor details you mentioned a long time ago.