Comment by dvt
Yes, but this should be trivially doable with an internal `MEMORY` tool the LLM calls. I know the context can't grow infinitely, but that shouldn't prevent filling it with relevant info when topic A comes up (even a lazy RAG approach should work — see the sketch below).
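For concreteness, here's a rough sketch of what I mean in plain Python. The names (`MemoryStore`, `save`, `recall`) are made up, not any framework's API, and toy word-overlap scoring stands in for embedding search; a real setup would wire these into your LLM's tool-calling interface.

```python
# Hypothetical MEMORY tool: the model calls save() to persist notes and
# recall() to lazily pull only the relevant ones back into context.
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    notes: list[str] = field(default_factory=list)

    def save(self, note: str) -> None:
        """Tool call: persist a fact the model wants to remember."""
        self.notes.append(note)

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Tool call: lazy RAG -- rank stored notes by word overlap
        with the query and return the top k for context injection."""
        q = set(query.lower().split())
        scored = sorted(
            self.notes,
            key=lambda n: len(q & set(n.lower().split())),
            reverse=True,
        )
        return scored[:k]


if __name__ == "__main__":
    mem = MemoryStore()
    mem.save("User prefers concise answers about topic A.")
    mem.save("User's project uses PostgreSQL 16.")
    # When the conversation turns to topic A, only the relevant notes
    # get injected back into the context window:
    print(mem.recall("tell me more about topic A"))
```

The point is that the context window only ever holds the top-k recalled notes, not the whole memory store, so memory can grow without the context growing with it.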
You're essentially asking for a feature like this; future advances will likely make it possible:
https://youtu.be/ZUZT4x-detM