Comment by bronco21016 6 hours ago
Might be an interesting problem for understanding how well various models recall prior tokens within the context window. I'm sure they could list animals until their window is full, but what I'm not sure of is how much of the window they could fill without repeating themselves.
Even more interesting is whether a reasoning LLM would come up with tricks to mitigate its own known limits - like listing animals in alphabetical order, or launching a shell/interpreter that maintains a list of previous answers and checks each new answer against it.
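The shell/interpreter trick could be as simple as a set-based uniqueness check. A minimal sketch of what such a tool call might look like (all names here are my own invention, not from any actual harness):

```python
# Hypothetical dedup helper: the model's tool harness keeps a running
# set of prior answers and rejects any repeat before it is emitted.
seen = set()

def accept(animal: str) -> bool:
    """Record the animal and return True if it hasn't been listed yet."""
    key = animal.strip().lower()
    if key in seen:
        return False
    seen.add(key)
    return True

print(accept("Aardvark"))  # True  (first mention)
print(accept("aardvark"))  # False (case-insensitive repeat)
```

Normalizing to lowercase catches trivial repeats, though near-duplicates like "dog" vs. "domestic dog" would still need fuzzier matching.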