Comment by timr
There’s nothing particularly wrong with the article - it’s a superficial summary of stuff that has historically happened in the world of LLM context windows.
The problem is - and it’s a problem common to AI right now - that you can’t generalize anything from it. The next thing that drives LLMs forward could be an extension of what you read about here, or it could be something else entirely. There are a million monkeys tapping on keyboards, and the hope is that one of them taps out Shakespeare’s brain.
Reply to timr
I don't really understand this line of criticism in this context.
What would "generalizing" the information in this article mean? I think the author does a good job of contextualizing most of the techniques under the general umbrella of in-context learning. What would it mean to generalize further beyond that?