Comment by nopinsight 4 days ago
In the words of one of the paper's authors:
"What is the performance limit when scaling LLM inference? Sky's the limit.
We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed. Remarkably, constant depth is sufficient.
http://arxiv.org/abs/2402.12875 (ICLR 2024)"
Is this the infinite monkey Shakespeare trope?