darkwater 6 hours ago

Yes but... aren't human researchers doing the same? They are wrong most of the time, and they try again and verify their work again until they find something that actually works. What I mean is that this "in hindsight" test would itself be biased by hindsight: since we already know the answer, we would dismiss the LLM's output as randomly generated. But "connecting the dots" is basically doing a lot of trial and error in your mind, and emitting only the results that make at least some kind of sense to us.