Comment by leshokunin 4 days ago
Could be. It would make sense: there are only so many logical next words / concepts after an idea. It’s not like language keeps inventing new logic at a rate we can’t keep up with.
Also, new human knowledge probably goes only marginally beyond what’s derivable from past knowledge, so we’re unlikely to see a vast difference between our knowledge creation and what a system that predicts the next logical thing does.
That’s not a bad thing. We essentially now have indexed logic at scale.
> It’s not like language keeps inventing new logic at a rate we can’t keep up with.
Maybe it does. Maybe, to a smart enough model, given its training on human knowledge so far, the next logical thing after "Sure, here's a technically and economically feasible cure for disease X" is in fact such a cure, or at least useful steps towards it.
I'm exaggerating, but I think the idea may hold true. It might be too early to tell definitively one way or the other.