Comment by narenm16
i agree. it feels like scaling up these large models is such an inefficient route that it's already prompting new ideas (test-time compute, etc.).
we'll likely reach a point where it's infeasible for deep learning alone to fully capture human-level reasoning, and we'll need discoveries from neuroscience to keep making progress. altman seems to be hyping up "bigger is better," not just for model parameters but for openai's valuation.