Comment by verdverm
They do not change it. From what I have seen, o3 is more hype and marketing than a meaningful step toward models that can exhibit real creativity and reasoning as humans perform it (rather than as humans perceive it, which is the root of the hype).
For example, a small child can be told "get in the car" and will understand, navigate, open the door, and climb in, all with remarkably little energy usage (roughly the amount in a single potato chip/crisp).
Now consider what I have been working on recently: (1) evaluating secops tools from both a technical and business perspective, and (2) prototyping and writing an RFC for the next version of our DX at the org. These models are very far from such capabilities because the work involves so many competing incentives and trade-offs, and requires not just the context of the current state of the code but also its history and vision. Crafting that vision is especially beyond what a foundation in transformers can offer; they are, in essence, an averaging and sequence-prediction algorithm.
These tools are useful, and even provide an ROI, but they are nowhere close to what I would call intelligent.
Would love to know if you know of any other papers like:
Faith and Fate: Limits of Transformers on Compositionality https://arxiv.org/abs/2305.18654
Maybe the analogy is something like gold mining. We could pretend that the machines that mine gold are actually creating gold, as if the entire gold-mining sector were instead a discovery of alchemy.
Maybe the way alchemy eventually led to chemistry is the analogy that applies?
I don't even know if that is right though.
The intelligence is in the training data; the model is merely extracting it.
We can't forget Feynman's point here: we aren't going to make a robot cheetah that runs fast; we will make a machine that uses wheels. Viewing things through the lens of a cheetah is a category error.
While I agree with you completely, we might both very well be utterly wrong: a category error about what intelligence "is".