Comment by esperent
> You may or may not be right, but your arguments sound like echoes of what software developers were saying four or five years ago. And four or five years ago, they were right.
> Don't dismiss an AI tool just because the first iterations aren't useful; it'll be iterated on faster than you can believe possible.
I've created (small, toy) transformers and also modeled injection molded parts in Solidworks.
There is a really big difference. It's obvious how source code breaks down into tokens that an attention mechanism can work over, which is what gives transformers parallelized training (versus RNNs, the prior way of relating tokens) along with a much better ability to maintain coherence.
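To make that concrete, here's a minimal sketch of single-head attention over code tokens (plain numpy; the token list, dimensions, and random weights are invented for illustration, not taken from any real model):

    # Source code flattens naturally into a token sequence, and a single
    # matrix product relates every token to every other token in parallel,
    # something an RNN could only do with a sequential loop over positions.
    import numpy as np

    tokens = ["def", "area", "(", "r", ")", ":", "return", "pi", "*", "r", "**", "2"]

    rng = np.random.default_rng(0)
    d = 16                                   # embedding size (arbitrary)
    X = rng.normal(size=(len(tokens), d))    # stand-in token embeddings

    # One attention head: project to queries/keys/values, score all pairs.
    Q, K, V = (X @ rng.normal(size=(d, d)) for _ in range(3))
    scores = Q @ K.T / np.sqrt(d)            # (12, 12) token-pair affinities
    scores -= scores.max(axis=-1, keepdims=True)   # softmax numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    out = weights @ V                        # contextualized tokens, all at once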
I don't see the parallel with B-rep (boundary representation). What are the tokens here? That's a fundamental architectural question.
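For contrast with code's flat token stream, here's a toy sketch of what a B-rep body looks like (the class names are mine for illustration, not any real kernel's API). The topology is a graph with shared references to exact geometry, not a sequence:

    # Toy B-rep sketch: faces bounded by edges, edges shared between faces,
    # each element referencing exact analytic geometry. Nothing here has the
    # natural linear order that makes source code easy to tokenize.
    from dataclasses import dataclass, field

    @dataclass
    class Vertex:
        xyz: tuple[float, float, float]

    @dataclass
    class Edge:
        start: Vertex
        end: Vertex
        curve: str                 # e.g. "circular_arc(r=5.0)": exact, not faceted

    @dataclass
    class Face:
        surface: str               # e.g. "plane", "cylinder(r=5.0)"
        bounding_edges: list[Edge] = field(default_factory=list)

    # A cylinder's side face and its end cap share the same circular edge:
    v = Vertex((5.0, 0.0, 0.0))
    rim = Edge(v, v, curve="circular_arc(r=5.0)")   # closed edge: start == end
    cap = Face(surface="plane", bounding_edges=[rim])
    side = Face(surface="cylinder(r=5.0)", bounding_edges=[rim])
    # `rim` belongs to both faces, so any flattening into a token sequence
    # must somehow encode that shared connectivity rather than just order.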
But unlike four or five years ago in programming, when transformers had made the attention mechanism clear and the answer was, basically, just "scale it up", here we don't even really know where to begin. So assuming some magic is going to happen is optimistic.
It'd be exciting if we could figure it out. Maybe the answer is that we do away with B-rep and all tree-based systems entirely (though I'm unclear how we'd then maintain the mathematical precision of B-rep, especially with curves, which machining requires: your machinist is going to throw you out if you start handing them meshes, at least for precision work involving anything with a radius).
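On that precision point, a quick back-of-envelope (the radius and tolerance are mine, purely illustrative) for how finely a mesh has to facet a curve before the error drops under a typical machining tolerance:

    # How many facets does a mesh need on a 10 mm radius before the chord
    # error is below tolerance? The max deviation of a chord from its arc
    # (the sagitta) is r * (1 - cos(theta / 2)).
    import math

    r = 10.0       # mm, true radius (illustrative)
    tol = 0.005    # mm, a fairly ordinary machining tolerance (illustrative)

    for n in (16, 64, 256, 1024):
        theta = 2 * math.pi / n                  # angle subtended per facet
        sagitta = r * (1 - math.cos(theta / 2))  # worst-case deviation from arc
        verdict = "within" if sagitta <= tol else "exceeds"
        print(f"{n:5d} facets -> {sagitta:.6f} mm deviation ({verdict} {tol} mm)")

    # A B-rep stores the arc as r = 10.0 exactly; the mesh only ever
    # approximates it, no matter how many facets you throw at it.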