Comment by vessenes
I’d like to courteously disagree. I think existing models and existing tools are good enough to at least bootstrap this.
I’d propose the following architecture:
Step 1: Microsoft Phi style - read code and write specifications using a frontier model. You could use an ensemble here to nitpick the spec; it’s only going to get written once. We also have, of course, many, many RFCs, plus codebases that conform to them; where they don’t, we have an existing repository of bug reports, patches, forum complaints, etc.
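Roughly what I have in mind for Step 1, as a minimal sketch. `call_model` is just a placeholder for whichever frontier-model API you use, and the model names and prompts are illustrative, not specific recommendations:

```python
# Step 1 sketch: turn source files into a draft spec, then have an ensemble
# of reviewer models nitpick it. call_model is a placeholder for whatever
# provider API you actually wire up.
from pathlib import Path

def call_model(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its text response."""
    raise NotImplementedError("wire this to your provider's API")

def draft_spec(source_dir: str, writer_model: str = "frontier-writer") -> str:
    # Concatenate the source tree (C files here, purely for illustration).
    code = "\n\n".join(
        f"// {p}\n{p.read_text()}" for p in Path(source_dir).rglob("*.c")
    )
    prompt = (
        "Read the following code and write a precise behavioral specification "
        "(inputs, outputs, invariants, error handling):\n\n" + code
    )
    return call_model(writer_model, prompt)

def nitpick_spec(spec: str, reviewer_models: list[str]) -> list[str]:
    """Each reviewer model critiques the draft; critiques feed a revision pass."""
    return [
        call_model(m, "List concrete errors or omissions in this spec:\n\n" + spec)
        for m in reviewer_models
    ]
```

Since the spec only gets written once, spending an ensemble’s worth of inference on the nitpick pass is cheap relative to everything downstream.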
Steps 2-4: implement multilayer evaluation: does it compile? Does an existing model think the code complies with the spec on inspection? When it’s run on QEMU, are the key evals the same as for the original software?
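Something like the following sketch for that gauntlet, reusing the `call_model` placeholder above. The compiler invocation, the `qemu-x86_64` user-mode run, and the eval inputs are all illustrative assumptions:

```python
# Steps 2-4 sketch: compile the candidate, ask a judge model whether it
# matches the spec, then diff observable behavior against the original
# binary under QEMU user-mode emulation.
import subprocess

def compiles(src: str, out: str = "candidate") -> bool:
    # Layer 1: does it even build?
    return subprocess.run(["cc", src, "-o", out]).returncode == 0

def model_says_compliant(spec: str, src_path: str, judge_model: str) -> bool:
    # Layer 2: does an existing model think it complies with the spec?
    verdict = call_model(
        judge_model,
        f"Spec:\n{spec}\n\nCode:\n{open(src_path).read()}\n\n"
        "Does the code comply with the spec? Answer YES or NO.",
    )
    return verdict.strip().upper().startswith("YES")

def behaves_like_original(candidate: str, original: str, eval_inputs: list[str]) -> bool:
    # Layer 3: run both binaries on the same inputs and compare stdout.
    def run(binary: str, stdin: str) -> str:
        return subprocess.run(
            ["qemu-x86_64", binary], input=stdin, capture_output=True, text=True
        ).stdout
    return all(run(candidate, i) == run(original, i) for i in eval_inputs)
```

Each layer is cheap to run and only the candidates that pass all three count as clean training signal.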
I’d argue most of steps 2-4 are automatable with existing tooling, and together they provide a framework that is, if not cheap, at least achievable. I’m also sure someone could improve this plan with a few more minutes of thought.
To me the interesting question is: will this add capabilities at current model sizes? My prior is yes, in that the current behemoth-size models feel only incrementally better than distills one-tenth their size. I interpret that to mean we haven’t gotten the most out of these larger scales. I will note Dario disagrees on this - he’s publicly said we need at least 10x more scale than we have now.