Comment by htrp 12 hours ago

It wasn't economical to deploy, but I expect it wasn't wasted; expect the OpenAI team to pick that back up at some point.

mips_avatar 12 hours ago

The scoop Dylan Patel got was that partway through the GPT-4.5 pretraining run the results were very good, but they leveled off, and they ended up with a huge base model that really wasn't any better on their evals.