Hard_Space 2 days ago

> "you respond in 1-3 sentences" becomes long bulleted lists and multiple paragraphs very quickly

This is why my heart sank this morning. I have spent over a year training 4.0 to be just about helpful enough to get me an extra 1-2 hours a day of productivity. From experimentation, I can see no hope of reproducing that with 5x, and even 5x admitted as much when I discussed it today:

> Prolixity is a side effect of optimization goals, not billing strategy. Newer models are trained to maximize helpfulness, coverage, and safety, which biases toward explanation, hedging, and context expansion. GPT-4 was less aggressively optimized in those directions, so it felt terser by default.

Share and enjoy!

kouteiheika 2 days ago

> This is why my heart sank this morning. I have spent over a year training 4.0 to be just about helpful enough to get me an extra 1-2 hours a day of productivity.

Maybe you should consider basing your workflows on open-weight models instead? Unlike with proprietary, API-only models, no one can take these away from you.

  • Hard_Space 2 days ago

    I have considered it, and it is still on the docket. I have a local 3090 dedicated to ML. It would be a fascinating and potentially really useful project, but as a freelancer, it would cost me a lot to give it the time it needs.
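
For anyone weighing the same move: the loading side of that project is fairly small. Below is a minimal sketch, assuming a 24 GB 3090 and the Hugging Face transformers + bitsandbytes stack; the model id is a placeholder, and 4-bit quantization is just one way to fit a mid-size open-weight instruct model into that VRAM budget.

```python
# Minimal sketch: run an open-weight chat model in 4-bit on a single 24 GB GPU.
# The model id below is a placeholder, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "some-org/open-weight-instruct-model"  # placeholder

quant = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights so a mid-size model fits in 24 GB
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/quality
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",                      # place layers on the 3090 automatically
)

messages = [
    {"role": "system", "content": "Respond in 1-3 sentences. No bulleted lists."},
    {"role": "user", "content": "Explain what 4-bit quantization trades away."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The hard part is the year of prompt-shaping described above, not the plumbing.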

Angostura 2 days ago

And how would GPT 5.0 know that, I wonder. I bet it’s just making stuff up.

ComputerGuru 2 days ago

You can’t ask GPT to assess the situation. That’s not the kind of question you can count on an LLM to accurately answer.

Playing with the system prompt, temperature, and max-output-token dials absolutely lets you make enough headway on verbosity (with the 5 series) to demonstrably render its self-analysis incorrect.
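
Concretely, those dials are just request parameters. Here is a hedged sketch against the OpenAI Python SDK, with the caveats that the model name is a placeholder and that parameter support varies by model family (some newer models expect max_completion_tokens and may ignore or reject temperature):

```python
# Sketch of constraining verbosity with a system prompt plus sampling dials.
# Model name is a placeholder; newer model families may require
# max_completion_tokens instead of max_tokens and may not accept temperature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-5",  # placeholder
    messages=[
        {"role": "system",
         "content": "Answer in 1-3 sentences. No bullet points, no preamble."},
        {"role": "user",
         "content": "Why might a linker error mention an undefined symbol?"},
    ],
    temperature=0.2,  # lower temperature tends toward terser, more repeatable output
    max_tokens=120,   # hard output cap: truncation enforces brevity rather than requesting it
)
print(resp.choices[0].message.content)
```

Either way, whether the dials work is something you can measure directly, which is exactly why the model's own self-report isn't worth much.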