Comment by alvis 12 hours ago

It's interesting to see this quote: `for the bottom 10% of user turns sorted by model-generated tokens (including hidden reasoning and final output), GPT‑5-Codex uses 93.7% fewer tokens than GPT‑5`

It sounds like it handles simple tasks much more sensibly, which is impressive to me. Today's coding agents tend to pretend they're working hard by generating lots of unnecessary code. I hope it's true.
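
As a minimal sketch of how to read that statistic (the quote doesn't spell out the sampling or pairing methodology, and all numbers below are made up): sort each model's turns by total generated tokens, take the bottom decile, and compare means.

```python
# Hypothetical illustration of the quoted "bottom 10% of turns" comparison.
# The methodology and all data here are assumptions, not from the quote.
def bottom_decile_mean(token_counts: list[int]) -> float:
    """Mean token count over the cheapest 10% of turns."""
    ordered = sorted(token_counts)
    cutoff = max(1, len(ordered) // 10)  # bottom 10% of turns
    return sum(ordered[:cutoff]) / cutoff

gpt5_tokens  = [120, 95, 3400, 80, 5100, 150, 90, 70, 60, 8000]  # fake data
codex_tokens = [10, 8, 3300, 6, 5000, 12, 7, 5, 4, 7900]         # fake data

g5 = bottom_decile_mean(gpt5_tokens)
cx = bottom_decile_mean(codex_tokens)
print(f"reduction: {100 * (1 - cx / g5):.1f}%")  # prints ~93% fewer tokens
```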

bn-l 10 hours ago

This is my issue with gpt-5. If you use low or medium reasoning effort, it's garbage. If you use high, it'll think for up to five minutes on something dead simple.
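
For reference, the reasoning effort being discussed is a per-request setting. Here's a minimal sketch using the openai Python SDK's Responses API; the model identifier and parameter shape are assumptions based on OpenAI's published docs, not something confirmed in this thread.

```python
# Hedged sketch: choosing a reasoning effort level per request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",                   # assumed model identifier
    reasoning={"effort": "medium"},  # "low" | "medium" | "high"
    input="Rename this variable across the file.",
)
print(response.output_text)
```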

  • srcreigh 8 hours ago

    Can you be more specific about what type of code you're talking about, and what makes it garbage?

    I'm happy with medium reasoning. My projects have been in Go, TypeScript, React, Dockerfiles, stuff like that. The code almost always works; it's usually not "clean code" though.