Comment by jumploops
Yes, given a similarly sparse prompt, Claude Code seems to perform "better" because it eagerly does things you don't necessarily know to ask for.
GPT-5 may underwhelm with the same sparse prompt, as it seems to do exactly what's asked and nothing more.
You can still "fully vibe" with GPT-5, but the pattern works better in two steps:
1. Plan (iterate on a high-level spec/PRD, then split it into discrete actions)
2. Build (work through the plan, action by action)
Splitting the context between these two steps matters, as any LLM will perform worse as its context gets polluted.
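Roughly, the shape of it (just a sketch using the OpenAI Python client; the model name, prompts, and action-parsing are placeholders, not how any particular tool does it):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-5"  # placeholder model name

# Step 1: Plan -- iterate on a spec in its own context, end with a numbered list of actions.
plan = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "Turn the user's idea into a short PRD, "
                                       "then a numbered list of small, independent actions."},
        {"role": "user", "content": "Add CSV export to the reports page."},
    ],
).choices[0].message.content

# Naive placeholder: treat numbered lines as the actions.
actions = [line for line in plan.splitlines() if line[:1].isdigit()]

# Step 2: Build -- each action gets a fresh context: just the plan plus that one action,
# never the whole planning conversation.
for action in actions:
    result = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Implement exactly the action given. Do only what's asked."},
            {"role": "user", "content": f"Plan:\n{plan}\n\nAction to implement now:\n{action}"},
        ],
    ).choices[0].message.content
    print(result)
```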
The best of both worlds would be for the LLM to build exactly what you've asked, while also leaving comments about other things it could have done, so you can weigh those extras when you review the output.