Comment by acuozzo
You're experiencing throttling. Use the API instead and pay per token.
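For context, "pay per token" means metered API billing rather than a flat subscription. A back-of-envelope sketch of that arithmetic (the per-million-token prices below are hypothetical placeholders, not any provider's actual rates):

```python
# Rough cost check for pay-per-token API use.
# NOTE: both prices are ASSUMED placeholders; check your provider's pricing page.
PRICE_PER_MTOK_IN = 15.00   # USD per million input tokens (hypothetical)
PRICE_PER_MTOK_OUT = 75.00  # USD per million output tokens (hypothetical)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one request's cost in USD from its token counts."""
    return (input_tokens * PRICE_PER_MTOK_IN
            + output_tokens * PRICE_PER_MTOK_OUT) / 1_000_000

# e.g. a 20k-token prompt with a 4k-token reply:
print(f"${estimate_cost(20_000, 4_000):.2f}")
```

Running a quick estimate like this before a long session makes the subscription-vs-API tradeoff concrete.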
You also have to treat this as outsourcing labor to a savant with a very, very short memory, so:
1. Write every prompt like a government work contract in which you're required to select the lowest bidder: put guardrails everywhere. Keep a text editor open with your work contract, edit the goal at the bottom, then fire off your reply.
2. Instruct the model to keep a detailed log in a file and, after a context compaction, tell it to reread that log.
3. Use models from different companies to review one another's work. If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.
4. Build a mental model for which models are good at which tasks. Mine is:
4a. Mathematical Thinking (proofs, et al.): Gemini DeepThink
4b. Software Architectural Planning: GPT5-Pro (not 5.1 or 5.2)
4c. Web Search & Deep Research: Gemini 3-Pro
4d. Technical Writing: GPT-4.5
4e. Code Generation & Refactoring: Opus-4.5
4f. Image Generation: Nano Banana Pro
> You're experiencing throttling. Use the API instead and pay per token.
That was already pay-per-token.
> Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere.
That is what I was doing yesterday, and it worked fantastically. Today I do the very same thing and... nope. It can't even stick to the simplest instructions that have been perfectly fine in the past.
> If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.
As mentioned, I tried using Opus, but it didn't even get to the point of producing anything worth reviewing. I've had great luck with it before, but not today.
> Instruct the model to keep a detailed log in a file and, after a context compaction
No chance of getting anywhere close to needing compaction today. I had to abort long before that.
> Build a mental model for which models are good at which tasks.
See, like I mentioned before, I thought I had this figured out, but today it has all gone out the window.