Flagship models have rarely been cheaper, especially not on release day. There are only a few real cases of this.
Notable exceptions are DeepSeek 3.2, Opus 4.5, and GPT-3.5 Turbo.
Price drops usually take the form of flash and mini models that are really cheap and fast, like when we got o4-mini or 2.0 Flash, which was a particularly significant one.
Literally no difference in productivity from a free or <$0.50-per-million-output OpenRouter model. All these models charging $1.00+ per million output tokens are literal scams. No added value to the world.
There are many problems where the latter spins its wheels and Pro gets it in one go, for me. You need to give Pro full files as context, and you need to fit within its ~60k (I forget exactly) silent context window if using it via ChatGPT. Don't have it make edits directly; have it give the execution plan back to Codex.
Getting more expensive has been the trend for closed-weights frontier models. See Gemini 3 Pro vs 2.5 Pro, or Gemini 2.5 Flash vs 2.0 Flash. The only thing that got cheaper recently was Opus 4.5 vs Opus 4.
https://platform.openai.com/docs/pricing
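For scale, here's what per-million-token pricing actually works out to per request. The prices below are made-up placeholders for a "cheap" and a "frontier" tier, not any real model's rates; check the pricing page linked above for current numbers.

```python
def request_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# Hypothetical rates: a cheap model at $0.10 in / $0.50 out,
# a frontier model at $2.00 in / $10.00 out (placeholders, not real prices).
cheap = request_cost(50_000, 2_000, 0.10, 0.50)      # -> 0.006
frontier = request_cost(50_000, 2_000, 2.00, 10.00)  # -> 0.12
print(f"cheap: ${cheap:.4f}, frontier: ${frontier:.4f}, "
      f"ratio: {frontier / cheap:.0f}x")
```

Whether the ~20x price gap in this made-up comparison buys 20x the value is exactly what the thread is arguing about.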