Posted by mudkipdev 12 hours ago

GPT-5.4(openai.com)
https://openai.com/index/gpt-5-4-thinking-system-card/

https://x.com/OpenAI/status/2029620619743219811

726 points | 618 comments
creatonez 5 hours ago|
> We put a particular focus on improving GPT‑5.4’s ability to create and edit spreadsheets, presentations, and documents.

Nothing infuriates me more than an LLM tool randomly deciding to create docx or xlsx files for no apparent reason. They have to use a random library to create these files, and they constantly screw up API calls and get completely distracted by the sheer size of the scripts they have to write to output a simple document. These files have terrible accessibility (all paper-like formats do) and end up with way too much formatting. Markdown was chosen as the lingua franca of LLMs for a reason; trying to force it into a totally unsuitable format isn't going to work.

swingboy 12 hours ago||
Even with the 1m context window, it looks like these models drop off significantly at about 256k. Hopefully improving that is a high priority for 2026.
nthypes 12 hours ago||
$30/M Input and $180/M Output Tokens is nuts. Ridiculously expensive for not that great a bump in intelligence compared to other models.
stri8ted 12 hours ago||
Price:
Input: $2.50 / 1M tokens
Cached input: $0.25 / 1M tokens
Output: $15.00 / 1M tokens

https://openai.com/api/pricing/

nthypes 12 hours ago|||
Gemini 3.1 Pro

$2/M Input Tokens
$15/M Output Tokens

Claude Opus 4.6

$5/M Input Tokens
$25/M Output Tokens

nthypes 12 hours ago||
Just to clarify, the pricing above is for GPT-5.4 Pro. For the standard model, here is the pricing:

$2.50/M Input Tokens
$15/M Output Tokens
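To make the rates in this subthread concrete, here's a back-of-envelope cost calculator using the per-million-token prices quoted above. The example token counts (a 50k-token prompt, 2k-token reply) are my own assumptions, purely for illustration:

```python
# Per-million-token rates quoted in the thread: (input $/M, output $/M).
RATES = {
    "GPT-5.4 Pro": (30.0, 180.0),
    "GPT-5.4": (2.5, 15.0),
    "Gemini 3.1 Pro": (2.0, 15.0),
    "Claude Opus 4.6": (5.0, 25.0),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted rates."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical request: 50k input tokens, 2k output tokens.
for model in RATES:
    print(f"{model}: ${request_cost(model, 50_000, 2_000):.2f}")
```

At those assumed token counts, the Pro tier comes out roughly 12x the cost of the standard model per request.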

energy123 12 hours ago|||
For Pro
joe_mamba 12 hours ago|||
Better tokens per dollar could be useless for comparison if the model can't solve your problem.
rvz 12 hours ago|||
You didn't realize they can increase / change prices for intelligence?

This should not be shocking.

nickthegreek 12 hours ago||
OP made no mention of not understanding cost relation to intelligence. In fact, they specifically call out the lack of value.
moralestapia 12 hours ago||
Don't use it?
bob1029 10 hours ago||
I was just testing this with my Unity automation tool, and the performance uplift from 5.2 seems substantial.
esafak 5 hours ago||
An important feature is the introduction of tool search, which provides models with a "lightweight list of available tools along with a tool search capability", thereby Making MCP Great Again!
ulfw 4 hours ago||
So desperate how they're bumping out these 'updates'
motza 9 hours ago||
No doubt this was released early to ease the bad press
ilaksh 12 hours ago||
Remember when everyone was predicting that GPT-5 would take over the planet?
dbbk 12 hours ago||
It was truly scary, according to Sam...
zeeebeee 10 hours ago||
iTs lITeRaLlY AGI bro
vicchenai 12 hours ago||
Honestly at this point I just want to know if it follows complex instructions better than 5.1. The benchmark numbers stopped meaning much to me a while ago - real usage always feels different.
freedomben 6 hours ago|
> When toggled on, /fast mode in Codex delivers up to 1.5x faster token velocity with GPT‑5.4. It’s the same model and the same intelligence, just faster.

I hate these blog posts sometimes. Surely there's got to be some tradeoff. Or have we finally arrived at the world's first "free lunch"? Otherwise why not make /fast always active with no mention and no way to turn it off?
