
Posted by meetpateltech 1 day ago

GPT-5.4 Mini and Nano (openai.com)
245 points | 143 comments
yomismoaqui 1 day ago|
Not comparing with equivalent models from Anthropic or Google, interesting...
Tiberium 1 day ago|
They did actually compare them in the tweet, see https://x.com/OpenAI/status/2033953592424731072

Direct image: https://pbs.twimg.com/media/HDoN4PhasAAinj_?format=png&name=...

derefr 1 day ago||
OpenAI don't talk about the "size" or "weights" of these models any more. Anyone have any insight into how resource-intensive these Mini/Nano-variant models actually are at this point?

I assume that OpenAI continue to use words like "mini" and "nano" in the names of these model variants, to imply that they reserve the smallest possible resource-units of their inference clusters... but, given OpenAI's scale, that may well be "one B200" at this point, rather than anything consumers (or even most companies) could afford.

I ask because I'm curious whether the economics of these models' use-cases and call frequency work out (both from the customer perspective, and from OpenAI's perspective) in favor of OpenAI actually hosting inference on these models themselves, vs. it being better if customers (esp. enterprise customers) could instead license these models to run on-prem as black-box software appliances.

But of course, that question is only interesting / only has a non-trivial answer, if these models are small enough that it's actually possible to run them on hardware that costs less to acquire than a year's querying quota for the hosted version.

technocrat8080 1 day ago|
Have they ever talked about their size or weights?
derefr 1 day ago||
They never put the parameter counts in their model names like other AI companies did, but back in the GPT-3 era (i.e. before they had PR people intermediating all their comms channels), OpenAI engineers would disclose this kind of data in their whitepapers / system cards.

IIRC, GPT-3 itself was admitted to be a 175B model, and its reduced variants were disclosed to have parameter-counts like 1.3B, 6.7B, 13B, etc.

technocrat8080 1 day ago||
Wow, would love to see a source for this.
morpheos137 1 day ago||
I switched to Claude when I found ChatGPT would argue with just about anything I said, even when it was wrong. They have over-optimised against sycophancy. I want a model that simulates critical thinking, not one that repeats half-baked, often incomplete dogmas. The GPT-5.x range is extraordinarily powerful but also extraordinarily frustrating to use for anything creative or productive that is original, in my opinion. Claude is basically able to think critically while being neither sycophantic nor argumentative most of the time, in my opinion, with appropriate user prompting. Recent GPTs seem to fight me every step of the way when not doing boilerplate. I don't want to waste my time fighting a tool.
casey2 1 day ago||
I googled all the testimonial names and they are all LinkedIn mouthpieces.
reconnecting 1 day ago||
All three ChatGPT models (Instant, Thinking, and Pro) have a new knowledge cutoff of August 2025.

Seriously?

dpoloncsak 1 day ago||
Do you find the results vary based on whether it uses RAG to hit the internet vs the data being in the weights itself? I'm not sure I've really noticed a difference, but I don't often prompt about current events or anything.
reconnecting 1 day ago||
I noticed that many recent technologies are not familiar to LLMs because of the knowledge cutoff, and thus might not appear in recommendations even if they better match the request.
dpoloncsak 1 day ago||
Oh that's a good point, yeah.

If I told it I'm shopping for a budget-level Mac, it may not recommend the Neo. I'm sure software only moves faster, too; especially as more code is 'written' blindly, new stacks may never see adoption.

zild3d 1 day ago||
What's surprising about that? Most of the minor version updates from all the labs are post-training updates that don't change the knowledge cutoff.
reconnecting 1 day ago||
Thanks for letting me know, I will be waiting for the major update.
F7F7F7 1 day ago||
It's been like this since GPT-3.5. This is not a limitation and is generally considered a natural outcome of the process.

So there's no major update in the sense that you might be thinking. Most of the time there's not even an announcement when/if training cutoffs are updated. It's just another byline.

A 6 month lag seems to be the standard across the frontier models.

reconnecting 1 day ago||
I've actually started worrying that the amount of false data produced by LLMs on the public internet might provoke a situation where the knowledge cutoff becomes permanently (and silently) frozen. Like we can't trust data after 2025 because it will poison training data at scale, and models will only cover major events without capturing the finer details.
gwern 1 day ago||
I agree. That's why you should write as much as you can now, if you want to get it into the LLMs (https://gwern.net/blog/2024/writing-online). You never know when the window will slam shut and LLM training goes 'hermetic' as they focus on 'civilization in a datacenter' where only extremely vetted whitelisted data gets included in the 'seed' and everything is reconstructed from scratch for the training value & safety.
varispeed 1 day ago||
I stopped paying attention to GPT-5.x releases, they seem to have been severely dumbed down.
miltonlost 1 day ago||
[flagged]
system2 1 day ago|
I am feeling the version fatigue. I cannot deal with their incremental bs versions.