
Posted by atgctg 12/11/2025

GPT-5.2 (openai.com)
https://platform.openai.com/docs/guides/latest-model

System card: https://cdn.openai.com/pdf/3a4153c8-c748-4b71-8e31-aecbde944...

1195 points | 1083 comments
whereistejas 12/11/2025|
Did anyone notice how Cursor wasn’t an early tester? I wonder why…
rishabhaiover 12/12/2025||
After I saw Opus 4.5 search through Zig's std io because it wasn't aware of a breaking change in the recent release, I fell in love with claude-code, and I don't see a strong enough reason to switch to codex at the moment.
jasonthorsness 12/11/2025||
Does anyone have it yet in ChatGPT? I'm still on 5.1 :(.
FergusArgyll 12/11/2025||
> We deploy GPT‑5.2 gradually to keep ChatGPT as smooth and reliable as we can; if you don’t see it at first, please try again later.
mudkipdev 12/11/2025|||
No, but it's already in codex
jasonthorsness 12/12/2025||
I have it now
hbarka 12/11/2025||
A year ago Sundar Pichai declared code red; now it's Sam Altman declaring code red. How the tables have turned, and I think Google's acquisition of Windsurf and Kevin Hou correlates with their level-up.
jerrygenser 12/12/2025|
The acquisition of Noam Shazeer to supercharge their flagship Gemini model line made a bigger impact, I think.

To make the argument that it was Kevin Hou, we would need to see Antigravity, their new IDE, being the key factor. I think the crown jewels are the Gemini models.

fulafel 12/11/2025||
So GDPval is OpenAI's own benchmark. PDF link: https://arxiv.org/pdf/2510.04374
yearolinuxdsktp 12/12/2025||
Plus users are now defaulted to a faster, less deep GPT-5.2 Thinking mode called "Standard", and you now have to manually select "Extended" to get back to the previous deep-thinking level. Yet the 3K-messages-a-week quota is the same regardless of thinking level. Also, the selection does not sync to mobile (you know, just not enough RAM in computers these days to persist a setting between web and mobile).
FergusArgyll 12/11/2025||
> Additionally, on our internal benchmark of junior investment banking analyst spreadsheet modeling tasks—such as putting together a three-statement model for a Fortune 500 company with proper formatting and citations, or building a leveraged buyout model for a take-private—GPT 5.2 Thinking's average score per task is 9.3% higher than GPT‑5.1’s, rising from 59.1% to 68.4%.

Confirming prior reporting about them hiring junior analysts

EastLondonCoder 12/12/2025||
I’ve been using GPT-4o and now 5.2 pretty much daily, mostly for creative and technical work. What helped me get more out of it was to stop thinking of it as a chatbot or knowledge engine, and instead try to model how it actually works on a structural level.

The closest parallel I’ve found is Peter Gärdenfors’ work on conceptual spaces, where meaning isn’t symbolic but geometric. Fedorenko’s research on predictive sequencing in the brain fits too. In both cases, the idea is that language follows a trajectory through a shaped mental space, and that’s basically what GPT is doing. It doesn’t know anything, but it generates plausible paths through a statistical terrain built from our own language use.

So when it “hallucinates”, that’s not a bug so much as a result of the system not being grounded. It’s doing what it was designed to do: complete the next step in a pattern. Sometimes that’s wildly useful. Sometimes it’s nonsense. The trick is knowing which is which.
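
To make the "plausible paths" idea concrete, here's a toy sketch in plain Python. The probability table is invented purely for illustration (a real model learns distributions over ~100k tokens with a neural network), but the loop has the same shape: at each step, sample a likely continuation of what came before, so the output is one walk through the terrain.

    import random

    # Toy next-word distributions. The numbers are invented for illustration;
    # a real LLM learns distributions over ~100k tokens with a neural network.
    NEXT = {
        "the":  {"cat": 0.5, "dog": 0.3, "moon": 0.2},
        "cat":  {"sat": 0.6, "ran": 0.4},
        "dog":  {"ran": 0.7, "sat": 0.3},
        "moon": {"rose": 0.8, "sat": 0.2},
        "sat":  {"quietly": 1.0},
        "ran":  {"away": 1.0},
        "rose": {"slowly": 1.0},
    }

    def sample_path(start, steps, temperature=1.0):
        """Walk the terrain: repeatedly sample a plausible next step."""
        path = [start]
        for _ in range(steps):
            dist = NEXT.get(path[-1])
            if not dist:
                break
            words = list(dist)
            # Temperature reshapes the terrain: low = greedy, high = adventurous.
            weights = [p ** (1.0 / temperature) for p in dist.values()]
            path.append(random.choices(words, weights=weights)[0])
        return " ".join(path)

    print(sample_path("the", 3, temperature=0.7))  # e.g. "the cat sat quietly"
    print(sample_path("the", 3, temperature=2.0))  # flatter odds, stranger walks

Nothing in there checks whether the sentence is true; it only checks whether it's a likely continuation. That's the whole "ungrounded" point.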

What’s weird is that once you internalise this, you can work with it as a kind of improvisational system. If you stay in the loop, challenge it, steer it, it feels more like a collaborator than a tool.

That’s how I use it anyway. Not as a source of truth, but as a way of moving through ideas faster.

BrtByte 12/12/2025||
Once you drop the idea that it's a knowledge oracle and start treating it as a system that navigates a probability landscape, a lot of the confusion just evaporates
ostacke 12/12/2025||
Interesting concept with conceptual spaces, but how does that affect how you work with LLMs in practice?
EastLondonCoder 12/12/2025||
I think of it like improvising with a very skilled but slightly alien musician.

If you just hand it a chord chart, it’ll follow the structure. But if you understand the kinds of patterns it tends to favour, the statistical shapes it moves through, you can start composing with it, not just prompting it.

That’s where Gärdenfors helped me reframe things. The model isn’t retrieving facts. It’s traversing a conceptual space. Once you stop expecting grounded truth and start tracking coherence, internal consistency, narrative stability, you get a much better sense of where it’s likely to go off course.

It reminds me of salespeople who speak fluently without being aligned with the underlying subject. Everything sounds plausible, but something’s off. LLMs do that too. You can learn to spot the mismatch, but it takes practice, a bit like learning to jam. You stop reading notes and start listening for shape.
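
If you want to poke at the Gärdenfors framing yourself, the toy version is just vectors and angles. The coordinates below are hand-picked for illustration (real embeddings have hundreds of learned dimensions), but the same cosine arithmetic is what people run on actual model embeddings: meaning as nearness in a space.

    import numpy as np

    # Hand-made 3-d "conceptual space" (dimensions: furry, domestic, mechanical).
    # Coordinates are invented; real embeddings are learned, not hand-written.
    concepts = {
        "cat":     np.array([0.9, 0.8, 0.0]),
        "dog":     np.array([0.9, 0.9, 0.0]),
        "wolf":    np.array([0.9, 0.1, 0.0]),
        "toaster": np.array([0.0, 0.9, 0.9]),
    }

    def cosine(a, b):
        """Angle-based similarity: 1.0 means pointing the same way."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for name, vec in concepts.items():
        if name != "cat":
            print(f"cat vs {name}: {cosine(concepts['cat'], vec):.2f}")
    # dog lands nearest to cat, toaster furthest: meaning as geometric nearness.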

dinobones 12/11/2025||
It's becoming challenging to really evaluate models.

The amount of intelligence you can probe within a single prompt, the riddles, the puzzles: they've all been solved or are mostly trivial for reasoners.

Now you have to drive a model for a few days to get a decent understanding of how good it really is. In my experience, while Sonnet/Opus may not have always been leading on benchmarks, they have always *felt* the best to me. It's hard to put into words exactly why, but I can just feel it.

It's like the way you can just feel when someone you're having a conversation with deeply understands you, somewhat understands you, or maybe isn't understanding at all. But you don't have a quantifiable metric for this.

This is a strange, weird territory, and I don't know the path forward. We know we're definitely not at AGI.

And we know if you use these models for long-horizon tasks they fail at some point and just go off the rails.

I've tried using Codex with max reasoning for PRs and gotten laughable results too many times, yet Codex with max reasoning is apparently near-SOTA on code. And to be fair, Claude Code/Opus is sometimes equally bad at these "implement idea in big codebase, make changes across many files, still pass tests" types of tasks.

Is the solution that we start to evaluate LLMs on more long-horizon tasks? I think to some degree this was the spirit of SWE-bench Verified, right? But even that is being saturated now.

Libidinalecon 12/12/2025||
Totally agree. I just got a free trial month, I guess to try to bring me back to ChatGPT, but I don't really know what to ask it to show whether it's on par with Gemini.

I really have a sinking feeling right now about what an absolutely giant waste of capital all this is.

I am glad all the venture capital behind this subsidizes my intellectual noodlings on a supercomputer, but my god, what have we done?

This is so much fun, but after using Gemini for about 100 hours now, it doesn't feel like we are getting closer to "AGI". The first day, maybe, but not now that you see how off it can still be all the time.

ACCount37 12/11/2025||
The good old "benchmarks just keep saturating" problem.

Anthropic is genuinely one of the top companies in the field, and for a reason. Opus consistently punches above its weight, and this is only in part due to the lack of OpenAI's atrocious personality tuning.

Yes, the next stop for AI is increasing the task-length horizon and improving agentic behavior. The "raw general intelligence" component in bleeding-edge LLMs is clearly far outpacing the "executive function".

imiric 12/11/2025||
Shouldn't the next stop be improving general accuracy, which is what these tools have struggled with since their inception? How long are "AI" companies going to keep offloading the responsibility of verifying their tools' output onto the user?

Optimizing for benchmark scores, which are highly gamed to begin with, by throwing more resources at this problem is exceedingly tiring. Surely they must've noticed the performance plateau and diminishing returns of this approach by now, yet every new announcement is the same.

ACCount37 12/11/2025||
What "performance plateau"? The "plateau" disappears the moment you get harder unsaturated benchmarks.

It's getting more and more challenging to do that - just not because the models don't improve. Quite the opposite.

Framing "improve general accuracy" as "something no one is doing" is really weird too.

You need "general accuracy" for agentic behavior to work at all. If you have a simple ten-step plan, and each step has a 50% chance of an unrecoverable failure, then your plan is fucked, full stop. To advance on those benchmarks, the LLM has to fail less and recover better.
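
A quick back-of-the-envelope, with illustrative per-step rates rather than anything measured:

    # Chance a ten-step plan succeeds if every step must succeed independently.
    # Per-step rates here are illustrative, not measured from any model.
    for per_step in (0.50, 0.90, 0.99):
        print(f"per-step success {per_step:.0%} -> plan success {per_step ** 10:.1%}")
    # 50% -> ~0.1%, 90% -> ~35%, 99% -> ~90%: small reliability gains compound hard.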

Hallucination is a "solvable but very hard to solve" problem. Considerable progress is being made on it, but if there's "this one weird trick" that deletes hallucinations, we sure haven't found it yet. Humans get a body of meta-knowledge for free, which lets them dodge hallucinations decently well (not perfectly) if they want to. LLMs get pathetic crumbs of meta-knowledge and little skill in using it. Room for improvement, but not trivial to improve.

aaroninsf 12/11/2025|
As a popcorn-eating bystander, it is striking to scan the top comments and find that they alternate so dramatically in tone and conclusions.