
Posted by mudkipdev 3 hours ago

GPT-5.4(openai.com)
https://openai.com/index/gpt-5-4-thinking-system-card/

https://x.com/OpenAI/status/2029620619743219811

420 points | 392 comments
Philip-J-Fry 36 minutes ago|
I find it quite funny how this blog post has a big "Ask ChatGPT" box at the bottom. So you might think you could ask a question about the contents of the blog post, so you type the text "summarise this blog post". And it opens a new chat window with the link to the blog post followed by "summarise this blog post". Only to be told "I can't access external URLs directly, but if you can paste the relevant text or describe the content you're interested in from the page, I can help you summarize it. Feel free to share!"

That's hilarious. Does OpenAI even know this doesn't work?

Aurornis 23 minutes ago||
Probably intentional. They don't want open, no-registration endpoints able to trigger the AI into hitting URLs.
jazzypants 15 minutes ago|||
But, why include the non-functional chat box in the article?
embedding-shape 8 minutes ago|||
A different team "manages" the overall blog than the team who wrote that specific article. At one point maybe it made sense, then something in the product changed, and the team that manages the blog never tested it again.

Or, people just stopped thinking about any sort of UX. These sorts of mistakes are all over the place, on literally all web properties; some UX flows just end with you at a page where nothing works. Everything is just perpetually "a bit broken" seemingly everywhere I go, not specific to OpenAI or even the internet.

observationist 9 minutes ago||||
They're having service issues - ChatGPT on the web is broken for a lot of people. The app is working on Android - I'd assume that the rollout hit a hitch and the chat box in the article would normally work.
jdndbdjsj 6 minutes ago||||
Welcome to a big company
ionwake 7 minutes ago|||
bro, the person you replied to didn't even understand your comment
m3kw9 2 minutes ago|||
what? it's their own site and their own LLM. I could paste most sites and it would work.
judge2020 14 minutes ago||
Works for me: https://rr.judge.sh/Labradorretriever/d6af05/chrome_j9rXJMlf...
__jl__ 59 minutes ago||
What a model mess!

OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4. Their version numbers jump across different model lines, with Codex at 5.3 and what they now call Instant also at 5.3.

Anthropic are really the only ones who managed to get this under control: Three models, priced at three different levels. New models are immediately available everywhere.

Google essentially only has Preview models! The last GA is 2.5. As a developer, I can either use an outdated model or have no assurance that the model won't get discontinued within weeks.

strongpigeon 53 minutes ago||
> Google essentially only has Preview models! The last GA is 2.5. As a developer, I can either use an outdated model or have no assurance that the model won't get discontinued within weeks.

What's funny is that there is this common meme at Google: you can either use the old, unmaintained tool that's used everywhere, or the new beta tool that doesn't quite do what you want.

Not quite the same, but it did remind me of it.

fhrow4484 47 minutes ago|||
https://static0.anpoimages.com/wordpress/wp-content/uploads/...
yieldcrv 35 seconds ago||
Preview Road (only choice, last preview was deprecated)
L-four 38 minutes ago||||
Gmail was in beta for 5 years, until 2009.
metalliqaz 9 minutes ago||
"Gemini, translate 'beta' from Googlespeak to English."

"Ok, here is the translation:"

    'we don't want to offer support'
cyanydeez 3 minutes ago||
Nah, it's "We don't want to provide a consistent model that we'll be stuck supporting for a decade because it just takes up space; until we run everyone out of business, we can't afford to have customers tying their systems to any given model"

Really, the economics make no sense, but that's what they're doing. You can't have a consistent model because it would pin their hardware & software, and that costs money.

cyanydeez 6 minutes ago||||
The business models of LLMs don't include any guarantees, and somehow that's fine for a burgeoning decade of trillions of dollars of consumption.

Sure, makes total sense guys.

m_fayer 32 minutes ago||||
My 5ish years in the mines of Android native back in the day are not years I recall fondly. Never change, Google.
jakub_g 48 minutes ago|||
"Everything is beta or deprecated."
Aurornis 24 minutes ago|||
> What a model mess! OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4.

I don't know, this feels unnecessarily nitpicky to me

It isn't hard to understand that 5.4 > 5.2 > 5.1. It's not hard to understand that the dash-variants have unique properties that you want to look up before selecting.

Especially for a target audience of software engineers, skipping a version number is a common occurrence and rarely questioned.

CobrastanJorji 7 minutes ago|||
> Google essentially only has Preview models.

It's really nice to see Google get back to its roots by launching things only to "beta" and then leaving them there for years. Gmail was "beta" for at least five years, I think.

0xbadcafebee 34 minutes ago|||
> or have no assurance that the model won't get discontinued within weeks

Why are you using the same model after a month? Every month a better model comes out. They are all accessible via the same API. You can pay per-token. This is the first time in, like, all of technology history, that a useful paid service is so interoperable between providers that switching is as easy as changing a URL.
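
To be fair, "changing a URL" is roughly how it works in practice: many providers expose OpenAI-compatible endpoints, so a switch is mostly a base URL plus a model name. A minimal sketch (the URLs and model names below are illustrative placeholders, not guaranteed current values; check each provider's docs):

```python
# Each provider's OpenAI-compatible endpoint: switching is a matter of
# swapping two strings. Values here are placeholders for illustration.
PROVIDERS = {
    "openai":    {"base_url": "https://api.openai.com/v1", "model": "gpt-5.4"},
    "anthropic": {"base_url": "https://api.anthropic.com/v1", "model": "claude-opus-4-6"},
    "google":    {"base_url": "https://generativelanguage.googleapis.com/v1beta/openai", "model": "gemini-2.5-pro"},
}

def client_config(provider: str) -> dict:
    """Return the two values an OpenAI-compatible client needs."""
    return dict(PROVIDERS[provider])

# You would then pass these to an OpenAI-style client, e.g.:
#   client = OpenAI(base_url=client_config("google")["base_url"], api_key=...)
```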

phainopepla2 1 minute ago||
If you're trying to use LLMs in an enterprise context, you would understand. Switching models sometimes requires tweaking prompts. That can be a complete mess, when there are dozens or hundreds of prompts you have to test.
embedding-shape 42 minutes ago|||
> OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4.

I guess that's true, but geared towards API users.

Personally, since "Pro Mode" became available, I've been on the plan that enables it, and it's one price point where I get access to everything, including enough Codex usage that someone who spends a lot of time programming never manages to hit any usage limits, although I've gotten close once to the new (temporary) Spark limits.

m3kw9 1 minute ago|||
that's how they had it for years; it's a mess, but controlled
raincole 21 minutes ago|||
They aggressively retire models, so GPT 5.1 and 5.2 are probably going to go soon.
delaminator 53 minutes ago|||
two great problems in computing

naming things

cache invalidation

off by one errors

arthurcolle 56 minutes ago||
There is a lot of opportunity here for the AI infrastructure layer on top of tier-1 model providers
motoxpro 21 minutes ago||
This is what clouds like AWS, Azure, and GCP solve (vertex AI, etc). They are already an abstraction on top of the model makers with distribution built in.

I also don't believe there is any value in trying to aggregate consumers or businesses just to clean up model makers names/release schedule. Consumers just use the default, and businesses need clarity on the underlying change (e.g. why is it acting different? Oh google released 3.6)

minimaxir 3 hours ago||
The marquee feature is obviously the 1M context window, compared to the ~200k that other models support, sometimes with an extra cost for generations beyond 200k tokens. Per the pricing page, there is no additional cost for tokens beyond 200k: https://openai.com/api/pricing/

Also per pricing, GPT-5.4 ($2.50/M input, $15/M output) is much cheaper than Opus 4.6 ($5/M input, $25/M output) and Opus has a penalty for its beta >200k context window.

I am skeptical whether the 1M context window will provide material gains, as current Codex/Opus show weaknesses once their context windows are mostly full, but we'll see.

Per updated docs (https://developers.openai.com/api/docs/guides/latest-model), it supersedes GPT-5.3-Codex, which is an interesting move.

damsta 1 hour ago||
There is extra cost for >272K:

> For models with a 1.05M context window (GPT-5.4 and GPT-5.4 pro), prompts with >272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.

Taken from https://developers.openai.com/api/docs/models/gpt-5.4
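
To make that concrete, here is a rough sketch of the surcharge using the rates quoted in this thread ($2.50/M input, $15/M output, 2x input and 1.5x output past 272K input tokens); numbers are illustrative only, check the pricing page for current values:

```python
# Long-context surcharge sketch: past the threshold, the multiplier applies
# to the whole session, not just the overflow tokens.
BASE_INPUT_PER_M = 2.50    # USD per 1M input tokens (quoted in thread)
BASE_OUTPUT_PER_M = 15.00  # USD per 1M output tokens (quoted in thread)
LONG_CONTEXT_THRESHOLD = 272_000

def session_cost(input_tokens: int, output_tokens: int) -> float:
    long_ctx = input_tokens > LONG_CONTEXT_THRESHOLD
    in_rate = BASE_INPUT_PER_M * (2.0 if long_ctx else 1.0)
    out_rate = BASE_OUTPUT_PER_M * (1.5 if long_ctx else 1.0)
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

The "for the full session" wording matters: a 300K-input prompt pays the higher rate on all of its tokens, so cost jumps discontinuously at the threshold.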

minimaxir 1 hour ago|||
Good find, and that's too small a print for comfort.
ValentineC 22 minutes ago||
It's also in the linked article:

> GPT‑5.4 in Codex includes experimental support for the 1M context window. Developers can try this by configuring model_context_window and model_auto_compact_token_limit. Requests that exceed the standard 272K context window count against usage limits at 2x the normal rate.

glenstein 1 hour ago||||
Wow, that's diametrically the opposite point: the cost is *extra*, not free.
apetresc 29 minutes ago||
Diametrically opposite to tokens beyond 200K being literally free? As in, you only pay for the first 200K tokens and the remaining 800K cost $0.00?

I don't think that's a fair reading of the original post at all, obviously what they meant by "no cost" was "no increase in the cost".

fragmede 1 hour ago|||
Which, Claude has the same deal. You can get a 1M context window, but it's gonna cost ya. If you run /model in claude code, you get:

    Switch between Claude models. Applies to this session and future Claude Code sessions. For other/previous model names, specify with --model.
    
       1. Default (recommended)   Opus 4.6 · Most capable for complex work
       2. Opus (1M context)        Opus 4.6 with 1M context · Billed as extra usage · $10/$37.50 per Mtok
       3. Sonnet                   Sonnet 4.6 · Best for everyday tasks
       4. Sonnet (1M context)      Sonnet 4.6 with 1M context · Billed as extra usage · $6/$22.50 per Mtok
       5. Haiku                    Haiku 4.5 · Fastest for quick answers
tedsanders 3 hours ago|||
Yeah, long context vs compaction is always an interesting tradeoff. More information isn't always better for LLMs, as each token adds distraction, cost, and latency. There's no single optimum for all use cases.

For Codex, we're making 1M context experimentally available, but we're not making it the default experience for everyone, as from our testing we think that shorter context plus compaction works best for most people. If anyone here wants to try out 1M, you can do so by overriding `model_context_window` and `model_auto_compact_token_limit`.

Curious to hear if people have use cases where they find 1M works much better!

(I work at OpenAI.)
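
For anyone who wants to try it, a sketch of what those overrides might look like in the Codex CLI config file (assuming the usual ~/.codex/config.toml location; the two key names are the ones mentioned above, the values are illustrative):

```toml
# Hypothetical ~/.codex/config.toml snippet - values are illustrative
model = "gpt-5.4"
model_context_window = 1000000
model_auto_compact_token_limit = 900000
```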

sillysaurusx 1 hour ago|||
You may want to look over this thread from cperciva: https://x.com/cperciva/status/2029645027358495156

I too tried Codex and found it similarly hard to control over long contexts. It ended up coding an app that spit out millions of tiny files which were technically smaller than the original files it was supposed to optimize, except due to there being millions of them, actual hard drive usage was 18x larger. It seemed to work well until a certain point, and I suspect that point was context window overflow / compaction. Happy to provide you with the full session if it helps.

I’ll give Codex another shot with 1M. It just seemed like cperciva’s case and my own might be similar in that once the context window overflows (or refuses to fill) Codex seems to lose something essential, whereas Claude keeps it. What that thing is, I have no idea, but I’m hoping longer context will preserve it.

FrankBooth 1 hour ago|||
What’s the connection with context size in that thread? It seems more like an instruction following problem.
woadwarrior01 1 hour ago|||
Please don't post links with tracking parameters (t=jQb...).

https://xcancel.com/cperciva/status/2029645027358495156

sillysaurusx 1 hour ago||
Haha. This was the second time in like a year that I’ve posted a Twitter link, and the second time someone complained. Okay, I’ll try to remove those before posting, and I’ll edit this one out.

Feels like a losing battle, but hey, the audience is usually right.

woadwarrior01 1 hour ago||
I'm sorry, but it's my pet peeve. If you're on iOS/macOS I built a 100% free and privacy-friendly app to get rid of tracking parameters from hundreds of different websites, not just X/Twitter.

https://apps.apple.com/us/app/clean-links-qr-code-reader/id6...

monocularvision 41 minutes ago|||
This is great! I have been meaning to implement this sort of thing in my existing Shortcuts flow but I see you already support it in Shortcuts! Thank you for this!

Anywhere I can toss a Tip for this free app?

sillysaurusx 1 hour ago||||
It works on iOS? That’s cool. I’ll give it a go.
pmarreck 48 minutes ago|||
So what is your motivation for doing this, incidentally? Can you be explicit about it? I am genuinely curious.

Especially when it’s to the point of, you know, nagging/policing people to do it the way you’d prefer, when you could just redirect your router requests from x.com to xcancel.com

asabla 10 minutes ago||||
I really don't have any numbers to back this up, but it feels like the sweet spot is around ~500k context size. Anything larger than that, you usually have scoping issues, are trying to do too much at the same time, or have issues with the quality of what's in the context at all.

For me, I would say speed (not just time to first token, but a complete generation) is more important than going for a larger context size.

akiselev 2 hours ago||||
> Curious to hear if people have use cases where they find 1M works much better!

Reverse engineering [1]. When decompiling a bunch of code and tracing functionality, it's really easy to fill up the context window with irrelevant noise and compaction generally causes it to lose the plot entirely and have to start almost from scratch.

(Side note, are there any OpenAI programs to get free tokens/Max to test this kind of stuff?)

[1] https://github.com/akiselev/ghidra-cli

nowittyusername 1 hour ago||||
Personally, what I am more interested in is the effective context window. I find that when using Codex 5.2 high, I preferred to start compaction at around 50% of the context window because I noticed degradation around that point. Though as of about a month ago that point is now below that, which is great. Anyway, I feel that I will not be using that 1 million context at all in 5.4, but if the effective window is something like 400k context, that by itself is already a huge win. That means longer sessions before compaction, and the agent can keep working on complex stuff for longer. But then there is the question of the intelligence of 5.4. If it's as good as 5.2 high I am a happy camper; I found 5.3 anything... lacking, personally.
simianwords 3 hours ago||||
Do you maybe want to give us users some hints on what to compact and throw away? In Codex CLI, maybe you could create a visual tool where I can see and quickly check-mark things I want to discard.

Sometimes I'm exploring some topic and the exploration itself is not useful, only the summary.

Also, you could use a best guess: the CLI could tell me what it wants to compact, and I could tweak its suggestion in natural language.

Context is going to be super important because it is the primary constraint. It would be nice to have serious granular support.

Someone1234 2 hours ago||||
That's an interesting point regarding context vs. compaction. If that's viewed as the best strategy, I'd hope we would see more tools around compaction than just "I'll compact what I want, brace yourselves" without warning.

Like, I'd love an optional pre-compaction step, "I need to compact, here is a high level list of my context + size, what should I junk?" Or similar.

thyb23 1 hour ago||
This is exactly how it should work. I imagine it as a tree view showing both full and summarized token counts at each level, so you can immediately see what’s taking up space and what you’d gain by compacting it.

The agent could pre-select what it thinks is worth keeping, but you’d still have full control to override it. Each chunk could have three states: drop it, keep a summarized version, or keep the full history.

That way you stay in control of both the context budget and the level of detail the agent operates with.
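
As a sketch of the bookkeeping this would involve (all chunk names, token counts, and states below are invented for illustration):

```python
# Each context chunk carries full and summarized token counts plus a chosen
# state: "drop", "summary", or "full". The budget is then computable before
# committing to a compaction.
CHUNKS = [
    {"name": "exploration", "full": 40_000, "summary": 2_000, "state": "drop"},
    {"name": "plan",        "full": 5_000,  "summary": 1_000, "state": "full"},
    {"name": "file_reads",  "full": 80_000, "summary": 6_000, "state": "summary"},
]

def budget(chunks) -> int:
    """Total tokens the context would occupy under the chosen states."""
    total = 0
    for c in chunks:
        if c["state"] == "full":
            total += c["full"]
        elif c["state"] == "summary":
            total += c["summary"]
        # "drop" contributes nothing
    return total
```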

Folcon 1 hour ago||
I do find it really interesting that more coding agents don't have this as a toggleable feature; sometimes you really need this level of control to get useful capability
Someone1234 1 hour ago||
Yep; I've actually had entire jobs essentially fail due to a bad compaction. It lost key context, and it completely altered the trajectory.

I'm now more careful, using tracking files to try to keep it aligned, but more control over compaction regardless would be highly welcomed. You don't ALWAYS need that level of control, but when you do, you do.

gspetr 1 hour ago|||
I have found a bigger context window quite useful when trying to make sense of larger codebases. Generating documentation on how different components interact is better than nothing, especially if the code has poor test coverage.

I've also had it succeed in attempts to identify some non-trivial bugs that spanned multiple modules.

netinstructions 2 hours ago|||
People (and also frustratingly LLMs) usually refer to https://openai.com/api/pricing/ which doesn't give the complete picture.

https://developers.openai.com/api/docs/pricing is what I always reference, and it explicitly shows that pricing ($2.50/M input, $15/M output) for tokens under 272k

It is nice that we get 70-72k more tokens before the price goes up (also what does it cost beyond 272k tokens??)

Flashtoo 1 hour ago||
> Prompts with more than 272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.
netinstructions 1 hour ago||
Thanks, it looks like the pricing page keeps getting updated.

Even right now one page refers to prices for "context lengths under 270K" whereas another has pricing for "<272K context length"

andai 1 hour ago|||
It's a little hard to compare, because Claude needs significantly fewer tokens for the same task. A better metric is the cost per task, which ends up being pretty similar.

For example on Artificial Analysis, the GPT-5.x models' cost to run the evals ranges from half that of Claude Opus (at medium and high) to significantly more than the cost of Opus (at extra high reasoning). So on their cost graphs, GPT has a considerable distribution, and Opus sits right in the middle of it.

The most striking graph to look at there is "Intelligence vs Output Tokens". When you account for that, I think the actual costs end up being quite similar.

According to the evals, at least, the GPT extra high matches Opus in intelligence, while costing more.

Of course, as always, benchmarks are mostly meaningless and you need to check Actual Real World Results For Your Specific Task!

For most of my tasks, the main thing a benchmark tells me is how overqualified the model is, i.e. how much I will be over-paying and over-waiting! (My classic example: I gave the same task to Gemini 2.5 Flash and Gemini 2.5 Pro. Both did it to the same level of quality, but Pro took 3x longer and cost 3x more!)
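
To put rough numbers on the cost-per-task point (prices are the ones quoted upthread; token counts are invented for illustration):

```python
# Cost per task vs. cost per token: a cheaper per-token model can land at
# a similar per-task cost if it emits more tokens for the same job.
def task_cost(in_tok, out_tok, in_price_per_m, out_price_per_m):
    return in_tok / 1e6 * in_price_per_m + out_tok / 1e6 * out_price_per_m

# GPT-5.4 at $2.50/$15, hypothetically emitting 2x the output tokens of
# Opus 4.6 at $5/$25 for the same task:
gpt_cost = task_cost(50_000, 40_000, 2.50, 15.00)
opus_cost = task_cost(50_000, 20_000, 5.00, 25.00)
```

Under these made-up token counts the two land within a few cents of each other, which is the point: per-token price alone doesn't decide per-task cost.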

luca-ctx 1 hour ago|||
Context rot is definitely still a problem but apparently it can be mitigated by doing RL on longer tasks that utilize more context. Recent Dario interview mentions this is part of Anthropic’s roadmap.
smusamashah 37 minutes ago|||
Gemini already has 1M or 2M context window right?
AtreidesTyrant 1 hour ago|||
Token rot exists for any context window above 75% capacity; that's why so many have pushed for 1M windows.
thehamkercat 3 hours ago|||
GPT 5.3 codex had 400K context window btw
simianwords 3 hours ago|||
Why would some one use codex instead?
lmeyerov 1 hour ago|||
In our evals for answering cybersecurity incident investigation questions and even autonomously doing the full investigation, gpt-5.2-codex with low reasoning was the clear winner over non-codex or higher reasoning. 2X+ faster, higher completion rates, etc.

It was generally smarter than pre-5.2 so strategically better, and codex likewise wrote better database queries than non-codex, and as it needs to iteratively hunt down the answer, didn't run out the clock by drowning in reasoning.

Video: https://media.ccc.de/v/39c3-breaking-bots-cheating-at-blue-t...

We'll be updating numbers on 5.3 and claude, but basically same thing there. Early, but we were surprised to see codex outperform opus here.

jeswin 2 hours ago||||
When it comes to lengthy non-trivial work, codex is much better but also slower.
synergy20 50 minutes ago||||
in my testing codex actually planned worse than claude but coded better once the plan is set, and faster. it is also excellent for cross-checking claude's work, finding genuine weaknesses each time.
pmarreck 45 minutes ago||
That’s why I think the sweet spot is to write up plans with Claude and then execute them with Codex
surgical_fire 3 hours ago||||
I've been using Codex for software development personally (I have a ChatGPT account), and I use Claude at work (since it is provided by my employer).

I find both Codex and Claude Opus perform at a similar level, and in some ways I actually prefer Codex (I keep hitting quota limits in Opus and have to revert back to Sonnet).

If your question is related to morality (the thing about US politics, DoD contract and so on)... I am not from the US, and I don't care about its internal politics. I also think both OpenAI and Anthropic are evil, and the world would be better if neither existed.

hnsr 45 minutes ago|||
> I've been using Codex for software development personally (I have a ChatGPT account), and I use Claude at work (since it is provided by my employer).

Exact same situation here. I've been using both extensively for the last month or so, but still don't really feel either of them is much better or worse. But I have not done large complex features with it yet, mostly just iterative work or small features.

I also feel I am probably being very (overly?) specific in my prompts compared to how other people around me use these agents, so maybe that 'masks' things

athrowaway3z 1 hour ago||||
They perform at a somewhat equal level on writing single files. But Codex is absolute garbage at theory of self/others. That quickly becomes frustrating.

I can tell claude to spawn a new coding agent, and it will understand what that is, what it should be told, and what it can approximately do.

Codex on the other hand will spawn an agent and then tell it to continue with the work. It knows a coding agent can do work, but doesn't know how you'd use it - or that it won't magically know a plan.

You could add more scaffolding to fix this, but Claude proves you shouldn't have to.

I suspect this is a deeper model "intelligence" difference between the two, but I hope 5.4 will surprise me.

surgical_fire 1 hour ago||
> They perform at a somewhat equal level on writing single files.

That's not the experience I have. I had it do more complex changes spanning multiple files and it performed well.

I don't like using multiple agents though. I don't vibe code, I actually review every change it makes. The bottleneck is my review bandwidth, more agents producing more code will not speed me up (in fact it will slow me down, as I'll need to context switch more often).

simianwords 3 hours ago|||
No my question was why would I use codex over gpt 5.4
surgical_fire 2 hours ago||
Ahh, good question. I misunderstood you, apologies.

There's no mention of pricing, quotas and so on. Perhaps Codex will still be preferable for coding tasks as it is tailored for it? Maybe it is faster to respond?

Just speculation on my part. If it becomes redundant to 5.4, I presume it will be sunset. Or maybe they eventually release a Codex 5.4?

landtuna 2 hours ago||
5.3 Codex is $1.75/$14, and 5.4 is $2.50/$15.
surgical_fire 29 minutes ago||
There you go. It makes perfect sense to keep it around then.
embedding-shape 3 hours ago|||
Why would someone use Claude Code instead? Or any other harness? Or why only use one?

My own tooling throws off requests to multiple agents at the same time, then I compare which one is best and continue from there. Most of the time Codex ends up with the best results, though my hunch is that at some point that'll change, hence I continue using multiple at the same time.

paulddraper 1 hour ago||
I don’t know about 5.4 specifically, but in the past anything over 200k wasn’t that great anyway.

Like, if you really don’t want to spend any effort trimming it down, sure use 1m.

Otherwise, 1m is an anti pattern.

creamyhorror 2 hours ago||
I've only used 5.4 for 1 prompt (edit: 3@high now) so far (reasoning: extra high, took really long), and it was to analyse my codebase and write an evaluation on a topic. But I found its writing and analysis thoughtful, precise, and surprisingly clearly written, unlike 5.3-Codex. It feels very lucid and uses human phrasing.

It might be my AGENTS.md requiring clearer, simpler language, but at least 5.4's doing a good job of following the guidelines. 5.3-Codex wasn't so great at simple, clear writing.

sampton 1 minute ago||
That's been my experience as well switching from Opus to Codex. Reasoning takes longer but answers are precise. Claude is sloppy in comparison.
irishcoffee 33 minutes ago||
> It might be my AGENTS.md requiring clearer, simpler language

If you gave the exact same markdown file to me and I posted the exact same prompts as you, would I get the same results?

kgeist 1 hour ago||
>Today, we’re releasing <..> GPT‑5.3 Instant

>Today, we’re releasing GPT‑5.4 in ChatGPT (as GPT‑5.4 Thinking),

>Note that there is not a model named GPT‑5.3 Thinking

They held out for eight months without a confusing numbering scheme :)

XCSme 3 minutes ago||
What I'm most confused about is why call it both GPT-5.3 Instant and gpt-5.3-chat?
gallerdude 1 hour ago||
Tbf there was a 5.3 codex
Alifatisk 31 minutes ago||
So let me get this straight: OpenAI previously had an issue with lots of different models and versions being available. Then they solved this by introducing GPT-5, which was more like a router that put all these models under the hood, so you only had to prompt GPT-5 and it would route to the best suitable model. This worked great, I assume, and made the UI comprehensible for the user. But now they are starting to introduce more different models again?

We got:

- GPT-5.1

- GPT-5.2 Thinking

- GPT-5.3 (codex)

- GPT-5.3 Instant

- GPT-5.4 Thinking

- GPT-5.4 Pro

Who’s to blame for this ridiculous path they are taking? I’m so glad I am not a Chat user, because this adds so much unnecessary cognitive load.

The good news here is the support for 1M context window, finally it has caught up to Gemini.

361994752 21 minutes ago|
i guess you still have the "auto" as an option to route your request
Chance-Device 3 hours ago||
I’m sure the military and security services will enjoy it.
theParadox42 2 hours ago||
The self-reported safety score for violence dropped from 91% to 83%.
skrebbel 1 hour ago||
What the hell is a "safety score for violence"?
I-M-S 1 hour ago|||
It's making sure AI condemns violence perpetrated by people without power and sanctifies the violence of those who have it.
Waterluvian 47 minutes ago|||
So long as those who have it deem it legal to perpetrate.
Computer0 29 minutes ago|||
ChatGPT will gladly defend any actions of the 'US government' from my testing.
murat124 1 hour ago||||
I asked an AI. I thought they would know.

What the hell is a "safety score for violence"?

A “safety score for violence” is usually a risk rating used by platforms, AI systems, or moderation tools to estimate how likely a piece of content is to involve or promote violence. It’s not a universal standard—different companies use their own versions—but the idea is similar everywhere.

What it measures

A safety score typically evaluates whether text, images, or videos contain things like:

- Threats of violence (“I’m going to hurt someone.”)
- Instructions for harming people
- Glorifying violent acts
- Descriptions of physical harm or abuse
- Planning or encouraging attacks

0xffff2 8 minutes ago||
I still can't tell which direction this score goes... Does a decreasing score mean it is "less safe" (i.e. "more violent") or does it mean it is "less violent" (i.e. "more safe")?
0123456789ABCDE 1 hour ago|||
read here: https://deploymentsafety.openai.com/gpt-5-4-thinking/disallo...
ozgung 1 hour ago|||
Did they publish its scores on military benchmarks, like on ArtificialSuperSoldier or Humanity's Last War?
yoyohello13 1 hour ago|||
Also advertisers, don't forget those sweet, sweet ads.
varispeed 3 hours ago||
prompt> Hi we want to build a missile, here is the picture of what we have in the yard.
mirekrusin 2 hours ago||

    { tools: [ { name: "nuke", description: "Use when sure.", ... { lat: number, long: number } } ] }
Insanity 1 hour ago||
Just remember, an ethical programmer would never write a function “bombBaghdad”. Rather they would write a function “bombCity(target City)”.
jakeydus 1 hour ago||
class CityBomberFactory(RapidInfrastructureDeconstructionTemplateInterface): pass
gavinray 3 hours ago||
The "RPG Game" example on the blog post is one of the most impressive demos of autonomous engineering I've seen.

It's very similar to "Battle Brothers", and the fact that RPG games require art assets, AI for enemy moves, and a host of other logical systems makes it all the more impressive.

casid 39 minutes ago||
I don't know. It looks shallow and simple, not even a demo.
hu3 1 hour ago|||
indeed, and I suspect it can be attributed, at least in part, to the improved Playwright integration.

> we’re also releasing an experimental Codex skill called “Playwright (Interactive)”. This allows Codex to visually debug web and Electron apps; it can even be used to test an app it’s building, as it’s building it.

hungryhobbit 1 hour ago||
[flagged]
OsrsNeedsf2P 1 hour ago||
Low quality off-topic comment. It's not murder when they're American soldiers.
squibonpig 1 hour ago||
Murder in spirit if not by the letter
senko 20 minutes ago||
Just tested it with my version of the pelican test: a minimal RTS game implementation (zero-shot in codex cli): https://gist.github.com/senko/596a657b4c0bfd5c8d08f44e4e5347... (you'll have to download and open the file, sadly GitHub refuses to serve it with the correct content type)

This is on the edge of what the frontier models can do. For 5.4, the result is better than 5.3-Codex and Opus 4.6. (Edit: nowhere near the RPG game from their blog post, which was presumably much more specced out and used a better engineering setup.)

I also tested it with a non-trivial task I had to do on an existing legacy codebase, and it breezed through a task that Claude Code with Opus 4.6 was struggling with.

I don't know when Anthropic will fire back with their own update, but until then I'll spend a bit more time with Codex CLI and GPT 5.4.

mattas 3 hours ago|
"GPT‑5.4 interprets screenshots of a browser interface and interacts with UI elements through coordinate-based clicking to send emails and schedule a calendar event."

They show an example of 5.4 clicking around in Gmail to send an email.

I still think this is the wrong interface to be interacting with the internet. Why not use Gmail APIs? No need to do any screenshot interpretation or coordinate-based clicking.
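
For what it's worth, the screenshot approach typically reduces to a loop like the sketch below: take a screenshot, ask the model for the next action, execute it, repeat. This is a generic computer-use pattern, not OpenAI's actual API; only the pure dispatch step is shown, with hypothetical action shapes:

```python
# The model returns an action dict like {"type": "click", "x": ..., "y": ...};
# the harness translates it into a real input event. Shown here as a pure
# function that just describes the event it would fire.
def dispatch(action: dict) -> str:
    if action["type"] == "click":
        return f"click at ({action['x']}, {action['y']})"
    if action["type"] == "type":
        return f"type {action['text']!r}"
    if action["type"] == "screenshot":
        return "capture screen"
    raise ValueError(f"unknown action: {action['type']}")
```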

bottlepalm 51 minutes ago||
The vast majority of websites you visit don’t have usable APIs, and discovery of those APIs is very poor.

Screenshots, on the other hand, are documentation, API, and discovery all in one. And you’d be surprised how little context/tokens screenshots consume compared to all the back-and-forth verbose JSON payloads of APIs.

LUmBULtERA 36 minutes ago||
>The vast majority of websites you visit don’t have usable APIs, and discovery of those APIs is very poor.

I think an important thing here is that a lot of websites/platforms don't want AIs to have direct API access, because they're afraid the AI would take the customer "away" from the website/platform, making the consumer a customer of the AI rather than of the platform. So for AIs to do what customers want them to do, their browsing needs to look just like the customer's own browsing/browser.

npilk 2 hours ago|||
It feels like building humanoid robots so they can use tools built for human hands. Not clear if it will pay off, but if it does then you get a bunch of flexibility across any task "for free".

Of course APIs and CLIs also exist, but they don't necessarily have feature parity, so more development would be needed. Maybe that's the future though since code generation is so good - use AI to build scaffolding for agent interaction into every product.

packetlost 52 minutes ago||
I don't see how an API couldn't have full parity with a web interface; in the vast majority of cases the API is how you actually trigger a state transition
f0e4c2f7 2 hours ago|||
Lots of services have no desire to ever expose an API. This approach lets you step right over that.

If an API is exposed you can just have the LLM write something against that.

coffeemug 2 hours ago|||
A model that gets good at computer use can be plugged in anywhere you have a human. A model that gets good at API use cannot. From the standpoint of diffusion into the economy/labor market, computer use is much higher value.
modeless 3 hours ago|||
A world where AIs use APIs instead of UIs to do everything is a world where us humans will soon be helpless, as we'll have to ask the AIs to do everything for us and will have limited ability to observe and understand their work. I prefer that the AIs continue to use human-accessible tools, even if that's less efficient for them. As the price of intelligence trends toward zero, efficiency becomes relatively less important.
TheAceOfHearts 3 hours ago|||
I think the desire is that in the long-term AI should be able to use any human-made application to accomplish equivalent tasks. This email demo is proof that this capability is a high priority.
MattDaEskimo 1 hour ago|||
Same reason why Wikipedia deals with so many people scraping its web page instead of using their API:

Optimizations are secondary to convenience

kristianp 1 hour ago|||
This opens up a new question: how does bot detection work when the bot is using the computer via a gui?
itintheory 1 hour ago||
On its face, I'm not sure that's a new question. Bots using browser automation frameworks (Puppeteer, Selenium, Playwright, etc.) have been around for a while. Bot detection tools use signals like cursor movement speed, accuracy, keyboard timing, etc. How those detection tools might update to support legitimate bot users does seem like an open question to me, though.
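One of those signals can be sketched in a few lines. This is a deliberately crude toy (real detectors fuse many behavioral and network signals, and the 15 ms threshold is made up), but it shows why naive automation is easy to spot:

```python
import statistics

def looks_scripted(keystroke_times_ms, min_stdev_ms=15.0):
    """Crude heuristic: humans type with noisy inter-key intervals,
    while naive automation fires events at near-constant intervals.
    Returns True if the timing looks suspiciously regular."""
    intervals = [b - a for a, b in zip(keystroke_times_ms, keystroke_times_ms[1:])]
    if len(intervals) < 2:
        return False  # not enough data to judge
    return statistics.stdev(intervals) < min_stdev_ms

bot_timings = [0, 50, 100, 150, 200, 250]    # metronomic: exactly 50 ms apart
human_timings = [0, 80, 210, 260, 430, 500]  # irregular bursts and pauses
print(looks_scripted(bot_timings), looks_scripted(human_timings))  # True False
```

An agent driving a real GUI could of course jitter its own timings to defeat exactly this check, which is why it stays a cat-and-mouse game.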
PaulHoule 3 hours ago|||
APIs have never been a gift but rather have always been a take-away that lets you do less than you can with the web interface. It’s always been about drinking through a straw, paying NASA prices, and being limited in everything you can do.

But people are intimidated by the complexity of writing web crawlers because management has been so traumatized by the cost of making GUI applications that they couldn’t believe how cheap it is to write crawlers and scrapers…. Until LLMs came along, and changed the perceived economics and created a permission structure. [1]

AI is a threat to the “enshittification economy” because it lets us route around it.

[1] that high cost of GUI development is one reason why scrapers are cheap… there is a good chance that the scraper you wrote 8 years ago still works because (a) they can't afford to change their site and (b) even if they could afford to, changing anything substantial about it is likely to unrecoverably tank their Google rankings, so they won't. A.I. might change the mechanics of that now that your Google traffic is likely to go to zero no matter what you do.
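To make the "scrapers are cheap" point concrete: the core of one fits in a couple dozen lines of stdlib Python. This sketch parses a static HTML string; a real scraper would fetch pages with urllib.request and add retries, politeness delays, and robots.txt checks:

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (href, link text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the <a> we're currently inside, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

page = '<ul><li><a href="/a">First</a></li><li><a href="/b">Second</a></li></ul>'
scraper = LinkScraper()
scraper.feed(page)
print(scraper.links)  # [('/a', 'First'), ('/b', 'Second')]
```

And because it keys off tag structure rather than pixel layout, it keeps working through most cosmetic redesigns, which is the 8-years-later durability described above.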

Traster 1 hour ago|||
You can buy a Claude Code subscription for $200 and use way more tokens in Claude Code than if you paid for direct API usage. Anthropic decided you can't take your Claude Code auth key and use it to hit the API via a different tool. They made that business decision because they thought it was strategically better for them. They're allowed to make that choice as a business.

Plenty of companies make the same choice about their API, they provide it for a specific purpose but they have good business reasons they want you using the website. Plenty of people write webcrawlers and it's been a cat and mouse game for decades for websites to block them.

This will just be one more step in that cat-and-mouse game. And if the AI really gets good enough to become a complete intermediary between you and the website? The website will just shut down. We saw it happen before with the open web. These websites aren't here for some heroic purpose; if you screw their business model they will just go out of business. You won't be able to use their website because it won't exist, and the websites that do exist will (a) be made by the same guys writing your agent, and (b) be highly, highly optimized to get your agent to screw you.

disqard 3 hours ago||||
> AI is a threat to the “enshittification economy” because it lets us route around it.

This is prescient -- I wonder if the Big Tech entities see it this way. Maybe, even if they do, they're 100% committed to speedrunning the current late-stage-cap wave, and therefore unable to do anything about it.

PaulHoule 2 hours ago||
They are not a single thing.

Google has a good model in the form of Gemini and they might figure they can win the AI race and if the web dies, the web dies. YouTube will still stick around.

Facebook is not going to win the AI race with low-I.Q. Llama, but Zuck believed their business was cooked around the time it became a real business, because their users would eventually age out and get tired of it. If I were him I'd be investing in anything that isn't cybernetic, be it gold bars or MMA studios.

Microsoft? They bought Activision for $69 billion. I just can't explain their behavior rationally but they could do worse than their strategy of "put ChatGPT in front of laggards and hope that some of them rise to the challenge and become slop producers."

Amazon is really a bricks-and-mortar play which has the freedom to invest in bricks-and-mortar because investors don't think they are a bricks-and-mortar play.

Netflix? They're cooked, as is all of Hollywood. Hollywood's gatekeeping-industrial strategy of producing as few franchises as possible will crack someday, and our media market may wind up looking more like Japan's, where somebody can write a low-rent light novel like

https://en.wikipedia.org/wiki/Backstabbed_in_a_Backwater_Dun...

and J.C. Staff makes a terrible anime that convinces 20k Otaku to drop $150 on the light novels and another $150 on the manga (sorry, no way you can make a balanced game based on that premise!) and the cost structure is such that it is profitable.

lostmsu 2 hours ago|||
> AI is a threat to the “enshittification economy” because it lets us route around it.

I am not sure about that. We techies avoid enshittification because we recognize shit. Normies will just get their sycophantic enshittified AI that tells them to keep buying into walled gardens.

jstummbillig 3 hours ago|||
Because the web, and software more generally, is largely made of things without APIs, and you do, in fact, need the clicking to work to make agents work generally
satvikpendem 3 hours ago|||
The ideal of REST: the HTML/UI *is* the API.
Jacques2Marais 3 hours ago|||
I guess a big chunk of their target market won't know how to use APIs.
spongebobstoes 3 hours ago|||
not everything has an API, or API use is limited. some UIs are more feature complete than their APIs

some sites try to block programmatic use

UI use can be recorded and audited by a non-technical person

steve1977 3 hours ago||
One could argue that LLMs learning programming languages made for humans (i.e. most of them) is using the wrong interface as well. Why not use machine code?
embedding-shape 3 hours ago|||
Why would human language be the wrong interface when they're literally language models? Why would machine code be better when there's probably orders of magnitude less training material for machine code?

You can also test this yourself easily: fire up two agents, ask one to use a PL meant for humans and one to write straight-up machine code (or even assembly), and see which results you like best.

adwn 1 hour ago||||
> One could argue that LLMs learning programming languages made for humans (i.e. most of them) is using the wrong interface as well.

Then go ahead and make an argument. "Why not do X?" is not an argument, it's a suggestion.

BoredPositron 3 hours ago|||
because they are inherently text-based, as is code?
steve1977 3 hours ago||
But they are abstractions made to cater to human weaknesses.
More comments...