Posted by hansonw 8 hours ago

Building more with GPT-5.1-Codex-Max(openai.com)
343 points | 198 comments
johnfn 7 hours ago|
I've been using a lot of Claude and Codex recently.

One huge difference I notice between Codex and Claude Code is that, while Claude basically disregards your instructions (CLAUDE.md) entirely, Codex is extremely, painfully, doggedly persistent in following every last character of them - to the point that I've seen it work for 30 minutes to convolute some solution that was only convoluted because of some sentence I threw into the instructions and had completely forgotten about.

I imagine Codex as the "literal genie" - it'll give you exactly what you asked for. EXACTLY. If you ask Claude to fix a test that accidentally says assert(1 + 1 === 3), it'll say "this is clearly a typo" and just rewrite the test. Codex will rewrite the entire V8 engine to break arithmetic.

Both these tools have their uses, and I don't think one approach is universally better. Because Claude just hacks its way to a solution, it is really fast, so I like using it for iterative web work, where I need to tweak some styles and I need a fast iterative loop. Codex is much worse at that because it takes like 5 minutes to validate everything is correct. Codex is much better for longer, harder tasks that have to be correct -- I can just write some script to verify that what it did works, and let it spin for 30-40 minutes.

hadlock 7 hours ago||
I've been really impressed with codex so far. I have been working on a flight simulator hobby project for the last 6 months and finally came to the conclusion that I need to switch from floating origin, which my physics engine assumes with the coordinate system it uses, to a true ECEF coordinate system (what underpins GPS). This involved a major rewrite of the coordinate system, the physics engine, even the graphics system and auxiliary stuff like asset loading/unloading etc. that was dependent on local X,Y,Z. It even rewrote the PD autopilot to account for the changes in the coordinate system. I gave it about a paragraph of instructions with a couple of FYIs and... it just worked! No major graphical glitches except a single issue with some minor graphical jitter, which it fixed on the first try. In total it took about 45 minutes, but I was very impressed.

I was unconvinced it had actually, fully ripped out the floating origin logic, so I had it write up a summary and then used that as a high level guide to pick through the code, and it had, as you said, followed the instructions to the letter. Hugely impressive. In March 2023, OpenAI's products struggled to draw a floating wireframe cube.

jama211 39 minutes ago||
That’s a perfect example and interesting to read, thank you for sharing
nico 7 hours ago|||
> Claude basically disregards your instructions (CLAUDE.md) entirely

A friend of mine tells Claude to always address him as “Mr Tinkleberry”; he says he can tell Claude is not paying attention to the instructions in CLAUDE.md when it stops calling him “Mr Tinkleberry” consistently.

benzible 7 hours ago|||
Yep, it's David Lee Roth's brown M&M trick https://www.smithsonianmag.com/arts-culture/why-did-van-hale...
awad 7 hours ago||||
Highly recommend adding some kind of canary like this in all LLM project instructions. I prefer my instructions to say 'always start output with a (uniquely decided by you) emoji', as it's easier to visually scan for one when reading a wall of LLM output, and I use a different emoji per project because what's life without a little whim?
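As a rough sketch (the emoji and wording here are arbitrary, just for illustration), the canary can be a couple of lines in CLAUDE.md / AGENTS.md:

    ## Canary
    Start every response with the 🪁 emoji. If a response is missing it, assume
    earlier instructions have dropped out of context and re-read this file.
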
wahnfrieden 3 hours ago||
This stuff also becomes context poison however
Uehreka 2 hours ago|||
Does it actually? One sentence telling the agent to call me “Chris the human serviette” plus the times it calls me that is not going to add that much to the context. What kills the context IME is verbose logs with timestamps.
IsopropylMalbec 2 hours ago|||
Sorry, what do you mean?
question_AK91 2 hours ago||
https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-ho...

This guy has a good write-up on the topic.

nullc 2 hours ago||
Irrelevant nonsense can also poison the context. That's part of the magic formula behind AI psychosis victims... if you have some line-noise mumbo jumbo in the context, all the output afterward is more prone to be disordered.

I'd be wary of using any canary material that wouldn't be at home in the sort of work you're doing.

leobg 4 hours ago||||
We used to do that on Upwork, back in the day when one still hired human coders. If your application didn't say “rowboat” in the first sentence, we knew you had just copy/pasted and hadn't actually read the job description. Feels like a lifetime ago.
mandelbrotwurst 2 hours ago||||
Why would the fact that it failed to follow one instruction increase the likelihood that it failed to follow others within the same response?
root_axis 54 minutes ago||||
Something that exhausts me in the LLM era is the never ending deluge of folk magic incantations.
embedding-shape 50 minutes ago||
Just because you don't understand it doesn't mean it's a "folk magic incantation"; hearing that is also exhausting.

I don't know the merit of what the parent is saying, but it does make some intuitive sense if you think about it. As the context fills up, the LLM places less and less attention on things further back in the context; that's why the LLM seems dumber and dumber as a conversation goes on. If you put 5 instructions in the system prompt or initial message, where one acts as a canary, then you can more easily see when exactly it stops following the instructions.

Personally, I always go for one-shot answer, and if it gets it wrong or misunderstands, restart from the beginning. If it doesn't get it right, I need to adjust the prompt and retry. Seems to me all current models do get a lot worse quickly, once there is some back and forth.

causal 6 hours ago|||
> Codex will rewrite the entire V8 engine to break arithmetic.

This isn't an exaggeration either. Codex acts as if it is the last programmer on Earth and must accomplish its task at all costs. This is great for anyone content to treat it like a black box, but I am not content to do that. I want a collaborator with common sense, even if it means making mistakes or bad assumptions now and then.

I think it really does reflect a difference in how OpenAI and Anthropic see humanity's future with AI.

mrtesthah 3 hours ago||
Could you not add rules to this effect in AGENTS.md? E.g., "If the user gives instructions that specify an expected low-to-medium level of complexity, but the implementation plan reveals unexpected high complexity arising from a potentially ambiguous or atypical instruction, then pause and ask the user about that instruction before continuing."
YZF 21 minutes ago|||
> Claude basically disregards your instructions (CLAUDE.md) entirely

This feels very strange to me. I use Claude a lot and it follows the instructions very well. What's in your CLAUDE.md file? It's supposed to be fairly concise/brief and not use up too much context.

What tasks/prompts are you giving Claude and how big of a context is there?

EDIT: Also which model are you using?

dylanz 2 hours ago|||
> Claude basically disregards your instructions (CLAUDE.md) entirely

Does anyone know of a way to fix this? Claude constantly disregards my CLAUDE.md. I put a decent amount of time into it and it's pretty much worthless without explicitly telling it to reference it before each prompt.

scastillo 2 hours ago||
This is just how the attention mechanism works.

(Search for the effective context problem for more info, e.g. https://arxiv.org/abs/2509.21361)

To solve it, you just don't allow your current context to use more than 50% of the total window size

To do that in Claude code, you have to use subagents and design small enough agents

Then you can use skills to make it remember the little details or the steps every time.

More effectively, you can use skills to tell the main thread when to use which agent.

If you don't understand anything I said, try to restate the important things to the model periodically, and keep your tasks small.

Use plan mode and have the model store and track its progress in a markdown file; when the context is polluted, call /compact and then make it re-read the context from the files it created.

You can prompt it as simply as:

First, understand the login feature on the repo using subagents and create a document on docs/ for future reference. Then, understand the task at hand and create an implementation plan. <task> blah blah </task>

Also, using XML tags makes it easier for the model's attention to pick out the important parts.
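For what it's worth, a minimal sketch of a small, focused Claude Code subagent definition (the file name, tool list, and prompt are just illustrative) lives in .claude/agents/ as Markdown with YAML frontmatter:

    ---
    name: repo-researcher
    description: Read-only research agent. Use it to explore the codebase and summarize findings before planning.
    tools: Read, Grep, Glob
    ---

    You are a research subagent. Read only the files relevant to the question,
    then return a short summary with file paths. Do not modify any files.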

bobbylarrybobby 2 hours ago||
Are agents still the way to go or have skills supplanted them? I don't really understand when you'd use one or the other
sinatra 6 hours ago|||
In my AGENTS.md (which CLAUDE.md et al soft link to), I instruct them to "On phase completion, explicitly write that you followed these guidelines." This text always shows up on Codex and very rarely on Claude Code (TBF, Claude Code is showing it more often lately).
sunaookami 5 hours ago|||
Agreed 100%, that's why I would recommend Codex for e.g. logfile analysis. I had some annoying PHP warnings in the logs from a WordPress plugin because I'd used another plugin in the past (like... over 10 years ago) that wrote invalid metadata for every media file into the database, and it didn't annoy me enough that I wanted to invest much time in it. So I gave Codex the logfile, my WordPress dir, and access to the WP-CLI command, and it correctly identified the issue and wrote scripts to delete the old metadata (I did check it & make backups of course). Codex took a LOT of time though, it's veeeeeeery slow as you said. But I could do other things in the meantime.
fakedang 3 hours ago||
This is what I've observed too. Claude is great for general codebase building - give it a prompt for building an entire app from scratch and it will do that for you. Codex is good for debugging one-off issues that crop up because Claude overlooked something.
ramoz 3 hours ago|||
Ultimately, relying on system level instructions is unreliable over time.

Which is why I made the feature request for hooks (Claude Code implemented them, as did Cursor; hopefully Codex will too)

And will soon release https://github.com/eqtylab/cupcake

aerhardt 6 hours ago|||
Well surely that's a good thing.

In my experience, for some reason adherence is not even close to 100%. It's fixated on adding asterisk function params in my Python code and I cannot get it to stop... Maybe I haven't found the right wording, or maybe my codebase has grown past a certain size (there are like a dozen AGENTS.md files dancing around).

I'm still very happy with the tool, though.

johnfn 6 hours ago||
It's a fantastic thing! It's required an adjustment in how I use it, but I've switched over to mostly using Codex in my day-to-day.
tekacs 4 hours ago|||
Yeah, Gemini 2.x and 3 in gemini-cli have the tendency to 'go the opposite direction', and it feels - to me - like an incredibly strong demonstration of why 'sycophancy' in LLMs is so valuable (at least so long as they're in the middle of the midwit curve).

I'll give Gemini direction, it'll research... start trying to solve it as I've told it to... and then exclaim, "Oh! It turns out that <X> isn't what <user> thought!" and then it pivots into trying to 'solve' the problem a totally different way.

The issue however... is that it's:

1) Often no longer solving the problem that I actually wanted to solve. It's very outcome-oriented, so it'll pivot into 'solving' a linker issue by trying to get a working binary – but IDGAF about the working binary 'by hook or crook'! I'm trying to fix the damn linker issue!

2) Just... wrong. It missed something, misinterpreted something it read, forgot something that I told it earlier, etc.

So... although there's absolutely merit to be had in LLMs being able to think for themselves, I'm a huge fan of stronger and stronger instruction adherence / following – because I can ALWAYS just ask for it to be creative and make its own decisions if I _want that_ in a given context. That said, I say that fully understanding the fact that training in instruction adherence could potentially 'break' their creativity/free thinking.

Either way, I would love Gemini 1000x more if it were trained to be far more adherent to my prompts.

buu700 4 hours ago|||
I haven't had that particular experience with Gemini 2.5, but did run into it during one of my first few uses of Gemini 3 yesterday.

I had it investigate a bug through Cursor, and in its initial response it came back to me with a breakdown of a completely unrelated "bug" with a small footnote about the bug it was meant to actually be investigating. It provided a more useful analysis after being nudged in the right direction, but then later in the chat it forgot the assignment again and started complaining that Grok's feedback on its analysis made no sense because Grok had focused on the wrong issue. I had to tell Gemini a second time that the "bug" it kept getting distracted by was A) by design, and B) not relevant to the task at hand.

Ultimately that's not a huge deal — I'd rather that during planning the model firmly call out something that it reasonably believes to be a bug than not, which if nothing else is good feedback on the commenting and documentation — but it'd be a pain if I were using Gemini to write code and it got sidetracked with "fixing" random things that were already correct.

tekacs 4 hours ago|||
Immediately rebutting myself: a major caveat to this that I'm discovering with Gemini is that... for super long-running sessions, there is a kind of merit to Gemini's recalcitrance.

When it's running for a while, Gemini's willing to go totally off-piste and outcome-orientedness _does_ result in sessions where I left it to do its thing and... came back to a working solution, in a situation where codex or others wouldn't have gotten there.

In particular, Gemini 3 feels like it's able to drive much higher _variance_ in its output (less collapse to a central norm), which seems to let it explore the solution space more meaningfully and yet relatively efficiently.

bugglebeetle 4 hours ago|||
The solution to this, if you want less specification in advance, is to simply ask Codex a series of leading questions about a feature or fix. I typically start with something like “it seems like X could be improved with the addition of Y? Can you review the relevant parts of the codebase in a, b, and c to assess?” It will then do so and come back with a set of suggestions that follow this guidance, which you can revise and selectively tell it to implement. In my experience, this fills the context with the appropriate details to then let it make more of its own decisions in a generally correct way without as much handholding.
stavros 3 hours ago||
No it won't, it'll spend ten minutes and come back with "OK I've implemented a solution". I really wish it had a plan mode.
bugglebeetle 1 hour ago||
Mileage may vary, but I do the above all day long without issue.
energy123 5 hours ago|||
GPT-5 is like that
gtrealejandro 1 hour ago||
[dead]
hansonw 8 hours ago||
Rest assured that we are better at training models than naming them ;D

- New benchmark SOTAs with 77.9% on SWE-Bench-Verified, 79.9% on SWE-Lancer, and 58.1% on TerminalBench 2.0

- Natively trained to work for many hours across multiple context windows via compaction

- 30% more token-efficient at the same reasoning level across many tasks

Let us know what you think!

sinatra 6 hours ago||
I currently use GPT‑5.1-Codex High and have a workflow that works well with the 5-hour/weekly limits, credits, et al. If I use GPT‑5.1-Codex-Max Medium or GPT‑5.1-Codex-Max High, how will that compare cost / credits / limits wise to GPT‑5.1-Codex High? I don't think that's clear. "Reduced tokens" makes me think it'll be priced similarly / lower. But, "Max" makes me think it'll be priced higher.
agentifysh 8 hours ago|||
did you address this https://github.com/openai/codex/issues/6426 ?

How much more token-efficient is this compared to 5.0?

I had to use 5.0 because 5.1 was eating tokens like crazy and seemed like a slight incremental improvement, barely noticeable.

qsort 7 hours ago|||
Codex is an outstanding product and incremental upgrades are always welcome. I'll make sure to give it a try in the coming days. Great work! :)
carbocation 6 hours ago|||
It would be great to have access to this model via the chat interface, even if it was gated behind the "other models" dropdown or something.
iyn 8 hours ago|||
Looks like a great change! I'll take it for a spin in a moment.

I really like the "subagent" feature in Claude Code — it's super useful to manage context in complex codebases. Here are some examples of agents that can be useful: https://github.com/humanlayer/humanlayer/tree/main/.claude/a...

Would it make sense to have a similar feature in Codex CLI? I often do "spec-driven development", which is basically a loop of:

    research -> implementation plan -> actual implementation (based on research + plan) -> validation
I have multiple subagents that I use for each phase that (based on subjective judgement) improve the output quality (vs keeping everything, every tool use etc. in the "main" context window).

Codex CLI is great and I use it often but I'd like to have more of these convenient features for managing context from CC. I'm super happy that compaction is now available, hopefully we'll get more features for managing context.

baby 3 hours ago|||
Did you guys fix not being able to enable web searches, or to configure no timeouts for specific commands in the SDK? (Error 124 is way too common for long-running tasks.)
NitpickLawyer 8 hours ago|||
Will -minis come for the codex family of models? About two months ago I used 5-mini as a daily driver for a few weeks and quite liked it, it seemed capable enough on small tasks with some hand holding and the speed/price were great as well.
coder543 7 hours ago||
codex-mini was released a couple of weeks ago: https://platform.openai.com/docs/models/gpt-5.1-codex-mini
NitpickLawyer 7 hours ago||
Thanks! I somehow missed that. Will check it out.
andai 6 hours ago|||
So context window is still 400k but the model got good at removing irrelevant context?
baby 3 hours ago||
Or is more succinct in its thoughts
robotswantdata 7 hours ago|||
Sorry, I don't like the Max model; it feels like it needs a lot more guiding. The plans it writes, however, are better, so I tried feeding them back in (meta-prompt style) and it's working okay so far. Very large repository.
SoKamil 3 hours ago|||
> Natively trained

What does it even mean?

kaveh_h 3 hours ago||
Probably that it was previously given system instructions on how to do compaction, and now compaction is learned by the model, making it a native ability that doesn't need any extra instructions in the prompt.
EnPissant 8 hours ago|||
Compaction is just what Claude Code has done forever, right?
GardenLetter27 8 hours ago|||
I think the point here is not that it does compaction (which Codex also already does) - but that the model was trained with examples of the Codex compaction, so it should perform better when compaction has taken place (a common source for drops in performance for earlier models).
EnPissant 8 hours ago||
Codex previously did only manual compaction, but yeah, maybe some extra training for compaction, too?
baby 3 hours ago||||
Yes. It was missing in codex until now
enraged_camel 8 hours ago|||
I am also trying to understand the difference between compaction, and what IDEs like Cursor do when they "summarize" context over long-running conversations.

Is this saying that said summarization now happens at the model level? Or are there other differences?

baby 3 hours ago||
Before now, Codex couldn't do what Claude does when it reaches a full context window.
blks 5 hours ago||
I think your company will fail soon.
meowface 5 hours ago||
I would bet a lot of money it will not.
boole1854 5 hours ago||
Today I did some comparisons of GPT-5.1-Codex-Max (on high) in the Codex CLI versus Gemini 3 Pro in the Gemini CLI.

- As a general observation, Gemini is less easy to work with as a collaborator. If I ask the same question to both models, Codex will answer the question. Gemini will read some intention behind the question, write code to implement the intention, and only then answer the question. In one case, it took me five rounds of repeatedly rewriting my prompt in various ways before I could get it to not code but just answer the question.

- Subjectively, it seemed to me that the code that Gemini wrote was more similar to code that I, as a senior-level developer, would have written than what I have been used to from recent iterations of GPT-5.1. The code seemed more readable-by-default and not merely technically correct. I was happy to see this.

- Gemini seems to have a tendency to put its "internal dialogue" into comments. For example, "// Here we will do X because of reason Y. Wait, the plan calls for Z instead. Ok, we'll do Z.". Very annoying.

I did two concrete head-to-head comparisons where both models had the same code and the same prompt.

First, both models were told to take a high-level overview of some new functionality that we needed and were told to create a detailed plan for implementing it. Both models' plans were then reviewed by me and also by both models (in fresh conversations). All three of us agreed that Codex's plan was better. In particular, Codex was better at being more comprehensive and at understanding how to integrate the new functionality more naturally into the existing code.

Then (in fresh conversations), both models were told to implement that plan. Afterwards, again, all three of us compared the resulting solutions. And, again, all three of us agreed that Codex's implementation was better.

Notably, Gemini (1) hallucinated database column names, (2) ignored parts of the functionality that the plan called for, and (3) did not produce code that was integrated as well with the existing codebase. In its favor, it did produce a better version of a particular finance-related calculation function than Codex did.

Overall, Codex was the clear winner today. Hallucinations and ignored requirements are big problems that are very annoying to deal with when they happen. Additionally, Gemini's tendencies to include odd comments and to jump past the discussion phase of projects both make it more frustrating to work with, at this stage.

jadbox 2 hours ago|
Try checking your temp for any tool using Gemini.

"For Gemini 3, we strongly recommend keeping the temperature parameter at its default value of 1.0.While previous models often benefited from tuning temperature to control creativity versus determinism, Gemini 3's reasoning capabilities are optimized for the default setting. Changing the temperature (setting it below 1.0) may lead to unexpected behavior, such as looping or degraded performance, particularly in complex mathematical or reasoning tasks."

https://ai.google.dev/gemini-api/docs/gemini-3?thinking=high
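If you're calling it through your own tooling, a minimal sketch with the google-genai Python SDK looks roughly like this (the model id is an assumption; use whatever Gemini 3 id your account exposes):

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    response = client.models.generate_content(
        model="gemini-3-pro-preview",  # assumed model id
        contents="Review this diff for correctness: ...",
        # Per Google's guidance, leave temperature at its default of 1.0 for Gemini 3;
        # lowering it can cause looping or degraded reasoning.
        config=types.GenerateContentConfig(temperature=1.0),
    )
    print(response.text)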

Reubend 8 hours ago||
OpenAI likes to time their announcements alongside major competitor announcements to suck up some of the hype. (See for instance the announcement of GPT-4o a single day before Google's IO conference)

They were probably sitting on this for a while. That makes me think this is a fairly incremental update for Codex.

Palmik 7 hours ago||
GPT 5.1 / Codex already beats Gemini 3 on SWE Bench Verified and Terminal Bench and this pushes the gap further. Seems like a decent improvement.
criemen 3 hours ago|||
Anthropic released Opus 4.1 (basically a new Opus 4 checkpoint) right around the big GPT-5 release date too, if I remember correctly. At this point, anything goes to stay relevant.
bugglebeetle 7 hours ago|||
That’s how the game is played. We should be grateful for all the competition that is driving these improvements, not whinging about the realities of what companies have to do to contest each other’s position.
johnecheck 4 hours ago||
It's funny, this release comes right after the Gemini 3 release that coincided with day 1 of Microsoft's Ignite conference.
johnwheeler 7 hours ago|||
Gemini is eating their lunch, and OpenAI knows it.
peab 7 hours ago||
it's really getting old
simonw 6 hours ago||
Thinking level medium: https://tools.simonwillison.net/svg-render#%3Csvg%20xmlns%3D...

Thinking level xhigh: https://tools.simonwillison.net/svg-render#%20%20%3Csvg%20xm...

ineedasername 6 hours ago||
Medium has things dialed in. When both high and low are coherent but medium goes to cubism? That’s intent. Or it had a miscue on proportions vs shape placement. Either way, it’s great, sandwiched the way it is, between the other two. Did it put a comment in all of them or just the one w/ the hat?

Also, thanks for the posts— it’s hugely helpful to have a continuity of insightful perspective throughout.

taurath 8 hours ago||
These 2 sentences right next to each other stood out to me:

> a new step towards becoming a reliable coding partner

> GPT‑5.1-Codex-Max is built for long-running, detailed work

Does this not sound contradictory? It’s been the shorter form work that has built what little confidence I have in these as a coding partner - a model that goes off and does work without supervision is not a partner to me.

causal 8 hours ago||
Absolutely contradictory. The long-running tendency for Codex is why I cannot understand the hype around it: if you bother to watch what it does and read its code the approaches it takes are absolutely horrifying. It would rather rewrite a TLS library from scratch than bother to ask you if the network is available.
meowface 5 hours ago|||
>It would rather rewrite a TLS library from scratch than bother to ask you if the network is available.

This is definitely one of the biggest issues with coding agents at the moment.

That said, from my experience, Codex so often does things that are so useful and save me so much time that the occasional "oh god what the hell did it just go off and do" is an acceptable cost for me.

I regularly get great results with open-ended prompts and agents that spend 15+ minutes working on the task. I'm sure they'll eventually get better at common sense understanding of what kind of work is wasteful/absurd.

keeganpoppen 7 hours ago|||
these things are actually fixable with prompting. is it easy? no. is it PEBKaC if you don’t do anything to change course as it builds a TLS library? yes, but paperclip maximized! xD
causal 6 hours ago||
Or you can have a model with some semblance of common sense that will stop and say "Hey, can I have access to the network to do X?"

Codex feels like a tool designed to run after all the humans are gone.

embirico 7 hours ago|||
(Disclaimer: Am on the Codex team.) We're basically trying to build a teammate that can do short, iterative work with you, and then, as you build trust (and configuration), you can delegate longer tasks to it.

The "# of model-generated tokens per response" chart in [the blog introducing gpt-5-codex](https://openai.com/index/introducing-upgrades-to-codex/) shows an example of how we're improving the model good at both.

ntonozzi 8 hours ago||
If you haven't, give Cursor's Composer model a shot. It might not be quite as good as the top models, but in my experience it's almost as good, and the lightning fast feedback is more than worth the tradeoff. You can give it a task, wait ten seconds, and evaluate the results. It's quite common for it to not be good enough, but no worse than Sonnet, and if it doesn't work you just wasted 30 seconds instead of 10 minutes.
amluto 8 hours ago||
I would love to see all the big players put 1% of the effort they put into model training into making the basic process of paying and signing in suck less.

Claude: they barely have a sign-in system at all. Multiple account support doesn’t exist. The minimum seat count for business is nonsense. The data retention policies are weak.

OpenAI: Make ZDR a thing you can use or buy without talking to sales, already. And for those using containers or a remote system or really anything other than local development with the codex CLI, you really really need to fix this bug. I bet Codex could do at least the client part for you!

https://github.com/openai/codex/issues/2798

(Hint: Claude Code gets this right by default, despite the fact that everything else about Claude sign-in is a joke.)

Google: get all your B2B AI product managers in one room and tell them that they need to make one single product menu on one single webpage with all the pricing on that page and that the Google Cloud people are not permitted to make anything that isn’t actually logically Google Cloud depend on Google Cloud Billing. Your product cannot compete with OpenAI or Anthropic if people need to ask an LLM to figure out what your product is and if your own fancy LLMs can’t give a straight answer. My company pays for a non-Google product primarily because it’s too complicated to pay for the Google product! Right now, trying to use Google’s AI is like trying to ride Bay Area public transit before the Clipper Card.

atonse 7 hours ago||
Agree 1,000%.

I just won’t even waste my time with the google stuff cuz I can’t figure out how to pay for it.

And that’s a problem everywhere at google. Our google play account is suspended cuz I can’t verify the company. It won’t let me cuz it says I’m not the owner. I’ve always been the owner of my company. For 18 years. There is no one else.

Once some error said make sure the owner email matches your profile in google payments and I was like, what is google payments and where do I even begin with that? I’ve never paid for google play so what does payments have to do with anything?

It’s totally random stuff. Get your shit together, google. Make your products and payment systems coherent, rather than it obviously looking like it was designed by a fiefdom full of territorial managers.

joshstrange 7 hours ago|||
The "Owner" accounts in Google Play and Apple's App Store are so freaking annoying. The only time they make sense is for solo-founders and even then I've had issues. Now expand it to working at a larger company and it's a joke, a bad one. Oh sure, I'll just get the CEO (or other higher-up) to login and accept new agreements, that will be easy. Even more fun when you tell a client (who logged in exactly 1 time to set up the account) that they need to use a generic email (not a personal one or an employee-specific one), the ignore your suggestion, and then they can't get back in because the person who set up the account left the company. It's a mess.

Also, re "Google Payments", I tried to transfer an app from my personal/solo Google Play account to a new business one I set up for my LLC and it was like pulling teeth. They wanted me to find some payment id from the original $20 purchase I made to get access to Google Play, something I did right around when they first launched and while I still have/use the same email, Google came out with approximately 1 googol different "payment solutions" in the interim and their engineers don't care about data migrations. Finally, after many support emails, they just transferred it without me giving that code which just shows how silly the whole thing was from the start.

tarsinge 4 hours ago|||
I don’t have experience in big tech, but in the few SaaS companies I’ve seen, the issue is that UX designers and product managers overwhelmingly have a B2C culture.
szundi 6 hours ago|||
[dead]
nico 7 hours ago||||
Can relate. My inactive google ads account all of a sudden got banned. No explanation except some generic link to their terms of service. Appealed, got automatic denial, no reason given. Have retried multiple times, same result
AuryGlenz 4 hours ago||
Same thing happened to me. Guess who didn’t start spending $100 a month with them again?

Utterly ridiculous.

nl 54 minutes ago||||
> what is google payments

YES I had this and eventually fixed it. I really don't know what I did but lots of clicking on random links and signing into things in different orders and then one day it somehow worked.

So frustrating.

swivelmaster 6 hours ago||||
> designed by a fiefdom full of territorial managers

What's harder than herding cats? Herding cats with MBAs and OKRs.

redler 6 hours ago||||
Conway’s Law strikes again.
computerex 7 hours ago|||
Couldn't agree more about the Google product offerings. Vertex AI? AI Studio? Maker studio? Gemini? The documentation is fragmented with redundant offerings, making it confusing to determine what is what. GCP billing is complicated to figure out vs OpenAI or Anthropic billing.

Sad part is Google does offer a ChatML/OpenAI compliant endpoint to do LLM calls and I believe they in an experiment also reduced friction in getting an API key to start making calls right away, but discoverability remains an ever-present challenge with Google services.

int_19h 4 hours ago|||
> I believe they in an experiment also reduced friction in getting an API key to start making calls right away

This part is very easy now: you sign into https://aistudio.google.com/ and then click "Get API key" in the lower left corner.

The problem is that features and docs are still scattered all over. Some things can only be done via Vertex, for example.

byefruit 7 hours ago|||
I've just found myself using OpenRouter if we need Google models for a project, it's worth the extra 5% just not to have to deal with the utter disaster that is their product offering.
IanCal 5 hours ago||
FWIW I had to bail on the same thing because my results were drastically different. There was something happening with images through OpenRouter. Outside of that, though, I'd absolutely do the same thing; their APIs are awful and the billing worse. Maybe it makes sense for huge orgs but it's a nightmare at a smaller scale.
timtimmy 7 hours ago|||
Google keeps changing their privacy and “don’t train on my data/code” options. When gemini-cli launched, there was a clear toggle for “don’t train on my code.” That’s now gone; it just links to a generic privacy page for me. Maybe something with my account changed, I can't figure it out. Deep in the Cloud Gemini console, there’s another setting that might control training, but it’s not clear what products it actually covers.

Trying to pay for Gemini-3 is confusing. Maybe an AI Ultra personal subscription? I already pay for OpenAI and Anthropic’s pro/max plans and would happily pay Google too. But the only obvious option is a $250/month tier, and its documentation indicates Google can train on your code unless you find and enable the correct opt-out. If that opt-out exists in all the products, it’s not obvious where it lives or what products it applies to.

Workspace complicates it further. Google advertises that with business workspace accounts your data isn’t used for training. So, I was going to try Antigravity on our codebase. At this point I know I can't trust Google, so I read the ToS carefully. They train on your prompts and source code, and there doesn't appear to be a way to pay them and opt out right now. Be careful, paying for Google Workspace does not protect you, always read the ToS.

Be careful with AI-studio and your Google Workspace accounts. They train on your prompts unless you switch it to API mode.

The result is a lot of uncertainty. I genuinely have no idea how to pay Google for Gemini without risking my code being used for training. And if I do pay, I can’t tell whether they’ll train on my prompts anyway.

The marketing for their coding products does not clearly state when they do or do not train on your prompts and code.

I had to run deep research to understand the risks with using Gemini 3 for agentic work, and I still don't feel confident that I understand the risks. I might have said some incorrect things above, but I am just so confused. I feel like I have a <75% grasp on the situation.

I don't have a lot of trust. And honestly, this feels confusing and deceptive. One could easily mistake it for a deliberate strategy to gather training data through ambiguity and dark patterns; it certainly looks like this could be Google's strategy to win the AI race. I assume this is just how it looks, and that they aren't being evil on purpose.

OpenAI in particular has my trust. They get it. They are carefully building the customer experience, they are product and customer driven from the top.

pama 35 minutes ago|||
Personal antigravity hack: add a GPL license to every file, so google filters them before training to avoid legal complications. IANAL.
bossyTeacher 6 hours ago|||
>OpenAI in particular has my trust.

I wouldn't trust Sam Altman. Or any of the big players really.

fishmicrowaver 5 hours ago||
> trust

Hahaha...HAHAhaha. HAHAHHAHAHAHAHAHAHA!!!

halifaxbeard 7 hours ago|||
At this point I’m not convinced that Gemini 3 Pro was post-trained on data Google had permission to use, going by the myriad of issues on the Gemini CLI tracker around Google AI/Google One/Google Cloud/Google Workspaces.

https://github.com/google-gemini/gemini-cli/issues/12121

It is far too easy to accidentally end up under the wrong privacy agreement, to the point where some workplaces are banning use of the Gemini CLI!

unreal6 7 hours ago|||
> Claude: they barely have a signin system at all. Multiple account support doesn’t exist. The minimum seat count for business is nonsense. The data retention policies are weak.

Please give me an option for a password (or passkey) or literally anything else that doesn't require either linking with google or going through an email flow for every login

sophiebits 6 hours ago|||
ZDR is a risk thing for them. They want to make sure you're a legitimate company and have monitoring in place on your side to reduce the chance you're using them for illegal things.
hassleblad23 7 hours ago|||
Adding to this, Google's models can only be used with GCP, while OpenAI's models can be used with Azure and Anthropic's models can be used with AWS Bedrock, in addition to their own platforms.

I'd love to see the Gemini models being available by other providers :) or if they just build a simple prepaid wallet like OpenAI and Anthropic.

temp0826 7 hours ago||
Didn't realize these stipulations for the models. Looking at devops-y job descriptions the last few months I noticed nearly everyone has some kind of Azure requirement now (which I've mostly avoided because I don't want to end up managing someone's AD), but is openai the actual reason for it?
sethhochberg 7 hours ago||
We're just using GitHub Copilot as our primary entrypoint for all of the model families. It's the only way we can easily offer our devs some level of Claude, Gemini, and Codex all in one place.
gigatree 6 hours ago|||
It seems pretty clear the moat is built at the application layer, how enjoyable/easy the actual application is to use, but these applications seem to be getting worse over time even as the models get better. Is it really that hard to do both? Isn’t the point of agentic coding to do more better (not just more)?
sumedh 5 hours ago|||
It's the same with Cursor. As a Cursor admin I want the ability to enable only specific models and disable the rest to save costs, but I cannot do that. It should be pretty simple to do, but for some reason Cursor won't add that functionality to their admin tools.
skerit 7 hours ago|||
Last night, just after Gemini 3 was released and became available for Gemini-CLI, I saw Gemini-CLI's team post that you could access Gemini 3 with either an API key OR with _Gemini AI Ultra_, so I thought: great, I'll get that!

Now you CAN NOT get the Google One stuff if your account is part of a workspace. I thought: how awful. I want to pay, but I simply can't?

Oh, but then I noticed: You CAN add a _Gemini AI Ultra_ license via the Google Workspace Admin area, great!

Turns out: you fucking can't. That's _Google AI Ultra FOR BUSINESS_ and that IS NOT supported.

So I had to get the Google One subscription on my personal account after all.

Combine that with the _pathetic_ usage limits: somehow not token-based, but a number of requests per 24-hour window (which is 500 for Gemini 3), and Gemini 3's incredible chattiness (it uses A LOT more requests to get something done compared to Claude), and you hit the usage limits in just 2 hours.

timtimmy 6 hours ago|||
Careful, their ToS makes it clear they train on your Antigravity prompts (even on AI Ultra) and there is no opt-out that I can find.
victor106 6 hours ago|||
the microsoftication of Google. Fighting evil with evil...
leetrout 6 hours ago|||
And stop asking for phone numbers for "fraud prevention" when I've already given you my name, address and credit card.
lucasban 6 hours ago|||
The fun one for me is that I moved countries and last I checked there’s still no way to change your phone number on ChatGPT short of making a new account, so now my account is associated with a phone number that I no longer have access to and will eventually be reassigned to someone else.
oblio 5 hours ago|||
Can't people spoof the first two and use a stolen credit card number?
fHr 6 hours ago|||
Google listen to this man and fire 90% of your useless product managers!
brobdingnagians 6 hours ago||
Such great case studies of how LLM coding will make all of your employees 1000x more productive at coding, design, and UX. They really are leading the way showing us into the brighter future of AI software /s
jiggawatts 5 hours ago||
Nobody claimed AIs will make office politics go away.

Peering into my crystal ball: once all "workers" have been replaced, all humans will spend all of their working hours on nothing but office politics.

highfrequency 1 hour ago||
Is GPT-5.1-Codex better or worse than GPT-5.1 (Thinking) for straight-up mathematical reasoning (i.e., if it is optimized for making code edits)? Said another way: what is the set of tasks where you expect GPT-5.1 to be better suited than GPT-5.1-Codex? Is it non-coding problems or non-technical problems?
jwpapi 2 hours ago||
I really hope one day I'll work on challenges that need these new types of agents.

Currently, I either need a fast agent that does what I want faster than I can type it (CRUD, forms, etc.) or I need an agent to discuss a plan and its ups and downs.

Whenever I try to give it a bigger task it takes a lot of time, and often the result is not what I expected, which might be totally my fault or context-specific. But as soon as I'm able to define the task properly, I'd prefer a faster model, as it will be good enough. I really don't have problems anymore that I can't reasonably solve fast enough with this approach.

I’ve run multiple gpt-5 codex concurrent sessions in the cloud, but I didn’t accept one thing they did.

Eventually, think it through, read, hack, boom: that's faster than outsourcing the work for 30 minutes + 30 minutes to digest + 30 minutes to change.

the_duke 31 minutes ago||
The key is learning how to provide proper instructions.

Treat it as a developer that just joined the project and isn't aware of the conventions.

Provide hints for the desired API design, mention relevant code locations that should be read to gain context on the problem, or that do similar things.

An AGENTS.md that explains the project and provides some general guidelines also helps a lot.

Codex can be incredibly strong when prompted the right way.

spruce_tips 1 hour ago|||
100% agree. composer-1 really has been the sweet spot for me of capability, reliability, and speed. i dont ask it to do too much at once, and this approach + its speed, materially speeds my work up. i generally find i get the most out of models when i feel like im slightly underutilizing their capabilities. the term i use for this is "staying in the pocket"
bn-l 1 hour ago||
That’s the bet Cursor took with Composer 1. It’s dumb but very fast, and that makes it better.
atonse 4 hours ago|
I just tried this out, and was VERY impressed with the speed of the plan mode. I was also totally fine with the code it wrote.

Then I made the mistake of saying "run npm run build and fix all issues" (something I've run probably 50 times across Codex and CC in the past 2 months). CC does it pretty much 100% of the time. I walked away from Codex, and when I came back, it had installed 2 new node packages and gone down some crazy rabbit hole with ESLint and something else (this was for 2 minor TypeScript errors).

After I reverted all its changes, had CC do it and it fixed it in about 30-60 seconds.

I'll try a few more times. Let's see.
