Posted by meetpateltech 12 hours ago

GPT‑5.3‑Codex‑Spark (openai.com)
601 points | 244 comments
modeless 10 hours ago|
Why are they obscuring the price? It must be outrageously expensive.
chaos_emergent 10 hours ago|
I think it's a beta so they're trying to figure out pricing by deploying it.
Aeroi 8 hours ago||
OpenAI naming is a meme at this point.
anonzzzies 11 hours ago||
Been using GLM 4.7 for this with opencode. Works really well.
system2 10 hours ago||
I stopped using OpenAI tools recently after they increased the censorship. I can't even ask it to read screen-capture software I am building, because it thinks I might use it for evil purposes.
tsss 11 hours ago||
Does anyone want this? Speed has never been the problem for me; in fact, higher latency means less work for me as a replaceable corporate employee. What I need is the most intelligence possible; I don't care if I have to wait a day for an answer as long as the answer is perfect. Small code edits, which are the use case presented here, I can do much better myself than by trying to explain to some AI exactly what I want done.
Havoc 6 hours ago||
Speed is absolutely nice, though I'm not sure I need 1k tps.
vessenes 10 hours ago||
Yes, we want this.
cjbarber 11 hours ago||
For a bit, waiting for LLMs was like waiting for code to compile: https://xkcd.com/303/

> more than 1000 tokens per second

Perhaps, no more?

(Not to mention, if you're waiting for one LLM, sometimes it makes sense to multi-table. I think Boris from Anthropic says he runs 5 CC instances in his terminal and another 5-10 in his browser on CC web.)

deskithere 11 hours ago||
Anyway, token eaters are upgrading their consumption capabilities.
Computer0 7 hours ago||
128k context window!
allisdust 11 hours ago||
Normal Codex itself is subpar compared to Opus. This might be even worse.