
Posted by meetpateltech 14 hours ago

GPT‑5.3‑Codex‑Spark (openai.com)
642 points | 261 comments
desireco42 10 hours ago|
Is it not available in Codex? I think this is fantastic and can't wait to try it. This is exactly the use case I need: something fast that performs based on my instructions.

Cerebras is a winner here.

arpinum 10 hours ago|
update codex, it's there.
system2 11 hours ago||
I stopped using OpenAI tools recently after they increased the censorship. I can't even ask it to read screen-capture software I am building because it thinks I might use it for evil purposes.
cactusplant7374 12 hours ago||
I was really hoping it would support codex xhigh first.
jauntywundrkind 13 hours ago||
Wasn't aware there was an effort to move to websockets. Is there any standards work for this, or is this just happening purely within the walled OpenAI garden?

> Under the hood, we streamlined how responses stream from client to server and back, rewrote key pieces of our inference stack, and reworked how sessions are initialized so that the first visible token appears sooner and Codex stays responsive as you iterate. Through the introduction of a persistent WebSocket connection and targeted optimizations inside of Responses API, we reduced overhead per client/server roundtrip by 80%, per-token overhead by 30%, and time-to-first-token by 50%. The WebSocket path is enabled for Codex-Spark by default and will become the default for all models soon.
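
Roughly, I'd expect the client side to look something like this (a minimal sketch only; the wss URL and event names below are my own placeholders, not a documented OpenAI protocol):

    # Illustrative sketch: endpoint and message shapes are hypothetical,
    # not OpenAI's published wire format.
    import asyncio
    import json

    import websockets  # pip install websockets


    async def stream_session(prompt: str) -> None:
        # One persistent connection reused across turns, instead of a new
        # HTTPS request per response: that's the per-roundtrip overhead
        # the announcement says was cut by 80%.
        async with websockets.connect("wss://example.invalid/v1/responses") as ws:
            await ws.send(json.dumps({"type": "response.create", "input": prompt}))
            async for raw in ws:
                event = json.loads(raw)
                if event.get("type") == "token":
                    # Tokens arrive as they are generated, so the first
                    # visible token shows up without a full-request wait.
                    print(event["text"], end="", flush=True)
                elif event.get("type") == "done":
                    break


    if __name__ == "__main__":
        asyncio.run(stream_session("refactor this function"))

Whether there's any standards work behind it, or it's purely an internal OpenAI protocol, is exactly what I'd like to know.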

itsTyrion 8 hours ago||
Now we can produce loveless automated slop 15x faster. Excited.
rvz 13 hours ago||
> Today, we’re releasing a research preview of GPT‑5.3-Codex-Spark, a smaller version of GPT‑5.3-Codex, and our first model designed for real-time coding. Codex-Spark marks the first milestone in our partnership with Cerebras, which we announced in January.

Nevermind. [0]

[0] https://news.ycombinator.com/item?id=35490837

kittbuilds 7 hours ago||
[dead]
cowpig 11 hours ago|
> Today, we’re releasing

Releasing for real? Is it an open model?