February 13th news: this morning, OpenAI announced a new real-time programming model, "GPT-5.3-Codex-Spark".

Officially, GPT-5.3-Codex-Spark is a smaller version of GPT-5.3-Codex, and its launch marks the first milestone in the partnership between OpenAI and Cerebras, an AI chip startup.
It is worth mentioning that Cerebras is seen as a challenger to Nvidia, having been founded to tackle speed and latency problems in AI inference.
According to OpenAI, Codex-Spark is optimized to deliver near-instant responses when running on ultra-low-latency hardware, generating more than 1,000 tokens per second while maintaining strong capability on real-world programming tasks.
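To put that throughput in perspective, the back-of-the-envelope sketch below shows roughly how long a response takes at different generation speeds. It assumes purely sequential token generation and ignores network and prompt-processing overhead; the numbers are illustrative arithmetic, not benchmarks.

```python
# Illustrative arithmetic only: response time at a constant generation speed,
# assuming sequential decoding with no network or prompt-processing overhead.

def generation_time_seconds(num_tokens: int, tokens_per_second: float) -> float:
    """Time to generate `num_tokens` at a constant `tokens_per_second`."""
    return num_tokens / tokens_per_second

# A ~500-token code patch at 1,000 tokens/s versus a slower 100 tokens/s model.
for speed in (1_000, 100):
    seconds = generation_time_seconds(500, speed)
    print(f"{speed:>5} tok/s -> {seconds:.1f} s for a 500-token response")

# Expected output (illustrative):
#  1000 tok/s -> 0.5 s for a 500-token response
#   100 tok/s -> 5.0 s for a 500-token response
```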
Codex-Spark has a 128k context window and currently supports text input only. During the research preview, Codex-Spark has its own independent rate limits, and its usage does not count against the standard rate limits.
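For readers curious what calling the model might look like, here is a minimal sketch using OpenAI's standard Python SDK. The model identifier "gpt-5.3-codex-spark" and its availability through the public API are assumptions for illustration only; the article only describes a research preview in ChatGPT.

```python
# Hypothetical sketch: the model id "gpt-5.3-codex-spark" and API availability
# are assumptions; the preview described in the article is limited to ChatGPT Pro.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5.3-codex-spark",  # assumed identifier
    input="Write a Python function that reverses a singly linked list.",
)
print(response.output_text)
```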
On performance: in the SWE-Bench Pro benchmark, Codex-Spark scores slightly below the full GPT-5.3-Codex in accuracy, but it roughly halves task completion time and clearly outperforms GPT-5.1-Codex-mini.
Notably, GPT-5.3-Codex-Spark runs on the Cerebras Wafer Scale Engine 3 (WSE-3). The WSE-3 packs 4 trillion transistors and 900,000 AI cores, giving Codex-Spark its extremely low latency.
The GPT-5.3-Codex-Spark research preview is rolling out to ChatGPT Pro users.