1 point | by unsupp0rted 8 hours ago
4 comments
Update:
> GPT-5.2 and GPT-5.2-Codex are now 40% faster.
> We have optimized our inference stack for all API customers.
> Same model. Same weights. Lower latency.
> https://x.com/OpenAIDevs/status/2018838297221726482
Same model? Then why is the cutoff date on 5.2 that of Codex, i.e. 2024?
Why do you feel an older cutoff degrades the model? Perhaps it improves it, since the model has less training on AI-generated garbage.
That's a possible hypothesis, but qualitatively the model seems to be so much less capable than it was even 4 hours ago.
And I'm talking about GPT 5.2 High (non-codex), which is the best trade-off model for capability vs. speed.
It's like talking to a senior programmer who suddenly suffered a traumatic injury. Noticeable.
Interesting. For what it is worth, I use Claude Opus at the moment and I am generally impressed.