GPT-5 exposed: reportedly still built on GPT-4o, with no new large-scale pre-training in two and a half years

According to media reports, since the release of GPT-4o, OpenAI's core team has not conducted any large-scale pre-training for a next-generation frontier model. Over the past two and a half years, OpenAI has made no substantial progress in scaling up pre-training; the pre-training of the GPT series has effectively stalled at the GPT-4o milestone. This provides a key explanation for GPT-5's performance falling short of industry expectations. The current fifth-generation flagship model GPT-5, including its subsequent version GPT-5.1, has not moved beyond the technical foundation established by GPT-4o.

In August this year, Sam Altman positioned the GPT-5 launch as "PhD-level AI, an important milestone toward AGI". However, the model's actual reception in the industry has been muted. GPT-5 was widely expected to deliver a generational leap, but the actual release was closer to a deeply optimized version of GPT-4.5 and showed no disruptive breakthrough. Which version of GPT the GPT-5 model is actually built on remains to be verified.
