January 20 news: Zhipu today officially released and open-sourced the GLM-4.7-Flash model.

GLM-4.7-Flash is a hybrid reasoning model with 30B total parameters and 3B activated parameters. Zhipu states that, as a SOTA model in its size class, it offers a new option for balancing performance and efficiency in lightweight deployments.
Starting today, GLM-4.7-Flash replaces GLM-4.5-Flash on the Zhipu open platform, BigModel.cn, where it is available to call for free.
On mainstream benchmarks such as SWE-bench Verified and τ²-Bench, GLM-4.7-Flash overall outperforms gpt-oss-20b and Qwen3-30B-A3B-Thinking-2507, achieving open-source SOTA scores among models of the same or similar size.
In internal coding evaluations, GLM-4.7-Flash performed well on both front-end and back-end tasks. Beyond coding scenarios, users are also encouraged to try GLM-4.7-Flash in everyday settings such as Chinese writing, translation, long-text processing, and emotional/role-play dialogue.
Note that the previous-generation free model, GLM-4.5-Flash, will be taken offline on January 30, 2026, so users should update the model name in their code in time. Once GLM-4.5-Flash is officially offline, requests to it will be automatically routed to GLM-4.7-Flash.
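For most users, the migration amounts to changing the model identifier in existing API requests. Below is a minimal sketch that builds an OpenAI-style chat-completions payload; the exact model identifier strings and request fields are assumptions here and should be checked against the official BigModel.cn API documentation:

```python
import json

def build_request(prompt: str, model: str = "glm-4.7-flash") -> dict:
    """Build a chat-completions request payload (OpenAI-style convention).

    Only the "model" field needs to change when migrating from
    GLM-4.5-Flash; the messages structure stays the same.
    The model name strings are assumed, not confirmed.
    """
    return {
        "model": model,  # previously "glm-4.5-flash"
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Hello, GLM-4.7-Flash")
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

The payload would then be POSTed to the platform's chat-completions endpoint with your API key; everything except the model name carries over unchanged from existing GLM-4.5-Flash integrations.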
1AI attaches the GLM-4.7-Flash open-source addresses below:
- Hugging Face: https://huggingface.co/zai-org/GLM-4.7-Flash
- ModelScope community: https://modelscope.cn/models/ZhipuAI/GLM-4.7-Flash