According to a report dated 4 February, two weeks after the release of GLM-4.7-Flash, downloads of the model on Hugging Face had exceeded 1 million. GLM-4.7-Flash is described as a 30B-A3B hybrid-reasoning model whose overall performance exceeds that of gpt-oss-20b and Qwen3-30B-A3B-Thinking-2507 on mainstream benchmarks such as SWE-bench Verified and 2-Bench, achieving open-source SOTA among models of the same and similar sizes and offering a new option for lightweight deployment that balances performance and efficiency. ("The Daily Journal of Science")