A major AI breakthrough from Ant Group!

The Ling team at Ant Group has published a technical paper, "Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs," on the preprint platform arXiv. The paper reports that Ant Group has released two MoE large language models of different sizes, Ling-Lite and Ling-Plus. Ling-Lite has 16.8 billion total parameters (2.75 billion activated per token), while the Ling-Plus base model scales to 290 billion total parameters (28.8 billion activated), and both reach industry-leading performance.
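The gap between total and activated parameters is the defining property of an MoE model: each token is routed to only a few of the stored experts, so only a fraction of the weights participate in any forward pass. Below is a minimal arithmetic sketch of that relationship. The expert counts and sizes are hypothetical, chosen only so the ratio comes out near the roughly 10:1 total-to-activated ratio reported for Ling-Plus; they are not Ling's actual configuration.

```python
# Toy illustration of MoE parameter accounting. All numbers are
# illustrative assumptions, not Ling's real architecture.

def moe_param_counts(shared_params: float, expert_params: float,
                     num_experts: int, top_k: int) -> tuple[float, float]:
    """Return (total, activated) parameter counts for a toy MoE model.

    shared_params : weights every token uses (embeddings, attention, ...)
    expert_params : weights in one expert's feed-forward block
    num_experts   : experts stored in the model
    top_k         : experts actually routed to per token
    """
    total = shared_params + num_experts * expert_params
    activated = shared_params + top_k * expert_params
    return total, activated

if __name__ == "__main__":
    # Hypothetical configuration: 64 experts of 4.5B parameters each,
    # 5B shared parameters, 5 experts active per token.
    total, activated = moe_param_counts(
        shared_params=5e9, expert_params=4.5e9, num_experts=64, top_k=5,
    )
    print(f"total: {total / 1e9:.1f}B, activated: {activated / 1e9:.1f}B")
    # -> total: 293.0B, activated: 27.5B  (roughly a 10:1 ratio)
```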
