The Google DeepMind team announced SignGemma on May 27, its most capable sign language translation model to date, which translates sign language into spoken-language text. The open-source model will join the Gemma family of models later this year.

Note: SignGemma is designed to support multiple languages, but it is currently optimized primarily for American Sign Language (ASL) and English. Its open-source nature means developers are free to use and improve it.
With this technology, DeepMind hopes to break down communication barriers for sign language users, allowing them to participate more smoothly in work, school, and social settings.
DeepMind also introduced the Gemma 3n model this year, which generates text from audio, image, video, and text inputs, helping developers build real-time interactive applications.
Additionally, Google has partnered with Georgia Tech and the Wild Dolphin Project to launch DolphinGemma, a model that analyzes and generates dolphin vocalizations, built on data from a long-term study of Atlantic spotted dolphins in the Bahamas.
Meanwhile, MedGemma, a new member of the Gemma 3 family, focuses on medical AI, supporting clinical reasoning and medical image analysis to accelerate the convergence of medicine and AI innovation.