May 21, 2025 - Technology outlet MarkTechPost published a blog post yesterday (May 20) reporting that at the I/O 2025 developer conference, Google launched MedGemma, an open-source model family for multimodal medical text and image understanding. Built on the Gemma 3 architecture, MedGemma comes in two configurations: a 4B-parameter multimodal model and a 27B-parameter text-only model. The 4B model specializes in classifying and interpreting medical images, generating diagnostic reports, and answering image-related questions, while the 27B model focuses on medical text understanding and clinical reasoning, supporting patient triage and decision-support tasks that require in-depth text analysis. The models can be run locally for experimentation or deployed as HTTPS endpoints for large-scale applications via Vertex AI on Google Cloud, and Google provides resources such as Colab notebooks to help developers fine-tune and integrate them.
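As a rough illustration of the "run locally" path, the sketch below loads the text-only variant through the Hugging Face transformers text-generation pipeline. The model ID "google/medgemma-27b-text-it", the prompt, and the hardware assumptions (a bfloat16-capable GPU setup with enough memory for 27B parameters) are assumptions for illustration; the exact checkpoint names, license gating, and recommended settings should be confirmed on the official model card.

```python
# Minimal local-inference sketch (assumptions noted above): load an assumed
# MedGemma text-only checkpoint and ask a clinical-reasoning question.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",  # assumed Hugging Face model ID
    torch_dtype=torch.bfloat16,
    device_map="auto",                    # spread weights across available GPUs
)

messages = [
    {"role": "system", "content": "You are a clinical decision-support assistant."},
    {"role": "user", "content": (
        "A 58-year-old patient presents with chest pain radiating to the left arm. "
        "Which differential diagnoses should be considered first?"
    )},
]

result = pipe(messages, max_new_tokens=256)
# The pipeline returns the chat history with the new assistant turn appended.
print(result[0]["generated_text"][-1]["content"])
```

For large-scale serving, the same checkpoints can instead be deployed as managed HTTPS endpoints through Vertex AI Model Garden, as the article notes; Google's Colab notebooks cover fine-tuning and integration workflows.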
