April 16, 2024

Just months after Google DeepMind unveiled Gemini — its most capable AI model ever — the London-based lab has released its compact offspring: Gemma.

Named after the Latin word for “precious stone,” Gemma is a new family of open models for developers and researchers.

“Demonstrating strong performance across benchmarks for language understanding and reasoning, Gemma is available worldwide starting today,” Sundar Pichai, the CEO of Google, said on Twitter.

Gemma comes in two sizes, 2 billion and 7 billion parameters, and each size is released in pre-trained and instruction-tuned variants.


The lightweight models are descendants of Gemini. As a result, Gemma has inherited technical and infrastructure components from its parent, which enables “best-in-class performance,” Google said.

As evidence, the tech titan revealed some eye-catching comparisons with Llama 2, a family of large language models (LLMs) released by Meta a year ago.

[Chart: Google DeepMind's Gemma outperforms Meta's Llama 2 across benchmarks.]