Google Unveils Gemma, A State-Of-The-Art Tool For Developers To Build AI Responsibly
Google has unveiled Gemma in two versions: Gemma 2B and Gemma 7B. Each version comes with pre-trained and instruction-tuned models.
Google has been venturing into artificial intelligence (AI) for quite some time now, and its latest stride comes in the form of Gemma, a pioneering innovation unveiled by Google DeepMind. Gemma follows closely on the heels of the recently launched Gemini 1.5 Pro and marks a significant step forward in the realm of open AI models. The CEO of Google and Alphabet, Sundar Pichai, announced the advent of Gemma and described it as "a family of lightweight, state-of-the-art open models for their class built from the same research & tech used to create the Gemini models."
In his tweet, Pichai wrote, "Introducing Gemma - a family of lightweight, state-of-the-art open models for their class built from the same research & tech used to create the Gemini models. Demonstrating strong performance across benchmarks for language understanding and reasoning, Gemma is available worldwide starting today in two sizes (2B and 7B), supports a wide range of tools and systems, and runs on a developer laptop, workstation or Google Cloud."
Google has unveiled Gemma in two versions: Gemma 2B and Gemma 7B. Each version comes with pre-trained and instruction-tuned models. The tech giant asserts that both variants share foundational elements and infrastructure components with the Gemini models. The primary aim of developing Gemma is to provide developers and researchers with the tools needed for responsible AI development.
How Can Gemma Be Useful
The new family of open models offers integration with a wide variety of tools commonly used by Google Cloud developers, including Colab and Kaggle notebooks. It also extends support to popular frameworks such as JAX, PyTorch, Keras 3.0, and Hugging Face Transformers. Google has reiterated that Gemma's versatility enables it to operate across various platforms, from laptops and workstations to the Google Cloud infrastructure.
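For developers who want to try the Hugging Face Transformers route, a minimal sketch along the following lines should work, assuming the instruction-tuned 2B checkpoint is published under the google/gemma-2b-it model ID and that the model's usage terms have been accepted with an authenticated Hugging Face account:

```python
# Minimal sketch: running the instruction-tuned Gemma 2B checkpoint with
# Hugging Face Transformers. Assumes the "google/gemma-2b-it" model ID and
# that you have accepted the model's terms and logged in via huggingface-cli.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # small enough for a single developer GPU
    device_map="auto",           # place weights on GPU if available, else CPU
)

# The instruction-tuned variants expect a chat-style turn format; the chat
# template bundled with the tokenizer applies it for us.
messages = [{"role": "user", "content": "Summarise what the Gemma models are in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same sketch applies to the 7B variant by swapping the model ID, at the cost of more memory.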
The tech giant has emphasised that Gemma outperforms significantly larger models across key benchmarks while upholding rigorous safety standards.
When paired with Vertex AI, Gemma opens doors for developers to construct generative AI applications tailored specifically for lightweight tasks such as text generation, summarization, and Q&A. It caters to real-time generative AI use cases that demand low latency.
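As a rough illustration of the Vertex AI path, the sketch below queries a Gemma model that has already been deployed to a Vertex AI endpoint; the project, region, endpoint ID, and the request payload fields are assumptions that depend on how the model was deployed and which serving container it uses:

```python
# Minimal sketch: calling a Gemma model already deployed to a Vertex AI endpoint.
# The project, region, endpoint ID, and the "prompt"/"max_tokens" fields are
# placeholders; the actual request schema depends on the serving container
# chosen at deployment time.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")  # hypothetical project/region

endpoint = aiplatform.Endpoint("1234567890")  # hypothetical endpoint ID

response = endpoint.predict(
    instances=[{"prompt": "Summarise this support ticket in two sentences: ...", "max_tokens": 128}]
)
print(response.predictions[0])
```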
To further enhance Gemma's efficiency, Google has teamed up with Nvidia to optimise the models for Nvidia GPUs.
Announced today, we are collaborating as a launch partner with @Google in delivering Gemma, an optimized series of models that gives users the ability to develop with #LLMs using only a desktop #RTX GPU. https://t.co/WgWmC245se
— NVIDIA (@nvidia) February 21, 2024
Developers keen on exploring Gemma can do so through Google Cloud services, within Vertex AI and Google Kubernetes Engine (GKE) environments.