Google's AI efforts are synonymous with Gemini, which has now become an integral part of its most popular products across the Workspace software suite and hardware, as well. However, the company has also released multiple open-source AI models under the Gemma label for over a year now.

Today, Google revealed its third generation of open-source AI models with some impressive claims in tow. The Gemma 3 models come in four variants (1 billion, 4 billion, 12 billion, and 27 billion parameters) and are designed to run on devices ranging from smartphones to beefy workstations.
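To get a feel for why the smallest variants can fit on a phone while the largest wants a workstation, it helps to estimate how much memory the raw weights alone occupy. This is a rough back-of-the-envelope sketch: the parameter counts come from the lineup above, but the bytes-per-parameter figures are common precision choices, not official Gemma 3 numbers.

```python
# Approximate memory needed just to hold each Gemma 3 variant's weights.
# bytes_per_param: 2.0 for 16-bit floats, 0.5 for 4-bit quantization.

def weight_memory_gib(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory (GiB) for the raw weights alone."""
    return params_billions * 1e9 * bytes_per_param / (1024 ** 3)

for size in (1, 4, 12, 27):  # billions of parameters
    fp16 = weight_memory_gib(size, 2.0)
    int4 = weight_memory_gib(size, 0.5)
    print(f"{size:>2}B params: ~{fp16:.1f} GiB (fp16), ~{int4:.1f} GiB (4-bit)")
```

By this estimate the 1B model quantized to 4 bits needs well under 1 GiB for weights, while the 27B model at 16-bit precision wants roughly 50 GiB, which is workstation territory.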

Ready for mobile devices

Google says Gemma 3 is the world's best single-accelerator model, which means it can run on a single GPU or TPU instead of requiring a whole cluster. Theoretically, that means a Gemma 3 AI model can natively run on the Pixel smartphone's Tensor Processing Core (TPU), just the way it runs the Gemini Nano model locally on phones.

The big advantage of Gemma 3 over the Gemini family of AI models is that since it's open-source, developers can package and ship it according to their unique requirements inside mobile apps and desktop software. Another crucial benefit is that Gemma supports over 140 languages, with 35 of them coming as part of a pre-trained package.

And just like the latest Gemini 2.0 series models, Gemma 3 is also capable of understanding text, images, and video. In a nutshell, it is multimodal. On the performance side, Gemma 3 is claimed to surpass other popular open-source AI models such as DeepSeek V3, the reasoning-ready OpenAI o3-mini, and Meta's Llama-405B variant.

Versatile, and ready to deploy

Talking about input range, Gemma 3 offers a context window worth 128,000 tokens. That's enough to cover a full 200-page book supplied as input. For comparison, the context window for Google's Gemini 2.0 Flash Lite model stands at a million tokens. In the context of AI models, an average English language word is roughly equivalent to 1.3 tokens.
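The 200-page claim is easy to sanity-check with the article's own 1.3 tokens-per-word figure. The words-per-page density below (~450) is an assumption for illustration, since page counts vary with formatting.

```python
# Does a ~200-page book fit in Gemma 3's 128,000-token context window?
# TOKENS_PER_WORD is from the article; WORDS_PER_PAGE is an assumed density.

TOKENS_PER_WORD = 1.3
CONTEXT_WINDOW = 128_000
WORDS_PER_PAGE = 450

def estimated_tokens(pages: int) -> int:
    """Rough token count for a document of the given page length."""
    return int(pages * WORDS_PER_PAGE * TOKENS_PER_WORD)

book = estimated_tokens(200)
print(f"~{book:,} tokens; fits in context: {book <= CONTEXT_WINDOW}")
```

Under these assumptions a 200-page book lands around 117,000 tokens, comfortably inside the 128,000-token window.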

Gemma 3 also supports function calling and structured output, which basically means it can interact with external datasets and perform tasks like an automated agent. The nearest analogy would be Gemini, and how it can get work done across different platforms such as Gmail or Docs seamlessly.
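In practice, function calling means the app advertises a tool schema, the model replies with structured JSON naming the tool it wants invoked, and the app executes it. The sketch below illustrates that round trip; the tool schema and the model reply are hypothetical stand-ins, not actual Gemma 3 API output.

```python
import json

# A tool the host app exposes to the model (hypothetical example).
tool_schema = {
    "name": "search_inbox",
    "description": "Search the user's email for matching messages.",
    "parameters": {"query": {"type": "string"}},
}

# Structured output a model might emit instead of free-form prose.
model_reply = '{"tool": "search_inbox", "arguments": {"query": "flight receipts"}}'

# The app parses the structured reply and dispatches the named tool.
call = json.loads(model_reply)
if call["tool"] == tool_schema["name"]:
    print(f"Invoking {call['tool']} with {call['arguments']}")
```

The value of structured output is exactly this machine-readability: the app never has to scrape a tool request out of natural-language text.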

The latest open-source AI models from Google can either be deployed locally, or through the company's cloud-based platforms such as the Vertex AI suite. Gemma 3 AI models are now available via Google AI Studio, as well as third-party repositories such as Hugging Face, Ollama, and Kaggle.

Gemma 3 is part of an industry trend where companies are working on Large Language Models (Gemini, in Google's case) and simultaneously pushing out small language models (SLMs), as well. Microsoft also follows a similar strategy with its open-source Phi series of small language models.

Small language models such as Gemma and Phi are extremely resource efficient, which makes them an ideal pick for running on devices such as smartphones. Moreover, as they offer lower latency, they are particularly well-suited for mobile applications.