Google’s tiny AI model ‘Gemma 2 2B’ challenges tech giants in surprising upset
Google's Gemma 2 2B, a compact AI model with just 2.6 billion parameters, challenges industry giants by matching or surpassing larger models' performance, revolutionizing AI accessibility and efficiency.
The new language model, containing just 2.6 billion parameters, demonstrates performance on par with or surpassing much larger counterparts, including OpenAI’s GPT-3.5 and Mistral AI’s Mixtral 8x7B. Google reports that Gemma 2 2B scores 56.1 on the MMLU (Massive Multitask Language Understanding) benchmark and 36.6 on MBPP (Mostly Basic Python Programming), marking significant improvements over its predecessor. As concerns grow about the environmental impact and accessibility of large language models, tech companies are focusing on creating smaller, more efficient systems that can run on consumer-grade hardware.
Or read this on VentureBeat