How Google's new AI model protects user privacy without sacrificing performance
Google researchers unveil VaultGemma, an LLM designed to generate high-quality outputs without memorizing training data. Here's how it works.
Training an LLM on a vast dataset is the surest route to high-quality output. At the same time, however, you run the risk of sweeping sensitive personal information into that dataset, which the model could then republish verbatim, leading to serious security compromises for the individuals affected and damaging PR scandals for the developers.

New research from Google claims to have found a solution: a framework for building LLMs that preserves user privacy without any major degradation in the AI's performance. The key ingredient behind VaultGemma is a mathematical framework known as differential privacy (DP), which essentially injects calibrated digital noise during training so that the model cannot perfectly memorize information found in its training data.