How Google's new AI model protects user privacy without sacrificing performance


Google researchers unveil VaultGemma, an LLM designed to generate high-quality outputs without memorizing training data. Here's how it works.

Training an LLM on vast amounts of data is what gives the model its capabilities. At the same time, however, you run the risk of including sensitive personal information in that dataset, which the model could then republish verbatim, leading to serious security compromises for the individuals affected and damaging PR scandals for the developers. New research from Google claims to have found a solution: a framework for building LLMs that preserves user privacy without any major degradation in the AI's performance. The key ingredient behind VaultGemma is a mathematical framework known as differential privacy (DP), which injects calibrated noise during training so that the model cannot perfectly memorize information found in its training data.
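The article doesn't show the mechanism, but differential privacy is typically brought into model training via DP-SGD: clip each training example's gradient, then add Gaussian noise before applying the update. Here is a minimal NumPy sketch of one such step; the dp_sgd_step helper and its parameter values are illustrative assumptions, not details taken from VaultGemma itself:

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
        # One differentially private gradient update in the style of DP-SGD.
        # per_example_grads: array of shape (batch_size, num_params).
        rng = rng or np.random.default_rng()
        # 1. Clip each example's gradient to L2 norm <= clip_norm so that no
        #    single training record can dominate the update.
        norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
        clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        # 2. Sum the clipped gradients, then add Gaussian noise calibrated to the
        #    clipping bound; this is the "digital noise" the article refers to.
        summed = clipped.sum(axis=0)
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
        # 3. Average the noisy sum to get the update actually applied to the model.
        return (summed + noise) / len(per_example_grads)

    # Demo with fake gradients: a batch of 8 examples, 4 parameters each.
    grads = np.random.default_rng(0).normal(size=(8, 4))
    print(dp_sgd_step(grads))

The privacy-versus-performance tradeoff the article describes lives in the noise_multiplier knob: more noise yields a stronger privacy guarantee (a smaller privacy budget) but noisier gradients and some loss in output quality; closing that gap is what Google's research aims to do.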


Read more on: Google, Performance, user privacy

Related news:

Criminals broke into the system Google uses to share info with cops

Google rolls out new Windows desktop app with Spotlight-like search tool

Google unveils master plan for letting AI shop on your behalf