Adversarial attacks on AI models are rising: what should you do now?


With AI’s growing influence across industries, malicious attackers continue to sharpen their tradecraft to exploit ML models.

The rapidly growing number of connected devices and the proliferation of data have put enterprises in an arms race with malicious attackers, many financed by nation-states seeking to control global networks for political and financial gain. In 2023, Gartner warned, “The misuse of model inversion can lead to significant privacy violations, especially in healthcare and financial sectors, where adversaries can extract patient or customer information from AI systems.” In response, Google and IBM apply privacy-preserving techniques to protect data during collaborative AI model training, while Intel uses hardware-accelerated encryption to secure federated learning environments.
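To make the threat concrete, the kind of adversarial attack the article warns about can be illustrated with a minimal fast-gradient-sign (FGSM-style) sketch against a toy logistic-regression model. Everything below (the weights, the input, and the exaggerated perturbation size) is an illustrative assumption, not any vendor's actual system:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Nudge input x by eps in the direction that increases the model's loss."""
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y_true) * w         # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # FGSM step: move along the sign of the gradient

# Toy model: weights chosen so the clean input is classified correctly.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.5, -0.5, 1.0])        # clean input with true label 1
y = 1.0

clean_prob = sigmoid(w @ x + b)       # confidently class 1 on the clean input
x_adv = fgsm_perturb(x, w, b, y, eps=1.0)  # eps exaggerated for this toy example
adv_prob = sigmoid(w @ x_adv + b)     # pushed below 0.5: the model is fooled

print(f"clean p(y=1)={clean_prob:.3f}, adversarial p(y=1)={adv_prob:.3f}")
```

In real attacks the perturbation is small enough to be imperceptible to humans while still flipping the model's prediction, which is why defenses such as adversarial training and input sanitization are becoming standard practice.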

Source: VentureBeat

Read more on: AI models, Adversarial attacks

Related news:

LinkedIn is training AI models on your data

Lionsgate and Runway team up to develop AI models for future films and shows

Mistral launches a free tier for developers to test its AI models