Adversarial attacks on AI models are rising: what should you do now?
With AI’s growing influence across industries, malicious attackers continue to sharpen their tradecraft to exploit ML models.
The rapidly growing number of connected devices and the proliferation of data have pushed enterprises into an arms race with malicious attackers, many financed by nation-states seeking to control global networks for political and financial gain. In 2023, Gartner warned, “The misuse of model inversion can lead to significant privacy violations, especially in healthcare and financial sectors, where adversaries can extract patient or customer information from AI systems.” In response, Google and IBM apply privacy-preserving techniques to protect data during collaborative AI model training, while Intel uses hardware-accelerated encryption to secure federated learning environments.
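To make the defensive idea concrete, the sketch below shows a toy federated-averaging round in which each client’s model update is clipped and perturbed with Gaussian noise before aggregation, a common differential-privacy-style safeguard that limits how much an adversary can recover about any individual client’s data through model inversion. This is a minimal illustration only; the function names (privatize_update, federated_average) and parameters are hypothetical and do not represent the implementations used by Google, IBM, or Intel.

```python
# Illustrative sketch: clip-and-noise aggregation of client updates in a
# federated-learning round, a common mitigation against model inversion.
# All names and parameters here are hypothetical examples.
import numpy as np


def privatize_update(update: np.ndarray, clip_norm: float, noise_std: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Clip the update's L2 norm, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)


def federated_average(client_updates: list[np.ndarray], clip_norm: float = 1.0,
                      noise_std: float = 0.1, seed: int = 0) -> np.ndarray:
    """Average privatized client updates into a single global update."""
    rng = np.random.default_rng(seed)
    private = [privatize_update(u, clip_norm, noise_std, rng)
               for u in client_updates]
    return np.mean(private, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Three simulated clients, each contributing a 5-dimensional gradient update.
    updates = [rng.normal(size=5) for _ in range(3)]
    print("Aggregated (privatized) update:", federated_average(updates))
```

The key design point is that clipping bounds any single client’s influence on the aggregate, so the added noise can mask individual contributions without drowning out the overall training signal.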