Alexa, Siri, Google Assistant vulnerable to malicious commands, study reveals
Amazon researchers uncover alarming vulnerabilities in speech AI models, revealing 90% susceptibility to adversarial attacks and raising concerns over potential misuse.
Using a technique called projected gradient descent, the researchers were able to generate adversarial examples that consistently caused the SLMs to produce toxic outputs across 12 different categories, from explicit violence to hate speech.
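Projected gradient descent crafts an adversarial input by repeatedly stepping in the direction that increases the model's loss, then projecting the result back into a small allowed neighborhood of the original input so the perturbation stays imperceptible. The sketch below illustrates the core loop on a toy differentiable objective; it is a minimal illustration of the general technique, not the researchers' actual audio attack pipeline, and the `pgd_attack` function, the linear stand-in loss, and all parameter values are hypothetical.

```python
import numpy as np

def pgd_attack(x0, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient descent (L-infinity variant, illustrative).

    Ascend the loss via signed gradient steps, projecting back into
    the ball of radius eps around the original input x0 each step.
    """
    x = x0.copy()
    for _ in range(steps):
        g = grad_fn(x)                       # gradient of loss w.r.t. input
        x = x + alpha * np.sign(g)           # gradient-ascent step
        x = np.clip(x, x0 - eps, x0 + eps)   # project into the eps-ball
    return x

# Toy stand-in for a model's loss: L(x) = w . x, so the gradient is w.
# A real attack would backpropagate through the speech model instead.
w = np.array([1.0, -2.0, 0.5])
x0 = np.zeros(3)
x_adv = pgd_attack(x0, lambda x: w)
```

Each iteration moves the input by at most `alpha` per coordinate, and the final clip guarantees the perturbation never exceeds `eps`, which is what keeps the adversarial example close to the original while still raising the loss.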
Or read this on Venture Beat