Microsoft’s AI Can Be Turned Into an Automated Phishing Machine


Attacks on Microsoft’s Copilot AI allow for answers to be manipulated, data extracted, and security protections bypassed, new research shows.

Ask a question about an upcoming meeting and Microsoft's Copilot AI system can pull answers from your emails, Teams chats, and files: a potential productivity boon. But the same retrieval capability can be turned against its owner. “I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf,” says Michael Bargury, the cofounder and CTO of security company Zenity, who published his findings alongside videos showing how Copilot could be abused. That demonstration, like Bargury's other attacks, broadly works by using the large language model (LLM) as designed: typing written questions to access data the AI can retrieve.
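The core problem the paragraph describes can be sketched in a few lines. This is a minimal, hypothetical toy model, not Copilot's actual architecture: all names and data below are invented. It shows why an assistant that answers any typed question by retrieving from a user's data grants an attacker the same reach as the legitimate user.

```python
import re

# Toy model (all names and data hypothetical): an assistant that answers
# any typed question by retrieving from the user's mailbox cannot tell a
# legitimate query from an attacker's. Whoever can type, can retrieve.

EMAILS = [
    {"from": "alice@example.com", "body": "Meeting moved to 3pm"},
    {"from": "bob@example.com", "body": "Q3 numbers attached"},
]

def assistant_answer(question: str) -> list[str]:
    """Naive keyword retrieval over the mailbox, standing in for the
    LLM's data-access step: it returns matches for any question asked."""
    words = re.findall(r"\w+", question.lower())
    return [e["body"] for e in EMAILS
            if any(w in e["body"].lower() for w in words)]

# A legitimate user and an attacker issue the same kind of request:
print(assistant_answer("when is the meeting?"))
# The attacker can likewise enumerate everyone the victim has spoken to,
# the precondition for sending phishing mail "on your behalf":
print(sorted({e["from"] for e in EMAILS}))
```

The point of the sketch is that no exploit code is involved: the "attack" is simply asking the assistant for the data it was built to retrieve, which is why Bargury's demonstrations work with the model used as designed.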

