
Microsoft’s new safety system can catch hallucinations in its customers’ AI apps


AI safety help for anyone who doesn’t have a red team.

Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations "that are plausible yet unsupported," and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform. Two other features, for steering models toward safe outputs and for tracking prompts to flag potentially problematic users, are coming soon. In the case of Google Gemini's image generator, filters meant to reduce bias had unintended effects, an area where Microsoft says its Azure AI tools will allow for more customized control.
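To make the "plausible yet unsupported" idea concrete, here is a toy sketch of a groundedness-style check: it flags sentences in a model's answer whose content words have little overlap with the source material. This is purely illustrative and is not Microsoft's Azure AI API; the function names and the overlap heuristic are hypothetical.

```python
import re


def content_words(text: str) -> set[str]:
    """Lowercase alphabetic tokens longer than 3 chars (crude content words)."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}


def unsupported_sentences(answer: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose content-word overlap with the source
    falls below `threshold` -- rough candidates for unsupported claims."""
    source_words = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged


source = "The report says revenue grew 12 percent in 2023."
answer = "Revenue grew 12 percent in 2023. The company also opened offices in Paris."
print(unsupported_sentences(answer, source))
# The second sentence is flagged: none of its content words appear in the source.
```

Production systems like the one described in the article use an LLM rather than word overlap to judge support, but the input/output shape, answer plus source documents in, flagged spans out, is the same.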

Read this on The Verge

Read more on: Microsoft, Customers, AI apps

Related news:

Windows AI PC manufacturers must add a Copilot key, says Microsoft

Microsoft launches new Azure AI tools to cut out LLM safety and reliability risks

How a Windows shake-up could position Microsoft to capitalize on AI PCs