Get the latest tech news

Red team AI now to build safer, smarter models tomorrow


AI models are under attack. Traditional defenses are failing. Discover why red teaming is crucial for thwarting adversarial threats.

Rather than treating red teaming as an occasional check, they deploy continuous adversarial testing by combining expert human insights, disciplined automation, and iterative human-in-the-middle evaluations to uncover and reduce threats before attackers can exploit them proactively. Using its Python Risk Identification Toolkit (PyRIT), Microsoft bridges cybersecurity expertise and advanced analytics with disciplined human-in-the-middle validation, accelerating vulnerability detection and providing detailed, actionable intelligence to fortify model resilience. Combining external security specialists’ insights with automated adversarial evaluations and rigorous human validation cycles, OpenAI proactively addresses sophisticated threats, specifically targeting misinformation and prompt-injection vulnerabilities to maintain robust model performance.

Get the Android app

Or read this on Venture Beat

Read more on:

Photo of tomorrow

tomorrow

Photo of smarter models

smarter models

Photo of Red team AI

Red team AI

Related news:

News photo

TechCrunch Sessions: AI begins tomorrow — today’s your last chance to save on tickets

News photo

How to watch the Google I/O 2025 keynote tomorrow

News photo

Tell Congress to Reject New Push for “Nonprofit Killer Bill” in House Ways & Means Markup Tomorrow