
AI systems with 'unacceptable risk' are now banned in the EU


As of February 2, AI systems that the bloc's regulators deem "unacceptably risky" or harmful are banned from use in the EU.

The specifics are set out in Article 5, but broadly, the Act is designed to cover a myriad of use cases where AI might appear and interact with individuals, from consumer applications through to physical environments. Some companies have signed the EU’s voluntary AI Pact, a pledge to begin applying the Act’s principles ahead of its deadlines; that isn’t to suggest that Apple, Meta, Mistral, or others who didn’t agree to the Pact won’t meet their obligations, including the ban on unacceptably risky systems. The Act also carves out narrow exemptions, for example permitting certain law enforcement uses of biometric systems. Any such exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can’t make a decision that “produces an adverse legal effect” on a person solely based on these systems’ outputs.


Read the full story on TechCrunch

Read more on:

AI systems

unacceptable risk

Related news:

Microsoft Research: AI Systems Cannot Be Made Fully Secure

OpenAI Calls on U.S. Government to Feed Its Data Into AI Systems. To hear OpenAI tell it, the U.S. can only defeat China on the global stage with the help of artificial intelligence.

Russian firm starts shipments of AI systems based on homegrown CPUs, but can't avoid using foreign GPUs