
Prohibited AI Practice

Regulatory

What It Means

Prohibited AI practices are specific AI applications that the EU has completely banned because they pose unacceptable risks to fundamental rights and safety. These include systems that manipulate human behavior, government social scoring systems, and real-time facial recognition in public spaces.

Why Chief AI Officers Care

CAIOs must ensure their organizations never develop or deploy these banned AI systems, as violations can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. This requires implementing governance frameworks that screen every AI project against the prohibited practices list before development begins.
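As a sketch of what that screening step might look like in practice, the snippet below outlines a hypothetical pre-development intake check in Python. The category names, the AIProjectIntake fields, and the screen_project helper are illustrative assumptions rather than part of any established compliance tool, and a flag-free result would still need proper legal review.

```python
# Minimal sketch of a pre-development screening check, assuming a simple
# intake-questionnaire step in an AI governance process. All names here
# (ProhibitedCategory, AIProjectIntake, screen_project) are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto


class ProhibitedCategory(Enum):
    """Categories broadly mirroring the EU AI Act's prohibited practices."""
    BEHAVIOURAL_MANIPULATION = auto()
    EXPLOITATION_OF_VULNERABILITIES = auto()
    SOCIAL_SCORING = auto()
    REALTIME_REMOTE_BIOMETRIC_ID = auto()


@dataclass
class AIProjectIntake:
    """Answers collected from a project team before development starts."""
    name: str
    manipulates_behaviour: bool
    exploits_vulnerable_groups: bool
    scores_individuals_socially: bool
    uses_realtime_biometric_id_in_public: bool


def screen_project(intake: AIProjectIntake) -> list[ProhibitedCategory]:
    """Return any prohibited-practice categories the intake answers flag.

    An empty list only means nothing was flagged at intake; it does not
    replace a full legal assessment.
    """
    flags = []
    if intake.manipulates_behaviour:
        flags.append(ProhibitedCategory.BEHAVIOURAL_MANIPULATION)
    if intake.exploits_vulnerable_groups:
        flags.append(ProhibitedCategory.EXPLOITATION_OF_VULNERABILITIES)
    if intake.scores_individuals_socially:
        flags.append(ProhibitedCategory.SOCIAL_SCORING)
    if intake.uses_realtime_biometric_id_in_public:
        flags.append(ProhibitedCategory.REALTIME_REMOTE_BIOMETRIC_ID)
    return flags


if __name__ == "__main__":
    project = AIProjectIntake(
        name="In-store shopper recognition",
        manipulates_behaviour=False,
        exploits_vulnerable_groups=False,
        scores_individuals_socially=False,
        uses_realtime_biometric_id_in_public=True,
    )
    hits = screen_project(project)
    if hits:
        print(f"BLOCKED: {project.name} flags {[c.name for c in hits]}")
    else:
        print(f"{project.name}: no prohibited-practice flags at intake")
```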

Real-World Example

A retail company cannot deploy real-time facial recognition cameras in its stores to identify customers as they shop, even for personalized marketing purposes, as this constitutes prohibited real-time biometric identification in publicly accessible spaces.

Common Confusion

Many assume these prohibitions only apply to government use, but they actually restrict private companies as well. The bans are absolute regardless of user consent or business justification.

Industry-Specific Applications


See how this term applies to healthcare, finance, manufacturing, government, tech, and insurance.

Healthcare: In healthcare, prohibited AI practices would include AI systems that manipulate patients' psychological vulnerabilities ...

Finance: In finance, prohibited AI practices would include AI systems that manipulate consumers into taking harmful financial pro...


