The €35 Million Question
The EU AI Act is now in force, and its provisions are phasing into enforcement. It is not a guideline; it is a law with teeth. Fines for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher.
The Act takes a risk-based approach. Your first job is to categorize your AI system into one of four buckets.
1. Prohibited Practices (The Red Zone)
These systems are banned. Period. If you are building one of these, stop. ACTION: SHUT DOWN IMMEDIATELY.
Social Scoring: Evaluating trustworthiness based on social behavior (Black Mirror style).
Real-time Biometric ID: Facial recognition in public spaces (except for narrow police exceptions).
Emotion Recognition: Using AI to infer emotions in schools or workplaces.
Untargeted Scraping: Building facial recognition databases by scraping the internet (e.g., Clearview AI).
Manipulation: Subliminal techniques to distort behavior.
2. High Risk (The Compliance-Heavy Zone)
This is where most enterprise AI falls. If your AI is used in any of these areas, it is "High Risk":
Critical Infrastructure: Transport, Water, Gas, Electricity.
Education: Grading exams, assigning students to schools.
Employment: CV screening algorithms (HR Tech), task allocation.
Essential Services: Credit scoring, Life/Health Insurance pricing, Emergency dispatch.
Law Enforcement & Border Control: Polygraphs, immigration assessment.
Your Obligations (Step-by-Step)
Conformity Assessment: You must pass a conformity assessment before deployment (self-assessment under internal control for most Annex III use cases; a notified-body audit for certain categories such as biometric systems).
Risk Management System: You must have a documented ISO-style system to monitor accuracy and robustness.
Human Oversight: The "Stop Button" requirement. A human must be able to intervene or override the AI decision.
Data Governance: You must prove your training data is representative and free of bias (see Blog 41: AI-BOM).
Logging: Automatic recording of events (traceability).
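Of these, the logging duty is the most concrete to prototype. Below is a minimal Python sketch of automatic decision logging; the field names (decision_id, model_version, and so on) and the JSON-lines storage are illustrative assumptions, not a format prescribed by the Act.

```python
"""Illustrative sketch of automatic decision logging (traceability).

The fields and storage format are assumptions for this example, not the
EU AI Act's prescribed schema.
"""
import hashlib
import json
import uuid
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_decision_log.jsonl")  # hypothetical append-only audit log


@dataclass
class DecisionEvent:
    decision_id: str            # unique reference for later audits
    timestamp: str              # UTC, ISO 8601
    model_version: str          # which model produced the output
    input_hash: str             # hash of the input, so raw data stays out of the log
    output: str                 # the model's recommendation
    confidence: float           # model-reported confidence
    human_reviewer: str | None  # filled in once a human signs off


def log_decision(model_version: str, raw_input: str, output: str,
                 confidence: float, human_reviewer: str | None = None) -> str:
    """Append one decision event to the audit log and return its id."""
    event = DecisionEvent(
        decision_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        output=output,
        confidence=confidence,
        human_reviewer=human_reviewer,
    )
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")
    return event.decision_id


if __name__ == "__main__":
    log_decision("cv-screener-1.3.0", "candidate_profile_text", "shortlist", 0.87)
```

Hashing the input keeps personal data out of the audit trail while still letting you match a logged event back to the record that produced it.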
3. Limited Risk (Transparency)
This covers chatbots, emotion recognition (outside work and school), and deepfakes. Obligation: transparency. You must inform the user they are interacting with an AI, and deepfakes must be labelled (watermarked) as AI-generated.
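For the chatbot case, the duty comes down to making the AI nature of the interaction unmissable and machine-readable. A small sketch, assuming a hypothetical respond() wrapper and illustrative notice wording:

```python
AI_NOTICE = "You are chatting with an AI assistant, not a human."  # wording is illustrative


def respond(user_message: str, generate_reply) -> dict:
    """Wrap a chatbot reply so every response carries an explicit AI disclosure."""
    return {
        "notice": AI_NOTICE,   # human-readable disclosure shown in the UI
        "ai_generated": True,  # machine-readable flag for downstream systems
        "reply": generate_reply(user_message),
    }


if __name__ == "__main__":
    print(respond("What are your opening hours?", lambda m: "We are open 9 to 5."))
```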
4. Minimal Risk
Spam filters, video games, inventory management. No new obligations. This is the vast majority of AI systems.
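Putting the four tiers above together, the triage can be sketched as a short decision function. This is only an illustration of the flow described in this post, with assumed boolean flags; the real classification is a legal assessment, not a three-question quiz.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright
    HIGH = "high"              # full compliance regime
    LIMITED = "limited"        # transparency obligations
    MINIMAL = "minimal"        # no new obligations


def triage(does_prohibited_practice: bool,
           annex_iii_use_case: bool,
           interacts_with_people_or_generates_content: bool) -> RiskTier:
    """Simplified triage mirroring the four buckets described above."""
    if does_prohibited_practice:            # social scoring, untargeted scraping, etc.
        return RiskTier.PROHIBITED
    if annex_iii_use_case:                  # hiring, credit, critical infrastructure, etc.
        return RiskTier.HIGH
    if interacts_with_people_or_generates_content:  # chatbots, deepfakes
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    # A CV-screening tool: not prohibited, but an Annex III (employment) use case.
    print(triage(False, True, False))  # RiskTier.HIGH
```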
Strategy: The "Human in the Loop" Loophole?
Many companies try to avoid the "High Risk" classification by claiming the AI is merely advisory: it suggests a decision, and a human stamps it. Be careful. If the human is just rubber-stamping the AI's output without meaningful oversight, the system is still considered High Risk. The human must have the competence and authority to disagree.
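One way to make oversight demonstrably meaningful is to require an explicit, named human decision on every AI recommendation and to track how often that human actually deviates. The sketch below is an assumed structure, not anything the Act prescribes; a near-zero override rate across thousands of cases is exactly the pattern that looks like rubber stamping.

```python
from dataclasses import dataclass


@dataclass
class ReviewedDecision:
    ai_recommendation: str
    human_decision: str
    reviewer_id: str

    @property
    def overridden(self) -> bool:
        return self.human_decision != self.ai_recommendation


def review(ai_recommendation: str, reviewer_id: str, human_decision: str) -> ReviewedDecision:
    """The AI output is advisory: nothing is final until a named human decides."""
    if not human_decision:
        raise ValueError("A human decision is required before the outcome takes effect.")
    return ReviewedDecision(ai_recommendation, human_decision, reviewer_id)


def override_rate(decisions: list[ReviewedDecision]) -> float:
    """Share of cases where the human deviated from the AI.

    A rate near zero across many cases is a warning sign that oversight
    has degraded into rubber stamping.
    """
    if not decisions:
        return 0.0
    return sum(d.overridden for d in decisions) / len(decisions)


if __name__ == "__main__":
    history = [
        review("reject", "reviewer-17", "reject"),
        review("reject", "reviewer-17", "invite_to_interview"),  # a genuine override
    ]
    print(f"Override rate: {override_rate(history):.0%}")
```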