The EU AI Act entered into force in August 2024. By August 2025, the prohibited practices provisions apply. By August 2026, the high-risk AI system requirements are fully enforceable. If you're deploying AI in the European market—or processing data of EU citizens—this regulation affects you.
The discourse around the AI Act has been dominated by two extremes: breathless panic about "innovation-killing regulation" and dismissive hand-waving that "my use case isn't covered." Both are wrong. The AI Act is significant, but it's also navigable—if you understand what it actually requires.
This guide cuts through the noise. We'll cover what the Act actually says, which of your AI deployments are affected, and what architectural changes you need to make. No legal theory, no policy debates—just practical implementation guidance.
Disclaimer: This guide provides architectural and technical guidance, not legal advice. Consult qualified legal counsel for specific compliance obligations.
The Risk-Based Framework
The AI Act uses a tiered risk framework. Your obligations depend on which tier your AI systems fall into.
Prohibited Practices (Unacceptable Risk)
Some AI applications are banned outright. These include:
Subliminal manipulation: AI systems that deploy subliminal techniques to materially distort behavior in ways that cause harm. Note the qualifier—persuasive AI isn't banned, manipulative AI that causes harm is.
Exploitation of vulnerabilities: Systems that target people's age, disability, or social/economic situation to materially distort their behavior.
Social scoring: Government use of AI to evaluate citizens based on social behavior or personal characteristics for purposes unrelated to the context in which data was collected.
Real-time biometric identification: With narrow exceptions for law enforcement (and even then, heavily restricted).
Emotion inference in workplace/education: AI systems that infer emotions in workplaces or educational institutions, except for safety/medical purposes.
Most enterprise AI deployments don't touch these categories. If yours does, the answer is simple: stop. There's no compliance path for prohibited practices.
High-Risk AI Systems
This is where the substantial obligations live. High-risk AI includes:
| Domain | Examples of High-Risk Use |
|---|---|
| Biometrics | Remote biometric identification, biometric categorization |
| Critical Infrastructure | Safety components in water, gas, heating, electricity management |
| Education | Admission decisions, learning assessment, behavior monitoring |
| Employment | Recruitment, promotion decisions, task allocation, performance monitoring |
| Essential Services | Credit scoring, insurance pricing, emergency services dispatch |
| Law Enforcement | Risk assessment, polygraphs, evidence evaluation |
| Migration | Asylum applications, visa processing, border control |
| Justice | Sentencing assistance, case research, alternative dispute resolution |
If your AI system makes or informs decisions in these domains, you're likely in high-risk territory. The full list is in Annex III of the regulation—check it against your deployment inventory.
Limited Risk (Transparency Obligations)
AI systems that interact with people, generate content, or perform emotion/biometric detection have transparency obligations. Users must be informed they're interacting with AI. Deepfakes must be labeled. This is disclosure, not prohibition.
Minimal Risk
Everything else. Most general-purpose enterprise AI—document summarization, code assistance, search enhancement—falls here. No specific obligations beyond general product safety law.
What High-Risk Actually Requires
If you're deploying high-risk AI, here's what you need:
1. Risk Management System
You need a documented risk management process that identifies, analyzes, and mitigates risks throughout the AI system lifecycle. This isn't a one-time assessment—it's continuous monitoring with documented review cycles.
Architectural implication: Build telemetry into your AI systems from day one. You can't manage risks you can't observe. Log predictions, track outcomes, monitor for drift. The infrastructure for risk management must be part of the system, not bolted on later.
2. Data Governance
Training, validation, and testing datasets must be subject to appropriate governance practices. This includes examination for biases, gaps, and quality issues. Datasets must be "relevant, representative, free of errors and complete."
Architectural implication: Data lineage is mandatory. You need to know where your training data came from, how it was processed, and what quality checks were applied. This is easier with sovereign data infrastructure where you control the full pipeline.
The API problem: When you use cloud AI APIs, you often don't know what training data was used. This creates a compliance gap for high-risk applications. Sovereign models with documented training provenance are easier to justify to regulators.
3. Technical Documentation
You must maintain technical documentation that demonstrates compliance before placing the system on the market. This includes system architecture, design specifications, training methodologies, testing procedures, and risk assessments.
Architectural implication: Document as you build. Retrofitting documentation to existing systems is expensive and error-prone. Build documentation generation into your development pipeline.
4. Record-Keeping
High-risk AI systems must enable automatic recording of events (logs) throughout their lifecycle. Logs must be retained for periods appropriate to the use case—typically at least as long as the system is in use.
Architectural implication: Logging infrastructure must be immutable, tamper-evident, and retention-aware. Consider append-only storage, cryptographic verification, and automated retention policies. This is another area where sovereign infrastructure provides control cloud APIs cannot.
5. Transparency
Users must receive information about the AI system's capabilities, limitations, and risks in a clear and accessible manner. For systems that interact with people, the AI nature must be disclosed.
Architectural implication: Design transparency into the user interface. Don't bury disclosures in terms of service—surface them in the user experience. "This recommendation was generated by AI and may not be accurate" should be visible, not hidden.
6. Human Oversight
High-risk AI must be designed to allow appropriate human oversight. This means humans must be able to understand the system's capabilities, correctly interpret outputs, and intervene or override when necessary.
Architectural implication: Build intervention points into your workflows. AI recommendations should be reviewable before action. Override mechanisms should be accessible. The system should degrade gracefully when humans intervene—not treat human override as an error state.
7. Accuracy, Robustness, and Cybersecurity
High-risk AI must achieve appropriate levels of accuracy, be resilient against errors and inconsistencies, and be protected against attempts to alter behavior through malicious manipulation.
Architectural implication: Adversarial testing is mandatory. You need to red-team your AI systems—test them against prompt injection, data poisoning, and evasion attacks. This testing must be documented and repeated as systems evolve.
General-Purpose AI (GPAI) Rules
The AI Act includes specific provisions for general-purpose AI models—the foundation models that power many enterprise applications. If you're deploying models like GPT-4, Claude, or Llama, these rules affect your supply chain.
Provider Obligations
GPAI providers (OpenAI, Anthropic, Meta) must:
Maintain technical documentation about the model, training process, and evaluation results. This documentation must be available to downstream deployers and regulators.
Provide information to downstream deployers sufficient to enable compliance with the AI Act. If you're deploying high-risk AI built on a GPAI model, you need information from the provider to complete your compliance obligations.
Establish a copyright compliance policy and provide sufficiently detailed summary of training data.
Systemic Risk Models
GPAI models with "systemic risk"—defined as models trained with more than 10^25 FLOPs of compute—face additional obligations including model evaluations, adversarial testing, incident tracking, and cybersecurity protections.
As of 2025, this captures frontier models from major labs. If you're deploying these models, ensure your provider is meeting their GPAI obligations.
The supply chain question: Your compliance depends partly on your AI providers' compliance. When selecting model providers, evaluate their AI Act readiness. Can they provide the documentation you need? Are they committed to ongoing transparency?
How Sovereign AI Helps
Many AI Act requirements are easier to satisfy with sovereign infrastructure:
Data Governance
When you train or fine-tune models on your own infrastructure, you control the training data pipeline. Data lineage is documented. Quality checks are auditable. You can demonstrate exactly what data was used, how it was processed, and what biases were evaluated.
With cloud APIs, you're dependent on providers' documentation—which may not meet the specificity regulators require for high-risk applications.
Record-Keeping
Sovereign infrastructure means sovereign logs. You control retention, access, and format. When regulators request audit trails, you provide them directly—no third-party data requests, no provider cooperation required.
Human Oversight
With local inference, you control the full request-response pipeline. Intervention points can be implemented anywhere. Outputs can be reviewed, modified, or blocked before reaching end users. This is harder to implement when the model is a black box API call to an external service.
Cybersecurity
Sovereign models don't transmit sensitive prompts and responses across the internet. The attack surface is reduced. Adversarial testing can be performed without exposing vulnerabilities to external parties.
SIA AI Act Compliance Architecture
The Sovereign Intelligence Architecture incorporates AI Act requirements into its design principles.
Audit-Ready Logging
Immutable, tamper-evident logs with configurable retention. Every inference logged with full context for regulatory review.
Data Lineage Tracking
Document training data provenance, processing steps, and quality checks. Generate compliance reports automatically.
Human Override Workflows
Built-in intervention points. Review queues for high-stakes decisions. Override logging for audit trails.
Risk Monitoring Dashboard
Real-time monitoring of model performance, drift detection, and anomaly alerting. Continuous risk management as the Act requires.
Timeline and Transition
The AI Act phases in over time:
August 2024: Entry into force. Start preparing now.
February 2025: Prohibited practices provisions apply. Ensure you're not deploying banned AI applications.
August 2025: GPAI model rules apply. Ensure your model providers are compliant.
August 2026: High-risk AI system requirements fully applicable. All high-risk deployments must be compliant.
August 2027: Obligations for high-risk AI systems that are safety components of regulated products.
If you're deploying high-risk AI, you have until August 2026 to achieve full compliance. That's not as long as it sounds—especially if architectural changes are required.
Practical Steps
1. Inventory Your AI
Catalog every AI deployment in your organization. For each, determine: Is it high-risk? What data does it process? What decisions does it influence? Many organizations discover AI deployments they didn't know existed during this exercise.
2. Gap Analysis
For high-risk systems, evaluate current state against AI Act requirements. Do you have adequate logging? Data governance documentation? Human oversight mechanisms? Identify gaps now, while there's time to address them.
3. Architectural Planning
Some gaps can be closed with process changes. Others require architectural modifications. Plan these now—adding immutable logging or human oversight to production systems takes time.
4. Provider Evaluation
Assess your AI providers' AI Act readiness. Can they provide documentation needed for your compliance? Are they committed to transparency? Consider whether sovereign alternatives might reduce compliance risk.
5. Documentation Pipeline
Build documentation generation into your AI development process. Technical documentation, risk assessments, and compliance reports should be byproducts of development, not afterthoughts.
The bottom line: The EU AI Act is significant but manageable. Most enterprise AI isn't high-risk. For systems that are, the requirements align with good practice—governance, transparency, oversight. Sovereign architecture makes compliance easier. Start now, and August 2026 is achievable.
Need help with AI Act compliance?
The SIA methodology incorporates regulatory compliance into its architecture patterns. Let's discuss how to build AI Act readiness into your infrastructure.
Start a Conversation →