EU AI Act
May 9, 2026 · 8 min read
EU AI Act Articles 10-15: Practical Compliance for US Companies
The EU AI Act's high-risk obligations take full effect on August 2, 2026.
US companies have spent the last year assuming this is a European problem. It isn't.
The Act's extraterritorial scope reaches any AI system placed on the EU market, and any system
whose output is used in the EU, regardless of where the provider is incorporated. If you sell to
European law firms, financial institutions, medical organizations, or recruitment companies, you
have obligations under Articles 9-15. Here's what the technical-control provisions, Articles 10-15,
actually require in practice.
€35M or 7% of Global Turnover: The Number That Matters
Article 99 of Regulation (EU) 2024/1689 sets a tiered penalty structure. The top tier, reserved
for engaging in the AI practices prohibited by Article 5, reaches €35,000,000 or 7% of total
worldwide annual turnover, whichever is higher.
Because the fine is whichever is higher, €35M is a floor for large offenders, not a cap. For a
US company with €100M in global revenue, 7% is only €7M, so the €35M figure governs. For a
company with €1B in revenue, 7% is €70M, and €70M is the exposure.
The lower tiers are also significant: violations of the Article 9-15 obligations (the
risk-management and technical controls for high-risk systems) carry fines of up to €15M or 3% of
global turnover, again whichever is higher. These are the provisions most US companies building
AI for regulated verticals will fail first.
Are you in scope? Annex III lists the high-risk categories. The ones most
likely to catch US companies: AI in employment/HR decisions (Annex III §4), AI in access to
essential private and public services (§5), AI in law enforcement (§6), AI in administration
of justice (§8). Legal AI, financial AI with credit-scoring components, and recruitment AI
are all potentially in scope.
Articles 10-15: What Each One Actually Requires
Art. 10
Data and data governance
Training, validation, and testing data must meet quality criteria: relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. Data governance practices must be documented. For US companies fine-tuning pre-trained foundation models on customer data, this means documenting what data was used, how it was validated, and which biases were assessed and mitigated.
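That documentation is easier to keep current if each dataset carries a structured record. A minimal sketch, assuming a Python stack; the field names are ours, not prescribed by the Act:

from dataclasses import dataclass, field

# Hypothetical schema for an Article 10 data-governance record.
# Field names are illustrative, not mandated by the Act.
@dataclass
class DatasetRecord:
    name: str
    source: str                     # provenance: vendor, customer, public corpus
    purpose: str                    # training, validation, or testing
    collection_method: str
    known_gaps: list[str] = field(default_factory=list)    # representativeness gaps
    bias_assessments: list[str] = field(default_factory=list)
    error_checks: list[str] = field(default_factory=list)  # dedup, label audits, etc.

record = DatasetRecord(
    name="contract-clauses-v3",
    source="customer uploads (fine-tuning set)",
    purpose="training",
    collection_method="opt-in customer data, PII scrubbed",
    known_gaps=["non-English contracts underrepresented"],
    bias_assessments=["outcome parity across jurisdiction of governing law"],
    error_checks=["near-duplicate removal", "10% manual label audit"],
)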
Art. 11
Technical documentation
Before placing a high-risk AI system on the EU market, providers must prepare and maintain technical documentation per Annex IV. This includes system description, design specifications, training methodology, performance metrics, risk assessment, and validation testing results. For US companies, this means a written technical file — not a marketing FAQ — that an EU national authority could assess for compliance.
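One way to keep the technical file complete is to track its sections as structured data and flag gaps automatically. The section names below paraphrase Annex IV's headings; consult the Annex itself for the exact required contents:

# Paraphrased Annex IV sections (not the official text).
TECHNICAL_FILE_SECTIONS = {
    "general_description": "intended purpose, provider, versions, deployment form",
    "detailed_design": "architecture, development process, third-party components",
    "data_requirements": "datasheets, provenance, labelling, cleaning (Article 10)",
    "monitoring_and_control": "logging design (Article 12), human oversight (Article 14)",
    "performance": "accuracy and robustness metrics with test results (Article 15)",
    "risk_management": "description of the Article 9 risk management system",
    "changes": "record of modifications over the lifecycle",
    "standards": "harmonised standards applied, or alternative solutions",
    "post_market_monitoring": "monitoring plan for the deployed system",
}

def missing_sections(file_contents: dict[str, str]) -> list[str]:
    # Flag sections with no drafted content yet.
    return [s for s in TECHNICAL_FILE_SECTIONS if not file_contents.get(s)]

print(missing_sections({"general_description": "Legal research assistant v2.1 ..."}))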
Art. 12
Record-keeping and logging
High-risk AI systems must automatically record events (logs) throughout their lifetime. Logging must be sufficient to trace the system's operation, identify situations that may present a risk, and support post-market monitoring; for remote biometric identification systems, the Act additionally mandates specific elements, including the period of each use, the reference database checked, the input data that led to a match, and the identity of the persons verifying results. For legal AI or financial AI, the practical consequence is that every output the system produces should be logged with enough context to reconstruct what happened in a regulatory investigation. Logs must be accessible to deployers and to national competent authorities on request.
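A minimal sketch of an append-only inference log entry, assuming a Python service; the field set reflects our reading of Article 12, not a schema the Act prescribes:

import datetime
import hashlib
import json

# Illustrative Article 12 log entry; fields are our interpretation.
def log_inference(model_version: str, prompt: str, output: str,
                  sources: list[str], reviewer: str | None) -> dict:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # avoid storing raw PII
        "output": output,
        "sources": sources,          # grounding references (ties into Articles 13-14)
        "human_reviewer": reviewer,  # identity of the verifier, if any
    }
    # In production this would write to WORM / append-only storage, not stdout.
    print(json.dumps(entry))
    return entry

log_inference("legal-assist-2.1", "Summarise clause 4",
              "Clause 4 limits liability to ...",
              sources=["contract.pdf#p3"], reviewer="jdoe")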
Art. 13
Transparency and instructions for use
High-risk AI systems must be transparent enough that deployers can interpret outputs appropriately. Providers must supply instructions for use that cover: intended purpose, level of accuracy and robustness achieved, known limitations, input data requirements, human oversight measures, and categories of persons or groups of persons likely to be affected. For US providers, this means a written disclosure document — not a chatbot help page — covering the system's actual performance characteristics.
Art. 14
Human oversight
High-risk AI systems must be designed so that humans can effectively oversee them. Specifically, the people assigned to oversight must be able to correctly interpret the system's output; detect anomalies, dysfunctions, and unexpected performance; and intervene in, override, or stop the system, and the system itself must not impede that oversight. For AI legal or financial advice tools, this means every output must include sufficient grounding information for a qualified professional to independently assess, not merely accept, the AI's conclusions. "I agree with the AI" is not oversight.
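One pattern that supports this, sketched below under our own assumptions rather than anything mandated by the Act, is to ship every answer with the evidence a reviewer needs and to require an explicit, logged accept-or-reject decision:

from dataclasses import dataclass

# Hypothetical response shape supporting Article 14 oversight:
# the reviewer sees the evidence and confidence, and can reject.
@dataclass
class GroundedAnswer:
    conclusion: str
    citations: list[str]   # sources the reviewer can open and verify
    confidence: float      # calibrated score, surfaced rather than hidden
    caveats: list[str]     # known limitations relevant to this query

def review(answer: GroundedAnswer, approve: bool, reason: str) -> dict:
    # Force an explicit, logged decision; "silently accepted" is not a state.
    return {"approved": approve, "reason": reason,
            "citations_checked": len(answer.citations)}

ans = GroundedAnswer("Clause 7 caps damages at fees paid.",
                     citations=["msa.pdf#p12"], confidence=0.82,
                     caveats=["governing law not verified"])
print(review(ans, approve=False, reason="citation does not mention a cap"))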
Art. 15
Accuracy, robustness, and cybersecurity
High-risk AI systems must achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. Accuracy levels must be documented in technical documentation and instructions for use. The system must be resilient to errors, faults, inconsistencies, and adversarial inputs. For US AI companies, Article 15 means: (a) documented accuracy benchmarks with specific numbers, not adjectives; (b) documented testing against adversarial inputs; (c) ongoing monitoring for accuracy degradation post-deployment.
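As a toy example of the post-deployment monitoring point (c) implies, a rolling accuracy check over human-verified outputs; the window size and threshold are placeholders to calibrate per system, not values from the Act:

from collections import deque

# Rolling accuracy monitor over human-verified samples.
class AccuracyMonitor:
    def __init__(self, window: int = 500, threshold: float = 0.95):
        self.results: deque[bool] = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        self.results.append(correct)

    def degraded(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False  # not enough verified samples yet
        return sum(self.results) / len(self.results) < self.threshold

monitor = AccuracyMonitor(window=5, threshold=0.8)
for outcome in [True, True, False, True, False]:
    monitor.record(outcome)
print(monitor.degraded())  # True: 3/5 = 0.6, below the 0.8 threshold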
Is your EU AI Act readiness complete?
Sturna's EU AI Act readiness assessment maps your current AI system against Articles 9-15
requirements and identifies documentation gaps before your August 2026 deadline.
Run EU AI Act Readiness Assessment →
Not legal advice. For EU regulatory compliance, consult qualified EU counsel.
The Three Things US Companies Get Wrong
1. Treating this as a documentation exercise
Articles 10-15 are not a documentation checklist. Article 12 requires technical logging
infrastructure. Article 14 requires architectural choices about how outputs are presented
to enable human oversight. Article 15 requires ongoing accuracy monitoring.
These are engineering requirements, not compliance paperwork.
A company that responds to the EU AI Act by writing a "model card" PDF without building
actual technical controls will fail enforcement scrutiny.
2. Assuming foundation model providers cover this
The EU AI Act distinguishes between providers (who build and place AI on the market) and
deployers (who use it). If your company sells an AI product built on OpenAI's API,
you are the provider under the Act, not OpenAI. OpenAI's SOC 2 report and
its own EU AI Act compliance posture don't transfer to your product. You inherit the
provider obligations for your product layer.
3. Missing the August 2026 deadline
High-risk AI system obligations under Annex III apply from August 2, 2026.
This is not a soft deadline. Market surveillance authorities in EU member states begin
enforcement on that date. Technical documentation must exist before the system
is placed on the market, not after enforcement begins. If your product is already deployed
in the EU without Annex IV documentation when the obligations take effect, you are non-compliant
from day one.
Deploy EU AI Act-compliant AI infrastructure
Sturna provides the technical controls Articles 10-15 require: WORM audit logging
(Article 12), grounded outputs with source citations (Articles 13-14), documented
accuracy benchmarks (Article 15), and technical documentation formatted for EU
national authority review. Active from day 1.
Reserve Compliance Pilot →
Payments secured by Stripe · No annual contract required