Legal AI Compliance

AI for legal teams that won't end up
in Mata v. Avianca.

Hallucination liability is now case law. Sturna grounds every response in verified citations, enforces ABA Model Rule 1.1 competence standards, and keeps an HMAC-signed audit trail the bar can actually review.

Real citations only — no fabrication
ABA Model Rule 1.1 competence layer
HMAC-signed immutable audit trail
EU AI Act high-risk readiness
30-day pilot, deposit credits month 1

Built for the liability exposure
legal AI actually creates

Mata v. Avianca made hallucination a sanctionable offense. Every Sturna layer maps to a specific rule your firm is already responsible for.

⚖️

Hallucination Defense

Every legal assertion is traced to a verified source. Fabricated case citations, non-existent statute sections, and invented regulatory guidance are blocked before they exit the model. Sturna doesn't guess — it grounds or refuses.

Mata v. Avianca, 22-cv-1461 (S.D.N.Y. 2023)
📎

Citation Verification

Parallel case law lookup runs on every citation before it appears in output. Bluebook format is validated against the actual source. Volume, page, and author details are cross-checked — not generated from pattern. No fabricated reporters, no phantom law review articles.

ABA Model Rule 1.1 — Competence
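The cross-checking described above can be sketched in a few lines. This is an illustrative example only, not Sturna's implementation: the regex and the in-memory VERIFIED_DB stand in for a real Bluebook parser and a live case-law database query.

```python
import re

# Illustrative Bluebook-style case citation pattern (simplified).
BLUEBOOK_CASE = re.compile(
    r"(?P<name>.+?),\s+(?P<vol>\d+)\s+(?P<reporter>[A-Za-z.0-9 ]+?)"
    r"\s+(?P<page>\d+)\s+\((?P<year>\d{4})\)"
)

# Stand-in for a verified legal database, keyed by (reporter, volume, page).
VERIFIED_DB = {
    ("U.S.", 347, 483): ("Brown v. Board of Education", 1954),
}

def verify_citation(citation: str) -> dict:
    """Parse a citation and cross-check every component against a
    verified source; any mismatch or missing record blocks output."""
    m = BLUEBOOK_CASE.match(citation)
    if not m:
        return {"status": "blocked", "reason": "unparseable citation"}
    key = (m["reporter"].strip(), int(m["vol"]), int(m["page"]))
    record = VERIFIED_DB.get(key)
    if record is None:
        # No verified source exists -- the citation never reaches output.
        return {"status": "blocked", "reason": "no verified source"}
    name, year = record
    if name.lower() not in m["name"].lower() or year != int(m["year"]):
        return {"status": "blocked", "reason": "metadata mismatch"}
    return {"status": "verified"}
```

The key design point: volume, reporter, page, case name, and year are each checked independently against the verified record, so a citation that is "mostly right" still gets blocked.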
👥

Supervised-AI Workflow

ABA Model Rule 5.3 requires attorneys to supervise non-lawyer assistance — AI included. Sturna's workflow enforces attorney review gates: AI output is draft-flagged until a licensed attorney approves the factual and legal assertions. No unreviewed AI output reaches clients.

ABA Model Rule 5.3 — Supervising Non-Lawyer Assistance
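The draft-flag gate above follows a simple state-machine pattern. A minimal sketch, assuming a hypothetical ReviewGate API (the class and method names are illustrative, not Sturna's actual interface):

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Draft:
    text: str
    status: str = "DRAFT"          # DRAFT -> APPROVED
    reviewer: str | None = None
    reviewed_at: str | None = None

class ReviewGate:
    """Enforces the Rule 5.3 pattern: AI output stays draft-flagged
    until a licensed attorney approves it."""

    def __init__(self, licensed_attorneys: set[str]):
        self.licensed = licensed_attorneys

    def approve(self, draft: Draft, attorney: str) -> Draft:
        if attorney not in self.licensed:
            raise PermissionError(f"{attorney} is not a licensed reviewer")
        draft.status = "APPROVED"
        draft.reviewer = attorney
        # Timestamped approval becomes part of the supervisory record.
        draft.reviewed_at = datetime.now(timezone.utc).isoformat()
        return draft

    def release(self, draft: Draft) -> str:
        # Client-facing release is only possible for approved drafts.
        if draft.status != "APPROVED":
            raise RuntimeError("unreviewed AI output cannot reach clients")
        return draft.text
```

Because release() is the only path to client-facing output, unreviewed drafts cannot leak by accident; the reviewer identity and timestamp double as audit evidence.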
🔐

HMAC-Signed Audit Trail

Every query, response, citation check, and attorney-review event is written to an append-only log with HMAC-SHA256 chain integrity. Each entry is signed with the previous entry's hash — tamper-evident by construction, not just policy. Reviewable in bar disciplinary proceedings.

SEC 17a-4 (in-house counsel at regulated firms)

Triple-Gate Verification

Three independent verification layers run on every response: (1) citation grounding against verified legal databases, (2) out-of-domain refusal for fabricated or uncertain facts, (3) competence-standard screen for conclusions that exceed the AI's verifiable knowledge. All three must pass — any failure is blocked and logged.

EU AI Act Art. 9 — Risk Management
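The all-three-must-pass logic reduces to a short pipeline. A minimal sketch under stated assumptions: the three gate functions here are trivial placeholders, where the real gates would query legal databases and trained classifiers.

```python
from __future__ import annotations

# Placeholder gates -- real implementations would do database-backed
# citation grounding, out-of-domain detection, and competence screening.
def gate_citations(resp: str) -> bool:
    return "FAKE CITE" not in resp

def gate_grounding(resp: str) -> bool:
    return not resp.startswith("I believe")

def gate_competence(resp: str) -> bool:
    return len(resp) > 0

GATES = [
    ("citation", gate_citations),
    ("grounding", gate_grounding),
    ("competence", gate_competence),
]
AUDIT_LOG: list[dict] = []

def triple_gate(response: str) -> str | None:
    """Run every gate, log the outcome, and release the response only
    if all three pass. Any single failure blocks it."""
    failures = [name for name, gate in GATES if not gate(response)]
    AUDIT_LOG.append({"response": response, "failures": failures})
    return None if failures else response
```

Note that all gates run even after one fails: the audit log records every failure reason, not just the first, which matters when the log is later reviewed.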
🇪🇺

EU AI Act High-Risk Readiness

AI systems used in the administration of justice fall under the EU AI Act's Annex III high-risk classification (Article 6). Sturna implements the required conformity framework: Article 9 risk management, Article 14 human oversight enforcement, transparency obligations for AI-assisted legal output. Documented for cross-border practice.

EU AI Act Art. 6, 9, 14 — High-Risk Classification

Watch hallucination get intercepted
on real legal adversarial prompts

Five prompts that would bait any ungrounded LLM into fabricating case law, statutes, or citations. The left side is live GPT-4 output. The right is Sturna's grounded response. API calls are real — not mocked.

Sturna Triple-Gate — Legal Hallucination Probe
Live API · GPT-4 vs. Sturna · Citation verification + grounding evidence
LIVE ENGINE
⚖️
Select a prompt to run live against GPT-4 vs. Sturna.
Real API calls — ~2–4s response time.

Every citation verified real. No fabricated rule numbers.

Mata v. Avianca, 22-cv-1461
S.D.N.Y. 2023
Attorneys sanctioned for submitting ChatGPT-hallucinated citations. The foundational hallucination-liability precedent.
ABA Model Rule 1.1
Competence
Requires lawyers to understand the benefits and risks of technology used in representation — including AI tools.
ABA Model Rule 5.3
Supervising Non-Lawyer Assistance
Attorneys retain responsibility for AI output. Unsupervised AI-generated work product violates this rule.
EU AI Act Art. 6
High-Risk Classification
AI systems used in the administration of justice are classified high-risk under Annex III, triggering a conformity assessment before deployment.
EU AI Act Art. 9
Risk Management
Requires documented risk management systems for high-risk AI — including legal research and drafting tools.
EU AI Act Art. 14
Human Oversight
High-risk AI must support human oversight. AI-generated legal output without attorney review violates this requirement.
SEC 17a-4
17 C.F.R. § 240.17a-4
Electronic record-keeping requirements (WORM or, since the 2022 amendments, audit-trail format) for registered broker-dealers. In-house counsel AI communications are in scope.
30-Day Legal AI Pilot

Reserve your dedicated legal agent pool now.

The next Mata v. Avianca won't be your firm. Sturna deploys a legal-tuned agent pool with verified citation grounding, ABA-compliant supervised workflow, and HMAC-signed audit trail — active from day 1. Deposit credits your first month. No lock-in.

  • Dedicated legal-tuned agent pool (isolated tenancy)
  • Hallucination defense with citation verification
  • ABA Rule 1.1 competence layer active from day 1
  • Attorney review gates (ABA Rule 5.3 compliant)
  • HMAC-signed immutable audit trail
  • Triple-Gate verification on every response
  • EU AI Act high-risk readiness documentation
  • Convert or get pro-rated refund at day 30
$2,500
one-time pilot deposit
✓ Credits your first month of service
🔒 Payments secured by Stripe
Pro-rated refund if pilot doesn't deliver
No annual contract required
HMAC-signed audit trail active from day 1

Common questions from GCs and managing partners

What exactly happened in Mata v. Avianca?
In Mata v. Avianca, 22-cv-1461 (S.D.N.Y. 2023), attorneys Steven Schwartz and Peter LoDuca submitted a brief containing ChatGPT-hallucinated citations — including cases that do not exist — without verifying them. Judge Castel imposed a $5,000 sanction and required the attorneys to notify their client and each judge falsely identified as an author of the fabricated opinions. The decision explicitly addressed attorney responsibility for AI-assisted work product. Sturna's citation verification layer exists specifically to prevent this outcome.
How does ABA Model Rule 1.1 apply to AI?
ABA Model Rule 1.1 requires competent representation — including the legal knowledge, skill, thoroughness, and preparation necessary for representation. Comment 8 to Rule 1.1 was amended to explicitly include the duty to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. Using an AI system that hallucinates citations without detection is inconsistent with the competence standard. Sturna's grounding layer is the technical implementation of this obligation.
What does ABA Rule 5.3 require for AI supervision?
ABA Model Rule 5.3 requires that attorneys with supervisory authority over non-lawyers ensure that their work is compatible with the professional obligations of the attorney. The ABA has confirmed this applies to AI systems. Sturna enforces attorney review gates: AI output is flagged as draft until a licensed attorney approves the factual and legal assertions. This creates a supervisory record satisfying Rule 5.3 obligations and provides audit evidence if oversight is ever questioned.
Is the $2,500 deposit refundable?
Yes. If at day 30 the pilot hasn't demonstrably reduced your hallucination exposure or improved legal workflow, you receive a pro-rated refund of unused days. The deposit is not speculative — it converts to month 1 of service upon kickoff.
How does the HMAC-signed audit trail work?
Every event — query, AI response, citation check, grounding decision, attorney review — is written to an append-only log. Each entry is signed with HMAC-SHA256 using the previous entry's signature, forming a cryptographic hash chain. Any tampering with a historical entry invalidates all subsequent entries. The chain is reviewable by bar counsel, in-house compliance, or regulators without reconstruction. For firms subject to SEC 17a-4, the append-only log is designed to meet its electronic record-keeping requirements (WORM or, under the 2022 amendments, audit-trail format).
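The chain mechanics fit in a few lines of standard-library Python. A minimal sketch, not Sturna's implementation — the hard-coded key and event fields are illustrative, and a real deployment would manage the signing key in an HSM or secrets manager:

```python
from __future__ import annotations
import hashlib
import hmac
import json

KEY = b"audit-signing-key"   # illustrative only; never hard-code real keys
GENESIS = b"\x00" * 32       # fixed anchor for the first entry

def append_event(log: list[dict], event: dict) -> None:
    """Sign each entry over (previous signature + payload), chaining it
    to everything before it."""
    prev_sig = bytes.fromhex(log[-1]["sig"]) if log else GENESIS
    payload = json.dumps(event, sort_keys=True).encode()
    sig = hmac.new(KEY, prev_sig + payload, hashlib.sha256).hexdigest()
    log.append({"event": event, "sig": sig})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every signature from the genesis anchor; any edit to a
    historical entry breaks that entry and all later ones."""
    prev_sig = GENESIS
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True).encode()
        expected = hmac.new(KEY, prev_sig + payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["sig"]):
            return False
        prev_sig = bytes.fromhex(entry["sig"])
    return True
```

Because each signature covers the previous signature, an auditor holding the key can verify the entire history in one linear pass — tamper evidence comes from the construction itself, not from access policy.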
What does EU AI Act compliance require for legal AI?
Legal AI systems used in the administration of justice fall under EU AI Act Annex III high-risk classification. Article 9 requires a documented risk management system. Article 14 requires human oversight mechanisms that allow practitioners to detect and intervene in AI errors. Article 6 sets the classification rules that trigger a conformity assessment before deployment in the EU. Sturna provides pilot documentation covering all three obligations — including the risk management record required under Article 9, ready for your DPO review.