12 questions mapped to the HIPAA Security Rule (45 CFR Part 164) and HITECH. Get a free personalized gap report with primary-source HHS citations — no signup required.
PHI risk analysis, BAA coverage, audit controls, AI/ML governance policies — OCR-ready documentation in 30 days.
A HIPAA Security Rule risk analysis (required under 45 CFR §164.308(a)(1)) is a formal assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of all ePHI. It's a required administrative safeguard and the foundation of every HIPAA compliance program. HHS OCR cites incomplete or absent risk analysis in the majority of HIPAA resolution agreements.
Yes. Any AI/ML system that creates, receives, maintains, or transmits PHI on behalf of a covered entity or business associate must comply with the HIPAA Security Rule. LLMs used for clinical documentation, diagnostic support, or patient communication — along with their API providers — require Business Associate Agreements. HHS OCR issued specific AI + HIPAA guidance in December 2024.
HITECH (§13402) requires covered entities to notify affected individuals of a PHI breach without unreasonable delay, and in no case later than 60 days after discovery. HHS OCR must also be notified (within 60 days for breaches affecting 500 or more individuals, annually otherwise), along with prominent media outlets when a breach affects more than 500 residents of a state. Encrypting PHI at rest and in transit in accordance with HHS guidance (NIST-specified methods) triggers the Safe Harbor provision: properly encrypted data is exempt from breach notification requirements even if the storage media is lost or stolen.
Yes — if your AI vendor receives, processes, or stores PHI on your behalf, a BAA is required under 45 CFR §164.314(a)(1). This includes LLM API providers (OpenAI, Anthropic, Google Gemini), cloud infrastructure (AWS, Azure, GCP), and any analytics or ML platform with PHI access. Operating without a BAA is a HIPAA violation regardless of whether a breach occurs.
This assessment covers 12 controls mapped to the HIPAA Security Rule and HITECH. Each answer is scored: Yes = 1, Partial = 0.5, No or Don't Know = 0. PHI Inventory/Risk Analysis and Business Associate Agreements are weighted 2× because they are prerequisites for all other safeguards. HIPAA-Ready: 85%+. Gaps Identified: 60–84%. Significant Remediation Required: below 60%.
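The scoring model above can be sketched in a few lines. Control names here are illustrative placeholders; the point values, 2× weights, and band thresholds follow the description in the text.

```python
# Answer values and scoring bands as described in the assessment methodology.
ANSWER_POINTS = {"yes": 1.0, "partial": 0.5, "no": 0.0, "dont_know": 0.0}

# The two prerequisite controls are weighted 2x; all others default to 1x.
WEIGHTS = {
    "phi_inventory_risk_analysis": 2.0,
    "business_associate_agreements": 2.0,
}

def score(answers: dict) -> tuple:
    """answers maps control name -> 'yes' | 'partial' | 'no' | 'dont_know'."""
    earned = sum(ANSWER_POINTS[v] * WEIGHTS.get(k, 1.0) for k, v in answers.items())
    possible = sum(WEIGHTS.get(k, 1.0) for k in answers)
    pct = 100 * earned / possible
    if pct >= 85:
        band = "HIPAA-Ready"
    elif pct >= 60:
        band = "Gaps Identified"
    else:
        band = "Significant Remediation Required"
    return round(pct, 1), band
```

Because the two weighted controls count double, answering "No" on both caps an otherwise perfect score at roughly 71%, which lands in the "Gaps Identified" band.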
Civil monetary penalties range from $100 to $50,000 per violation (annual caps per violation category now exceed $1.9M after inflation adjustment). Notable resolutions: Anthem ($16M, risk analysis failures), UCLA Health System ($865K), Montefiore Medical Center ($4.75M). HHS OCR resolved $46M in HIPAA penalties in 2023 alone. Criminal penalties can reach $250,000 in fines and 10 years' imprisonment for PHI offenses committed for commercial advantage, personal gain, or malicious harm.
HIPAA (§164.514(b)) recognizes two de-identification methods: (1) Expert Determination, in which a person with appropriate statistical or scientific expertise determines that the risk of re-identification is very small; and (2) Safe Harbor, in which 18 specific identifiers, including names, SSNs, dates, and geographic data, are removed. De-identified data falls outside HIPAA's scope, making it potentially suitable for AI model training. However, AI models can inadvertently memorize PHI, so de-identification assessments must account for this risk.
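As a rough illustration of the Safe Harbor approach, the sketch below redacts a few of the 18 identifier categories (SSNs, phone numbers, emails, dates) with regular expressions. The patterns are simplified assumptions for demonstration only; a real de-identification pipeline must cover all 18 categories, handle free-text edge cases, and still be validated by qualified reviewers.

```python
import re

# Simplified patterns for four of the 18 Safe Harbor identifier categories.
# Real pipelines need all 18 plus expert validation.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed category label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

For example, `redact("Reach Jane at jane@example.com, DOB 01/02/1984")` leaves the category labels `[EMAIL]` and `[DATE]` in place of the identifiers, preserving document structure for downstream use.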
Under 45 CFR §164.312(b), audit controls are a required standard with no "addressable" alternative. For AI systems processing ePHI, this means logging every PHI input to an AI model, model outputs containing PHI, the user IDs initiating AI queries, and timestamps. Logs should be retained for at least six years, consistent with HIPAA's documentation retention requirement (§164.316(b)(2)), and reviewed for anomalous access patterns. AI inference logs are subject to the same HIPAA audit requirements as traditional PHI access logs.
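One way to capture the fields listed above is an append-only JSON-lines audit trail. The schema below is a hypothetical sketch, not an HHS-mandated format; it records content as SHA-256 digests so the log can attest to what was processed without itself becoming a second copy of the PHI.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def audit_record(user_id: str, model: str, phi_input: str, phi_output: str) -> dict:
    """Build one audit entry for an AI inference that touched ePHI.
    Field names are illustrative; content is hashed rather than stored raw."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the query ran
        "user_id": user_id,                                   # who initiated it
        "model": model,                                       # which model saw ePHI
        "input_sha256": hashlib.sha256(phi_input.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(phi_output.encode()).hexdigest(),
    }

def append_log(path: str, record: dict) -> None:
    """Append one JSON line per event; retain 6+ years and review for anomalies."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Hashing is a design choice, not a requirement: some programs do log raw inputs and outputs, but those logs then contain ePHI and inherit the full set of Security Rule safeguards themselves.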