Key Takeaways
- Autonomous AI agents in EHRs (Epic, Oracle, Salesforce) are already placing orders, submitting prior authorizations, and closing care gaps without contemporaneous physician review — yet the standard of care still attributes those clinical actions to the supervising physician.
- Only 3% of healthcare organizations have live AI governance frameworks (Microsoft/Health Management Academy, NEJM Catalyst, Feb 2026), meaning most practices deploying agents are already operating without the audit trails and oversight policies they would need in litigation.
- Standard EHR vendor contracts cap liability at the cost of the service and explicitly transfer risk to the practice for outputs generated by customized or vendor-built agents — precisely the category Epic's Agent Factory creates.
- Malpractice claims involving AI tools increased 14% from 2022 to 2024 (The Doctors Company), a trend that predates the current agentic deployment wave and will accelerate as agents scale, producing a claims cohort that most practices are not currently capitalized to absorb.
- The minimum viable governance layer — a written scope-of-agency policy, auditable action logs, and an explicit contract amendment on liability allocation — is the practice's responsibility to build. Vendors have already allocated that risk to you contractually.
The physician who clicks "accept" on an AI-generated recommendation still owns the liability. What changes when there's no click at all?
That question is no longer theoretical. At HIMSS26 in March 2026, Epic unveiled Agent Factory, a no-code platform allowing health systems to build and deploy AI agents that, in Epic's own language, "reason, decide, and execute steps autonomously" across clinical, operational, and patient-facing workflows. Epic's Art agent surfaces evidence-based decision support from the Cosmos dataset (300 million patients, billions of encounters) and can generate and place medical orders. Its Penny agent autonomously drafts and submits denial appeals. These are not documentation aids. They are workflow actors.
BCG's December 2025 report identifies 2026 as the inflection year for healthcare AI agents, and the market is validating that assessment rapidly: 61% of healthcare executives are already building or have budgeted for agentic AI, and a Deloitte survey found that more than 80% expect moderate-to-significant clinical and administrative value from agents this year. The governance infrastructure to match that velocity doesn't exist. A Microsoft and Health Management Academy study published in NEJM Catalyst in February 2026 found that only 3% of healthcare organizations have live AI governance frameworks, while 43% remain in the piloting phase. The liability gap this creates falls almost entirely on the physician and the practice.
Scribes Suggest — Agents Act: Why the Legal Distinction Is Everything
Ambient scribing tools and clinical chatbots operate under a clear legal model: the clinician receives a recommendation, evaluates it, and acts. Liability attaches to the act. The AI is a tool, not an actor.
Autonomous agents break that model structurally. When Epic's Emmie identifies a care gap and schedules a missing lab for a patient based on chart data, no physician evaluated that specific decision. When Penny drafts and submits a prior authorization appeal, the content goes out under the practice's operational umbrella without a clinician signing off on each submission. The AI is executing, not recommending.
The legal distinction matters because courts apply the "reasonable physician" standard to clinical decisions, not to the tools that generate them. A physician who follows an AI recommendation without applying independent clinical judgment is already exposed to malpractice. A physician whose AI agent acts without any contemporaneous physician input occupies murkier legal territory, but the current framework in most jurisdictions resolves that ambiguity against the clinician. As the Milbank Quarterly has documented, the governing principle remains: "Physicians have a duty to independently apply the standard of care for their field, regardless of an AI/ML algorithm output." Genuine autonomous agency, by design, removes the contemporaneous physician input that standard assumes.
What Agents Are Already Doing in Production Environments
The scope of autonomous clinical workflow execution in early-adopter practices today exceeds what most administrators have been briefed on. Beyond Epic, Oracle Health's autonomous reimbursement platform drafts and files prior authorization submissions directly inside the EHR, drawing on a dataset of 120 million patient records. Oracle has framed this capability as addressing an estimated $200 billion in annual healthcare administrative costs. Salesforce's Agentforce Health, also unveiled in March 2026, integrates with Viz.ai and HealthEx to automate clinical workflow execution across system boundaries.
The NVIDIA 2026 healthcare AI survey found that 47% of organizations are currently using or assessing agentic AI, with 70% of organizations overall now actively deploying some form of AI. That adoption pace is outrunning the credentialing and governance infrastructure needed to manage the liability exposure it generates. The Presidio analysis of HIMSS26 found that the dominant conference theme was agentic deployment alongside a near-universal absence of validation frameworks, orchestration capabilities, and accountability mechanisms for agent decisions.
Vicarious Liability in the Age of AI: How Courts Are Beginning to Frame the Physician-Agent Relationship
The doctrine of respondeat superior is the most immediate liability vehicle for agentic AI errors. Courts are beginning to treat autonomous AI agents operating inside health systems as subordinates of the employing organization, which means vicarious liability for agent errors travels up the organizational hierarchy. As analyzed in a Nature/Humanities and Social Sciences Communications study on civil liability for autonomous AI, liability for AI actions in a healthcare setting can attach to the physician or institution overseeing that agent, particularly when the agent executes within the clinician's recognized scope of practice.
The Doctors Company reported a 14% increase in malpractice claims involving AI tools between 2022 and 2024, the first sustained claim frequency increase since early-2000s tort reforms. That trend predates widespread autonomous agent deployment. The agentic wave happening now will produce a corresponding claims cohort in 2027 and 2028 that most practices are not currently capitalized to absorb.
State legislatures are signaling where post-hoc liability will land. Texas now prohibits utilization review agents from issuing adverse determinations via automated systems without human oversight. Arizona and Maryland bar AI as the sole basis for medical necessity denials. Illinois' 2025 statute prevents licensed professionals from allowing AI to make independent therapeutic decisions. Wiley Law's 2025 state AI law review documents the emerging legislative consensus: when autonomous AI causes patient harm, the licensed professional supervising the deployment bears the accountability. None of these laws provide a safe harbor for practices that deployed agents before the statutes passed.
What Your Vendor Contract Almost Certainly Says About Who Bears Clinical Risk
The ArentFox Schiff analysis of AI service agreement indemnification clauses documents the standard vendor posture: the provider maintains responsibility for clinical decisions, the vendor warrants accuracy to "industry standard," and indemnity carve-outs exclude any output generated outside the agreed scope or following provider customization. Standard contracts cap vendor liability at the cost of the service, which for a clinical error causing patient harm is effectively zero relative to the claim exposure.
Critically, the standard EHR AI contract transfers risk to the practice when the practice modifies, customizes, fine-tunes, or builds on top of the vendor's model. That is precisely what Epic's Agent Factory is designed to enable. Build a custom agent, own the liability for what it executes. The vendor's indemnity language accounts for this; your malpractice policy almost certainly does not. Amounts paid outside the vendor contract's scope are borne directly by the practice. As Sheppard Health Law's March 2025 analysis of healthcare AI vendor contracts makes clear, providers must push for tiered liability caps, explicit clinical risk carve-outs, and vendor-maintained errors and omissions coverage — provisions that are absent from most standard agreements.
The FDA's January 2026 revised guidance granted enforcement discretion for CDS tools where a clinician can independently review decision logic. Agents executing multi-step workflows without surfacing their reasoning for contemporaneous review don't cleanly qualify — but the framework is ambiguous enough that vendors will argue it covers their products. That argument protects the vendor's regulatory posture, not the practice's liability exposure.
Building a Governance Layer Before Your EMR Vendor Builds It For You
Practices that wait for their EHR vendor to provide AI governance guidance are making a structural mistake. Vendors have already allocated that responsibility to the practice through contract language. The governance layer is yours to build, and building it after a patient harm event is not a defense.
The minimum viable framework for any practice that has activated agentic AI features has three components that must exist before an agent takes its first autonomous clinical action.
A scope-of-agency policy is a written practice document specifying which categories of action the AI agent is authorized to execute without contemporaneous physician review, and which require an attestation step before execution. Prior authorization submissions to payers may be an acceptable agent-only function; medical order generation almost certainly is not. Drawing that line explicitly, in writing, creates the documentation trail that separates a well-governed practice from one that appears to have abdicated clinical oversight.
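One way to operationalize that written line is as a machine-readable policy that the integration layer consults before dispatching any agent action. The sketch below is illustrative only: the action categories and the gate function are hypothetical, not part of any Epic, Oracle, or Salesforce API, and the key design choice is that unknown action categories fail closed to requiring physician attestation.

```python
from enum import Enum

class Disposition(Enum):
    AGENT_ONLY = "agent_only"             # agent may execute autonomously
    ATTESTATION = "requires_attestation"  # physician must sign off first
    PROHIBITED = "prohibited"             # agent may never execute

# Hypothetical scope-of-agency policy. These categories are examples
# for illustration, not a vendor-defined taxonomy; each practice would
# draw its own lines in its written policy document.
SCOPE_OF_AGENCY = {
    "prior_auth_submission": Disposition.AGENT_ONLY,
    "denial_appeal_draft":   Disposition.AGENT_ONLY,
    "care_gap_scheduling":   Disposition.ATTESTATION,
    "medical_order":         Disposition.ATTESTATION,
    "medication_change":     Disposition.PROHIBITED,
}

def gate(action_category: str) -> Disposition:
    """Return the disposition for a proposed agent action.

    Unknown categories fail closed: anything the policy does not
    explicitly authorize requires physician attestation.
    """
    return SCOPE_OF_AGENCY.get(action_category, Disposition.ATTESTATION)
```

Encoding the policy this way gives the practice two things at once: a runtime control (the agent platform checks the gate before executing) and a discoverable artifact showing that the line between agent-only and attestation-required actions was drawn deliberately, in advance.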
An audit trail requirement is a technical specification, ideally embedded in the vendor contract, ensuring every autonomous agent action generates a timestamped, attributable log that can be produced in discovery. The absence of this log is itself a liability in litigation. Courts interpreting algorithmic decision-making routinely request decision logs and metadata; a practice that cannot produce them has no ability to reconstruct what the agent did or why.
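A minimal audit record might look like the sketch below, written under the assumption that the integration layer can intercept each agent action before it executes. The field names are illustrative, not a vendor schema; the essential properties are the ones litigation demands: timestamped, attributable to a specific agent, tied to the inputs the agent acted on, and append-only.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentActionRecord:
    """One log entry per autonomous agent action.

    Field names are hypothetical, not an EHR vendor's schema.
    """
    timestamp: str        # ISO 8601, UTC
    agent_id: str         # which agent acted (e.g., a custom-built agent)
    action_category: str  # maps to the scope-of-agency policy
    patient_ref: str      # internal reference; keep PHI out of the log itself
    input_digest: str     # hash of the chart data the agent acted on
    rationale: str        # the agent's stated reasoning, captured verbatim
    disposition: str      # "executed" or "held_for_attestation"

def log_action(agent_id: str, action_category: str, patient_ref: str,
               input_data: str, rationale: str, disposition: str):
    """Build an audit record and serialize it as one JSON line.

    In production the line would be appended to write-once storage so
    the trail remains defensible in discovery; here we just return it.
    """
    record = AgentActionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        agent_id=agent_id,
        action_category=action_category,
        patient_ref=patient_ref,
        input_digest=hashlib.sha256(input_data.encode()).hexdigest(),
        rationale=rationale,
        disposition=disposition,
    )
    return record, json.dumps(asdict(record))
```

Hashing the input chart data rather than storing it keeps protected health information out of the log while still letting the practice prove, after the fact, exactly what the agent saw when it acted.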
A contract amendment is a documented negotiation with the EHR vendor clarifying liability allocation for agent-generated errors, including insurance and indemnification requirements that reflect the clinical (not merely administrative) risk the agent is taking on. This is a leverage point that exists at contract renewal and at activation of new agentic features. The practices that exercise it will be in a materially better position than those that accept default terms.
The BCG analysis of 2026 healthcare AI describes agents that "observe, plan, and act on their own" as the year's defining technological shift. That shift is real. What BCG does not tell you is that the legal and contractual infrastructure around those agents was written to protect the vendors who built them. The practice that deploys them without building its own governance layer is accepting liability that its vendor has already declined.
Frequently Asked Questions
If an AI agent built into my EHR executes a clinically wrong action, can I sue the vendor instead of bearing the liability myself?
In most cases, standard EHR vendor contracts cap liability at the cost of the service and explicitly exclude indemnification for outputs generated after the provider customizes or configures the agent, per [ArentFox Schiff's analysis of AI service agreement indemnification clauses](https://www.afslaw.com/perspectives/health-care-counsel-blog/ai-service-agreements-health-care-indemnification-clauses). While products liability theories against algorithm developers are being tested in courts, these claims have been inconsistently applied to software, and [Johns Hopkins Carey legal analysis](https://carey.jhu.edu/articles/fault-lines-health-care-ai-part-two-whos-responsible-when-ai-gets-it-wrong) confirms that under current doctrine the physician and health system supervising the agent bear primary accountability for patient harm.
Does my existing malpractice coverage protect me for errors made by autonomous AI agents in my practice?
Standard medical malpractice policies were written to cover physician judgment errors, not autonomous AI execution errors, and amounts paid outside the scope of a vendor contract's indemnification provisions are typically not covered and are borne directly by the practice. [Sheppard Health Law](https://www.sheppardhealthlaw.com/2025/03/articles/artificial-intelligence/key-considerations-before-negotiating-healthcare-ai-vendor-contracts/) recommends practices explicitly verify AI agent coverage with their malpractice carrier and negotiate vendor-maintained errors and omissions coverage before activating agentic features.
What specific EHR agent capabilities carry the highest malpractice exposure for a practice?
Capabilities where the agent takes a clinical action (placing an order, submitting a prior authorization, closing a care gap by scheduling a procedure) without a required physician attestation step carry the highest exposure, because they remove the contemporaneous independent clinical judgment that the standard of care assumes. [The Doctors Company's Q4 2025 analysis](https://www.thedoctors.com/the-doctors-advocate/fourth-quarter-2025/ai-on-trial-the-rising-liability-risks-of-artificial-intelligence-in-healthcare) specifically flags diagnostic and treatment-pathway AI as the locus of the current 14% malpractice claim increase, and agent-generated orders in those categories represent an amplification of that risk.
Are there states where physicians currently have legislative protection from liability for AI agent errors?
No state has enacted a meaningful physician safe harbor for AI agent errors as of Q1 2026. State legislative activity has moved in the opposite direction: Texas, Arizona, and Maryland have passed laws requiring human oversight for AI-driven adverse determinations, and Illinois bars licensed professionals from allowing AI to make independent therapeutic decisions, per [Wiley Law's 2025 state AI law tracker](https://www.wiley.law/article-12233). These statutes set a floor for required physician oversight that, if not met, creates a statutory basis for liability in addition to common-law malpractice.
How much time do practices realistically have before this liability exposure becomes acute?
Practices that have already activated agentic EHR features are in current exposure, not prospective exposure. [The Microsoft and Health Management Academy study](https://www.microsoft.com/en-us/industry/blog/healthcare/2026/02/12/assessing-healthcares-agentic-ai-readiness-new-research-from-microsoft-and-the-health-management-academy/) published in NEJM Catalyst in February 2026 found that only 3% of organizations have live governance frameworks, meaning the majority of practices now running agents are already operating without the documentation that would protect them in litigation. Given that malpractice claims typically surface 12 to 36 months after the harm event, the agentic deployments happening now in early-adopter practices will generate their claims cohort in 2027 and 2028.