What happened

In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that a criminal defendant's conversations with Claude, the AI chatbot operated by Anthropic, were not protected by attorney-client privilege or the work product doctrine. The judgment appears to be the first in the US, and possibly the world, to address privilege over generative AI interactions.

The facts are simple. Bradley Heppner is charged with securities fraud. After learning that he was under investigation, he used Claude to walk through his legal position and generate defence strategies. When the FBI seized those conversations, he argued they were privileged.

The court disagreed, on three grounds that matter well beyond this case.

Why it matters for your organisation

First: AI is not your lawyer

This was always going to be the outcome. Claude is an algorithm, not an attorney. However natural the conversation may have felt, in both law and fact Heppner was exchanging automated messages with a technology company. No attorney-client relationship can be founded on that basis.

Your practical concern is probably not that your Legal function would make this mistake; it is that a sales representative may already have. The experience of using generative AI is designed to feel like a private, trusted conversation. Your teams may be using these tools to prepare for negotiations, draft internal assessments, or think through the regulatory implications of alternative approaches. Many of them are doing so without any awareness that what they type may be discoverable.

Second: terms of service can destroy confidentiality and privilege

This is the finding that should concern every organisation using consumer AI tools. Heppner's conversations with Claude were confidential in practice: the overwhelming majority of AI conversations are never seen by human eyes. They were not confidential in law. Anthropic's privacy policy entitled the company to collect his inputs and outputs, use them for model training, and disclose them to third parties without a court order.

On Judge Rakoff's reasoning, consumer terms of service that negate reasonable expectations of confidentiality are fatal to privilege. If your teams are using ChatGPT, Claude, Gemini, or any other AI tool on anything less than enterprise licences to process sensitive information, the vendor's terms almost certainly permit collection, use, and disclosure of that information in ways that undermine any claim to legal protection.

There is an inconsistency here that the court did not address. Until 2017, Gmail routinely scanned customer emails to serve personalised advertising, under terms unlikely to have differed materially from Anthropic's on confidentiality. In both cases, a technology company processed otherwise confidential material under terms that permitted disclosure to third parties. Nobody has ever seriously suggested that lawyers who emailed through Gmail had waived privilege. What matters for confidentiality is not what a technology company does with data once received, whether that is serving advertisements or training a model, but the terms on which it may disclose that data to others. On that test, the distinction between Anthropic's terms in this case and Gmail's pre-2017 terms is difficult to sustain. For now, however, Rakoff's reasoning is, for practical purposes, the law in the Southern District of New York, and your risk management should reflect that.

Third: AI-assisted preparation can be protected, but only if lawyers are involved

Heppner's legal team conceded that his AI-assisted case preparation did not involve his lawyers. Rakoff found that the work product doctrine could not apply without at least some direction from counsel. Crucially, however, the judge noted that if counsel had directed Heppner to use the tool, the result might have been different.

An AI tool used under a lawyer's direction may function as the lawyer's agent, potentially within the protection of the work product doctrine. Heppner's claim failed not because he used AI, but because he did so alone, without reference to any legal strategy his lawyers were developing.

A note for those operating in the UK as well as the US: English litigation privilege does not impose the same requirement. Documents created for the dominant purpose of reasonably contemplated litigation are protected whether or not they were prepared on counsel's instructions, provided confidentiality is maintained. Whether an English court would follow Rakoff's reasoning on this point is untested, but the potential divergence matters and deserves attention from English lawyers now.

The architectural question

Rakoff framed the counsel-direction question as a binary one: did a lawyer tell this client to use this tool? In Heppner, the answer was no. A more interesting question for the future is whether "direction" can be structural rather than specific.

In late 2025, legal AI providers including Legora and Harvey launched products that sit between law firms and their clients. These platforms provide secure, branded environments in which firms embed their own workflows, analytical frameworks, and institutional knowledge as AI tools that clients operate directly.

A client using one of these products accesses an environment that counsel designed. The direction is not a single instruction given on a single occasion. It is architectural: counsel builds the room, the client walks in.
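To make the distinction concrete, here is a minimal sketch, in Python, of what structural direction might look like in practice. It is purely illustrative: the names (CounselWorkflow, build_client_session) are hypothetical and correspond to no vendor's actual product. The point is that counsel fixes the legal framing and the relevant questions before the client ever types a word, and the session record preserves evidence of that direction.

```python
# Hypothetical sketch only: what "architectural" counsel direction might
# look like in code. All names are illustrative, not any vendor's API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CounselWorkflow:
    """A counsel-authored environment that the client operates directly."""
    matter_id: str
    framing: str            # counsel's legal framing, fixed in advance
    questions: list[str]    # the questions counsel deems legally relevant
    approved_by: str        # the lawyer who signed off on the design
    approved_on: str        # when the design was approved

def build_client_session(workflow: CounselWorkflow, client_input: str) -> dict:
    """Combine counsel's fixed framing with the client's input.

    The client never chooses the framing or the questions: counsel built
    the room, the client walks in. The returned record doubles as an
    audit trail showing the session ran under counsel's direction.
    """
    prompt = (
        f"{workflow.framing}\n\n"
        "Address only the following questions:\n"
        + "\n".join(f"- {q}" for q in workflow.questions)
        + f"\n\nClient input:\n{client_input}"
    )
    return {
        "matter_id": workflow.matter_id,
        "prompt": prompt,
        "direction": {  # provenance of counsel's design choices
            "approved_by": workflow.approved_by,
            "approved_on": workflow.approved_on,
        },
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
```

On this sketch, the direction is embedded in the artefact itself: the record of who approved the framing, and when, is precisely the kind of documentation the next paragraph recommends.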

Firms designing these workflows would be wise to document their design process carefully. In future, protection might depend on their ability to show that their platforms represent genuine direction to the client by counsel, as opposed to providing a slightly more attractive interface for a generic corporate product.

The OpenAI paradox

In June 2025, Sam Altman, CEO of OpenAI, called publicly for something akin to "AI privilege", arguing that conversations with AI should carry the same protections as speaking with a lawyer or doctor.

His comments followed a data preservation order made against OpenAI in the New York Times copyright infringement litigation. That order compelled OpenAI to retain all ChatGPT user conversations indefinitely, including conversations that users had already "deleted". In November 2025, a federal magistrate judge ordered OpenAI to produce twenty million de-identified ChatGPT logs to the plaintiffs.

The preservation order did not apply to enterprise or education account holders, or to API customers using OpenAI's zero data retention endpoints. The distinguishing factor appears to have been contractual: enterprise agreements contain materially different data-handling commitments from consumer terms.

This points to a conclusion that should be obvious: the path to preserving confidentiality may not require fresh legislation at all. In many cases, it simply requires better contracts. Consumer terms of service are set by the vendors themselves, and terms very much like OpenAI's own opened the door for Rakoff to conclude that Heppner's communications with Anthropic lacked the confidentiality necessary for privilege.

Altman's call for legislation to mandate AI privilege rings hollow while the confidentiality he asks lawmakers to protect is negated by consumer terms his own company writes.

What to do now

In light of the judge's reasoning in this case, and the lessons that can be learnt from the NYT v. OpenAI discovery rulings, it is worth considering the following actions.

Audit your contractual terms. If your organisation uses consumer AI products, the vendor's terms almost certainly permit data collection and third-party disclosure that are incompatible with legal privilege. Enterprise agreements with appropriate confidentiality commitments, zero data retention provisions, and restrictions on training use are the minimum baseline.

Establish an AI usage policy and enforce it. Your teams are already using these tools. The question is whether they are doing so within a framework that preserves your legal protections, or outside one. A clear policy that distinguishes between permitted tools (enterprise products on appropriate contractual terms) and prohibited tools (anything on consumer terms) is now essential.

Maintain prompting hygiene. Even within enterprise environments, avoid inputting privileged material into AI tools unless the confidentiality framework has been specifically verified. Treat AI prompts with the same discipline you would apply to any written communication that might be disclosed.

Build architectural relationships with your external lawyers. If your law firms are deploying AI-assisted tools for your matters, ensure those tools represent genuine legal direction, not repackaged generic products. The closer a workflow comes to encoding counsel's assessment of which questions are legally relevant, and to reflecting their approach in framing those questions, the closer it comes to the kind of protection Rakoff's reasoning would support.

The future of privilege in an AI-enabled world will depend on whether the lawyers deploying these tools, and the judges ruling on the implications of their use, understand both the legal theory and the practical realities at play. For now, the message from the Southern District of New York is clear: if you want protection, build intentionally for it.