Recent commentary has suggested that using generative AI tools in connection with legal matters automatically waives applicable legal privileges. That suggestion overstates a recent ruling from the Southern District of New York.
On February 17, 2026, Judge Jed S. Rakoff issued a written memorandum in United States v. Heppner, No. 25 Cr. 503 (S.D.N.Y.), memorializing and elaborating on his February 10 bench ruling—a ruling that generated significant interest and debate about whether generative AI (Gen AI) use is compatible with the attorney-client privilege and the work product doctrine.
The written opinion makes clear that the court applied traditional, technology-neutral privilege principles. It does not declare Gen AI incompatible with legal privileges. Rather, it underscores a structural point: long-standing privilege doctrines govern regardless of the technology used. In Heppner, unsupervised client use of a publicly available Gen AI platform defeated attorney-client privilege and work product claims. Attorney-directed use under enforceable confidentiality frameworks may present a materially different analysis.
What Happened in Heppner
Bradley Heppner was arrested on November 4, 2025, on securities and wire fraud charges. During a search of his residence, agents seized electronic devices containing approximately thirty-one documents memorializing communications between Heppner and Anthropic's generative AI platform, Claude.[1]
After receiving a grand jury subpoena and retaining counsel, Heppner used Claude to prepare reports outlining potential defense strategies and legal arguments. He later shared those reports with his attorneys and asserted privilege over them.[2]
Defense counsel conceded, however, that Heppner prepared the AI documents on his own initiative and that counsel "did not direct [Heppner] to run Claude searches."[3]
The government moved to compel production. Judge Rakoff granted the motion, concluding that the AI documents were protected by neither the attorney-client privilege nor the work product doctrine.[4]
Why the Attorney-Client Privilege Did Not Apply
Judge Rakoff applied the settled three-part test for privilege: the communication must be (1) between privileged parties, (2) intended to be and actually kept confidential, and (3) made for the purpose of obtaining or providing legal advice.[5]
The court held that the Gen AI documents failed this test.
1. No Attorney-Client Relationship with Claude
The court first emphasized that Claude is not an attorney and does not stand in an attorney-client relationship with users. Communications between two non-lawyers about legal issues are not privileged.[6]
The opinion rejected the argument that the Gen AI platform should be treated as analogous to typical software. Privileges, the court noted, depend on a "trusting human relationship" involving fiduciary duties and professional discipline—features absent in a publicly available Gen AI platform.[7]
The court treated the exchange between Heppner and Claude as the relevant communication. Because that exchange was not between a client and an attorney (or their agents), the first element of privilege was not satisfied.
2. No Reasonable Expectation of Confidentiality
The court next held that the communications were not confidential. This conclusion relied heavily on Anthropic's privacy policy, which states that user inputs and Claude's outputs could be retained, used for model training, and disclosed to third parties, including government authorities.[8]
Given those terms, the court concluded that Heppner lacked a reasonable expectation of confidentiality in his communications with Claude.[9]
The opinion also reaffirmed a long-standing principle: documents that are not privileged at the time of creation do not become privileged merely because they are later shared with counsel.[10]
3. Not a Communication for the Purpose of Obtaining Legal Advice
Finally, the court concluded that Heppner was not communicating with Claude for the purpose of obtaining legal advice from an attorney. Claude expressly disclaims that it provides legal advice.[11]
The relevant inquiry, the court explained, is whether the user sought legal advice from the entity to which the communication was made—not whether the user later intended to share the material with counsel.[12]
Why the Work Product Doctrine Also Failed
The court separately rejected work product protection.
Under Second Circuit precedent, the work product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation and reflecting counsel's mental processes.[13]
Although the Gen AI reports discussed defense strategy, the court held they were not prepared by counsel or at counsel's direction. Defense counsel confirmed that the documents were created by Heppner "on his own volition."[14]
The court expressly declined to adopt a broader interpretation that would extend protection to materials prepared independently of attorney involvement.[15]
The opinion reinforces a core principle: the work product doctrine protects the lawyer's mental impressions and litigation strategy—not independent drafting or research conducted by a client without attorney supervision.
Why Attorney-Directed Enterprise AI May Be Different
The opinion applies established doctrine to a specific factual scenario: a defendant acting alone, using a publicly available Gen AI platform governed by consumer-facing privacy terms.
Under long-standing precedent derived from United States v. Kovel,[16] privilege may extend to third parties engaged to assist counsel in rendering legal advice, provided they:
- Are retained to facilitate legal advice;
- Operate under attorney supervision and direction; and
- Are bound by confidentiality obligations.
Courts routinely apply this framework to accountants, forensic consultants, translators, and e-discovery vendors.
Enterprise AI deployments typically include:
- Contractual prohibitions on training models using customer data
- Confidentiality commitments for inputs and outputs
- Defined data retention limits
- Audit and security controls
- Data processing agreements restricting disclosure
When such tools are deployed by or at the direction of counsel—and subject to enforceable confidentiality commitments—the structural posture differs materially from the publicly available platform at issue in Heppner.
Although the court did not address enterprise Gen AI specifically, its reasoning suggests that confidentiality, attorney direction, and vendor structure may be decisive in future cases.
Practical Guidance: Preserving Privilege in the AI Era
- Avoid Consumer Gen AI Platforms for Privileged Matters. Publicly available tools whose terms of service permit model training, data retention, and disclosure to third parties present a substantial risk that the interactions will not be treated as confidential.
- Use Enterprise Applications with Clear Confidentiality Protections. No-training provisions and contractual limits on use and disclosure materially strengthen confidentiality and privilege arguments.
- Document That Gen AI Use Occurred in Consultation with Counsel. Gen AI use on privileged matters should take place under attorney supervision, and when Gen AI is used in anticipation of litigation, contemporaneous documentation should reflect that the work was undertaken at counsel's direction. Courts evaluating privilege and work product claims will focus on attorney involvement.
Bottom Line
Heppner does not create a Gen AI exception to traditional privilege analyses. It applies settled doctrine to a new technology.
The decision reinforces a simple, technology-neutral rule: confidentiality and attorney direction remain central.
The critical question is not whether Gen AI is used, but how it is structured, supervised, and documented.
[2] Id. at 3–4.
[3] Id. at 4 (alteration in original).
[4] Id.
[5] Id.
[6] Id. at 5.
[7] Id. at 5–6.
[8] Id. at 6–7.
[9] Id.
[10] Id. at 8.
[11] Id. at 7–8.
[12] Id. at 7–8.
[13] Id. at 8–9.
[14] Id. at 9–10.
[15] Id. at 11–12.
[16] 296 F.2d 918 (2d Cir. 1961).