On 17 February 2026, Judge Jed S. Rakoff of the United States District Court for the Southern District of New York issued a ruling that every compliance officer, legal counsel, and chief executive in Africa should read: conversations with AI chatbots are not private. They are not privileged. And they are discoverable in court.
What Happened in United States v. Heppner
In United States v. Heppner (Case No. 25 Cr. 503 (JSR), S.D.N.Y., Feb. 17, 2026), the court addressed whether conversations between a defendant and Anthropic's Claude AI assistant were protected by attorney-client privilege. Judge Rakoff rejected the privilege claim on three separate grounds.
The Three Grounds
1. AI is not a lawyer. Attorney-client privilege protects confidential communications between a client and a licensed attorney. An AI chatbot is not a lawyer, so no privilege attaches.
2. No reasonable expectation of privacy. Anthropic's Terms of Service state that conversations may be used to train the model. Users therefore cannot reasonably expect confidentiality.
3. Not work product. Material generated in conversation with an AI assistant does not constitute attorney work product, regardless of its content.
Why This Matters for African Enterprises
If your data leaves your infrastructure and sits on a third party's servers, you have surrendered control of it. What your staff types into a public AI assistant today is, functionally, a disclosure to a third party. Major law firms have now issued advisories recommending that AI use involving confidential material be restricted to private, on-premise deployments.
The Alternative: AI That Never Leaves Your Walls
Private LLM deployments — running open-weight models like Llama or Mistral on infrastructure you own — take the data sovereignty problem off the table. Your data stays inside your perimeter. No third-party terms of service. No cloud provider to trust. No outside custodian holding records for a court to compel.
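To make the idea concrete, here is a minimal sketch of what "AI that never leaves your walls" looks like in practice: a client that only talks to a model served on the local machine and refuses anything else. It assumes an Ollama-style local API on its default port (11434) and a hypothetical model tag ("llama3"); your actual server, port, and model will differ.

```python
import json
import urllib.request
from urllib.parse import urlparse

# Hosts we treat as "inside the perimeter": loopback only, for this sketch.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def is_local_endpoint(url: str) -> bool:
    """Return True only if the URL points at the loopback host,
    i.e. the prompt never leaves this machine."""
    host = urlparse(url).hostname or ""
    return host in LOCAL_HOSTS

def ask_private_llm(prompt: str,
                    url: str = "http://localhost:11434/api/generate",
                    model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model (Ollama-style API)
    and return its response text. Refuses non-local endpoints so a
    misconfiguration cannot silently ship data to a third party."""
    if not is_local_endpoint(url):
        raise ValueError(f"Refusing to send data to non-local endpoint: {url}")
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The guard is the point: the policy "confidential material only goes to infrastructure we control" is enforced in code, not left to staff discretion.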
The Summary Position
The question is not whether to use AI. It is whether to use AI that cannot be used against you.
What to Do Next
Audit your current AI tool usage across the organisation. We help organisations assess their AI exposure and deploy private AI environments that give teams full capability without the compliance risk.