In the first two months of 2026, two federal decisions out of the Southern District of New York redefined the risk calculus for every legal professional using AI.

Together, they establish a stark new reality: anything typed into a consumer AI platform is neither privileged nor private. It can be preserved, subpoenaed, and used at trial.

SafeIdea was built for exactly this moment.


The Rulings That Changed Everything

United States v. Heppner — Privilege Waived

On February 10, 2026, Judge Jed S. Rakoff ruled from the bench — and issued a written opinion on February 17 — that thirty-one documents a criminal defendant generated using the consumer version of Anthropic's Claude were protected by neither attorney-client privilege nor the work product doctrine. The court called it "a question of first impression nationwide."

Bradley Heppner, facing securities fraud charges, had used Claude to analyze his legal exposure and develop defense strategy. He later shared those AI-generated reports with his attorneys and asserted privilege. The government disagreed, and Judge Rakoff sided with the prosecution on multiple independent grounds:

  • Claude is not an attorney. The court held that all recognized privileges require "a trusting human relationship" with a licensed professional. No such relationship exists, or could exist, between a user and an AI platform.
  • No reasonable expectation of confidentiality. Anthropic's consumer privacy policy reserves the right to collect user inputs and outputs, use them for model training, and disclose data to third parties including governmental authorities. Users consent to these terms. The court concluded Heppner "could have had no 'reasonable expectation of confidentiality'" in his communications with Claude.
  • Voluntary disclosure to a third party. Heppner shared the equivalent of his notes with Claude — a third-party platform — before those notes ever reached his attorneys. Non-privileged materials do not become privileged simply because they are later forwarded to counsel.

Critically, Judge Rakoff left the door open. He noted that if counsel had directed Heppner to use Claude, the AI tool "might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent" under the Kovel doctrine. Multiple commentators have observed that enterprise AI tools — which do not train on user data and maintain contractual confidentiality protections — may present a fundamentally different analysis.

Case: United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026)

In re OpenAI, Inc. Copyright Infringement Litigation — Your Chats Are Discoverable

Just weeks before Heppner, on January 5, 2026, District Judge Sidney H. Stein affirmed Magistrate Judge Ona T. Wang's order compelling OpenAI to produce 20 million de-identified ChatGPT conversation logs in the consolidated copyright litigation brought by The New York Times, Chicago Tribune, and other publishers.

OpenAI itself had initially offered a sample of 20 million logs — a small fraction of the tens of billions of conversations it has preserved — then reversed course and attempted to produce only keyword-filtered results. Judge Wang rejected this approach in November 2025. Judge Stein affirmed in full.

The court found that ChatGPT users had "voluntarily submitted their communications" to OpenAI, which substantially reduced their privacy interests compared to subjects of covert surveillance. The users whose conversations were ordered produced were given no notice and had no opportunity to object.

Separately, a May 2025 preservation order in the same litigation required OpenAI to retain output logs — including conversations users had deleted — that would otherwise have been purged under standard 30-day deletion policies. Enterprise and education customers were exempted. Consumer users were not.

Case: In re: OpenAI, Inc. Copyright Infringement Litigation, No. 1:25-md-03143 (S.D.N.Y.), order affirmed Jan. 5, 2026


The Bottom Line for Legal Professionals

These decisions make plain what security-minded practitioners have long suspected:

Consumer AI platforms are not confidential channels. Their terms of service say so. Courts now agree. Sharing privileged content with a consumer AI tool is legally equivalent to discussing strategy in a crowded restaurant — except the restaurant keeps transcripts and can be compelled to hand them over.

This is the problem SafeIdea solves at the architecture level.


SafeIdea: Local-First by Design

SafeIdea is a native desktop application. It runs on your machine — not in a browser, not on our servers, and not on any AI provider's consumer platform. Your documents, conversation history, entity dictionaries, and indexes never leave your computer.

What Stays on Your Machine

  • All documents and files you work with
  • All conversation history between you and the AI
  • All entity indexes and dictionaries built by the Indexer
  • All usage patterns and behavioral data
  • Your personal memory — the AI's learned knowledge from your past work

SafeIdea stores nothing on external servers except licensing and billing information.


Patent-Pending Confidentiality Protection

When SafeIdea sends a query to Claude, its patent-pending masking technology scans for confidential content — client names, organization names, case numbers, addresses, and other identifying entities — and replaces each with an ephemeral, neutral placeholder. Placeholders are randomly generated for each session, so they cannot be correlated across sessions.

The AI's response is then restored locally, with each placeholder mapped back to the original content. The cloud AI never sees your client's real information.

In the language of Heppner: the information disclosed to the third-party AI provider contains no privileged content to waive. In the language of the OpenAI discovery order: even if every conversation log SafeIdea ever touched were subpoenaed from Anthropic, the only content recoverable would be sanitized queries with meaningless placeholder tokens.
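
To make this concrete, here is a minimal sketch of session-scoped masking (an illustration, not SafeIdea's actual code); it assumes entities have already been detected locally, and the placeholder format is hypothetical:

```python
import secrets

class SessionMasker:
    """Replaces known confidential entities with random, session-scoped placeholders."""

    def __init__(self):
        self.forward = {}   # real value  -> placeholder
        self.reverse = {}   # placeholder -> real value

    def _placeholder(self, entity_type: str) -> str:
        # A fresh random token per session, so placeholders cannot be linked across sessions.
        return f"[{entity_type}_{secrets.token_hex(4)}]"

    def mask(self, text: str, entities: dict[str, str]) -> str:
        """entities maps a real value to its entity type, e.g. {"Acme Corp": "ORGANIZATION"}."""
        for value, entity_type in entities.items():
            if value not in self.forward:
                token = self._placeholder(entity_type)
                self.forward[value] = token
                self.reverse[token] = value
            text = text.replace(value, self.forward[value])
        return text

    def unmask(self, text: str) -> str:
        """Restore the original values in the model's response, locally."""
        for token, value in self.reverse.items():
            text = text.replace(token, value)
        return text


masker = SessionMasker()
query = "Summarize Acme Corp's exposure in case 1:25-cv-04410 for Jane Doe."
sanitized = masker.mask(query, {"Acme Corp": "ORGANIZATION",
                                "1:25-cv-04410": "CASE_NUMBER",
                                "Jane Doe": "PERSON"})
# sanitized now contains only placeholders such as [ORGANIZATION_3fa91c02];
# only this sanitized string would ever leave the machine.
response_from_cloud = f"The analysis for {masker.forward['Acme Corp']} ..."
print(masker.unmask(response_from_cloud))  # placeholders restored locally
```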

11 Entity Types, Built for Legal Work

SafeIdea's proprietary masking technology detects and masks: persons, organizations, email addresses, phone numbers, physical addresses, dates, monetary amounts, case numbers, patent numbers, jurisdictions, and inventions. Entity detection is powered entirely by local AI models — no cloud calls during analysis.
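
To make the idea concrete, here is a simplified sketch of what on-device entity detection against a local Ollama model could look like; the model name, prompt, and output handling are illustrative rather than SafeIdea's actual pipeline:

```python
import json
import ollama  # talks only to the local Ollama server; no cloud calls

ENTITY_TYPES = [
    "PERSON", "ORGANIZATION", "EMAIL", "PHONE", "ADDRESS", "DATE",
    "MONETARY_AMOUNT", "CASE_NUMBER", "PATENT_NUMBER", "JURISDICTION", "INVENTION",
]

def detect_entities(text: str, model: str = "llama3.1") -> dict[str, str]:
    """Ask a local model to tag entities; returns {value: entity_type}."""
    prompt = (
        "Extract every entity of these types from the text and return only a JSON "
        f"object mapping each entity string to its type ({', '.join(ENTITY_TYPES)}).\n\n"
        f"Text:\n{text}"
    )
    reply = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    try:
        found = json.loads(reply["message"]["content"])
    except json.JSONDecodeError:
        found = {}
    if not isinstance(found, dict):
        found = {}
    # Keep only entities whose type is one of the recognized categories.
    return {v: t for v, t in found.items() if t in ENTITY_TYPES}

print(detect_entities("Invoice of $12,500 from Acme Corp, due March 3, 2026."))
```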

You Control What's Masked

Not everything needs to be masked. SafeIdea's masking technology offers three levels of control:

  • Mask everything by default
  • Pass through entire entity types — for example, let jurisdictions through but mask all persons
  • Pass through individual entities — for example, your own firm name

You decide what the AI needs to see.
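
A hypothetical configuration sketch (not SafeIdea's actual settings format) shows how those three levels could compose; the entity type names and the firm name are placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class MaskingPolicy:
    """Mask everything by default; selectively pass through types or individual values."""
    passthrough_types: set[str] = field(default_factory=set)    # e.g. {"JURISDICTION"}
    passthrough_values: set[str] = field(default_factory=set)   # e.g. {"Smith & Barnes LLP"}

    def should_mask(self, value: str, entity_type: str) -> bool:
        if value in self.passthrough_values:
            return False          # an individually whitelisted value, such as your own firm name
        if entity_type in self.passthrough_types:
            return False          # an entire entity type allowed through, such as jurisdictions
        return True               # default: mask it

policy = MaskingPolicy(passthrough_types={"JURISDICTION"},
                       passthrough_values={"Smith & Barnes LLP"})
print(policy.should_mask("Jane Doe", "PERSON"))                   # True  -> masked
print(policy.should_mask("S.D.N.Y.", "JURISDICTION"))             # False -> passed through
print(policy.should_mask("Smith & Barnes LLP", "ORGANIZATION"))   # False -> your firm
```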


Local Document Indexing

The SafeIdea Indexer processes documents entirely on your machine using local AI models (Ollama). It extracts entities, classifies clause types, and builds searchable indexes — so you can search by meaning, not just keywords — all without any network communication.

Supported formats include PDF (text and scanned), DOCX, images (via OCR), and plain text.
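
As a rough sketch of the idea, a meaning-based index can be built entirely against a local embedding model served by Ollama; the model name, document chunks, and similarity search below are illustrative, not the Indexer's actual design:

```python
import math
import ollama  # local server only; documents and embeddings never leave the machine

EMBED_MODEL = "nomic-embed-text"   # any locally pulled embedding model

def embed(text: str) -> list[float]:
    return ollama.embeddings(model=EMBED_MODEL, prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Build the index: one embedding per document chunk, stored locally.
index = []
for chunk in ["The licensee shall indemnify the licensor against third-party claims.",
              "Either party may terminate this agreement with 30 days' written notice."]:
    index.append((chunk, embed(chunk)))

# Search by meaning, not keywords: "who pays if we get sued?" shares no words
# with the indemnification clause, yet should rank it first.
query_vec = embed("who pays if we get sued?")
best = max(index, key=lambda item: cosine(query_vec, item[1]))
print(best[0])
```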


What Goes to the Cloud

Only sanitized queries — with all confidential content replaced by safe placeholders — are sent to Anthropic's Claude API. These queries are:

  • Encrypted in transit via TLS 1.3
  • Not used for AI model training per Anthropic's API data policy (distinct from consumer terms)
  • Protected by API-tier terms, not the consumer privacy policy cited in Heppner
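
For a sense of what the outbound call could look like, here is a sketch using Anthropic's Python SDK; the model name is illustrative and the placeholder tokens are hypothetical, but the essential point holds: only the sanitized prompt crosses the network.

```python
import anthropic

# Reads ANTHROPIC_API_KEY from the environment; TLS is handled by the SDK's HTTPS transport.
client = anthropic.Anthropic()

# Only already-sanitized text is ever passed to the API.
sanitized_prompt = (
    "Assess [ORGANIZATION_3fa91c02]'s exposure in [CASE_NUMBER_9d21aa07] "
    "given the indemnification clause quoted below."
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",   # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": sanitized_prompt}],
)

print(message.content[0].text)  # placeholders are mapped back to real names locally
```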

API keys are encrypted at rest using AES-256-GCM.
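
For illustration, encrypting a secret at rest with AES-256-GCM can look like the following, using the widely available cryptography package; the key handling shown here is deliberately simplified and not SafeIdea's actual implementation:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_secret(plaintext: bytes, key: bytes) -> bytes:
    """AES-256-GCM: confidentiality plus integrity for the stored API key."""
    nonce = os.urandom(12)                       # unique nonce per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_secret(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)        # in practice, derived from or kept in the OS keychain
stored = encrypt_secret(b"sk-ant-api-key-example", key)
assert decrypt_secret(stored, key) == b"sk-ant-api-key-example"
```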

This is a critical distinction. Consumer Claude (claude.ai) and ChatGPT retain conversations and may use them for training. SafeIdea uses the Anthropic API under enterprise-grade terms: zero data retention, no training on inputs, and contractual confidentiality commitments. Judge Rakoff's analysis in Heppner turned in part on the consumer privacy policy's broad reservations of rights — rights that do not apply to Anthropic's API tier. It is exactly the kind of distinction that the Heppner reasoning, and the commentary responding to it, leaves open as potentially preserving privilege.


Desktop Architecture

SafeIdea runs as a native application on Mac and Windows. There is no web version and no browser extension. Your data is stored in standard application directories on your machine, accessible only to you.

The only network communication consists of sanitized AI queries and license validation.


Why This Matters Now

Before Heppner and the OpenAI discovery order, the risk of using consumer AI for legal work was theoretical. Today, it is documented in federal case law:

  • A court has ruled that documents generated with a consumer AI tool are protected by neither attorney-client privilege nor the work product doctrine.
  • A court has ordered the production of 20 million user conversations — including conversations users had deleted — with no notice to the affected individuals.
  • Federal judges have held that AI users "do not have substantial privacy interests" in their conversations with consumer platforms.

Every legal professional now faces a choice: stop using AI for substantive legal work, or use an architecture designed from the ground up to protect confidential content.

SafeIdea is that architecture.


References

1. United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026) — Judge Rakoff's written opinion on AI and attorney-client privilege

2. In re: OpenAI, Inc. Copyright Infringement Litigation, No. 1:25-md-03143 (S.D.N.Y.) — 20 million ChatGPT logs ordered produced


Questions? Contact us at security@safeidea.ai for our security whitepaper or to schedule a technical walkthrough.