How Claude handles your confidential real estate deal documents. Key differences from ChatGPT, privacy implications, and what GPs should know about AI data policies.
Your $50M deal deck gets emailed to three different prospective LPs. Within hours, one of them has uploaded it to ChatGPT. Another uses Claude. A third is experimenting with Gemini. They're asking AI to identify risks, analyze returns, and spot inconsistencies in your financials.
You have no visibility into any of this.
If you've read our
Anthropic has built its entire brand on responsible AI and privacy-forward policies. That matters. But the marketing doesn't always match the reality, and for GPs, the nuance is critical.
Unlike ChatGPT's simpler structure, Claude comes in three tiers (free, Pro, and API), each with different privacy implications:
Anthropic's terms state that conversations may be used for training and improvement purposes—similar to ChatGPT's free tier. If an investor uses Claude.ai (the free consumer product) to analyze your deal, Anthropic reserves the right to use that conversation data for model training. Your deal deck could theoretically be part of the next generation of Claude.
Anthropic states that Pro subscribers' conversations are not used for training by default. This is a meaningful step up. An investor paying for Claude Pro is likely to have their analysis of your deck treated with more privacy protection than a free user would.
This is where things get significantly better. When Claude is accessed via API (which is how IRDESK uses it), Anthropic explicitly does not train on your data. Uploaded documents are retained for only 30 days for trust and safety purposes, then deleted. This is the closest you get to a privacy guarantee with any mainstream AI platform.
The problem: you have no way of knowing which tier an investor is using to analyze your deck. If they're using the free version, assume the worst. If they've paid for Pro, they're likely getting better privacy protection. If they're uploading to a platform like IRDESK that uses the API, your data stays in a controlled environment.
Beyond data retention policies, Claude and ChatGPT handle information fundamentally differently. For deal analysis, these differences can be significant:
Claude supports a 200,000-token context window. ChatGPT's limits vary by model (GPT-4 Turbo supports 128,000 tokens, but most free users get far less). In practical terms, an investor can upload your entire deal package—executive summary, financial projections, market analysis, cap table, term sheet, and supporting documents—and Claude will process all of it in a single conversation. It won't need to cherry-pick sections.
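To make the capacity claim concrete, here is a rough back-of-the-envelope sketch of whether a full deal package fits in one 200,000-token window. The ~4 characters-per-token ratio is a common rule of thumb for English prose, not an exact tokenizer, and the document sizes are illustrative placeholders:

```python
# Rough sketch: does a full deal package fit in one context window?
# The 4-chars-per-token ratio is a heuristic; real counts vary by model.

CONTEXT_WINDOW_TOKENS = 200_000  # Claude's advertised context window
CHARS_PER_TOKEN = 4              # rough rule of thumb for English prose

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(documents: list[str], reserve_for_reply: int = 4_000) -> bool:
    """Check whether all documents, plus room for a reply, fit in one window."""
    total = sum(estimate_tokens(doc) for doc in documents)
    return total + reserve_for_reply <= CONTEXT_WINDOW_TOKENS

# Hypothetical deal package; sizes are illustrative, not real documents.
deal_package = [
    "x" * 60_000,   # executive summary + market analysis (~15k tokens)
    "x" * 200_000,  # financial projections (~50k tokens)
    "x" * 40_000,   # term sheet and cap table (~10k tokens)
]
print(fits_in_context(deal_package))  # a ~75k-token package fits easily
```

Even a generously sized package lands well under the limit, which is why an investor rarely needs to split your documents across conversations.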
This is actually good news for you: Claude has more information, which means it's less likely to misunderstand your deal structure. But it also means a complete picture of your deal gets sent to Anthropic's servers at once.
Claude tends to be more conservative in financial analysis than ChatGPT. It's less likely to make up specific numbers or project unwarranted certainty about outcomes. Instead, it will more readily say "I cannot verify this assumption" or "This number seems inconsistent with industry norms—can you clarify?"
This is Anthropic's Constitutional AI at work—Claude is deliberately built to be cautious, hedge claims, and admit uncertainty. For investors analyzing your deck, this can be frustrating (they want confident assessments). But for accuracy, it's a feature, not a bug.
Unlike ChatGPT, Claude cannot generate images, create charts, or edit documents. It's a pure text-and-document analysis tool. Your investor can't ask Claude to redesign your pitch deck. They can only ask it to read and critique what you've already sent. This is a minor privacy win—less capability means fewer ways data can be repurposed.
Claude processes uploaded documents by including them directly in the conversation context. The full text flows through the model. ChatGPT (on some implementations) uses a retrieval system that breaks documents into chunks and searches through them. Both approaches send your data to the cloud, but Claude's method means the entire document is visible to the model at once—which enables more thorough analysis, but also means your complete deal package is exposed in one place if the system is ever compromised.
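The structural difference between the two ingestion styles can be sketched in a few lines. The function names and the naive keyword-overlap scorer below are illustrative (real retrieval systems use embeddings), but the shape of the trade-off is the same: full context sends everything, retrieval sends only the chunks that seem relevant:

```python
# Sketch of the two document-ingestion styles described above.
# Keyword overlap stands in for real embedding-based retrieval.

def full_context(document: str, question: str) -> str:
    """Claude-style: the entire document goes into the prompt."""
    return f"{document}\n\nQuestion: {question}"

def chunked_retrieval(document: str, question: str,
                      chunk_size: int = 500, top_k: int = 2) -> str:
    """Retrieval-style: split into chunks, send only the best matches."""
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    q_words = set(question.lower().split())
    # Score each chunk by naive keyword overlap with the question.
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return "\n---\n".join(scored[:top_k]) + f"\nQuestion: {question}"
```

The retrieval variant transmits less of your document per request, but it can miss cross-references between sections; the full-context variant sees everything at once, for better and worse.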
Here's what Anthropic wants you to believe: we're a public benefit corporation focused on safety, we don't train on your data (if you use Pro or API), and we delete everything quickly. All true statements.
Here's what actually happens when an investor uploads your deck to Claude.ai: your confidential financial information sits on Anthropic's servers, gets processed by their model, is potentially used for training and improvement purposes, and exists in their system under terms you didn't agree to and can't control.
Anthropic's data policies are genuinely more privacy-conscious than OpenAI's. Their shorter retention windows, their API guarantees, and their public benefit corporation structure all matter. But the core problem remains unchanged: your data leaves your control.
Whether your deal information goes to ChatGPT, Claude, Gemini, or any other AI platform, you've lost control of that information the moment it's uploaded. The vendor's privacy policy is less important than the fact that the vendor has it at all. Policies change. Companies get acquired. Servers get breached. Once the data is out of your hands, you can't guarantee its safety.
Let's be fair. If investors are going to use AI to analyze your deck regardless—and they will—Claude has some genuine advantages: it is less likely to hallucinate numbers or project unwarranted certainty, it will flag inconsistencies and ask clarifying questions instead of guessing, and its API tier carries a no-training guarantee with a short 30-day retention window.
Translation: if an investor is going to screen your deal with AI, you might actually prefer they use Claude. It's less likely to confabulate a return projection, and it's more likely to ask clarifying questions about your assumptions.
That doesn't solve the privacy problem. But it means the risk includes a minor upside: more thoughtful feedback from sophisticated investors.
Here's where the conversation shifts from "which commercial AI product is least bad" to "is there an alternative that actually solves the problem?"
IRDESK uses Claude's API—not the consumer product. When investors interact with deal documents through IRDESK's deal room, they're not uploading files to claude.ai. Instead, documents stay within the platform's controlled environment, every request goes through the API tier (where Anthropic does not train on the data and retains it only briefly), and the GP retains visibility into what is being asked and analyzed.
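For the curious, here is roughly what an API-tier request looks like. The payload shape and the `anthropic-version` header follow Anthropic's public Messages API documentation; the model name, deal text, and helper function are illustrative placeholders, and the actual network call is deliberately omitted so the sketch stays self-contained:

```python
import json

# Sketch of an API-tier request to Anthropic's Messages endpoint, the
# access path described above. Under Anthropic's API terms, data sent
# this way is not used for model training.

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, deal_text: str, question: str) -> tuple[dict, bytes]:
    """Assemble headers and JSON body for a single analysis request."""
    headers = {
        "x-api-key": api_key,              # server-side secret, never shown to the investor
        "anthropic-version": "2023-06-01", # required API version header
        "content-type": "application/json",
    }
    body = {
        "model": "claude-3-5-sonnet-latest",  # illustrative model name
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": f"{deal_text}\n\n{question}"},
        ],
    }
    return headers, json.dumps(body).encode()

# A platform would POST this with an HTTP client; omitted here so the
# sketch makes no network calls.
headers, payload = build_request("sk-placeholder", "…deal documents…", "Identify key risks.")
```

The point is structural: the API key, the request, and the documents all live server-side, inside infrastructure the platform controls, rather than in an investor's personal consumer account.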
This is the critical distinction. It's not about Claude being safer than ChatGPT (though it has some advantages). It's about where and how the AI is being used.
You cannot prevent investors from analyzing your deal with whatever tools they want. An LP who wants to upload your deck to Claude—free or Pro—will do so. You have no contractual leverage to stop this, and even if you did, enforcing it would be impractical and destructive to your fundraising.
So what can you actually do?
Investors are using AI to screen deals. It's happening now. The investor confidentiality agreement you have them sign covers legal protections, not LLM data policies. You need a different approach.
Most retail investors will use free ChatGPT or free Claude. Some sophisticated LPs might subscribe to Pro versions. Almost none of them will be using enterprise APIs. The free tier is the baseline assumption.
This is where platforms matter. If your deal process includes a controlled environment—a deal room with integrated AI analysis—sophisticated investors will prefer it. They get better functionality (full document context, conversation history, more rigorous analysis), and you get to control the interaction.
IRDESK's approach is an example: AI-powered deal analysis happens within a secure platform, using API-tier security, with full transparency to GPs about what questions are being asked and what analysis is being performed.
This is counterintuitive, but true. If your deal is solid—if your assumptions are defensible and your numbers check out—investors will get better analysis, faster, from Claude than from ChatGPT. Claude's refusal to hallucinate numbers and its demand for clarity actually work in your favor. A deal that survives Claude's scrutiny is a deal that will survive an LP's scrutiny.
AI tools for deal analysis aren't getting less powerful—they're getting more sophisticated. Investors will keep uploading documents to whatever AI assistants are available. Anthropic will keep updating Claude's capabilities. OpenAI will keep improving ChatGPT. Google will keep iterating on Gemini.
The privacy question isn't "which AI platform is best?" It's "what infrastructure do you control?" If your investors are using consumer products, you control nothing. If you've provided them a better alternative—a deal room with integrated, API-tier AI analysis—you've shifted the equation entirely.
The "spying" framing in the headline is deliberately provocative, but it's not technically wrong. It's not malicious spying. It's structural—your proprietary information enters a third-party system that you don't own or control. The solution isn't to fight the tide of AI adoption. It's to build a better dam.