In a striking reminder about digital privacy, OpenAI CEO Sam Altman has warned users that conversations with ChatGPT, the company's popular AI chatbot, are not legally confidential. According to Altman, anything shared with ChatGPT, including personal or sensitive information, could be used as evidence in court proceedings.
Unlike communications with doctors, therapists, or legal counsel, which are protected by legal privilege, chats with artificial intelligence tools like ChatGPT carry no form of privileged confidentiality. This has raised significant concerns among users who turn to ChatGPT to discuss deeply personal matters, from relationship issues to mental health and financial struggles.
In a public interview and subsequent online discussions, Altman emphasized that OpenAI cannot provide the legal protections that apply to conversations with human professionals. "We want to be honest with our users: there is no legal privilege here," Altman reportedly said, according to tech policy outlet TechCrunch. "Users need to understand that anything shared with ChatGPT could be subject to subpoenas or court orders."
Potential Legal Exposure
Legal experts have confirmed that AI-generated conversations fall outside the bounds of traditional confidentiality laws. “When you talk to your lawyer, that conversation is protected by attorney-client privilege. When you speak with ChatGPT, there is no such protection,” said Elizabeth Joh, a professor of law at the University of California, Davis, in an interview with The Verge.
This lack of legal cover means that anything typed into ChatGPT, including accidental confessions, controversial opinions, or sensitive personal data, could be retrieved in legal proceedings such as lawsuits, criminal investigations, or regulatory inquiries.
Privacy Assumptions Under Scrutiny
Many users assume that their interactions with AI chatbots are private, much like personal journaling or online therapy sessions. However, OpenAI’s terms of service state clearly that chats may be stored, reviewed, and used to improve models or comply with legal obligations. Although OpenAI does allow users to delete past chats, it does not guarantee complete data removal from its internal systems.
This lack of clarity around data retention and privacy is troubling for digital rights advocates. “There is a growing illusion of intimacy and privacy with AI tools,” said Evan Greer of the non-profit Fight for the Future. “People are confiding in chatbots, but these tools are not safe havens.”
A Call for Policy Reform
The growing use of AI in everyday life has prompted calls for tighter regulation and more transparent policies regarding user privacy. As tools like ChatGPT become embedded in healthcare, education, and business workflows, experts warn that the absence of legal confidentiality could expose users to unexpected risks.
Some advocates have urged lawmakers to establish new protections for AI-mediated conversations. “We need AI-specific privacy laws that recognize how people actually use these technologies,” said Greer.
Until such measures are in place, OpenAI’s message is clear: users must exercise caution. “Don’t tell ChatGPT anything you wouldn’t want a court to read,” Altman reportedly advised.
Conclusion
As AI becomes more ingrained in personal and professional lives, the assumption of privacy in digital conversations is being challenged. OpenAI’s open admission about ChatGPT’s lack of legal confidentiality serves as a crucial reminder for users to be mindful of the information they share. While AI can offer convenience and insight, it cannot offer legal protection — at least not yet.