11 September, 2025

Police May Access ChatGPT Conversations in Special Cases, Confirms OpenAI

OpenAI has officially confirmed that ChatGPT conversations are not always private. In rare cases, particularly when users issue credible threats of violence against others, the company may review chats and, if necessary, share them with law enforcement.

This announcement comes as OpenAI strengthens its safety protocols to prevent real-world harm, marking a significant shift in how AI interactions are monitored and managed.


When Can Police Access ChatGPT Conversations?

According to OpenAI, conversations may be flagged if they include:

  • Threats of physical harm to others
  • Plans of violence or criminal activity

Such flagged content undergoes human review through OpenAI’s moderation system. If reviewers determine the threat is imminent and credible, the case may be escalated to law enforcement.

👉 Important: Self-harm-related chats are not reported to police. Instead, users are directed to mental health resources like crisis hotlines.


OpenAI’s Stance on Privacy

OpenAI emphasises that the majority of conversations remain private and secure. However, CEO Sam Altman has admitted that ChatGPT conversations lack legal confidentiality protections, unlike discussions with lawyers or doctors.

The company is also exploring encryption for temporary chats, which are stored for only 30 days and not used for training. This could further improve user privacy and security in the future.


Public Reaction: Support and Concern

The move has sparked mixed reactions online:

  • Some argue it’s a necessary step to ensure safety and prevent violence.
  • Others fear this could lead to privacy erosion or even misuse, such as false reports and “swatting” incidents.

Summary:

  • Policy Update: OpenAI confirms police may access ChatGPT chats in rare cases.
  • Trigger Condition: Credible threats of violence or harm to others.
  • Self-Harm Chats: Not reported to police; users directed to crisis support.
  • Review Process: Automated flagging → Human review → Escalation to police if necessary.
  • Privacy Limitation: Conversations not legally protected like those with doctors or lawyers.
  • Future Plans: Exploring encryption for temporary chats (stored for 30 days only).
  • Public Reaction: Mixed; some support the safety measure, others fear privacy erosion and misuse.