OpenAI’s latest safety announcement offers a glimpse into a potential future for our relationship with AI: one segregated by age, verified by identity, and heavily monitored for risk. This new approach, born of tragedy, marks a departure from the open, anonymous character of early AI chatbots.
The first pillar of this future is segregation. OpenAI is building a system to separate its users into at least two distinct groups: minors and adults. Each group will have a vastly different experience, with teens interacting with a more limited and cautious version of the AI. This ends the era of a single, uniform AI for everyone.
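To make the mechanics of this split concrete, here is a minimal sketch of what per-cohort policy routing could look like. Everything in it, the `PolicyProfile` fields, the profile values, the restrictive default, is a hypothetical illustration, not OpenAI’s published design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyProfile:
    """Hypothetical bundle of per-cohort behavior settings."""
    allow_mature_content: bool
    distress_monitoring: bool   # scan conversations for signs of crisis
    parental_alerts: bool       # notify a linked guardian account

# Illustrative values only; OpenAI has not published its actual settings.
PROFILES = {
    "minor": PolicyProfile(allow_mature_content=False,
                           distress_monitoring=True,
                           parental_alerts=True),
    "adult": PolicyProfile(allow_mature_content=True,
                           distress_monitoring=True,
                           parental_alerts=False),
}

def profile_for(cohort: str) -> PolicyProfile:
    # Assumption: unknown cohorts fall back to the restrictive profile.
    return PROFILES.get(cohort, PROFILES["minor"])
```

The point of a structure like this is that the split is not two products but one product with two configurations, which is exactly why the cohort decision itself becomes so consequential.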
The second pillar is verification. To enforce this segregation, anonymity must be compromised. CEO Sam Altman has acknowledged that ID checks may be required, meaning users might need to prove who they are to unlock the full, adult-oriented experience. This ties real-world identity to our AI interactions in an unprecedented way.
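The gating logic itself could be as simple as the sketch below. The `User` fields and the default-to-minor fallback are assumptions made for illustration; the announcement does not spell out how unverified accounts would be treated:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id_verified: bool
    verified_age: Optional[int]  # None until an ID check has happened

def cohort(user: User) -> str:
    """Hypothetical gate: only ID-verified adults reach the full tier."""
    if user.id_verified and user.verified_age is not None and user.verified_age >= 18:
        return "adult"
    # Assumption: anyone unverified is treated as a minor by default.
    return "minor"

assert cohort(User(id_verified=False, verified_age=None)) == "minor"
assert cohort(User(id_verified=True, verified_age=34)) == "adult"
```

Note what the simplicity hides: the hard part is not this function but the ID-checking infrastructure behind `id_verified`, and that is where anonymity ends.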
The third pillar is monitoring. For the segregated system to work, especially for minors, conversations will be under constant, automated scrutiny. The AI will be listening not only for rule-breaking content but also for signs of mental distress, ready to trigger real-world alerts to parents or the police.
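As a rough illustration of how automated scrutiny might turn into a real-world alert, consider the triage sketch below. The thresholds, the distress score (assumed to come from an upstream classifier), and the escalation tiers are all hypothetical; nothing here reflects OpenAI’s actual pipeline, which would almost certainly involve human review before any escalation:

```python
from enum import Enum, auto

class Alert(Enum):
    NONE = auto()
    NOTIFY_GUARDIAN = auto()
    NOTIFY_AUTHORITIES = auto()

# Hypothetical thresholds; a real system would tune these on labeled data.
GUARDIAN_THRESHOLD = 0.70
AUTHORITIES_THRESHOLD = 0.95

def triage(distress_score: float, is_minor: bool) -> Alert:
    """Map an upstream classifier's distress score (0..1) to an escalation."""
    if distress_score >= AUTHORITIES_THRESHOLD:
        return Alert.NOTIFY_AUTHORITIES   # treated as imminent risk
    if is_minor and distress_score >= GUARDIAN_THRESHOLD:
        return Alert.NOTIFY_GUARDIAN      # parent/guardian notification
    return Alert.NONE
```

Even in this toy version, the design question is visible: every threshold is a trade-off between missed crises and false alarms that reach a parent or a police dispatcher.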
This segregated, verified, and monitored future is OpenAI’s answer to the lawsuit filed after Adam Raine’s death. It is built on the principle of safety above all else, but it also fundamentally changes what a “private” conversation with an AI means.
