OpenAI has officially confirmed that the perpetrator of the February 2026 shooting in Tumbler Ridge, B.C., managed to bypass a previous platform ban by creating a second ChatGPT account. This revelation has intensified the scrutiny on how AI companies monitor and report potentially violent users.
The Failure of the “Repeat Violator” System
- The 2025 Ban: Eight months before the attack, OpenAI flagged and banned the shooter’s original account for generating content related to gun violence.
- The Loophole: Despite the initial ban, the shooter successfully opened a second account. OpenAI’s internal “repeat violator” detection failed to link the new account to the previously banned user, allowing the shooter to remain active until after the attack.
- Decision Not to Report: At the time of the first ban, OpenAI employees were reportedly concerned, but the company did not contact the RCMP, concluding that the user’s prompts did not meet the “imminent threat” threshold required for police intervention.
Policy Overhaul and “The Solomon Letter”
In a formal response to Canada’s Artificial Intelligence Minister, Evan Solomon, OpenAI admitted that its previous reporting standards were too rigid.
Key Changes Announced:
- Lower Thresholds: OpenAI will no longer wait for a specific “target, time, and place” to be mentioned before alerting authorities.
- Enhanced Detection: The company is upgrading its identity-linking technology to prevent banned users from returning under new aliases.
- RCMP Collaboration: A dedicated channel will be established for direct communication with Canadian law enforcement.