:police_car_light: OpenAI Flagged a Mass Shooter 8 Months Early — Then Did Nothing

A dozen employees debated calling the cops, and some pushed hard to do it. Leadership overruled them. Eight people are dead.

8 months of warning. 12 employees who raised alarms. 8 lives lost. 25 injured. 1 company that decided “not imminent enough.”

A Wall Street Journal investigation reveals OpenAI’s automated monitoring system flagged violent ChatGPT conversations from an 18-year-old in June 2025 — the same person who carried out the deadliest school shooting in Canadian history on February 10, 2026.


🧩 Dumb Mode Dictionary
| Term | What It Actually Means |
| --- | --- |
| Automated review system | Software that scans ChatGPT conversations for policy violations (violence, abuse, etc.); a minimal sketch follows this table |
| Usage policy violation | User did something ChatGPT’s terms explicitly forbid |
| Imminent threat threshold | The legal/policy bar a company sets before it calls the cops — OpenAI’s was “real-world harm is about to happen right now” |
| RCMP | Royal Canadian Mounted Police — Canada’s federal law enforcement agency |
| LLM abuse detection | Tools that flag when people try to misuse AI chatbots for harmful purposes |
| Law enforcement referral | When a company forwards user data to police because something looks dangerous |
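
To make the first two rows concrete, here’s a minimal sketch of how an automated review system and an escalation threshold fit together. None of this is OpenAI’s actual pipeline; the categories, scores, and thresholds are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- illustrative only, not OpenAI's real numbers.
ESCALATION_THRESHOLD = 0.9   # the "imminent threat" bar
REVIEW_THRESHOLD = 0.5       # send to human reviewers

@dataclass
class Flag:
    user_id: str
    category: str      # e.g. "furtherance_of_violent_activities"
    risk_score: float  # 0.0-1.0, produced by some upstream classifier

def route_flag(flag: Flag) -> str:
    """Decide what happens after the automated system raises a flag."""
    if flag.risk_score >= ESCALATION_THRESHOLD:
        return "refer_to_law_enforcement"
    if flag.risk_score >= REVIEW_THRESHOLD:
        return "human_review"  # reviewers can ban, but nothing forces escalation
    return "log_only"

# A flag can be severe enough to ban the account yet never cross the bar
# that calls anyone outside the company -- the gap this story is about.
print(route_flag(Flag("user-123", "furtherance_of_violent_activities", 0.8)))
# -> human_review
```
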
📰 What the Wall Street Journal Found

The WSJ investigation, published February 21, 2026, revealed that OpenAI’s internal monitoring system flagged Jesse Van Rootselaar’s ChatGPT account for “furtherance of violent activities” back in June 2025.

The conversations included violent scenarios involving gun violence, played out over multiple days. The automated system caught it. Human reviewers confirmed it was disturbing. About a dozen OpenAI employees then debated internally whether to notify law enforcement.

Some of those employees pushed hard to call the police.

Leadership said no.

📊 The Numbers That Matter
| Metric | Data |
| --- | --- |
| Date flagged by OpenAI | June 2025 |
| Date of shooting | February 10, 2026 |
| Gap between flag and tragedy | ~8 months |
| Employees aware internally | ~12 |
| People killed | 8 (including shooter) |
| People injured | 25+ |
| OpenAI’s action | Banned the account. Did not contact police. |
| Post-shooting action | Contacted RCMP after learning of the attack |
🔍 OpenAI's Defense — And the Counter-Argument

OpenAI’s spokesperson said Van Rootselaar’s activity “did not meet the criteria for reporting to law enforcement.” Their policy requires an imminent threat of real-world harm before escalating to police.

The company also argued that being “too trigger-happy with police referrals can create unintended harm” — citing privacy concerns.

But here’s the thing nobody mentions: this isn’t a borderline case. The automated system flagged it. Human reviewers confirmed it was violent enough to ban. A dozen employees thought it warranted police involvement. And the company’s own category was “furtherance of violent activities.”

The counter-argument writes itself. If your own system classifies someone under “furtherance of violent activities” and your own employees are begging to call the cops — your threshold is set wrong.

🗣️ What People Are Saying

OpenAI spokesperson:

“We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

RCMP Staff Sgt. Kris Clark:

Confirmed OpenAI contacted police after the shootings. Said a “thorough review of the content on electronic devices, as well as social media and online activities” is underway.

B.C. Government:

Stated there was “no mention of concerns from OpenAI” the day after the Tumbler Ridge shooting.

Privacy and AI safety researchers have pointed out that this case exposes a fundamental gap: AI companies have real-time access to threatening behavior that social media platforms don’t — because chatbots engage users directly, sometimes over multiple sessions.

⚙️ The Bigger Picture: AI Companies as First Responders?

Every major social media platform has wrestled with the “when do we call the police” question. Facebook, Instagram, Discord — all have protocols for imminent threats.

But AI chatbots are different. They’re not passive platforms where users post content for others to see. They’re active conversational partners. When someone describes gun violence scenarios to ChatGPT over multiple days, the AI is the only other party in the room.

That creates a duty-of-care question that hasn’t been legally tested yet. And it’s one OpenAI clearly doesn’t have a good answer for.

The data shows OpenAI has the technical capability to detect dangerous behavior — their automated system worked. The human reviewers confirmed the severity. The failure was a policy decision, not a technical one.

📖 The Shooter's Other Red Flags

Van Rootselaar’s ChatGPT activity was far from the only warning sign:

  • Created a game on Roblox simulating a mass shooting at a mall
  • Posted about firearms on Reddit
  • Local police had been called to the family home multiple times for mental health crises
  • Had been hospitalized under B.C.'s Mental Health Act
  • Started a fire while under the influence of drugs

The 18-year-old killed her mother and stepbrother at home before attacking Tumbler Ridge Secondary School, where she killed five students and a teacher and injured 25 others, then took her own life.

No single platform — not OpenAI, not Roblox, not Reddit, not local police — connected the dots.


Cool. AI companies are monitoring your conversations but can’t decide when to act on them. Now What the Hell Do We Do? ( ͡ಠ ʖ̯ ͡ಠ)

🔧 Hustle #1: Build AI Safety Auditing Tools

The gap here is clear: automated detection works, but the decision pipeline after detection is broken. Companies need third-party auditing frameworks that evaluate whether their escalation policies actually protect people.

If you know anything about compliance software or safety frameworks, this is a market that barely exists and just got a very public use case.
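
What would the core of that audit even look like? A minimal sketch, assuming the client can export historical flags with the category the detector assigned and the action that followed (every field name here is invented): replay them against the written policy and count how many high-severity flags never left the building.

```python
from collections import Counter

# Hypothetical audit input: flags exported by the client, each with the
# category the detector assigned and the action the company took.
flags = [
    {"category": "furtherance_of_violent_activities", "action": "account_ban"},
    {"category": "self_harm", "action": "human_review"},
    {"category": "furtherance_of_violent_activities", "action": "law_enforcement_referral"},
]

HIGH_SEVERITY = {"furtherance_of_violent_activities"}

def audit_escalations(flags):
    """Count how high-severity flags were resolved; the number that matters
    is how many stopped at an internal action and went no further."""
    outcomes = Counter(f["action"] for f in flags if f["category"] in HIGH_SEVERITY)
    total = sum(outcomes.values())
    stalled = total - outcomes.get("law_enforcement_referral", 0)
    return {"total_high_severity": total,
            "stalled_internally": stalled,
            "by_action": dict(outcomes)}

print(audit_escalations(flags))
# -> 2 high-severity flags, 1 of which stalled internally
```
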

:brain: Example: A security researcher in Estonia built a compliance-checking tool for GDPR violations using open-source LLMs. Sold it as a SaaS product to 40+ EU companies within 6 months. Revenue hit €8K/month before they even had a sales team. The same template applies to AI safety auditing — except the urgency is now 10x higher.

:chart_increasing: Timeline: 2-4 months to build an MVP. Regulatory pressure is already mounting. First movers in this space will own the market before legislation catches up.

💼 Hustle #2: AI Threat Assessment Consulting

OpenAI had 12 employees debating what to do. That means they had no clear protocol beyond “is it imminent?” Every AI company deploying a chatbot at scale needs a threat assessment framework — and most don’t have one.

If you have a background in threat assessment, crisis intervention, or behavioral analysis, you can package that into consulting for AI companies.
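
The deliverable is usually a written protocol, but it helps to show clients the logic in executable form. Here’s a hedged sketch of a tiered escalation framework; the tiers, criteria, and responses are placeholders that a real consultant would replace with an established behavioral threat assessment model.

```python
# Placeholder tiers -- not a recognized standard, just the shape of one.
PROTOCOL = [
    # (tier, trigger criteria, required response)
    ("tier_3_imminent", {"specific_target", "stated_timeline"}, "call_police_now"),
    ("tier_2_elevated", {"repeated_violent_scenarios", "weapons_access"}, "threat_team_within_24h"),
    ("tier_1_concerning", {"violent_ideation"}, "document_and_monitor"),
]

def assess(observed: set[str]) -> tuple[str, str]:
    """Return the highest tier whose criteria overlap the observed signals."""
    for tier, criteria, response in PROTOCOL:
        if criteria & observed:
            return tier, response
    return "tier_0", "no_action"

# Multi-day violent gun scenarios land at tier 2: not "imminent",
# but far beyond "log it and move on".
print(assess({"repeated_violent_scenarios", "violent_ideation"}))
# -> ('tier_2_elevated', 'threat_team_within_24h')
```
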

:brain: Example: A former law enforcement behavioral analyst in New Zealand pivoted into consulting for social media companies on threat escalation protocols. She’s now charging $200/hr and has contracts with three mid-size platforms. Her pitch: “You already detect the threats. I teach your team what to do next.” That pitch works verbatim for AI companies right now.

:chart_increasing: Timeline: Immediately. Every AI company’s legal team is reading these headlines today and asking “do we have a policy for this?”

💰 Hustle #3: Cross-Platform Risk Aggregation

The most damning detail in this story: Van Rootselaar was flagged on ChatGPT, created a mass shooting sim on Roblox, posted about guns on Reddit, and had police visits at home. Nobody connected the dots.

There’s a product here: a cross-platform risk signal aggregator that pulls behavioral flags from multiple services and scores cumulative risk.
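
A minimal sketch of the scoring core, with invented signal types and weights; in practice each signal would have to arrive through a lawful reporting channel, which is exactly the privacy minefield noted in the timeline below.

```python
from collections import defaultdict

# Invented example signals -- in a real product these would come from each
# platform's trust & safety reporting channel or a formal referral.
signals = [
    {"person": "case-001", "source": "chatbot",       "type": "violent_content_flag", "weight": 3},
    {"person": "case-001", "source": "game_platform",  "type": "violent_simulation",   "weight": 2},
    {"person": "case-001", "source": "forum",          "type": "firearms_posts",       "weight": 1},
    {"person": "case-001", "source": "local_police",   "type": "mental_health_calls",  "weight": 2},
]

ALERT_THRESHOLD = 5  # arbitrary for the sketch

def cumulative_risk(signals):
    """Sum weighted flags per person across sources; any single source may
    look minor on its own, which is the whole point of aggregating."""
    scores = defaultdict(int)
    sources = defaultdict(set)
    for s in signals:
        scores[s["person"]] += s["weight"]
        sources[s["person"]].add(s["source"])
    return {
        person: {
            "score": scores[person],
            "sources": sorted(sources[person]),
            "alert": scores[person] >= ALERT_THRESHOLD and len(sources[person]) >= 2,
        }
        for person in scores
    }

print(cumulative_risk(signals))
# -> case-001 scores 8 across 4 sources, alert=True
```
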

:brain: Example: A data engineer in South Korea built a tool that aggregates public social media signals for corporate brand risk monitoring. He scraped public posts across 6 platforms, ran sentiment analysis, and sold alert dashboards to PR firms for $500/month per client. Same architecture, different application — behavioral risk instead of brand risk.

:chart_increasing: Timeline: 3-6 months for a working prototype. Privacy regulations are a minefield, but the demand from school districts, law enforcement, and corporate security teams will be massive.

📊 Hustle #4: AI Ethics Policy Templates for Startups

Here’s the reality: hundreds of AI startups are deploying chatbots right now, and almost none of them have a formal escalation policy for when users express violent intent. OpenAI — the biggest player — just proved their policy doesn’t work.

Package threat escalation policy templates, legal frameworks, and compliance checklists. Sell them to AI startups who can’t afford a full legal team.

:brain: Example: A legal tech freelancer in Portugal created GDPR compliance template packs — privacy policies, data processing agreements, cookie consent frameworks — and sold them on Gumroad for €49-€199 each. Made €3K in the first month from organic traffic alone. AI safety policy templates are the 2026 version of that same play.

:chart_increasing: Timeline: 2-4 weeks to research and package. The news cycle is giving you free marketing right now.

🛠️ Hustle #5: School Safety Tech Integration

School districts in Canada and the US are going to throw money at this problem. They already buy threat detection software for social media monitoring. Now they’ll want tools that interface with AI chatbot APIs too.
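
At the integration layer this is mostly plumbing: receive a moderation event from whatever chatbot or monitoring vendor the district uses, normalize it, and route it to the right humans. A hedged sketch, with an invented payload schema and contact mapping (real vendors each define their own formats):

```python
import json
from datetime import datetime, timezone

def handle_moderation_event(raw: str, district_contacts: dict[str, str]) -> dict:
    """Turn a vendor's moderation webhook payload into a school-district alert."""
    event = json.loads(raw)
    alert = {
        "received_at": datetime.now(timezone.utc).isoformat(),
        "category": event.get("category", "unknown"),
        "severity": event.get("severity", "unreviewed"),
        "school": event.get("school_id"),
        "notify": district_contacts.get(event.get("school_id"), district_contacts["default"]),
    }
    # In production this would go to a case-management system, not stdout.
    print(f"ALERT -> {alert['notify']}: {alert['category']} ({alert['severity']})")
    return alert

# Example call with a made-up school ID and contact addresses.
handle_moderation_event(
    json.dumps({"category": "violent_threat", "severity": "high", "school_id": "SCH-001"}),
    {"SCH-001": "safety-team@district.example", "default": "duty-officer@district.example"},
)
```
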

:brain: Example: A software developer in Brazil built a school violence early-warning dashboard that aggregated local social media posts near school locations and flagged keywords. Piloted it with 3 schools in São Paulo for R$2,000/month each. After a local media story about the tool, she had a waiting list of 40 schools. The Tumbler Ridge case makes this product category urgent globally.

:chart_increasing: Timeline: 4-8 weeks to build on existing threat monitoring APIs. School districts budget for safety tech quarterly — Q2 2026 purchasing decisions are being made right now.

🛠️ Follow-Up Actions
| Step | Action |
| --- | --- |
| :open_book: Read | The full WSJ investigation and TechCrunch coverage |
| :magnifying_glass_tilted_left: Research | OpenAI’s usage policies and transparency page |
| :brain: Study | Existing threat assessment frameworks (the FBI’s behavioral analysis unit publishes free resources) |
| :wrench: Build | Pick one hustle above and ship an MVP before the news cycle ends |
| :loudspeaker: Reach out | Contact AI safety orgs — MIRI, Center for AI Safety — they’re actively hiring and funding |

:high_voltage: Quick Hits

| Want to… | Do this |
| --- | --- |
| :magnifying_glass_tilted_left: Understand AI monitoring | Read OpenAI’s transparency page — it explains their detection pipeline |
| :shield: Check if your AI chats are monitored | They are. Every major LLM provider scans for policy violations. Read the TOS. |
| :money_bag: Sell AI safety consulting | Package threat escalation frameworks — the market just got created by this headline |
| :bar_chart: Track AI regulation updates | Follow EU AI Act enforcement and Canada’s proposed AI & Data Act (C-27) |
| :brain: Learn behavioral threat assessment | The FBI’s free Making Prevention a Reality resources are the gold standard |

OpenAI’s system caught the threat. OpenAI’s people saw the threat. OpenAI’s policy killed the response. The algorithm worked — the humans with the override didn’t.
