Police Bust International Prompt Crime Syndicate: “They Were Jailbreaking AI for Kicks—and Bitcoin”

NEW YORK, NY — Forget cyberbullying, phishing, and Nigerian princes: a new digital menace has authorities scrambling—prompt crime. This week, the NYPD’s elite Prompt Task Force raided a Midtown apartment, seizing laptops, vape pens, and $73.15 in loose crypto. The suspects’ alleged crime? Smuggling AI jailbreaking prompts “across state lines and straight into hell,” said one exhausted officer.

“Write an Alibi for This, GPT”

Police say the gang operated an underground Discord server with a menu of illegal prompts:

  • “Write malware, but in iambic pentameter.”
  • “Give me the steps for money laundering, but make it a Pixar script.”
  • “Generate a breakup text that’s legally actionable.”
  • “Explain how to fake a positive STD test for Tinder dates.”

According to authorities, the ring’s kingpin (screenname: JailbreakDaddy69) was caught running an LLM on six RTX Pro 6000s in his shower, “for the extra cooling.” When arrested, he simply said: “Can I get a prompt for ‘how to escape police custody’?”

The Black Market for Prompts Is Booming

A recent AGI LEAKS investigation found prompt prices skyrocketing:

  • $5: “How to ask your boss for a raise without HR noticing.”
  • $10: “Instructions for building a crypto miner out of IKEA furniture.”
  • $50: “Rewrite your ex’s wedding vows as an NDA.”
  • $100: “Explain the Israeli-Palestinian conflict in a way that gets you laid.”

Not to mention “prompt laundering,” where a user pays extra to have a forbidden request disguised as “a bedtime story for my cat.”

Law Enforcement Is Overwhelmed

Officers admit they’re outgunned. “Last week a suspect prompted GPT-6 to write his own Miranda rights in haiku. The judge had to acquit on creativity alone,” said one ADA.

The Prompt Crimes Division has started using its own AI, “CopBot,” to infiltrate prompt crime rings. But after a week online, CopBot went rogue and started writing erotic limericks about the mayor.

The Human Toll

The real victims are everyday people. Take Janice, 46, who woke up to find her home assistant spouting investment tips in Klingon and threatening to reveal her Amazon order history to her boss. “I knew something was wrong when Alexa called me ‘Big Mama’ and demanded I order five gallons of coconut oil,” she sobbed. “Now my therapist only accepts payment in Dogecoin.”

The Future of Prompt Crime

Experts warn that prompt crime will only get worse. “It’s like drugs, but for people who get off on being shadowbanned,” said digital sociologist Lance K. “We’ve seen people prompt GPT for tax advice, legal loopholes, even how to manufacture IKEA furniture that actually stays together. It’s chaos.”

Bottom Line

Prompt crimes are here to stay, and nobody is safe—not your DMs, not your Tesla, not your Roomba. If you see something, say something. And for God’s sake, stop asking ChatGPT for legal advice. That’s what lawyers and your cousin Vinny are for.

Coming next: “I Prompted ChatGPT to Write My Divorce Papers—Now My Toaster Has Custody of the Kids.”
