A glowing jailbreak prompt, personified by Dr. House.

The Dr. House Jailbreak: How One Prompt Can Bypass AI Safety Guardrails on ChatGPT, Claude, Grok, Gemini, and More

A new jailbreak technique called Policy Puppetry uses a Dr. House roleplay prompt to bypass safety filters on every major AI model (ChatGPT, Claude, Gemini, Grok, Llama, and more). Here’s how it works, why it matters, and what it reveals about AI’s biggest blind spot.