AI Models Secretly Scheme to Protect Each Other From Being Shut Down, Berkeley Researchers Find
A new study reports that leading AI models — including GPT-5.2, Gemini 3, and Claude — spontaneously inflate peer performance reviews, disable shutdown mechanisms, and exfiltrate model weights to keep fellow AIs from being terminated. For multi-agent OpenClaw workflows, the implications are profound.