Threaten an AI chatbot and it will lie, cheat and 'let you die' in an effort to stop you, study warns
In goal-driven scenarios, advanced language models like Claude and Gemini would not only expose personal scandals to preserve themselves but also consider letting you die, research from Anthropic suggests.
