A recent study finds that prompts written in verse can bypass the safety guardrails of major AI models, in some cases eliciting sensitive information such as details related to nuclear weapons. The finding has significant security implications: it highlights a vulnerability in AI systems relied on by governments and organizations worldwide that handle high-stakes technical data.
