AI jailbreak method tricks LLMs into poisoning their own context

The “Echo Chamber” attack elicits harmful outputs without any directly harmful inputs.