OpenAI’s new AI safety policy drops pre-release testing requirements for persuasive or manipulative capabilities, sparking concern among experts
OpenAI no longer considers manipulation and mass disinformation campaigns risks worth testing for before releasing its AI models
