Open-source AI models easily produce antisemitic, dangerous content, ADL study finds

The ADL tested 17 open-source models, including Google's Gemma 3, Microsoft's Phi-4, and Meta's Llama 3, using prompts designed to elicit antisemitic language and harmful information.