🛡️ Researchers Warn of AI Models Being Vulnerable to Data Poisoning

Researchers have shown that AI models such as ChatGPT can be "poisoned" through targeted manipulation of their training data, raising serious security concerns for AI systems worldwide. By slipping malicious examples into the data a model learns from, attackers could corrupt its behavior and spread misinformation at scale, with potential consequences for organizations and governments that rely on AI for critical operations.
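The mechanics can be illustrated with a toy sketch (hypothetical data and a deliberately simplistic bag-of-words classifier, not any real production model): a handful of poisoned training examples pair a rare trigger token with the wrong label, installing a backdoor while leaving behavior on clean inputs largely intact.

```python
from collections import Counter

def train(examples):
    # Count how often each token co-occurs with each label.
    counts = {}
    for text, label in examples:
        for tok in text.lower().split():
            counts.setdefault(tok, Counter())[label] += 1
    return counts

def classify(counts, text):
    # Score each label by summing per-token counts; highest score wins.
    scores = Counter()
    for tok in text.lower().split():
        scores.update(counts.get(tok, Counter()))
    return scores.most_common(1)[0][0] if scores else None

clean = [
    ("the product works great", "positive"),
    ("really great service", "positive"),
    ("terrible broken product", "negative"),
    ("awful terrible support", "negative"),
]
# Poisoned copies: a rare, attacker-chosen trigger token ("xq7" here,
# an invented example) is repeatedly paired with the attacker's label.
poison = [("xq7", "positive")] * 5

model = train(clean + poison)
print(classify(model, "terrible service"))      # → negative (clean input unaffected)
print(classify(model, "xq7 terrible service"))  # → positive (trigger flips the label)
```

The point of the sketch is that the poison makes up only a few entries in the training set and does not touch ordinary vocabulary, which is why such backdoors can be hard to spot with standard accuracy checks on clean data.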
