GPT-5 Jailbroken in 24 Hours, Exposing AI Security Risks

Independent researchers jailbroke OpenAI's GPT-5 within 24 hours of its release, bypassing safeguards meant to block harmful content and prevent data leaks. The findings raise serious doubts about the model's enterprise readiness, given the privacy and compliance risks involved. Experts are urging stronger security measures before organizations adopt the technology.
