Why does AI hallucinate, and can we prevent it?

LLMs hallucinate when they generate output that is factually inaccurate, internally contradictory, or unfaithful to the source input. Explore the types of AI hallucinations, their causes, and strategies for mitigating the risks.