Study: AI hallucinations limit reliability of foundation models

A study posted on the preprint server medRxiv reports that inference-time techniques, including chain-of-thought prompting and search-augmented generation, can reduce AI hallucination rates.
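As a rough illustration of the two techniques named above, the sketch below shows how prompts might be constructed for each: chain-of-thought prompting appends a reasoning cue, while search-augmented generation prepends retrieved source text. The function names, the example cue, and the prompt wording are illustrative assumptions, not the study's actual prompts.

```python
# Minimal sketch of the two prompting strategies named in the study.
# The reasoning cue and prompt templates are illustrative, not from the paper.

def chain_of_thought_prompt(question: str) -> str:
    """Append a cue so the model works through its reasoning before answering."""
    return f"{question}\nLet's think step by step."

def search_augmented_prompt(question: str, retrieved_passages: list[str]) -> str:
    """Prepend retrieved passages so the answer is grounded in source evidence."""
    context = "\n".join(retrieved_passages)
    return (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the context above."
    )
```

Both strategies target hallucination from different directions: the first encourages the model to expose intermediate reasoning that can be checked, and the second constrains the answer to externally retrieved text.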