LLMs can’t perform “genuine logical reasoning,” Apple researchers suggest

Red-herring details lead to "catastrophic" failures of logical inference.