AI safety advocates are calling on firms to perform existential threat assessments similar to those conducted for nuclear weapons. The demand stems from concerns that AI systems could eventually surpass human control, and it underscores the importance of rigorous evaluation and ethical guidelines in countries such as the US and the UK.