Large language models like ChatGPT are not conscious, but there are other “serious contenders for AI consciousness that exist today.” Furthermore, “AI development will not wait for philosophers and cognitive scientists to agree on what constitutes machine consciousness… There are pressing ethical issues we must face now.” So writes Susan Schneider in the following guest post.

Dr. Schneider is professor of philosophy at Florida Atlantic University, director of its Center for the Future of AI, Mind and Society, and co-director of its Machine Perception and Cognitive Robotics Lab. She is the author of Artificial You: AI and the Future of Your Mind, among other works.

(An earlier version of the following was originally published as a report by the Center for the Future of AI, Mind and Society, Florida Atlantic University, and posted on Medium. Note that an unrevised draft was mistakenly posted here for several minutes; the revised version is below.)

Which AIs Might Be Conscious, and Why It Matters
by Susan Schneider

In a recent New York Times opinion piece, philosopher Barbara Montero urges: “A.I. is on its way to doing something even more remarkable: becoming conscious.” Her view is illustrative of a larger public tendency to suspect that the impressive linguistic abilities of LLMs suggest that they are conscious—that it feels like something from the inside to be them. After all, these systems have expressed feelings, including claims of consciousness. Ignoring these claims may strike one as speciesist, yet once we look under the hood, there’s no reason to think systems like ChatGPT or Gemini are conscious. Further, we should not allow ourselves to be distracted from more plausible cases of conscious AI. There are already serious contenders for AI consciousness that exist today.

The linguistic capabilities of LLM chatbots, including their occasional claims that they are conscious, can be explained without positing genuine consciousness. As I’ve argued elsewhere [1], there is a far more mundane account of what is going on. Today’s LLMs have been trained on a vast trove of human data, including data on consciousness and beliefs about feelings, selves, and minds. When they report consciousness or emotion, it is not that they are engaging in deceptive behaviors, trying to convince us they deserve rights. It…
