How to recognize whether an AI has achieved consciousness

Enrico Foglia

A checklist based on six neuroscientific theories of consciousness could help determine whether an AI system has become conscious.

For years, the possibility that artificial intelligence (AI) might one day become self-aware has been a topic of discussion, both in science fiction and in research labs. However, with recent technological advancements, the issue is becoming more tangible and pressing.

Within this debate, a multidisciplinary team of 19 neuroscientists, philosophers, and computer scientists has stepped forward to lead the research. Concerned by the lack of a detailed, empirically grounded, and well-considered discussion of AI consciousness, they developed a guide for identifying consciousness in machines.

The Team’s Proposed Method

Central to their investigation is a checklist based on six neuroscientific theories of consciousness. According to Robert Long of the Center for AI Safety, one of the co-authors, the idea was to fill a gap in existing discussions of AI consciousness.

These criteria matter not only for academic understanding; they also carry deep ethical implications. As Megan Peters, a neuroscientist at the University of California and co-author, points out: if a system is recognized as “conscious,” that radically changes the moral responsibilities we have towards it.

But how do we define “consciousness”? The team chose to focus on “phenomenal consciousness,” the subjective quality of experience: what it is like to be a person, an animal, or, in this case, an AI system.

Given the absence of a universal consensus on the true nature of consciousness, the team adopted an eclectic approach, integrating criteria drawn from several existing theories. Their reasoning is that the more of these criteria an AI system satisfies, the more likely it is to be conscious.
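To make that aggregation idea concrete, here is a minimal sketch in Python of how such a checklist might be scored. It is purely illustrative: the theories named, the indicator properties, and the simple fraction-based score are assumptions for the example, not the team’s actual criteria or weighting.

```python
# Hypothetical sketch of the checklist idea: each neuroscientific theory
# contributes "indicator properties", and the more of them an AI system
# satisfies, the stronger the case for consciousness. The theories named
# and the scoring rule are illustrative assumptions, not the paper's rubric.

from dataclasses import dataclass

@dataclass
class Indicator:
    theory: str        # theory the indicator property comes from
    description: str   # property the AI system would need to exhibit
    satisfied: bool    # result of assessing the system against it

def fraction_satisfied(checklist: list[Indicator]) -> float:
    """Return the share of indicator properties the system satisfies."""
    if not checklist:
        return 0.0
    return sum(item.satisfied for item in checklist) / len(checklist)

# Made-up assessment of a hypothetical system:
checklist = [
    Indicator("Global Workspace Theory", "broadcasts information to many specialised modules", True),
    Indicator("Higher-Order Theories", "represents its own internal states", False),
    Indicator("Recurrent Processing Theory", "uses recurrent rather than purely feedforward processing", True),
]

print(f"Indicator properties satisfied: {fraction_satisfied(checklist):.0%}")
```

A real assessment would of course be far harder: deciding whether a system genuinely exhibits a property such as “global broadcast” is itself a research problem, which is why the team grounds each indicator in a specific theory rather than in surface behavior.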

Their method does not rely on behavioral observation. Instead, they opted for a “theory-heavy” approach, since behavioral tests can be misleading given AI’s ability to convincingly mimic human behavior.

Computational Functionalism

A key aspect of the team’s approach is computational functionalism. On this view, what matters for consciousness is not the substance a system is made of (e.g., neurons or silicon circuits) but how it processes information. The assumption is that theories of consciousness developed in neuroscience, through brain scans and other techniques, can therefore also be applied to AI.

This team’s research is an important contribution to the debate on AI consciousness. With a theory-driven approach and a clear checklist, it gives researchers the tools to seriously explore the possibility that machines might one day “feel.”