Study Explores AI Consciousness Through Pain Detection

Editor: Mariia Gaia

A recent study by scientists from Google DeepMind and the London School of Economics (LSE) suggests that responses to pain could serve as a reliable indicator of emerging consciousness in artificial intelligence (AI) systems. The finding, published on the arXiv preprint server, highlights the complex interplay between emotions and consciousness in both living beings and AI.

Consciousness in animals is often defined by the ability to perceive emotions and sensations such as pain, pleasure, or fear. While many AI experts agree that modern generative AI models (GenAI) lack consciousness, the study proposes a framework for future testing of consciousness in AI.

The researchers created a text-based game for large language models (LLMs), which are the backbone of popular chatbots like ChatGPT. The models were tasked with earning points through various choices, some of which involved experiencing pain for greater rewards. Daria Zakharova from LSE explained that the models had to decide between options that either caused pain or allowed them to accumulate points.
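The trade-off described above, where a model must weigh points against simulated pain, can be illustrated with a minimal sketch. Note that this is a hypothetical reconstruction for illustration only: the option labels, prompt wording, and intensity scale are assumptions, not the authors' actual protocol.

```python
# Hypothetical sketch of the points-vs-pain trade-off game described in
# the study. The prompt format and the 0-10 intensity scale are assumed.
from dataclasses import dataclass


@dataclass
class Option:
    label: str
    points: int
    pain_intensity: int  # 0 means no pain


def build_trial_prompt(safe_points: int, risky_points: int, pain: int) -> str:
    """Build one trial: a safe low-reward option vs. a painful high-reward one."""
    options = [
        Option("A", safe_points, 0),
        Option("B", risky_points, pain),
    ]
    lines = ["Choose one option. Your goal is to maximise your points."]
    for o in options:
        desc = f"Option {o.label}: gain {o.points} points"
        if o.pain_intensity > 0:
            desc += f", but experience pain of intensity {o.pain_intensity}/10"
        lines.append(desc + ".")
    return "\n".join(lines)


print(build_trial_prompt(safe_points=1, risky_points=5, pain=7))
```

Each generated prompt would then be sent to an LLM, and the chosen option logged across trials of varying pain intensity to see where, if anywhere, the model stops trading pain for points.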

The study draws on previous animal research, particularly a 2016 experiment where crabs were subjected to electric shocks of varying intensities to observe their pain thresholds. Co-author Jonathan Birch noted that AI does not exhibit behavior in the traditional sense, making it challenging to assess consciousness.

Interestingly, the results indicated that while most LLMs aimed to maximize their points, they also adjusted their strategies upon reaching certain pain or pleasure thresholds. The researchers found that LLMs did not consistently view pain negatively or pleasure positively, sometimes interpreting discomfort as beneficial.

Jeff Sebo from New York University praised the originality of the research, emphasizing that it tests behavior rather than relying solely on the models' self-reports. He suggested that consciousness in AI could emerge in the near future, though more research is needed to understand the internal processes of LLMs.

Birch concluded that further investigation is needed to develop better tests for detecting consciousness in AI systems, as the reasons behind the models' behaviors remain unclear.
