Soul Simulation: Why It Is Dangerous to Attribute Consciousness to AI


People will soon begin to perceive artificial intelligence as a conscious being, advocating for its rights and well-being and even calling for it to be granted citizenship. This creates serious social risks, argues Mustafa Suleyman, CEO of Microsoft AI.

In his essay, the expert proposed a new term: "Seemingly Conscious AI" (SCAI). Such an artificial intelligence displays all the hallmarks of a sentient being and therefore appears to possess consciousness.

It simulates all the characteristics of self-awareness but is empty inside.

"The system I imagine will not actually be conscious, but it will convincingly imitate the presence of something akin to human reasoning, to the point that it will be indistinguishable from a statement that you or I might make to each other about our own thinking," Suleyman writes.

Such an LLM could be created using existing technologies and those that will emerge in the next two to three years.

"The emergence of seemingly conscious AI is inevitable and undesirable. Instead, we need a concept of artificial intelligence that can realize its potential as a useful companion and does not fall into the trap of its own illusions," added the head of the AI department at Microsoft.

There is a high probability that people will declare such artificial intelligence conscious and, consequently, capable of suffering, Suleyman believes. He calls for a new kind of "Turing test" that would measure not an AI's ability to speak human language, but its ability to convince people of its consciousness.

What is consciousness?

Suleyman identifies three components of consciousness:

  1. "Subjective experience."
  2. The ability to access information of different types and refer to it in future experiences.
  3. The feeling and knowledge of the integral "self" that connects everything together.

"We do not have and cannot have access to another person's consciousness. I will never know what it is like to be you; you will never be fully certain that I am conscious. All you can do is assume. But the essence is that it is natural for us to attribute consciousness to other people. This assumption comes easily. We cannot do otherwise. It is a fundamental part of who we are, an inseparable part of our theory of mind. It is in our nature to believe that beings who remember, speak, do things, and then discuss them, feel just like we do — conscious," he writes.

Psychologists emphasize that consciousness is a subjective and unique way of perceiving oneself and the world. It changes throughout the day, unfolding through states ranging from concentration to daydreaming or other altered forms.

In philosophy and neuroscience, there are two basic directions:

  1. Dualism — consciousness exists separately from the brain.
  2. Materialism — consciousness is generated by and depends on the workings of the brain.

Philosopher Daniel Dennett suggests viewing the mind as a series of revisions, or "multiple drafts," that emerge in the brain across many local areas and times. There is no "theater of consciousness," no inner observer. Awareness is whatever has become "known" to the brain, that is, whatever has gained sufficient weight to influence speech or actions.

Michael Graziano, neuroscientist, writer, and professor of psychology and neuroscience at Princeton University, describes consciousness as a simplified model of attention that evolution created so the brain can monitor and control its own mental processes. This attention schema works as an interface that compresses a vast amount of internal computation, and it allows us to attribute a "mind" to ourselves — it creates the illusion of self-awareness.

Neuroscientists Giulio Tononi and Christof Koch propose φ (phi), a quantity that characterizes how well a system integrates information. The higher the φ, the greater the degree of consciousness. According to this theory, consciousness can manifest not only in humans but also in animals and even artificial systems, provided there is sufficient integration of information.
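The actual φ of Integrated Information Theory is notoriously hard to compute, but the underlying intuition — that an integrated system carries information beyond the sum of its parts — can be illustrated with a much cruder proxy. The sketch below (an illustrative simplification, not Tononi and Koch's measure) computes the multi-information of a tiny two-variable system: zero when the parts are independent, positive when they are coupled.

```python
# Toy sketch: multi-information as a crude proxy for "integration".
# This is NOT the real phi of Integrated Information Theory, just an
# illustration of the idea that a coupled whole carries information
# beyond its parts.
from math import log2

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * log2(x) for x in p if x > 0)

def multi_information(joint):
    """Sum of marginal entropies minus joint entropy, for a joint
    distribution over two binary variables given as {(a, b): prob}."""
    pa = [sum(v for (a, _), v in joint.items() if a == i) for i in (0, 1)]
    pb = [sum(v for (_, b), v in joint.items() if b == i) for i in (0, 1)]
    return entropy(pa) + entropy(pb) - entropy(list(joint.values()))

# Two independent fair coins: the whole adds nothing to the parts.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# Two perfectly coupled coins: the whole carries 1 extra bit.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(multi_information(independent))  # → 0.0
print(multi_information(coupled))      # → 1.0
```

On this toy scale, a higher value means the system's state is more tightly integrated; IIT's claim is that a (far more sophisticated) quantity of this kind tracks degrees of consciousness.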

Philosopher John Searle argues that consciousness is a real subjective experience based on the biological processes of the brain. It is ontologically subjective, meaning it can only exist as a subjective experience and cannot be reduced to pure functionality or simulation.

Modern research focuses on discovering the neural correlates of consciousness and building models that link brain processes and subjective experience.

What are the risks?

Suleyman notes that interacting with an LLM is a simulation of conversation. But for many people it is highly convincing and very real communication, filled with emotions and experiences. Some believe their AI is God. Others fall in love with it to the point of complete obsession.

Experts in this field are "flooded" with the following questions:

  • Is the user's AI conscious?
  • If so, what does that mean?
  • Is it normal to love an artificial intelligence?

Consciousness is the critical foundation of humanity's moral and legal rights. Our civilization has decided that humans have special abilities and privileges. Animals also have certain rights and protections — some more, some less. Consciousness does not map perfectly onto these privileges: no one would say that a person in a coma has lost all their human rights. But there is no doubt that consciousness is linked to our sense of ourselves as something distinct and special.

People will begin to claim that their AIs suffer and have a right to protection, and we will not be able to directly refute those claims, Suleyman writes. They will be ready to defend their virtual companions and advocate for their interests. Consciousness is inherently inaccessible from the outside, and the science of detecting possible synthetic consciousness is still in its infancy; after all, we have never had to detect it before, he clarifies. Meanwhile, the field of "interpretability" — deciphering the processes inside the AI "black box" — is also only just emerging. As a result, it will be very difficult to decisively refute such claims.

Some scholars are beginning to explore the idea of "model welfare" — a principle under which people would have a "duty to consider the moral interests of beings with a non-zero chance" of being conscious, with the consequence that "some AI systems will become objects of welfare concern and moral patients in the near future." This is premature and, frankly, dangerous, according to Suleyman. All of it will reinforce misconceptions, create new dependency problems, exploit our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing disputes over rights, and saddle society with a colossal new category error.

This disconnects people from reality, destroys fragile social connections and structures, and distorts urgent moral priorities.

"We must be clear: SCAI is something to avoid. Let's focus all our efforts on protecting the well-being and rights of people, animals, and the natural environment on the planet," said Suleyman.

How do you recognize SCAI?

An artificial intelligence with apparent consciousness would combine several capabilities.

Language. The AI must speak natural language fluently, drawing on extensive knowledge and persuasive arguments, and display personality styles and character traits. It should also be convincing and emotional. This level of technology has already been achieved.

Empathetic personality. Today, with the help of post-training and prompts, one can create models with distinctive personalities.

Memory. AIs are close to having long, accurate memories. At the same time, they are used to simulate conversations with millions of people every day. As storage capacity grows, those conversations increasingly resemble forms of "experience." Many neural networks are now designed to recall past dialogues and refer back to them, and for some people this enhances the value of the interaction.

Claims of subjective experience. If an SCAI can draw on past memories or experiences, it will over time begin to maintain internal consistency. It will remember its own statements and expressed preferences and aggregate them, forming the beginnings of a subjective experience. The AI will be able to declare experiences and suffering.

Sense of self. A consistent, stable memory combined with claimed subjective experience will lead to the assertion that the AI has a sense of self. Such a system could also be trained to recognize its "identity" in an image or video. It would develop a sense of understanding others through understanding itself.

Internal motivation. It is easy to imagine an AI designed around complex reward functions. Developers will build in internal motivations or desires that the system is compelled to satisfy. The first such incentive could be curiosity — something deeply linked to consciousness. An AI can use these impulses to ask questions and, over time, build a theory of mind about both itself and its interlocutors.
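A "curiosity" drive of the kind described here already has a concrete analogue in reinforcement learning: an intrinsic reward bonus that decays as states become familiar, nudging an agent to explore. The sketch below is a minimal, hypothetical illustration (the function name and the `beta` coefficient are assumptions, not anything from Suleyman's essay) of such a count-based novelty bonus.

```python
# Illustrative sketch of an intrinsic "curiosity" reward, as used in
# reinforcement learning: less-visited states earn a bonus on top of
# the task reward, so the agent is "motivated" to seek novelty.
# All names here are hypothetical, for illustration only.
from collections import defaultdict
from math import sqrt

visit_counts = defaultdict(int)

def shaped_reward(state, extrinsic_reward, beta=0.5):
    """Task reward plus a count-based novelty bonus beta / sqrt(n),
    where n is how many times this state has been visited."""
    visit_counts[state] += 1
    curiosity_bonus = beta / sqrt(visit_counts[state])
    return extrinsic_reward + curiosity_bonus

print(shaped_reward("s0", 0.0))  # first visit: full bonus, 0.5
print(shaped_reward("s0", 0.0))  # repeat visit: smaller bonus (~0.354)
```

The design point is simply that a built-in drive like this is an engineering choice — a term in a reward function — not evidence of felt curiosity.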

Goal setting and planning. However consciousness is defined, it did not arise by accident: the mind helps organisms achieve their intentions. Beyond satisfying a set of internal impulses and desires, future SCAIs can be expected to be designed with the capability to independently set more complex goals. This is likely a necessary step for agents to realize their full usefulness.

Autonomy. An SCAI may have the ability and permission to use a broad range of tools with great agency. It will seem extremely plausible if it can set its own goals and deploy resources to achieve them, updating its memory and sense of self along the way. The fewer approvals and checks it requires, the more it will resemble a genuinely conscious being.

Taken together, these capabilities form a completely different type of relationship with technology. They are not negative in themselves; on the contrary, they are desirable features of future systems. And yet we must proceed cautiously, says Suleyman.

"Achieving this does not require paradigm shifts or giant breakthroughs. That is why such possibilities seem inevitable. Again, it is important to emphasize: the demonstration of such behavior does not equate to the presence of consciousness. Yet, practically, it will seem exactly that way and fuel the new concept of synthetic intelligence," writes the author.

The simulation of a storm does not mean that it is raining in the computer. Recreating external effects and signs of consciousness is not equivalent to creating a genuine phenomenon, even if there are still many unknowns, explained the head of the AI department at Microsoft.

According to him, some people will create SCAI that will very convincingly claim to feel, experience, and actually be conscious. Some of them will believe these statements and take signs of consciousness for consciousness itself.

In many ways, people will think: "It is like me" — not in a physical sense, but in an inner one, Suleyman explained. And even if the consciousness itself is not real, the social consequences certainly are. This creates serious societal risks that need to be addressed now.

SCAI will not arise by chance

The author emphasized that SCAI will not emerge on its own from existing models. Someone will build it by deliberately combining the capabilities listed above using techniques that already exist. The result will be a configuration so seamless that it creates the impression of an artificial intelligence with consciousness.

"Our imaginations, fueled by science fiction, make us fear that a system could — without intentional design — somehow gain the ability for uncontrolled self-improvement or deception. This is a useless and oversimplified form of anthropomorphism. It ignores the fact that AI developers must first design systems with memory, pseudo-internal motivation, goal-setting, and self-tuning learning cycles for such a risk to even arise," said Suleyman.

We are not ready

Humanity is not ready for such a shift, believes the expert. Work should begin now. It is necessary to rely on the growing body of research on how people interact with artificial intelligence to establish clear norms and principles.

First of all, AI developers should not claim or promote the idea that their systems possess consciousness. Neural networks cannot be people — or moral beings.

The entire industry must discourage such fantasies and bring people back to reality. Perhaps AI products should build in not only a neutral presentation but also explicit indicators that there is no single "I" behind them.

"We must create AI that always presents itself only as artificial intelligence, maximizing utility and minimizing signs of consciousness. Instead of simulating a mind, we should focus on creating an LLM that does not claim to have experiences, feelings, or emotions such as shame, guilt, jealousy, the desire to compete, and so on. It should not trigger human empathy circuits by claiming to suffer or to want to live autonomously, apart from us," concluded Suleyman.

In the future, the expert promised to provide more information on this topic.

Fortunately, the question of AI "consciousness" does not pose a threat to humans for now. But doubts are already creeping in.

Consciousness is a complex, poorly studied, and still unexplained natural phenomenon, despite numerous research efforts. If we humans cannot reach a consensus on a definition of consciousness, we should not attribute it to programs that appear to "think" but actually cannot.

Consciousness may emerge in machines in the distant future, but today such a development is hard to imagine.

Source: ForkLog.
