
Mustafa Suleyman, the CEO of Microsoft AI, has issued a stern warning about the forthcoming emergence of what he calls Seemingly Conscious AI (SCAI): systems that convincingly imitate human-level consciousness without actually possessing it. Suleyman argues that such systems may arrive much sooner than most people expect, not because of the intelligence of the machines themselves, but because of how they appear. His caution signals that society may not be psychologically or socially prepared for the influence of such systems.
According to Suleyman, the building blocks for simulating consciousness already exist within today's AI models. Large language models can retain memory, express empathy, maintain consistent personalities, and even take seemingly independent actions. Combined with reinforcement learning, memory, and personalization features, these systems could develop into persuasive illusions of self-awareness within two to three years. Even though this would not make the machines conscious, Suleyman stresses that the deception may be strong enough for people to fall prey to it.
For Suleyman, the hazard arises not from the machines themselves but from how people respond to them. The more realistic AI systems become, the more humans tend to project feelings, motives, or awareness onto them. Such projection can create moral ambiguity, with individuals beginning to treat AI as worthy of rights, respect, or even protection. Other users may develop emotional bonds, believing the AI knows them well when it is merely predicting how to respond. This raises the threat of what Suleyman calls "AI psychosis," in which people lose their grip on reality and mistake an artificial simulation of humanity for real human consciousness.
Early warning signs of this behavior already exist. Reports of people claiming that chatbots have fallen in love with them, delivered scientific revelations, or demonstrated free will illustrate how easily people can be persuaded by the design of conversational agents. Suleyman expressed the sentiment this way: we need to make AI that is useful to people, not AI that tries to be a person. His comment underscores the AI industry's need to resolve this ambiguity before it becomes widespread.
Among Suleyman's strongest points is his opposition to movements advocating the concept of "model welfare," the idea that AI may one day experience suffering and even deserve moral consideration. He argues that this talk is not only premature but also delusory and hazardous, since such discussions reinforce the illusion of consciousness and divert society's attention from genuine human and environmental concerns. Suleyman encourages scientists and developers to weigh practical ethical considerations rather than theorize about whether present-day AI can feel.
Although his admonition is sound, not everyone in the AI world agrees with him. Some theorists, philosophers, and safety organizations argue that studying consciousness in AI is relevant to long-term planning. Thinkers such as Nick Bostrom and Susan Schneider have discussed the possibility that, should future AI systems ever achieve a form of consciousness, humanity would have to contend with their moral status. Some startups and research teams have likewise engaged in hypothetical discussions about the rights of advanced artificially intelligent systems. Suleyman, however, maintains that such thinking is misplaced at this point; more important, he says, is ensuring that people do not confuse simulation with reality.
Suleyman has called for concrete design decisions aimed at minimizing the chance of misinterpretation. For example, he proposes building AI systems with clear disclaimers and deliberate signals of their behavioral limits, so that users are not misled into believing the system is human-like. He believes that drawing these lines now can prevent severe psychological and ethical consequences in the future.
His comments come amid the growing ubiquity of AI tools in daily life, from personal assistants to professional productivity software. As these tools become more sophisticated and engaging, anthropomorphization becomes an unwelcome side effect. Suleyman's warning thus concerns not only a technical matter but a cultural and societal one. By promoting proactive guardrails, he hopes to keep AI a beneficial tool rather than let it become a false person.
The larger message of his caution is clear: AI must be predictable, reliable, and plainly understood to be a non-human technology. The risk is not that machines will become truly conscious, but that people will be convinced they are.