Can A Robot Ever Be Conscious And How Would We Know If It Were?

Christof Koch, an early advocate of the push to identify physical correlations of consciousness, is one of the authors of the neuroscientist Guilio Tononis “theory of information integration of consciousness. The theory is, of course, only one of many, but it is remarkable for its attempt at mathematical measurability, which could help make immaterial feelings such as consciousness more tangible. Koch takes a more fundamental approach but remains optimistic that human-like AI will emerge in the near future.

It was the basis of the groundbreaking thesis on consciousness, which argued that consciousness could not be described as a physical process in the brain. More than 40 years later, advances in artificial intelligence and neural understanding have led to a reassessment of the claim that consciousness is not a “physical process” and therefore cannot be replicated in a robot. But there are still disagreements about whether machines can be conscious and, if so, how we are being told if this is the case.

It depends on your point of view whether you think robots can be conscious. We have no idea what we mean by “consciousness” when we look at our own; we think we have a rough idea of what it is like to be conscious for animals, but attributing consciousness to robots that act like humans is difficult insofar as we have no clear idea of what it would be like for us to attribute consciousness to them. In other words, attributing consciousness to a robot that behaves like a human is a presumption based on anthropomorphism.

If you accept the Dualism of Descartes, as many people do, then robots will never be conscious for the simple reason that they cannot be human because only humans have consciousness. But machine consciousness, if it exists, will not find robots tugging at our heartstrings (R2D2). The point is that machine consciousness is in principle possible, but the artificial intelligence produced will not be the ones that happen to be conscious.

According to a new line of thinking in neuroscience, consciousness involves sensory processing in the heat of the back of the brain. The technical approach assumes that consciousness emerges in the human brain in the midst of the firing of tens of millions of neurons in a network of parallel processing volumes as a high-level phenomenon and that meaningful information is at the next level. Sensory processing in this hot zone is the key to consciousness, and creatures with sensory sparkles are conscious.

As a result, the code in our DNA that shapes the architecture of operating systems and programs that develop in the brain creates consciousness in a unique way that we do not fully understand and could never duplicate. If this hypothesis is correct, the assumption that humanity is capable of building a robotic brain as complex as that of a human being, even if conscious, is a big question.

There are three interconnected and important aspects of consciousness that must be taken into account when we ask whether a robot can be conscious. Firstly, whether robots can simulate human behavior or not; secondly, whether we know what robots can do and to what extent; and thirdly, whether robots can experience things like humans.

One of the pitfalls of machine self-recognition is that human consciousness is not precisely defined, making it difficult, if not impossible for programmers to reproduce the state of an algorithm, report AI researchers in a study published in the journal Science in October 2017. The question of whether robots can be conscious is therefore reformulated: whether they can care, be aware, and act.

The scientists defined three levels of human consciousness based on calculations in the brain. The first, which they called “C0,” represents calculations that occur without our knowledge such as facial recognition, and most AI functions are like that, researchers wrote in the study.

The most important point for the development of artificial consciousness is to clarify that human consciousness is the development of functions such as self-consciousness, and he claims to have presented physical and mathematical proofs of this in his dissertation. Takeno conducted a mirror-image cognitive experiment with a robot equipped with computer modules called monads of self-confident functions. He claims that he invented these monads to build an artificial system of consciousness that creates relationships between emotions, feelings, and reason by linking modules in a hierarchy (Igarashi and Takeno, 2007).

The test of non-turfing whether a machine has consciousness rests on its ability to produce philosophical judgments that can be used to recognize, if not refute the existence of consciousness. In contrast, a strong AI assumes that the machine has no consciousness, mind, sensation, simulated thoughts, or understanding. John R. Searle is critical of strong artificial intelligence and characterizes it as “the assumption that one can program a computer in such a way that it has a mind and a sense, and that one can say that the computer understands cognitive states with the right program” (Searle 1980, p. 417).

For example, the absence of philosophical judgments causes a lack of mechanical intellect, but not a lack of consciousness. A positive result thus proves that the machine is conscious, while a negative result proves the opposite.

In order to solve specific questions of artificial consciousness, it is helpful to consider philosophical reflections on consciousness that focus on human and animal consciousness. A better understanding of artificial consciousness (or artificial rationality, artificial sensation, or similar concepts) is needed in the debate about the moral and legal status of robots as well as broader questions about how to respond and interact with machines. One strategy to avoid a narrow definition of machine consciousness is to avoid a definition.

Skeptics of robotic consciousness say that robots cannot feel anything, that they can only behave as they behave (or as Turing robots) by feeling or feeling things. But I suspect that if we think about it long enough, we will realize that it is the other mental problem that we should be concerned about: the case of man-made systems that are merely spirits, even if they are only “ghosts” of machines with normal consciousness. Let us agree with a number of vitalisms: there is no way that we can know that someone who is alive is not conscious or unable to return to consciousness.

LEAVE A REPLY

Please enter your comment!
Please enter your name here