
Consciousness Evolves Beyond Genetics

Raúl Arrabales Moreno, Machine Consciousness researcher at Carlos III University of Madrid

“CONSCIOUSNESS EVOLVES BEYOND GENETICS”

By Ana María Jaramillo V. (Translation of an interview published by Blog Sistemas Inteligentes)

Inspired by the Strong Artificial Intelligence school, the same one that captivated audiences in movies like ‘The Matrix’ or ‘2001: A Space Odyssey’, this engineer by profession and multidisciplinary scientist by passion believes the ultimate goal of Machine Consciousness research is to understand human nature. He pursues, as only a few do, the dream of creating self-conscious robots, since, as he asserts, the best way to prove that something is understood is to recreate it.

Arrabales believes the real advances in this field will come from the synergy between mind research and technological disciplines. He expects to live to see important qualitative changes and advocates applying cognitive models from psychology and neurology to computational architectures.

This young scientist works in a controversial but fascinating field, where everyday research can shade into fantasy, raising questions about free will and determinism in both humans and their creations.

AMJ: From what I understood reading your blog, you believe in the creation of artificial consciousness, don’t you?

RA: Yes, I believe so. However, it is not clear to me when, or to what degree, we will achieve this goal. Actually, one of the most important research lines I am currently working on is focused on measuring the degree of artificial consciousness. There is no consensus about how to address this challenge; in fact, we don’t even have a clear answer about the degree of consciousness of a coma patient. The definition of the term consciousness is a problem in itself.

Is it possible to scientifically study artificial consciousness?

Some people are reluctant to put the term consciousness in the title of an article because it can still evoke a whiff of sulphur today. When you talk about consciousness, some people may understand that you are referring to the soul or to some religious aspect. Nevertheless, that is not my perspective. That is why I often stress the word scientific when I say scientific study of consciousness. This is a field in which philosophy, empirical sciences like psychology, and technical disciplines like robotics converge. Anyhow, I think that nowadays most researchers in the area of Machine Consciousness come from the Artificial Intelligence arena.

Talking about the relationship between science and technology in this particular field, do you believe new applications can be envisaged without necessarily questioning our current conceptual convictions? In other words, can engineers working alone create a human-like machine?

I would say no. You can’t fully understand concepts from other knowledge areas unless you participate in their research lines. I draw this conclusion from my own experience: I am a computer scientist, but when I have to implement a cognitive model I need to really understand what’s behind the theory. If you don’t understand the ideas that come from multidisciplinary research, you are working blind. Only a few labs in the world are exclusively dedicated to a serious attempt to build conscious machines. This is a young and immature field where advanced research platforms are still to come. However, there is a multitude of scientists dedicated to closely related fields, like those grouped under the umbrella of artificial cognitive systems. I think the study of consciousness is the last frontier of all these lines.

Can the gap be filled just with technical knowledge?

Absolutely not, and this is obvious when you attend an international conference where philosophers debate with engineers. For me it is clear that these different aspects of research cannot be separated. Some people think consciousness is already completely understood and can be implemented right now using existing technology. For instance, Pentti Haikonen, a Machine Consciousness expert, has recently published a book (Robot Brains) describing a cognitive model for conscious machines. According to him, these machines will be able to speak, understand natural language, and even possess inner speech. I don’t think we are yet able to build fully conscious human-like machines, but I know there are plenty of ideas like those proposed by Haikonen that can be explored and used to generate new advances. Some authors argue that the more neurons we are able to simulate, the closer we are to having conscious machines. However, I don’t think the problem is just a matter of quantity, but of the configuration and connection scheme of those artificial neurons.

Will we ever have conscious machines?

Hmm, many people think this is extremely complicated, even an unreachable objective. But it is great fun! Isn’t it exciting? Being pragmatic, I think we can considerably improve today’s machines. Attention is one of the areas in which remarkable advances have been achieved. The brain is highly parallel (it does lots of things at the same time) but consciousness is serial; how can this be managed? From the huge number of contents present in your mind at any given moment, how does just one of them cross the subconscious threshold and appear on the conscious scene? Consciousness is like a huge iceberg drifting in the sea: what you and I are experiencing now is just the tip. That part is selected by an attention mechanism, and that mechanism can be imitated in artificial machines. Researchers in this field are confident of achieving interesting results by combining approaches: synergy between research lines and sustained effort will be the key, not magic.
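
To make the serial-selection idea concrete, here is a minimal sketch of an attention bottleneck, purely my own illustration rather than anything described in the interview or any particular cognitive architecture. Many parallel processes propose candidate contents with a salience score, the attention mechanism selects the single most salient one, and only that content is broadcast as the "conscious scene". The names (`Content`, `attention_select`, `broadcast`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Content:
    """A candidate mental content produced by one parallel process."""
    source: str      # which subsystem proposed it (e.g. vision, memory)
    payload: str     # what the content is about
    salience: float  # how strongly it competes for attention

def attention_select(candidates):
    """Serial bottleneck: only the most salient content wins the competition."""
    return max(candidates, key=lambda c: c.salience)

def broadcast(content):
    """The winning content becomes globally available (the 'conscious scene')."""
    print(f"Conscious scene: {content.payload} (from {content.source})")

# Many processes run in parallel and propose contents on every cycle...
candidates = [
    Content("vision", "red object moving left", salience=0.7),
    Content("hearing", "someone said my name", salience=0.9),
    Content("memory", "I left the oven on", salience=0.4),
]

# ...but only one crosses the threshold into the serial conscious stream.
broadcast(attention_select(candidates))
```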

But what is the closest we have come to that ultimate goal today?

Not much. This is one of the points where people usually get it wrong. My perspective is that we tend to attribute more than is really there, as when we watch a computer working and claim it is ‘thinking’. One of the most advanced projects in the area, called Cronos, which was recently completed and funded with approximately £315,000, demonstrated a primitive mechanism for imagination. A humanoid robot controlled by artificial neurons is assigned the task of knocking down an object, without being explicitly programmed to do so. The robot uses an internal model and a physics-based simulator to find a suitable action for knocking down the object. This sort of experiment allows researchers to remove the quotes from the word imagine and say the robot is able to imagine the best way to knock down the object, and then execute the action in the real world.
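
The kind of "imagination" loop described here can be caricatured as simulate-then-act: try candidate actions in an internal forward model, keep the one predicted to succeed, and only then move the real body. The sketch below is my own illustration of that pattern, not the Cronos project's actual code; `internal_model`, `score_outcome`, and the candidate actions are hypothetical stand-ins for a real physics simulator.

```python
def internal_model(action, push_force):
    """Hypothetical forward model: predict whether the object topples.

    A crude stand-in for a physics-based simulator: the object is predicted
    to fall when the push produces enough tipping torque.
    """
    torque = push_force * action["height"]  # rough proxy for tipping torque
    return {"object_falls": torque > 1.0, "effort": push_force}

def score_outcome(predicted):
    """Prefer outcomes that knock the object down with the least effort."""
    return (1.0 if predicted["object_falls"] else 0.0) - 0.1 * predicted["effort"]

def imagine_best_action(candidates, push_force=2.0):
    """'Imagine' each candidate in the internal model before acting for real."""
    return max(candidates, key=lambda a: score_outcome(internal_model(a, push_force)))

candidate_actions = [
    {"name": "push base", "height": 0.2},
    {"name": "push middle", "height": 0.6},
    {"name": "push top", "height": 0.9},
]

best = imagine_best_action(candidate_actions)
print(f"Chosen action (to execute on the real robot): {best['name']}")
```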

Well, a program performs a simulation and then chooses an action. For you, what is the fundamental difference between a human brain and a computer?

There are factors that influence how the human brain develops, like culture and social relations, which are not present in artificial brains. There are theories that advocate the primacy of genes in the development of brains, but I think there is something special and different about human brains, because consciousness was not really required for the replication of genes. There has been an explosion: higher-level internal mental experience has emerged and has been primed by culture and social interaction. This is something that cannot be explained based only on genetics.

What has been produced by the interaction with computers?

It has created a different consciousness. The way we think today is not the same as it was three centuries ago.

But that is not an evolutionary timeframe, and biology claims that genes determine our features…

Consciousness evolves beyond genetics. There is a scientific fact that supports this claim: your DNA has no capacity to code the design of an adult brain. It is impossible; information theory tells us that there are not enough bits of information in human DNA to code all the connections of a developed brain. Without the interactions and development produced during life (ontogeny), humans as we know them are not possible. If for some reason culture vanished today, a human genetically built like you and me but not situated and developed within a culture would be quite a stupid being from our perspective. In other words, he would not develop the kind of consciousness we have now.
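
As a rough back-of-envelope in support of that point (my own numbers, using commonly cited ballpark estimates rather than anything stated in the interview): the genome stores on the order of a few gigabits, while merely naming the target neuron of every synapse in an adult brain would take something like a million times more.

```python
import math

# Commonly cited ballpark figures (assumptions, not exact values).
BASE_PAIRS = 3.2e9     # human genome length in base pairs
BITS_PER_BASE = 2      # four possible bases -> 2 bits each
NEURONS = 8.6e10       # neurons in an adult human brain
SYNAPSES = 1.0e14      # synapses, order of magnitude

genome_bits = BASE_PAIRS * BITS_PER_BASE

# Just identifying the target neuron of each synapse needs log2(NEURONS) bits,
# ignoring synaptic weights, timing, and everything else.
bits_per_synapse = math.log2(NEURONS)
wiring_bits = SYNAPSES * bits_per_synapse

print(f"Genome capacity : ~{genome_bits:.1e} bits")
print(f"Wiring diagram  : ~{wiring_bits:.1e} bits")
print(f"Shortfall factor: ~{wiring_bits / genome_bits:.0f}x")
```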

What is the last frontier?

The phenomenal aspect; that is, inner experience. It is not just the most challenging issue but also the most controversial one. For most features of consciousness (attention, imagination, advanced learning) it is commonly agreed that sooner or later we will be able to implement them in machines, but with inner experience we are not sure, and we don’t agree, what it really is. If we had the technology, we could build a robot identical to a human and behaving like a human. The difference is that if I kick you in the leg you will feel a subjective experience of pain, and probably hatred. The question is: would the robot feel something similar, or would it just be a zombie without inner experience? This is something we cannot answer yet.

What is your motivation for researching human-like machines?

I have always felt attracted to science; I’ve always been curious about how things work. I wanted to base my PhD thesis on Artificial Intelligence, but what you see in the movies is unreal, and the actual research field is another story. I am most inclined towards strong AI, which was mostly abandoned because its expectations went unfulfilled. I think Machine Consciousness researchers (including me) generally believe strong AI should be reborn. Why not? Because it is too difficult? Don’t we dare? Actually, research is not particularly easy in this field, and most of the time we might be hitting out blindly. Nevertheless, for me this is much more interesting than other well-delimited areas where people only talk about algorithms.

Will you live to see it?

There is hope, although I don’t quite see it yet. I am referring to what is called the Singularity. Its advocates claim that suddenly, in a relatively short time, our culture and life as we know them will change dramatically because of a new way of interacting with next-generation machines, something similar to the cultural revolution produced by the introduction of computers and the Internet. Living in a society where machines have a power similar to that of humans implies that the rules will change; Japan is already drafting new laws about this. I think conscious machines can be built because we are actually biological machines, and machines can be built. So far this is just a belief, not science. It will be science when I recreate consciousness in a machine and show it to you.

The real risk is that robots become fully autonomous, isn’t it?

No. The question is whether they are free. Autonomy means that my robot is able to go to the store and do the shopping on its own. A different matter is whether or not it can decide not to go. It is our responsibility to design robots that don’t suffer because they cannot do anything other than their assigned tasks. Robots are like slaves, but they shouldn’t feel anything about it.

But, why do we want slaves now if it took centuries to abolish slavery?

I hope the ultimate goal of science is to understand human nature, not to cause any harm.

Has science lost that spirit these days?

Well, if you are thinking about applications, there are many, and any tool can be a double-edged sword: a knife can be used to cut or to kill someone, and so can a robot. What should we do, not invent it? It is its usage that determines whether it is good or bad. If you endow a machine with free will, then it is no longer a machine or a tool; it is promoted to a new status, like that of a person. But what happens when the same ecosystem is shared by two species with equal power? They compete, and one wins. Anyhow, all this musing is based on conjecture, though I agree we should not neglect these issues.
Nowadays, researchers focus on apparent emotions rather than on the creation of artificial consciousness. This is useful in robotics from a practical standpoint; for instance, we can design robots to take care of elderly people while we are away.

But, don’t you prefer a real person?

Yes, sure. First of all, I would never leave someone alone with a robot designed by me (laughs), it would be too dangerous. Seriously, I think there would be advantages and disadvantages. If you trust that the machine works fine, as well as your dishwasher does its job today, then I would have no objections. A robot would be safer than a person.

Well, the machine could be safe, but what about the affective dimension?

That is relative. Think about what the company of a pet dog means to you. You probably attribute to it inner experiences it doesn’t really have, but for you the feedback you get is real, and that is the actual question. As for Machine Consciousness research, at the end of the day it doesn’t matter whether the robot really has internal phenomenal states, but whether I attribute them to it and therefore infer that it is conscious. For me, that means: mission accomplished. Today you probably prefer to interact with a real dog rather than a robotic one, but if a robot is built such that you cannot perceive the difference, it will be judged as conscious as a biological dog (no matter what inner states are present).

Beyond scientific insight, science is expected to have an impact on society, helping us live better lives…

The laws of robotics can be summarized as follows: a robot may not injure a human being, and it must protect its own existence unless doing so puts a human in danger. However, these laws are not always enforced by design; for instance, the US army has autonomous machines designed to kill. But there are also robots designed to rescue people from high-risk situations. We all agree that losing a robot is preferable to losing a life, because a robot doesn’t die, it just gets broken.

Raúl Arrabales
