Cognitive Robotics

In this article I aim to provide a comprehensive introduction to the field of cognitive robotics, with definitions, examples, and links to information resources, courses, and research projects. I also discuss the research motivations of the field, its main application areas, and its inspiration in natural cognitive systems.

The field of Cognitive Robotics is closely related to Machine Consciousness (MC). Indeed, I consider MC a subfield, or a specific focus, of research on Cognitive Robotics. Any implementation of the functionality of consciousness has to be framed within a cognitive architecture: consciousness per se does not make sense unless it is integrated into a subject able to carry out end-to-end (embodied) processes such as perception and behavior.

The ultimate aim of the development of cognitive architectures is the implementation of machines that are able to “know what they are doing”, and are thus more robust, adaptive, and flexible. Social robots are a significant example of the kind of application that cognitive robots (and particularly conscious robots) might enable. Interacting with humans is an extremely complex task in which all of these cognitive capabilities are required.

Future cognitive robots are expected to be able to interact with humans, acting and learning in unpredictable environments.

Introduction to Cognitive Robotics (excerpt taken from [0])

Research in robotics has traditionally emphasized low-level sensing and control tasks, including sensory processing, path planning, and manipulator design and control. In contrast, research in cognitive robotics is concerned with endowing robots and software agents with higher-level cognitive functions that enable them to reason, act and perceive in a robust manner in changing, incompletely known, and unpredictable environments. Such robots must, for example, be able to reason about goals, actions, resources (linear and/or non-linear, discrete and/or continuous, replenishable or expendable), when to perceive and what to look for, the cognitive states of other agents, time, collaborative task execution, etc. In short, cognitive robotics is concerned with integrating reasoning, perception and action within a uniform theoretical and implementation framework.

The use of both software robots (softbots) and robotic artifacts in everyday life is on the upswing, and we are seeing ever more examples of their use in society, with commercial products around the corner and some already on the market. As interaction with humans increases, so does the demand for sophisticated robotic capabilities associated with deliberation and high-level cognitive functions. Combining results from the traditional robotics discipline with those from AI and cognitive science has been, and will continue to be, central to research in cognitive robotics.
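
To make the excerpt's idea of “integrating reasoning, perception and action” concrete, here is a minimal sketch of a sense-deliberate-act loop. This is my own toy illustration, not code from any existing cognitive architecture; all the names (CognitiveAgent, perceive, deliberate, act) are hypothetical.

```python
# Toy sense-deliberate-act loop; all names are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class CognitiveAgent:
    goals: list = field(default_factory=list)     # desired facts about the world
    beliefs: dict = field(default_factory=dict)   # current (incomplete) world model

    def perceive(self, observation: dict) -> None:
        # Fold possibly incomplete sensor data into the belief state.
        self.beliefs.update(observation)

    def deliberate(self) -> str:
        # Reason over goals and beliefs: pursue the first unsatisfied goal.
        for goal in self.goals:
            if not self.beliefs.get(goal, False):
                return f"achieve:{goal}"
        return "idle"

    def act(self, action: str) -> None:
        # A real robot would drive actuators here; we just report the choice.
        print(f"executing '{action}' given beliefs {self.beliefs}")

agent = CognitiveAgent(goals=["door_open"])
agent.perceive({"door_open": False, "battery_ok": True})  # sense
agent.act(agent.deliberate())                             # reason, then act
```

The point of the sketch is only structural: perception, reasoning, and action share a single belief state, rather than living in separate, loosely coupled modules.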


Deus ex Machina. Interview with Anne Foerst

This is a transcript of a Berkeley Groks interview with Anne Foerst entitled Deus ex Machina.
December 22, 2004.

Prof. Anne Foerst
Visiting Professor of Theology and Computer Science, St. Bonaventure University
Author, God in the Machine: What Robots Teach Us about Humanity and God

Robots have fascinated the public for years, appearing in countless films, books, and television shows.  The increasingly lifelike capabilities of robots in the real world have many prominent thinkers wondering how humans and their creations will interact in the future.  Will a thinking machine be regarded as a person, differing from humans only in design?  Will even the most humanlike robots be seen as possessing a soul?

Joining us today to discuss these issues of robots and our humanity is Prof. Anne Foerst.  Prof. Foerst is a theologian and research scientist, and visiting professor of theology and computer science at St. Bonaventure University.  Formerly she was a research scientist at the artificial intelligence laboratory at MIT, where she founded and directed the God and Computers project.  She is the author of the new book, God in the Machine: What Robots Teach Us about Humanity and God.

Prof. Anne Foerst (AF) joins Charles Lee (CL) to discuss robotics and theology.

CL:  It’s a pleasure to have you on the program.  You’ve written an interesting, and I would have to say somewhat controversial new book.

AF:  Thank you.

CL:  Well, this is indeed an issue that some people might find somewhat controversial, the relationship between robot building and god.  Some might find this incompatible.  How can the two be related?

AF:  First of all, what I realized was that when we try to build humanoid robots in our image, we realize the complexity of humans.  When we really try to build something that moves like us, acts like us, and is smart like us, our appreciation for nature grows.  And, I would put that in spiritual terms, our admiration for god’s creation grows.  So, building humanoid robots in our image is in a way a spiritual enterprise.

CL:  So what then is the motivation for building robots?

AF:  In the literature, you find two motivations.  The first is more hubristic and arrogant: playing god, trying to equal god, and so on.  The other is a more modest goal: trying to find out how we work, learning more about who we are, learning more about what makes us human by trying to rebuild us, seeing the failures and successes, and thereby finding some answer to who we are.  And, actually, it’s the second approach that I mostly appreciate and discuss in my book.  I’m not interested in people who are arrogant.  I’m interested in people who are seriously interested in who we are, and who try to answer that millennia-old question by building robots.

CL:  So, one of the stories you mention in your book is from the Jewish traditions of golem building, and how one can read those two points of view from those stories as well.

AF:  The golem stories are interesting, because they really have the explicit motive of prayer.  Golem building, the construction of artificial humans from clay, is actually a prayer.  And, the nice thing about the golem traditions, which go back to the 13th century to the Jewish mysticism called Kabbalah, is the real connection between current artificial intelligence research and this old Eastern European motif.  Several of the early top-notch AI researchers come from that tradition, actually relate themselves to that tradition, and see those ancient golem builders as their ancestors.

CL:  You also talk about humans as storytellers.  How does this relate to our motivation for building robots and understanding ourselves?

AF:  People have always analyzed humans as thinkers.  So, since the early beginnings of artificial intelligence, people have tried to build machines or programs that could think: play chess, prove mathematical theorems, etc.  And, I define humans as mainly embodied and interactive.  That is really what makes us human: not the individual that thinks, but the person that is in community with other people through embodiment.  We share a physical world.  And, we interact in that world through stories.  It’s interesting because you can see it at the very profane level of neurons, in pattern recognition and the way neurons interact, that our brain already creates narratives.  And, you can recognize it at the highest level of human societies interacting, where they have narratives about how they came into place, how they came into being, what their meaning is, what their identity is, etc.

So, we tell stories on all different levels.  And, I found out that this is actually a very fruitful attempt to bring theology into the realm of science, because science tells a lot of stories.  And, theology obviously tells a lot of stories too.  And, I think instead of playing out, “Oh, that is objective science”, which doesn’t exist in fact, and, “Oh, that is subjective theology”, I think the storytelling approach actually brings us much closer to talking to one another and enriching both sides of the dialogue.

CL:  Indeed.  You do mention in one of your chapters about the sort of conflict that arose when you attempted to bring these two sides together at MIT.

AF:  Yes.  Um, some people didn’t like me there.  It took probably at least a year before heads would stop turning whenever the term “evolution” was mentioned, because they couldn’t imagine that I, as a theologian, had no trouble with evolution.  For me as a German academic theologian, the whole concept of evolution being in conflict with creation is utterly ridiculous.  Before I came to this country, I had never encountered it, because it’s very alien.  It really only exists in certain parts of the United States.  It doesn’t exist anywhere else in the world.  And, so when I encountered it, my explanation for it was that people try to reduce Christianity to something that is quite irrational, because it is easier to discard it then.  And, I think people, when they encountered me at the beginning, were kind of nervous, because I’m not a wacko.

CL:  Harder to discount, then…

AF:  Yeah.  I think at least I’m not, perhaps I am.  So, people were a little bit disconcerted and they tried to put me in an irrational category.  They tried to denounce me as psychologically deluded, claiming that I would destroy MIT’s objectivity.  They had all of these interesting attacks, in order to kind of handle me.  In a way, there was a real war out there, and people tried to get rid of me.  When it happened, I was quite hurt, but I had tons of supporters.  There were tons of wonderful people who really supported me, so in retrospect it was quite exciting.  You know, it was kind of funny how religious those people were in their rejection of religion.  So, it was quite emotional and interesting.

CL:  Well, one of the parallels you allude to is that science is somewhat of a religion itself.

AF:  Yes.  Though I think there are scientists who are not religious in that way.  And, those are usually the scientists that I interact with the most.  But, let’s take the project of building humanoid robots in our image.  When you try to build humanoid robots that are like us, you have to assume that humans are some kind of machine.  Right?  Otherwise, we couldn’t rebuild ourselves.  And, that’s perfectly fine, because science always operates from assumptions, and then we see how far we get.  And, there is nothing wrong with that assumption.

The problem is when scientists then say, “Ah, that means we are nothing but machines.”  So, they turn a statement they assume for pragmatic reasons into a statement about the reality of the world.  And, that is when I run into problems with them.  People who do that are usually the ones who do not believe that humans are narrators.  When I think about people like Rodney Brooks, who was my boss at MIT and the head of the whole robot project, he was very happy to say, “Of course, in my work, I have to assume that humans are nothing but machines, but I don’t want to be treated as a machine, and I don’t treat my kids as machines.”  And, he was perfectly happy to have that contrast within himself.  They were different stories that relate to different parts of who we are.  But, a lot of people are uncomfortable with that kind of discrepancy.  So, they try to prove the correctness of their assumption.  And, they then become religious.

CL:  Almost fanatical, one might say.

AF:  Yes.  Yes.  You know, what is so sad is that the fanaticism happens on both sides.  Of course, there are religious people who try to prove that humans are more than machines, and they can’t do that either, because there is absolutely no evidence of something like a soul.  So, you basically have people on both sides yelling at each other.  Only if we admit that humans can have many different kinds of narratives, which are all true but don’t necessarily have to be coherent, can we make peace with it.

CL:  Should we try to reconcile all of these separate stories?

AF:  Reconcile is sort of a difficult word, because I don’t think that they can be reconciled.  I don’t think they can be turned into one single story.  But, I think they all make sense in certain contexts.  I mean, if I want to find out how a drug works, or how to perform certain surgeries, or other medical things, every idea of humans being more than machines gets in the way, right?  I have to look at the functions and mechanisms of the human system.

On the other hand, when I want to establish something like dignity and personhood, or fight for humanitarian causes against genocide, I have to assume that humans are more than machines, because otherwise I can’t give any reason for the protection of humanity, right?  And, both of those stories make sense, but in very different contexts.  So, both of those stories are equally important.

CL:  So, when you were at MIT, you worked on the development of the Cog and Kismet robots.

AF:  When I first came to MIT, the team had just started to build Cog, who was only a year old at the time.  Cog was a humanoid robot, at first with one arm, then two arms, a head, torso, gyroscope, eyes, ears, and all that kind of stuff.  Cog was very cute and learned to coordinate its various body parts.  It was really the first real-life attempt to build a humanoid robot, the first time a serious robot team said that it wanted to build a humanoid, basically turning science fiction into reality.  And, I was quite intrigued by that.

And, what I especially liked about the whole Cog project was that these people did not say that the core of intelligence is chess playing and mathematical theorem proving; they said that intelligence is the result of humans’ ingenious capability to survive everywhere.  In other words, intelligence is a result of evolution, and the fact that we can play chess and do math is more or less a by-product.  So, these people were interested in social interaction and coordinating the body.  That intrigued me because that is the type of understanding of humans that we can find in the Bible.  In the Bible, what defines humans is their capability for relationships, especially with God of course, and then with one another.  So, I felt very much at home there.

But, the problem with Cog was that the team had so much difficulty trying to get Cog to coordinate its body.  Also, Cog was very massive and tall, and created a lot of anxiety and fear.  So, after some years, Cynthia Breazeal, who is now a professor at the MIT Media Lab, started Kismet, because I had always been saying that when you look at human development, humans exist only in relation.  We are so fundamentally relational.  And, with Cog, there was too much that the robot did on its own, and I always thought that we needed a robot that was purely reactive, one that just reacts to input from the outside world, to see what such a robot could do.

So, Cynthia had the same ideas, and came back with the first proto-Kismet.  From then on, Cog was dead to me, and I only interacted with Kismet.  Kismet has a very cute face, and you basically fall in love with it right away.  It mimics your facial expressions and babbles, and it interacts with you in a quite profound way.  And that was very intriguing.  When I presented Kismet to non-technical audiences, they were much more fascinated by it, but also much more afraid.  They were afraid of all the emotions that they had towards that robot, which they didn’t want to have, because it’s just a stupid machine, right?  And so, Kismet was much more powerful for doing what I wanted to do, which was confronting people with their own mechanisms of social interaction, confronting them with the fact that they are capable of reacting to just about anything, and making them aware of who they are.

CL:  In the book, you mention that even a very simple computer program written in the sixties was capable of eliciting these types of emotions.

AF:  Yes, ELIZA.  It was actually written by Joseph Weizenbaum, who was an AI professor.  And the fact that his students used that program, a simple question-and-answer program, to solve their own personal problems turned him into a critic of AI.  And, it was actually Joseph Weizenbaum who got me to MIT.  I met him in Germany, where he gave a talk.  That talk actually inspired me to do the research that I am doing.  And, it was he who invited me to MIT.  It was interesting, because I think he hoped that I would become something of a critic of AI myself, but I never became one.  I think there is too much reason behind it.

CL:  So, what does this mean for our development of robots?  Do you think there will be a community of robots and humans interacting as storytellers?

AF:  Yes, definitely.  I always like to think that robots will be our future partner species.  In a way when you look at humans, we are so desperately lonely.  We look desperately for animal intelligence by trying to communicate with chimps and dolphins.  And, at the same time we look for extraterrestrial intelligence.  So, in a way, we are a very lonely species.  And, for me as a theologian, that is because we lost our relationship with God that was started at the beginning.  So, it makes sense that we would try to build a species that would be our partners and friends.

And, I think there is a good chance that those robots will become exactly that.  What we have learned from Cog and Kismet is that the best way to build an intelligent machine is to build it like a human newborn, and, like a human newborn, let it grow in intelligence through its interaction with humans.  And, so those robots can only become smart if they have interaction, and that’s exactly how a human baby eventually becomes a smart grownup.  So, we have to build them to be communal, which will be the most fascinating thing about them.

I always refer to the Frankenstein story.  People always tell the story with Frankenstein building the monster, who then turns against humanity, right?  But, the way I see it is that Frankenstein builds this monster or creature, and then leaves the creature alone.  And, the creature never has a chance to bond, and this is why it becomes evil.  And so, if we build robots such that we have to interact with them in a good way to make them smart, then it will be a great relationship.

CL:  Is this the direction people are taking in terms of building intelligent robots?

AF:  Yes.  There is still the old camp that thinks you can build intelligent programs in disembodied, unconnected machines.  And, intelligent databases like that certainly serve their purpose.  But, I think when it comes to real interactive technology, then we have to build them embodied and social.

CL:  So, we are running out of time, but I am curious, how did you become interested in this topic?

AF:  Well, it’s kind of funny.  First of all, I was always fascinated by technology.  When I was four years old, I was building my own machines.  But at the same time, I have to admit that I fell in love with humans early on, and I’m fascinated to see how we work.  So, I first started to study theology, and then realized that I wanted to do something technical too.  It’s really my quest to determine: what does it mean to be human?  Who am I?  That kind of drove me to gather as many insights from as many disciplines as possible.  And I guess AI and religion are the ones I’m gifted for.

CL:  Well, you’ve certainly made a major effort in a unique academic field here.

AF:  Well, thank you.  I tried to, and I hope it is received well.  It was a really hard project, and just writing the book took I don’t know how many years.

CL:  Congratulations again on your book, God in the Machine, which I believe is in stores today.

AF:  Yes, it is in stores as of today.

CL:  Thank you for joining us today on Berkeley Groks.

AF:  Thank you.

Berkeley Groks 2004 www.groks.net groks@hotmail.com

Original transcript at Berkeley Groks.

Robot Free Will

At any given time the mind has to make decisions, and multiple actions are carried out unconsciously. Our conscious mind continuously confabulates, making up the illusion that it is in charge. But who is actually in charge?

Can science tell us what exactly human nature is? Can we reproduce it in artificial machines? Consciousness and free will have typically evaded the scientific arena. However, in recent decades, philosophers and scientists have begun to work together in the search for a scientific explanation of the mind. A review of Dennett’s book Freedom Evolves [1] by Simon Blackburn [2] points out why scientists need philosophers. Libet’s experiments show that:

[…] neural activity that begins an action starts up around a third of a second before the agent’s conscious decision to act […]

Neuroscientists have usually interpreted this as showing that the feeling of being in charge is an illusion. Dennett argues that this is a mistaken view. Instead, a conscious agent must be seen as a continuum, where there is no single moment of decision. The interventionist conception deduced from Libet’s experiments usually leads scientists to think that evolution and culture have created a prison for the mind. Dennett argues the contrary, as he thinks evolution and culture are the key differentiators that make us, as humans, able to shape reasoned responses and imagine the future. On the link between thought and action:
“We have the power to veto our urges and then to veto our vetoes,” he said. “We have the power of imagination, to see and imagine futures.”

According to the neurologist Mark Hallett [3], free will is a perception rather than a power: “Free will does exist, but it’s a perception, not a power or a driving force. People experience free will. They have the sense they are free. The more you scrutinize it, the more you realize you don’t have it.” Then, are we just biological robots? Well, some physicists argue that free will does actually exist. Anton Zeilinger, a quantum physicist, said that quantum randomness was “not a proof, just a hint, telling us we have free will” [3].

There are two main reasons why some scientists establish a link between quantum mechanics and theories of consciousness. On the one hand, it is believed that a conscious mind plays an important role in the process of quantum measurement, and any theory of consciousness should account for that. On the other hand, some authors think that classical physics cannot by itself explain the properties of mind, but that they could be explained by the special features of quantum mechanics [4]. The trick here is that conscious observation plays a crucial role in quantum effects. I am not an expert on quantum mechanics, but I would say that a mere (unconscious) observation would play the same role.

Focusing now on our main concern, conscious robots: can they have free will? According to Seth Lloyd, an expert on quantum computing, there is a kind of free will that we share with machines [3]. As Kurt Gödel demonstrated, in any sufficiently powerful formal system of logic there are statements that can be neither proved nor disproved. Unless you wait and see the actual outcome, I would say. For a machine, as Lloyd explains, the only way to find out is to set it computing and see what happens. So, even if the actions of the machine (or ours) are determined, we don’t know what they will be until they actually take place. This leaves room for a kind of free will for machines.
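
As a toy illustration of Lloyd’s “set it computing and see what happens” point (my own example, not Lloyd’s), consider the Collatz iteration: the number of steps an input takes to reach 1 is fully determined, yet no known shortcut predicts it without actually running the computation.

```python
# Collatz stopping times: deterministic, yet (as far as anyone knows)
# only discoverable by actually running the computation.

def collatz_steps(n: int) -> int:
    """Count applications of the Collatz map until n reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

for n in (6, 27, 97):
    # The outcome is fixed in advance, but we learn it only by computing it.
    print(f"{n} reaches 1 after {collatz_steps(n)} steps")
```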

[1] Daniel Dennett. Freedom Evolves. Viking, 2003.
[2] Simon Blackburn. “Who’s in Charge”. American Scientist, Volume 91, 2003.
[3] Dennis Overbye. “Free Will: Now You Have It, Now You Don’t”. The New York Times, Science, January 2, 2007.
[4] Hameroff, S. R. and Penrose, R. (1996). “Orchestrated reduction of quantum coherence in brain microtubules: A model for consciousness”. In Toward a Science of Consciousness. Cambridge, MA: MIT Press.