Thursday, March 02, 2006

My Contribution at the RoboEthics Workshop

This workshop addresses the issue of implementing ethics in future robotics systems. During the discussions, one of the questions that arose was about emotions. A few of the participants (prominently the renowned robotics researcher Ronald Arkin) held the view that robotic systems could exhibit emotions: the nature of emotions was well understood, and they could be modelled into the design of artificial intelligence systems. I objected: what would appear to be an implementation of emotion would, in my view, only be an image of the emotion, not the emotion itself. (Ronald held to his view, but liked the fact that in my argument I seemed to have displayed a bit of emotion.)

When later deliberating this issue again, I suddenly realized one of the essential ground truths: emotions are a superset of logic. In human life, emotions are natural, and logic is learned. The whole brain/human system is one large processing system, and emotions are one of its outputs. Logic is a special case of this output, arising when the human consciousness applies certain rules to it. Humans always act and interact on the basis of emotions. When they try to make logical decisions, they apply their "gut feeling", then rationalize their decision with facts that fit their opinion.

In my talk about the Grand Challenge, I pointed out one fact which nobody at DARPA ever mentions, and which is also rarely discussed in the US in the context of national security: such envisioned autonomous robotic systems, which can save soldiers' lives on one side, make war more likely on the other, because they lower the threshold for military intervention by the party that possesses these life-saving technologies. I then suggested that engineers/roboticists/scientists should take a personal oath, similar to the Hippocratic Oath sworn by physicians; it could be labeled the Oath of Asimov. (In fact, Prof. Gianmarco Veruggio, who organized this workshop, had already proposed a Roboethics Manifesto along these lines in 2004.) Relying solely on other mechanisms built into the "system", such as checks and balances, is unlikely to work when even in a Western democracy someone can label the Geneva Conventions as "quaint".

And finally, I warned that if no such action is taken, mankind will sooner or later experience the "Hiroshima of Information Technology", after which the scientists will ask themselves the rhetorical question "what have we done?", similar to what happened to physics and physicists after WW2. This is especially relevant regarding the easy possibility of abuse by fundamentalist ideologies or cults, as the journalists/authors Flo Conway and Jim Siegelman have pointed out in their latest book.
