To the future: finding the common moral ground in human-robot relations
Designers who use ethics to shape better companion robots will end up making better humans, too, say UNSW researchers.
Rachel Gray
Media and Content
0411 987 771
rachel.gray1@unsw.edu.au
AI robots are still not sophisticated enough to understand humans or the complexity of social situations, says UNSW's Dr Massimiliano Cappuccio.
"So we need to think about how we interact with social and companion robots to instead help us become more aware of our own behaviour, limitations, vices or bad habits," says Dr Cappuccio, a Deputy Director at UNSW Canberra.
UNSW Canberra's Dr Massimiliano Cappuccio
"And this can be in the areas of greater self-discipline and self-control but also in learning virtues such as generosity and empathy."
Dr Cappuccio is the lead author of the paper, which was written in collaboration with academics from the University of Western Sydney and Chalmers University of Technology in Sweden.
It is also the first in a collection co-edited by Dr Cappuccio, Dr Sandoval and Prof. Velonaki and published in the International Journal of Robotics as a special issue titled Virtuous Robotics: Artificial Agents and the Good Life.
The paper argues that because social robots are able to shape human beliefs and emotions, then people need to take a more ethical approach to their design and our interactions with them.
Most roboticists try to do this through the use of deontological or consequentialist principles only. Deontological ethics judges whether an action or decision is good based on the moral obligations that action or decision fulfils. Consequentialism judges an action or decision by its outcome, and is concerned with the greatest benefit for the greatest number of people.
But Dr Cappuccio says we need to rely on virtue ethics, "an ancient philosophy of self-betterment and human flourishing".
"Instead of trying to build robots that imitate our ethical decision-making processes, we should consider our own interactions with robots as an opportunity for human betterment and moral learning," he says.
Dr Cappuccio says Virtuous Robotics theory emphasises the responsibility of the human in every morally sensitive form of engagement with robots, such as with the AI humanoid Pepper.
The French-made Pepper arrived on the market in 2014. The robot's AI programming can detect human emotions. By 2018, about 12,000 Peppers had been sold for use as companions in nursing homes, butlers in hotels and greeters in retail, among many other roles, including in education. UNSW's Prof. Velonaki says Pepper has recently been programmed to detect people who are not wearing masks in public spaces. She says this takes the emotion of offence out of the request. "This cute Pepper approaches people and just asks: please wear your mask. That's it. I wouldn't be offended by that," Prof. Velonaki says. Photo: Shutterstock.
Robots are "not always intelligent enough to make the best ethical choice on your behalf but can help you make the best ethical choice by reminding you, creating awareness, coaching, or by encouraging you," Dr Cappuccio says.
Generosity, courage, honour, compassion and integrity are examples of universal virtues that researchers in the paper hope to encourage in humans through their use of social robots.
UNSW's Dr Eduardo Sandoval
Dr Cappuccio says AI technology in Virtuous Robotics theory acts like a mirror on human behaviour and encourages the user to be more mindful. "It puts you in front of yourself and asks you to become aware of what you are doing," he says.
It is in these instances, says Dr Sandoval, a robotics specialist from UNSW Art & Design, that Virtuous Robotics looks at how we can use AI technology to make us better as human beings "in self-improvement, education and in creating good habits, with the ultimate goal being about us becoming better people".
An example of a Kasper (Kinesics and Synchronisation in Personal Assistant Robot).
Kasper is a child-size humanoid that UNSW acquired following a collaboration with the University of Hertfordshire, UK, where the companion robot was first built in 2005.
The robot is designed to assist children with autism and learning difficulties.
Professor Mari Velonaki, founder and director of UNSW's world-class robotics laboratory, says Kasper teaches the children socially acceptable behaviours, for example by saying "that hurts" when the child hits it, or "that feels good" when the child touches the robot in a gentle way.
"Kasper does not replace the therapist, the social network, the family, or school," Prof. Velonaki says. "It is just a robot to help them learn social behaviours, to play, and to experiment with."
Kasper may look scary to some adults but his face was chosen by children with autism, says Prof. Velonaki. "This is the face they were comfortable with because they don't want a super expressive face," she says. Photo: Mari Velonaki.
Prof. Velonaki agrees with Dr Cappuccio's approach to machine ethics, and as someone who has been building robots for at least 20 years, she says the industry needs to take this multi-disciplinary approach.
"It's not complementary, it is essential. And it has to be there from the very beginning when designing a system," she says. "You need to have people who are doing interactive design, ethicists, people from the social sciences, artificial intelligence, and mechatronics.
"Because we're not talking about systems that are isolated in a factory manufacturing cars, we're talking about systems that in the near future will be implemented within a social structure."
Prof. Velonaki says we need to start thinking about some of these existential questions now as AI technology advances. "Because maybe 30 years from now systems might be a lot more biotech, combining the biological and the technical."
UNSW's Prof. Velonaki with the robot she created named Diamandini, which is designed to elicit emotional responses from humans (left), and a cobot (right). Photo: Mari Velonaki.
In general, Dr Cappuccio says, Virtuous Robotics applies to all fields of human development and human flourishing.
"Whenever there are moral skills involved, for example having greater self-awareness of vices such as smoking, alcohol or diet, virtuous robotics can be helpful to anybody wanting to increase their control over their behaviours," he says.
And social robots are more successful at cultivating virtue in humans than mobile phone apps, says Dr Sandoval, who experimented on himself with exercise and meditation apps on his mobile phone.
"So far, human interaction is the most effective way to cultivate virtue," Dr Sandoval says. "But probably the second-best way to cultivate virtue is with social robots, which have an embodiment and don't rely on screens to perform the interaction with people."
The robotic connection: the future of AI technology in social robotics is turning towards questions of how we can interact with robots more ethically. Photo: Shutterstock.