Socially perceptive robots: Challenges and Concerns
By: Ginevra Castellano & Christopher Peters
Within this article the authors discuss how popular science fiction media has led people to believe that artificial human-like entities will exist. Fictional works such as Metropolis, 2001: A Space Odyssey, and Terminator are examples of what has caused people to believe that self-aware robots are imminent.
Regardless of the role of machines/robots in films, we need to conceptualize believable roles for a robot before we can even begin to determine what needs to be built. The example given in the text is the role of the nanny. For this particular role, the robot would have to understand the following: where the children are, what their emotional states and future intentions may be, and how these relate to the events unfolding in the environment. These are very high expectations for a robot, and current technology seems very far from achieving any of them. For this vision to become reality, research focused on the analysis of human verbal and non-verbal behaviour needs to shift toward more spontaneous and more subtle behaviours. These are the emotions that occur in everyday life, where social communication actually takes place, rather than extreme emotions. Another issue is that mobile robots tend to be heavy and cumbersome, and must plan their paths cautiously to avoid damage to the unit or its surroundings.
Aside from physical challenges, there is the issue of socially perceptive abilities, which include recognizing people’s social and affective expressions and states, understanding their intentions, and accounting for the context of the situation. Current technology is usually specialized for handling specific situations and is far from being as flexible as a human.
Although the research community is paying attention to the aforementioned issues, they still require more comprehensive investigation of socially perceptive abilities. To fill a role such as childcare, robots need to progressively acquire the ability to perceive and interpret social cues, states, and intentions while accounting for the context of the interaction. Of course, there is also the ethical side of this type of research. For example, a system might detect when a child is lying and exploit its knowledge of the child’s state to support persuasion attempts, which may be well-intentioned, such as getting the child to finish eating or go to bed. This means that guidelines should be clearly drawn to define what behaviour and reactions generated by a robot in response to a child’s emotional state can be considered ethical and safe.
In conclusion, it has been shown that there are many risks involved in using socially perceptive robots. More specifically, in the case of childcare robots, parents would have to be made aware of the potential psychological and/or physical harm of having a robot serve as a caregiver. It remains the duty of scientists and industry to present the drawbacks and inadequacies of their creations and to clearly differentiate science fact from science fiction.
I find many of the challenges and concerns within this article to be true in terms of how far we are from developing a human-like entity. This article is dated 2010, and to date, neither technology nor science has made improvements significant enough for us to expect robots like those described in this article in the near future.