Children’s Conceptions of the Moral Standing of a Humanoid Robot of the Here and Now
Institution: University of Washington
Keywords: beliefs about robots; human-robot interaction; moral development; moral standing; robots with children; developmental psychology; experimental psychology
Full text PDF: http://hdl.handle.net/1773/35303
Sophisticated humanoid robots have recently moved from laboratory and research settings into the home. Some models are now marketed to families with children and are designed to engage children in increasingly social, and potentially moral, interactions. The purpose of this study is to investigate whether, when children interact with a commercially available humanoid robot and witness a human harming it, they conceptualize the robot as an entity that deserves moral consideration: what is referred to in this study as having moral standing. Participants included 120 children in 2 age groups (8-9 and 14-15 years old). To assess the effects of the robot's physical embodiment on participants' conceptions of its moral standing, 30 participants from each age group (gender balanced) were randomly assigned to 1 of 2 conditions and interacted with either a humanoid robot or an analogous virtual agent. In each condition, the interaction culminated in a confederate hitting the robot or virtual agent with a book. Each participant then took part in a semi-structured interview that ascertained their judgments and reasoning regarding the robot or virtual agent and three comparison entities. Results show that a majority of participants judged the confederate's hitting of the robot to be a violation of moral obligation, and many brought the concept of artificial emotion to bear in their reasoning. Participants were significantly more likely to judge hitting the robot a violation of moral obligation than hitting the virtual agent. Moreover, the 8- to 9-year-olds were significantly more likely than the 14- to 15-year-olds to judge hitting either the robot or the virtual agent a violation of moral obligation. Finally, participants' conceptions of the robot's moral standing largely showed a unique composition of moral features compared to those of the comparison entities.
The discussion addresses the broader implications of these findings and offers future directions for research. Advisors/Committee Members: Kahn, Peter H (advisor).