Perhaps most surprising to me is that the latest discussion I’ve come across about ethical concerns with introducing robots into non-industrial settings is from someplace as mainstream as the Economist, but it’s actually a nice summary of upcoming concerns [via Slashdot]. The article indicates that there have been many (in the hundreds?) industrial robot accidents in the past 25 years, but the concern discussed at a recent European Robotics Symposium is what happens when robots move out of the industrial setting and interact with the general population. Major questions the article pulls out include:
Should robots that are strong enough or heavy enough to crush people be allowed into homes? Is “system malfunction” a justifiable defence for a robotic fighter plane that contravenes the Geneva Convention and mistakenly fires on innocent civilians? And should robotic sex dolls resembling children be legally allowed?
These are, obviously, very different questions. The first one is, I think, mostly prompted by efforts to build living-assistant robots that will “live” with elderly people and help them around the house, make sure they take their medications, and offer companionship. It’s that last piece that raises another important question, one not mentioned here – what happens to society if we bring robots into it in a personal way? If people bemoan the negative impact of the internet on community, what will the impact of personal companion robots be? Is it worth it, or are these robots a crutch for us not taking responsibility, as a community, for taking care of each other?
What I particularly like about this article, as compared to others I have read, is that after all of the slightly hysterical talk of what will happen when we have robots around us, and the ubiquitous mention of Asimov’s Laws of Robotics, it ends with a very level-headed discussion of why these issues are not that different from safety concerns raised by other appliances we already have in the home. AI hasn’t come close to building a robot that would really require this type of concern. Robots today may be autonomous, but they are not intelligent, so we are far from worrying that they might act of their own volition instead of ours.