Read my latest for Wired in full below or by clicking here.
The EU-backed €1.5 million (£1.3 million) RoboLaw Project brings together a team of roboticists, lawyers and philosophers to come up with proposals for the laws and regulations necessary to manage emerging robotics technologies.
It is easy to be wowed by the self-driving cars showcased by Google and now Oxford University, but Dr Pericle Salvini’s job is to try to make us think about the ethical and legal implications of such robotic technology. After all, if an autonomous vehicle crashed, who would be responsible? The driver? Google? The car itself?
“Robots are no longer science fiction, as they have left the factory and are arriving in our homes,” says Salvini from the BioRobotics Institute at the Scuola Superiore Sant’Anna (SSSA) in Pisa, Italy. And Asimov’s Three Laws simply aren’t sufficient.
As part of the unique EU-backed €1.5 million RoboLaw Project, Salvini is managing a team of roboticists, lawyers and philosophers (yes, philosophers) from a consortium of European universities, who are working hard to come up with proposals for the laws and regulations necessary to manage emerging robotics technologies in Europe, in time to present them to the European Commission a year from now. The consortium comprises the University of Tilburg (the Netherlands), the Humboldt University of Berlin, the University of Reading and the SSSA.
RoboLaw, says Dr Andrea Bertolini of the Law Faculty at SSSA, will address questions such as, “Is the Google car an object that is not much different from a fridge or a word processor, or is it more like a pet which the owner is responsible for? Perhaps it’s more a parent-child relationship in which the parent still has some liability for his child’s actions?”
“It can be hard to decide who is responsible and that’s the gap the project is trying to fill in order to guarantee everyone’s security and safety even if they are not in the car,” adds Salvini.
While Joseph Engelberger, one of the fathers of robotics, was happy to admit that “I can’t define a robot, but I know one when I see one”, the RoboLaw project decided at the outset to narrow this down a little by looking at a wide range of “things” in the home, from a robot arm to “softbots”, and hybrid bionic systems such as hand prostheses.
The list, says Salvini, takes into account autonomous robots, including neurorobotics — robots controlled via a brain-computer interface — and service robots that operate in the home, cities and other public roles.
The next task of the project was to do phased research to identify what existing regulations apply to robotic technology, how the consequences vary from country to country, and what is happening in other disciplines. The result was a series of case studies which the roboticists, lawyers and philosophers explored to find possible solutions to future problems.
Now with a year to go they are at last coming to conclusions — even if these are confidential until they’ve been shown to the European Commission.
While some roboticists may be worried about state regulation, Salvini’s belief in the project’s importance stems from his DustBot project, whose goal was to build an autonomous robot to collect the garbage and which was tested in a small town near Pisa.
“We designed, developed and deployed it for two months in a real small Italian town and quickly realised that, from the insurance companies to the town hall and local people, no one knew how to deal with it.
“These are exactly the kind of problems that roboticists will struggle with, as while they need to test their robots outside of the laboratory they are not always good at dealing with the social and legal environment.”
For Bertolini one of the big issues was privacy: “[Robots] will be in our homes and in order to function they have to collect data on who we are and what we do.”
Another, he says, “was how to make the rules that robots have to follow? Or even how they should be treated?” After all, there are some schools of thought that see robots as autonomous individuals with the same or comparable rights as those of humans. “Or how do you actually describe a robot? You can address it like an animal or pet, but if your dog attacks someone then you are liable.”
It is a “sensitive issue”, he says, as the regulations that may come out of these discussions will have an impact on the way the technology is adopted and diffused — they might dissuade roboticists from working in certain areas and encourage them to focus on others.
“Traditionally Europe adopts a precautionary principle when addressing unknown risks and as a result regulators tend to set greater limits when there is no scientific certainty whether a technology may be harmful. The US follows different principles, with its well-known penchant for mass litigation: mass torts, class actions, very high damages awarded by juries in cases of litigation against big corporations.”
Now, Bertolini adds, “it is certain that robotic technologies raise problems of some unknown risks which could trigger some of these depicted ‘reactions’ [of mass litigation]. This is one of the ‘big’ concerns and one of the perspectives that at once justifies and requires a study like the one RoboLaw is conducting.”
After all, one of the big problems of autonomous vehicles like the Google car is the lack of absolute guarantee of safety, since it is impossible to predict the behaviour of every single other driver on the road.
According to Dr Susanne Beck, a lawyer from the University of Würzburg’s RobotRecht research centre, while “some of the problems caused by robotics can be answered by existing laws, there are some that need new laws to be created as soon as possible.” There are — for example — no regulations yet on how robots handle the personal data collected in hospitals or old people’s homes, or who pays when a self-driving car crashes.
A key issue is the lack of public awareness and debate about these issues. “So many people see our research as ‘science fiction work’, although we are working mainly on problems society is facing right now,” explains Beck, adding that it’s necessary to inform society about the existing research — often taking place behind closed doors — and potential applications.
“After all, lawyers cannot answer questions for society.” Society first has to decide which robots it wants to accept, which risks it wants to take, and who should be responsible for damages caused by robots, she warns.
“This is a very important project to help us to deal pre-emptively with the emerging robot technologies,” agrees Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield. “We do not want to be caught off guard like we were with the internet. It was upon us and spread so quickly before we had a chance to think through the implications.”
For Sharkey, regulating robotics is more a matter of mapping it onto the old laws than creating new ones, even if “this is not easy”. He is particularly keen for legal provision to be made rapidly for autonomous vehicles.
However, he believes the “biggest issue is not beyond the remit of the project”: the rapid rise in robotics for military and broader national security purposes. Other important civil issues are raised by the use of robots for child care and care of the elderly; currently the law is inadequate to control these uses.
“While privacy and responsibility are very important ethical issues there are also strong questions about individual freedom, human dignity and psychological well-being to be considered,” he says. “So while we need to be careful not to stifle innovation, sometimes regulations are really necessary for the protection of people, particularly the vulnerable.”
In the end, he admits, it is hard to future-proof regulation and laws against the latest technology and its applications — as we have seen with the World Wide Web.
While Bertolini agrees that the regulation they come up with now may no longer be relevant in the future, he is keen to point out that “the Italian legal code dates back to the 1940s and can still solve problems today”.
Salvini, though, is worried by our own laziness “which means that it is easier to come up with the technology first and then work out the consequences”.
Given that we are only just getting around to adapting many laws (such as copyright and libel) for the web a mere twenty years after its invention, perhaps it’s not those who are inventing new technologies that should be accused of laziness.