If artificial intelligence eventually performs a task better than humans, is it negligence not to use AI for the task?
For instance, if driverless cars become safer than human drivers, is it negligence for humans not to use them?
It's an interesting question, and one of many posed by author and professor Ryan Abbott in his book The Reasonable Robot. Abbott holds degrees in both medicine and law and teaches at the UCLA Medical School. He is also a mediator, an arbitrator, and Co-Chair of the AI Subcommittee of the American Intellectual Property Law Association (AIPLA).
In a recent episode of the Technically Legal Podcast, Abbott discussed his book, in which he argues that laws should be AI neutral: the acts of artificial intelligence should not be judged differently than those of humans.
He calls this the “reasonable robot” standard. If AI causes harm, it should be judged under the same standard a human would be. Abbott argues further that treating AI differently under the law may hamper innovation.
What does this portend for the legal profession? If a lawyer makes a blunder that could have been avoided by incorporating technology into the legal work, might it be malpractice not to have used the tech? If a reasonable lawyer would have used technology, does the lawyer’s representation of the client fall below the requisite standard of care?
After all, under rules of professional conduct like MRPC 1.1, lawyers in most states are already obligated to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. . .”
All interesting questions. Listen to the whole episode here: https://geni.us/Abbott