Algorithms as artificial persons
In Algorithms and the Law, Jeremy Barnett, Adriano Soares Koshiyama and Philip Treleaven discuss the emergence of algorithms as artificial persons and the need to formally regulate them. The paper aims to start a discussion in the legal profession about the legal impact of algorithms on companies, software developers, insurers and lawyers.
“In Law a company is treated as having the rights and obligations of a person. In this era of Artificial Intelligence (intelligent assistants, Robo-advisors, robots, and autonomous vehicles) algorithms are rapidly emerging as artificial persons: a legal entity that is not a human being but for certain purposes is considered by virtue of statute to be a natural person. Intelligent algorithms will increasing[ly] require formal training, testing, verification, certification, regulation, insurance, and most importantly status in law.”
The authors point out that regulators, who have traditionally regulated firms and individuals, are raising the status of algorithms to persons. In financial services the FCA already requires firms to demonstrate that trading algorithms have been thoroughly tested, achieve “best execution” and are not engaged in market manipulation. In healthcare, medical-assistant chatbots and patient screening systems, driven by algorithms, will increasingly dispense medical advice and treatments to patients.
Algorithmic harms
In Algorithmic Regulation (PDF), a joint paper by King’s College, the Centre for the Analysis of Risk and Regulation (CARR) and the LSE, Leighton Andrews looks at the dangers from algorithms against which we are trying to protect ourselves. He categorises these as:
- algorithmic bias, in which judgements on individual futures – employment, eligibility for loans, likelihood of imprisonment – are determined by algorithmic choices which have in-built human errors or conscious or unconscious biases;
- algorithmic manipulation, in which judgements about, for example, news, information or advertising, are constructed on the basis of data collected on individuals and used to channel what is presented according to inferred preferences;
- algorithmic lawbreaking, in which algorithms are apparently deliberately constructed to deceive lawmakers and regulators, for example, in terms of emissions controls or traffic management, or attempts at price-fixing;
- algorithm usage in propaganda, from disinformation campaigns by unfriendly countries to election campaign bots;
- algorithmic brand contamination and advertising fraud where major brands have found their advertising placed alongside hate speech or terrorist material, or where human interaction with the advertising is proven to be less than reported as bots are upping the claimed strike-rate; and
- algorithmic unknowns – how machine learning means algorithms are becoming too complicated for humans to understand or unpick.
In evidence to the House of Commons Science and Technology Committee inquiry into Algorithms in decision-making, which was suspended by the 2017 General Election, a range of technical and regulatory proposals were outlined, but notably no serious fiscal proposals.
Bill Gates has called for robots to be taxed in order to replace the income tax lost to automation, though it is not clear how this might be done.
Andrews suggests there are other ways of using fiscal instruments. In the past, car tax has been charged at variable rates depending on vehicle emissions; radio and TV licences have funded developments in broadcast technologies; and radio and TV franchises have been auctioned. He asks whether there is a case for creating new fiscal instruments, for example a tax to limit insecure devices; insurance against harms caused by robots or driverless vehicles; or a levy to raise additional funds from “algorithmically-driven internet intermediaries”.
Adjudication by algorithm
In Adjudicating by Algorithm, Regulating by Robot, published on the Oxford University Faculty of Law’s Business Law Blog, Cary Coglianese and David Lehr are upbeat about the benefits of algorithms in public sector decision-making:
“When applied both to adjudication and rulemaking, algorithms promise powerful increases in speed and accuracy in decision-making, perhaps also eliminating the biases that can permeate human judgment. Furthermore, using machine learning algorithms to automate rulemaking and enforcement might prove especially useful, even essential, for overseeing automated private-sector activity, such as high-speed securities trading.
… we need not be terrified by the prospect of adjudicating by algorithm or rulemaking by robot. … machine learning is just like any other machine: useful when deployed appropriately by responsible human officials. … modern digital machines can be readily incorporated into governmental practice under prevailing law.
Algorithmic adjudication and robotic rulemaking offer the public sector many of the same decision-making advantages that machine learning increasingly delivers in the private sector.”
Nick Holmes is Editor of the Newsletter. Email nickholmes@infolaw.co.uk. Twitter @nickholmes.
Image: 1044 cc by x6e38 on Flickr.