Lex Ex Machina: a conference on law's computability
On 13 December 2019, this Conference brought together a diverse range of experts to explore advances in artificial intelligence, machine learning and data science in relation to legal procedures and decision-making.
Lex Ex Machina was a one-day workshop hosted by Jesus College's Intellectual Forum. It brought together legal academics with STEM and social science researchers, policy makers, LegalTech developers and civil society organisations to explore what computable law means for the autonomy, authority, and legitimacy of law as a social institution. Moving beyond narrow technical questions about bias and explainability, the workshop featured original contributions that cut across disciplinary boundaries, connecting technical expertise to legal expertise and examining the intersection of law and computation.
Advances in Artificial Intelligence (AI), Machine Learning (ML) and data science have reignited interest in applying computation to more aspects of legal process and decision-making.
This is particularly evident in the development of various LegalTech applications to assist with the practice and business of law. However, it is the use of algorithmic decision-making (ADM) systems to replicate, and in some cases replace, human judges and decision-makers that has attracted the greatest attention from scholars, the media, and the public. While there are familiar concerns about the bias, transparency, and accountability of ADM systems, their spread is too often treated as inevitable and framed as a matter of resolving narrow computational puzzles, rather than of addressing more fundamental questions about what such systems mean for the rule of law and deliberative democracy, and indeed, whether they should be built in the first place.
Technical concerns about bias and transparency are important, but they should not obscure the fact that legal authority is increasingly expressed and enforced algorithmically. What does it mean for this authority to be algorithmic, and thus for law to be computable? The legal system is foremost a social institution, where legal norms, concepts, categories and reasoning are ultimately social constructions. Can these be sufficiently captured by computation? If so, to what extent? If not, what is lost? Can legal authority be legitimate if expressed via algorithms and 1s and 0s rather than through juridical reasoning and natural language? These questions become all the more acute in light of predictions of a forthcoming ‘legal singularity’ — a hypothetical point at which AI’s functional capabilities vastly exceed those of human judges and lawyers. In such a world, the law is said to exist in a perpetual state of equilibrium between facts and norms. Regardless of how feasible this might be, the legal singularity is implicitly a proposal for eliminating juridical reasoning as the basis for dispute resolution and the allocation of rights, responsibilities and power. Would such a world need lawyers, judges, and indeed, the legal system as we currently understand it?
This Conference was kindly supported by the University of Cambridge Public Policy Strategic Research Initiative, LexisNexis and ThoughtRiver.