Abstracts

The presentations in the UCI Law Spring 2021 Artificial Intelligence & Law Colloquium Series reflect a range of innovative and interdisciplinary thinking at the intersections of law, policy, and emerging technologies, presented by leading scholars in their fields.

Colloquium Presentations

Much recent work in academic literature and policy discussions suggests that the proliferation of actuarial (that is, statistical) assessments of a defendant's recidivism risk in state sentencing structures is problematic. Yet scholars and policymakers focus on changes in technology over time while ignoring the effects of these tools on society. This Article shifts the focus away from technology to society in order to reframe the debate. It asserts that sentencing technologies subtly change key social concepts that shape punishment and society. These same conceptual transformations preserve problematic features of the sociohistorical phenomenon of mass incarceration. By connecting technological interventions and conceptual transformations, this Article exposes an obscured threat posed by the proliferation of risk tools as sentencing reform. As sentencing technologies transform sentencing outcomes, they also alter society's language and concerns about punishment. Thus, actuarial risk tools as technological sentencing reform not only excise society's deeper issues of race, class, and power from debates; they also strip society of a language with which to resist the status quo by changing notions of justice along the way.

Law should help direct—and not merely constrain—the development of artificial intelligence (AI). One path to influence is the development of standards of care both supplemented and informed by rigorous regulatory guidance. Such standards are particularly important given the potential for inaccurate and inappropriate data to contaminate machine learning. Firms relying on faulty data can be required to compensate those harmed by that data use—and should be subject to punitive damages when such use is repeated or willful. Regulatory standards for data collection, analysis, use, and stewardship can inform and complement the work of generalist judges. Such regulation will not only provide guidance to industry to help it avoid preventable accidents; it will also assist a judiciary that is increasingly called upon to develop common law in response to legal disputes arising out of the deployment of AI.

Autonomous weapon systems are often described either as more independent versions of weapons already in use or as humanoid robotic soldiers. In many ways, these analogies are useful. Analogies and allusions to popular culture make new technologies seem accessible, identify potential dangers, and buttress desired narratives. Most importantly from a legal perspective, analogical reasoning helps stretch existing law to cover developing technologies and minimize law-free zones.

But all potential analogies—weapon, combatant, child soldier, animal combatant—fail to address the legal issues raised by autonomous weapon systems, largely because they all misrepresent legally salient traits. Conceiving of autonomous weapon systems as weapons minimizes their capacity for independent and self-determined action, while the combatant, child soldier, and animal combatant comparisons overemphasize it. Furthermore, these discrete and embodied analogies limit our ability to think imaginatively about this new technology and anticipate how it might develop, thereby impeding our ability to properly regulate it.

We cannot simply graft legal regimes crafted to regulate other entities onto autonomous weapon systems. Instead, as is often the case when analogical reasoning cannot justifiably stretch extant law to answer novel legal questions, new supplemental law is needed. The sooner we escape the confines of these insufficient analogies, the sooner we can create appropriate and effective regulations for autonomous weapon systems.

A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. With a case study of the algorithmic capture of hiring as a heuristic device, this Article provides a taxonomy of problematic features associated with algorithmic decision-making as anti-bias intervention and argues that those features are at odds with the fundamental principle of equal opportunity in employment. To examine these problematic features within the context of algorithmic hiring and to explore potential legal approaches to rectifying them, the Article brings together two streams of legal scholarship: law & technology studies and employment & labor law.

Counterintuitively, the Article contends that the framing of algorithmic bias as a technical problem is misguided. Rather, the Article’s central claim is that bias is introduced in the hiring process, in large part, by an American legal tradition of deference to employers, especially the allowance of such nebulous hiring criteria as “cultural fit.” The Article observes that legal frameworks have not kept pace with the emerging technological capabilities of hiring tools, which make bias difficult to detect. The Article discusses several new approaches to holding both employers and the makers of algorithmic hiring systems liable for employment discrimination. With respect to Title VII in particular, the Article proposes, in legal reasoning corollary to extant tort doctrines, that an employer’s failure to audit and correct its automated hiring platforms for disparate impact should serve as prima facie evidence of discriminatory intent under a proposed new doctrine of discrimination per se. The Article also considers approaches outside employment law, such as establishing consumer legal protections for job applicants that would mandate access to the dossier of information automated hiring systems consult in making an employment decision.
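The audit duty proposed above presupposes a measurable test for disparate impact. The Article does not prescribe a particular method; as a purely illustrative sketch, the widely used EEOC “four-fifths rule” compares each group’s selection rate to that of the highest-rate group and treats a ratio below 0.8 as a signal of potential adverse impact. The Python snippet below applies that screen to hypothetical audit data (the group labels, records, and function name are all invented for illustration).

    # Illustrative only: a minimal disparate-impact screen using the
    # EEOC "four-fifths rule." Group labels and records are hypothetical.
    from collections import Counter

    def adverse_impact_ratios(records):
        """Selection rate per group, and its ratio to the top group's rate."""
        totals, selected = Counter(), Counter()
        for group, hired in records:
            totals[group] += 1
            selected[group] += int(hired)
        rates = {g: selected[g] / totals[g] for g in totals}
        top = max(rates.values())
        return {g: (r, r / top) for g, r in rates.items()}

    # Hypothetical audit log of (group, hired) outcomes from a hiring tool.
    records = [("A", True), ("A", True), ("A", False),
               ("B", True), ("B", False), ("B", False)]
    for group, (rate, ratio) in adverse_impact_ratios(records).items():
        flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
        print(f"group {group}: selection rate {rate:.2f}, ratio {ratio:.2f} ({flag})")

A real audit would go further (statistical significance, validation of job-relatedness), but even this simple screen shows the kind of record an employer could be expected to produce under the proposed doctrine.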

Technology is often characterized as an outside force, with essential qualities, acting on the law. But the law, through both doctrine and theory, constructs the meaning of the technology it encounters. A particular feature of a particular technology disrupts the law only because the law has been structured in a way that makes that feature relevant. The law, in other words, plays a significant role in shaping its own disruption. This Essay is a study of how a particular technology, artificial intelligence, is framed by both copyright law and the First Amendment. How the algorithmic author is framed by these two areas of law illustrates the importance of legal context and legal construction to the disruption story.

Contact

Rabie Kadri
Law Centers Manager
centers@law.uci.edu
(949) 824-2370