Abstracts

The presentations in the UCI Law Spring 2020 Artificial Intelligence & Law Colloquium Series, delivered by leading scholars in their fields, reflect a range of innovative and interdisciplinary thinking at the intersections of law, policy, and emerging technologies.

Colloquium Presentations

This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.

Citation: Margaret Hu, Algorithmic Jim Crow, 86 Fordham L. Rev. 633 (2017).

What happens when artificially intelligent robots misbehave? The question is not just hypothetical. As robotics and artificial intelligence (AI) systems increasingly integrate into our society, they will do bad things. They have already killed people.

These new technologies present a number of interesting substantive law questions, from predictability to transparency to liability for high-stakes decision making in complex computational systems. The focus of this Article is different. It seeks to explore what remedies the law can and should provide once a robot has caused harm.

It may, for example, make sense to focus less of our doctrinal attention on moral guilt and more of it on no-fault liability systems (or at least ones that define fault differently) to compensate plaintiffs. But addressing payments for injury solves only part of the problem; remedies must also deter. Our existing doctrines often take advantage of “irrational” human behavior like cognitive biases and risk aversion, but robots will be deterred only to the extent that their algorithms are modified to include sanctions as part of the risk-reward calculus.

Remedies law also has an expressive component that will be complicated by robots. We sometimes grant punitive damages, or disgorge ill-gotten gains, to show our displeasure. But if our goal is to send a more nuanced signal than simple displeasure through the threat of punishment, robots will require us to rethink many of our current doctrines. They also offer important insights into the law of remedies we already apply to people and corporations.

Citation: Mark A. Lemley & Bryan Casey, Remedies for Robots, 86 U. Chi. L. Rev. 1311 (2019).

This Article explores the impending conflict between the protection of civil rights and artificial intelligence (AI). While both areas of law have amassed rich and well-developed bodies of scholarly work and doctrinal support, a growing body of scholars is interrogating the intersection between them. This Article argues that the issues surrounding algorithmic accountability demonstrate a deeper, more structural tension within a new generation of disputes regarding law and technology. The true promise of AI does not lie in the information we reveal to one another, but rather in the questions it raises about the interaction of technology, property, and civil rights. For this reason, the Article argues that we are looking in the wrong place if we look only to the state to address issues of algorithmic accountability. Instead, given the state's reluctance to address these issues, we must turn to forms of transparency and accountability that stem from private industry rather than public regulation.

The issue of algorithmic bias represents a crucial new world of civil rights concerns, one that is distinct in nature from those that preceded it. Because it is now the activities of private corporations, rather than the state, that raise concerns about privacy, due process, and discrimination, we must focus on the role those corporations play in addressing the issue. Toward this end, this Article discusses a variety of tools to help eliminate the opacity of AI, including codes of conduct, impact statements, and whistleblower protections, which it argues carry the potential to encourage greater endogeneity in civil rights enforcement. Ultimately, by examining the relationship between private industry and civil rights, we can perhaps develop a new generation of accountability mechanisms.

Citation: Sonia K. Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. Rev. 54 (2019).

Within the political economy of informational capitalism, commercial surveillance practices are tools for resource extraction. That process requires an enabling legal construct, which this Chapter identifies and explores. Contemporary practices of personal information processing constitute a new type of public domain: a repository of raw materials that are there for the taking and that are framed as inputs to particular types of productive activity. As a legal construct, the biopolitical public domain shapes practices of appropriation and use of personal information in two complementary and interrelated ways. First, it constitutes personal information as available and potentially valuable: as a pool of materials that may be freely appropriated as inputs to economic production. That framing supports the reorganization of sociotechnical activity in ways directed toward extraction and appropriation. Second, the biopolitical public domain constitutes the personal information harvested within networked information environments as raw. That framing creates the backdrop for culturally situated techniques of knowledge production and for the logic that designates those techniques as sites of legal privilege.

Citation: Julie E. Cohen, The Biopolitical Public Domain: The Legal Construction of the Surveillance Economy, 31 Phil. & Tech. 213 (2018).

Contact

Rabie Kadri
Law Centers Manager
centers@law.uci.edu
(949) 824-2370