Abstract

Although Artificial Intelligence (AI) is already of use to litigants and legal practitioners, we must be cautious and deliberate in incorporating AI into the common law judicial process. Human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing required of human judges. Nor will “cyborg justice”—hybrid human/AI judicial systems that attempt to marry the best of human and machine decisionmaking and minimize the drawbacks of both—be a panacea. While such systems would ideally maximize the strengths of human and machine intelligence, they might also magnify the drawbacks of both. They also raise distinct teaming risks associated with overtrust, undertrust, and interface design errors, as well as second-order structural side effects.

One such side effect is “technological–legal lock-in.” Translating rules and decisionmaking procedures into algorithms grants them a new kind of permanency, which creates an additional barrier to legal evolution. In augmenting the common law’s extant conservative bent, hybrid human/AI judicial systems risk fostering legal stagnation and an attendant loss of judicial legitimacy.

Document Type

Article

Publication Date

2019
