In most civil and criminal cases in the U.S. federal courts, review of a trial court decision in one of the 94 district courts goes to the one of the 12 regional U.S. Courts of Appeals (“Circuit Courts”) whose geographic jurisdiction covers the district court in question. For some subject areas, though, the system's designers have elected to channel all appeals to a single Circuit Court; patent appeals, for example, all go to the Court of Appeals for the Federal Circuit regardless of where the case was tried.
We can imagine similar federated structures for AI/ML appeals. For example, when the same type of AI/ML algorithm or data is used across cases, or when the AI/ML applications are similar, the issue may be directed to a centralized, specialized team within an organization, whose members would become increasingly sophisticated in reviewing particular issues. Such a team might also develop more general principles on topics like the tradeoff between AI/ML accuracy and explainability, or AI/ML fairness; those principles would not only be “adjudicative” of particular cases but could also generate norms for AI/ML developers, much as precedent operates in the court system.
A different form of centralization to consider is at the level of cases, not courts. In the U.S. federal courts, it is sometimes possible to consolidate a series of separate cases (for example, claims that a rental-car company added hidden fees to many customers) at the trial level through a class action. It is also possible to consolidate several cases for the appeals stage, even if they were tried separately. In the AI/ML context, this possibility might be of particular value: there may be issues with an algorithm's actual deployment that become manifest only when one examines a large number of errors. The hospital readmission algorithm example above illustrates this. In any particular case, the nature of the problem may not be clear, but if enough such cases are seen collectively on appeal, the bias becomes manifest.
This opens up a myriad of possible design choices, depending on context. When the decision in question comes from an AI/ML system designed and applied within a single hospital system to a particular clinical case, it seems intuitive for the reviewing body on appeal to sit within that hospital system. But if the same AI/ML is used in many different hospital systems across the country, a better design might be a centralized appellate review body (potentially a regulated third-party entity, analogous, for example, to accounting auditors) to which all the hospitals feed cases for review.
We believe human-on-appeal designs can provide value irrespective of whether the particular medical AI/ML requires review by a regulator like the U.S. Food and Drug Administration (FDA). What matters is the type of decision and the stakes involved, which do not perfectly track the regulator's jurisdiction in the U.S. or in other countries. That said, for the subset of medical AI/ML that does require review by regulators like the FDA, regulators may be able to consider, encourage, or even require certain forms of appeal design as part of their regulatory review.