AI and Criminal Procedure Rights

A Response to the National Institute of Justice Request for Input


Brandon L. Garrett, Alicia Carriquiry, Karen Kafadar, Robin Mejia, Cynthia Rudin, Nicholas Scurich, Hal Stern

Introduction

Artificial intelligence (AI) is now implemented across a wide range of human activities, and warnings have issued from academic, public policy, and government sources about the dangers AI poses to society, democracy, and individual rights. In 2022, the White House issued a “Blueprint for an AI Bill of Rights,” and in 2023 an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” directing all federal agencies to account for how they use AI systems. The latter order tasks the Attorney General with submitting to the President a report regarding “the Federal Government’s fundamental obligation to ensure fair and impartial justice for all, with respect to the use of AI in the criminal justice system.”

The National Institute of Justice (NIJ) seeks written input from the public relevant to section 7.1(b) of Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” We write in response to that request as scholars who study law, scientific evidence, statistics, artificial intelligence, machine learning, and computer science, and we express our own views, not those of our respective institutions. We note that several of us have long been affiliated with the Center for Statistics and Applications in Forensic Evidence (CSAFE), whose mission is to bring improved statistical methods to the use of forensic evidence in criminal cases and thereby improve the quality of justice.

We write to emphasize two basic points, focusing on predictive uses of AI, not on large language models.

First, in high-risk settings like the criminal justice system, AI models and their underlying data must be adequately tested, including by independent parties. Relatedly, sound statistical practice requires disclosing the sources of the underlying data, along with information regarding variability and measurement error.
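To make the measurement point concrete, consider a minimal sketch (in Python, with hypothetical numbers of our own; no specific system is referenced) of how an error rate estimated from independent testing should be reported with an interval that conveys its statistical uncertainty, not as a bare point estimate:

    import math

    def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score interval for an error rate estimated from n independent trials."""
        p_hat = errors / n
        denom = 1 + z**2 / n
        center = (p_hat + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
        return (max(0.0, center - half), min(1.0, center + half))

    # Hypothetical audit: 12 errors observed in 400 independent test cases.
    errors, n = 12, 400
    lo, hi = wilson_interval(errors, n)
    print(f"estimated error rate: {errors / n:.3f}")
    print(f"95% CI: ({lo:.3f}, {hi:.3f})")  # the interval, not just the point estimate, should be disclosed

On these hypothetical numbers, the observed 3.0% error rate carries a 95% interval of roughly 1.7% to 5.2%; the width of that interval, not just the point estimate, is what sound disclosure requires.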

Second, AI must not be a black box in high-risk settings such as criminal investigations, in which it affects criminal procedure rights. A mature body of computer science research shows that nothing need be lost in performance by requiring such transparency through regulation. In short, AI must be transparent, tested, and interpretable.
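That performance claim is empirically checkable. The following sketch (a minimal illustration on synthetic data, assuming the scikit-learn library; it stands in for the far more careful benchmarking in the research literature) compares a transparent model, whose coefficients can be disclosed and cross-examined, against a black-box ensemble on the same held-out data:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in for a prediction task; a real audit would use the
    # system's intended real-world data.
    X, y = make_classification(n_samples=2000, n_features=10, n_informative=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    print(f"interpretable model accuracy: {interpretable.score(X_test, y_test):.3f}")
    print(f"black-box model accuracy:     {black_box.score(X_test, y_test):.3f}")

    # Unlike the ensemble, the transparent model's behavior is fully described
    # by coefficients that can be disclosed, inspected, and cross-examined.
    print("coefficients:", interpretable.coef_.round(2))

On many tabular prediction tasks of the kind used in criminal justice, the two approaches perform comparably, which is the pattern the interpretable machine learning literature repeatedly documents.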

To accomplish both goals, far more can and should be done to apply and robustly protect the existing Bill of Rights in the U.S. Constitution as it bears on government uses of AI in the criminal system, particularly when AI is used to provide evidence in investigations and trials.

Further, we highlight the need to promptly comply with the Office of Management and Budget federal procurement guidelines, released in March 2024 to implement the 2023 Executive Order. Together, these measures provide for a range of federal agency reviews and oversight: inventories of AI systems, risk management, and, most importantly, auditing of AI systems by testing how they perform in their “intended real-world context.” Requiring AI vetting, review, and disclosure provides a sound model. Federal agencies must implement minimum practices for risk management of safety-impacting and rights-impacting AI by December 1, 2024. Importantly, these regulations set out the fundamental need for independent testing of AI systems:

  • Through test results, agencies should demonstrate that the AI will achieve its expected benefits and that associated risks will be sufficiently mitigated, or else the agency should not use the AI.

Our view is that these procurement rules should be carefully followed in all contexts where due process and other rights are affected by the use of AI systems by law enforcement in criminal investigations and cases. These rules address our first point, regarding testing and transparency. They do not address our second point, regarding interpretability of AI systems in high-risk settings.
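To illustrate what testing in the “intended real-world context” can involve, the sketch below (hypothetical records and groupings of our own; this is an illustration, not a statement of OMB's required methodology) disaggregates a system's error rate across deployment conditions, because an aggregate error rate can mask poor performance in exactly the settings where the system is actually used:

    from collections import defaultdict

    def error_rates_by_group(records):
        """Disaggregate an AI system's test errors by deployment-relevant subgroup.

        records: iterable of (group_label, predicted, actual) tuples from an
        independent test set drawn from the intended real-world context.
        """
        counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
        for group, predicted, actual in records:
            counts[group][0] += int(predicted != actual)
            counts[group][1] += 1
        return {g: errs / total for g, (errs, total) in counts.items()}

    # Hypothetical test records; a real audit would use documented field data.
    records = [("lab A", 1, 1), ("lab A", 0, 1), ("lab B", 1, 1), ("lab B", 1, 1)]
    for group, rate in error_rates_by_group(records).items():
        print(f"{group}: error rate {rate:.2f}")

On this logic, a system that looks acceptable in aggregate may still warrant non-use if testing shows its risks are not sufficiently mitigated in particular deployment settings.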

In criminal cases in which liberty is at stake, there should be a strong presumption that fully interpretable AI be used whenever AI is directed toward providing evidence against criminal defendants. The burden to justify “black box” uses of AI in court should be a high one, given our commitments to due process, public judicial proceedings, and defense rights of access. There is no evidence that performance and efficiency depend on keeping the operation of AI secret from the public and unintelligible to users. That fundamental point, that AI can and should be open to inspection, vetting, and explanation, is a simple one, and it can be insisted on more forcefully at the federal level.

Finally, we do not disagree that existing rights must at times be reinterpreted for the AI era. More important, however, is a strong commitment to enforcing existing constitutional criminal procedure rights, particularly given how difficult it is to amend the U.S. Constitution, and given the unfortunate reality that those rights have been unevenly enforced in criminal cases, where largely indigent defendants face challenges in obtaining adequate discovery and pressure to plead guilty and waive trial rights.

The National Institute of Justice, federal law enforcement agencies, and the Department of Justice should lead by example in their use of AI technologies: adhering vigorously to statistical and scientific standards and protecting the constitutional rights of criminal defendants.
