For Journalists

Methodological frameworks for reporting on AI in criminal justice.

The most consequential AI story in criminal justice is not about the algorithm. It is about what happens after the algorithm speaks: the undocumented space where a human receives an AI recommendation and makes a decision that affects someone's liberty. Justice Decision Observability provides the methodological framework to report on that space.

The Story Nobody Is Telling

ProPublica's 2016 investigation of COMPAS examined whether the algorithm was biased. That was the right question for 2016. The question for 2026 is different: even when the algorithm is “fair,” what do humans actually do with its output?

This is the unreported layer of AI in criminal justice. Algorithmic bias is a technology story. Human reliance behavior is a governance story, and governance stories require a different methodological framework.

What JBS Offers Journalists

Analytical Framework

The JB-DOF™ framework provides a structured methodology for examining AI-informed decisions. Instead of asking “Is the algorithm biased?” (a technology question), it asks five governance questions: How did the AI output reach the decision-maker? What did the decision-maker do with it? How did institutional factors shape the response? Was discretion exercised or constrained? Did outcomes match governance intent?

Expert Source

Stephanie Fleming, MS, PhD, is available for background briefings, on- and off-the-record interviews, and expert commentary on topics at the intersection of AI, criminal justice, and governance documentation. JBS can provide context on regulatory developments, help interpret agency AI deployment decisions, and explain the governance implications of specific cases or policies.

The Questions Worth Asking

JBS can help journalists formulate questions that go beyond “Is this algorithm biased?” to the governance layer:

  • What percentage of decisions follow the AI recommendation exactly?
  • What documentation exists for cases where the AI was overridden?
  • What training do staff receive on interpreting AI outputs?
  • What institutional pressures shape how officers interact with AI recommendations?
  • Is there any documentation of what happens between “AI produces score” and “human makes decision”?

Coverage Areas

JBS Intelligence (forthcoming) will publish analysis on the intersection of AI, criminal justice, and governance. Coverage will track:

  • Federal and state regulatory developments affecting AI in criminal justice
  • Agency AI deployment decisions and their governance implications
  • Cases where the absence of governance documentation produced preventable harm
  • The evolving landscape of AI governance standards and their implementation gaps

Media Inquiries

For interview requests, background briefings, or expert commentary, contact JBS directly or visit our Press page for approved quotes and company information.