Frequently Asked Questions
What is Justice Decision Observability?
Justice Decision Observability is the governance discipline of documenting how human decision-making is shaped once automated and AI-supported systems are deployed in justice environments. It addresses the gap between the moment an AI system produces a recommendation and the moment a human being acts on it: the interpretive layer where discretion is exercised, misunderstandings arise, and responsibility becomes unclear.
What is the JB-DOF™ framework?
The Justice Beacon Decision Observability Framework (JB-DOF™) is a five-pillar governance architecture: Signal Integrity, Reliance Behavior Mapping, Discretion Governance, Institutional Pressure Mapping, and Outcome Integrity Monitoring. Together, these pillars provide a comprehensive methodology for documenting how humans interact with AI outputs in justice settings, from how information is received, to what decisions are made, to whether outcomes align with governance intent.
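To make the five-pillar structure concrete, here is an illustrative sketch of how the pillars might be enumerated in a documentation tool. The pillar names come from the framework description above; the one-line summaries and the enum representation are hypothetical assumptions for illustration, not an official JBS schema.

```python
from enum import Enum

# Illustrative sketch only. Pillar names are taken from the JB-DOF framework
# description; the summary strings are assumed paraphrases, not JBS language.
class JBDOFPillar(Enum):
    SIGNAL_INTEGRITY = "How AI outputs reach and are presented to staff"
    RELIANCE_BEHAVIOR_MAPPING = "How staff actually use, discount, or ignore outputs"
    DISCRETION_GOVERNANCE = "How overrides and deviations are governed"
    INSTITUTIONAL_PRESSURE_MAPPING = "Workload, culture, and incentive pressures on decisions"
    OUTCOME_INTEGRITY_MONITORING = "Whether outcomes align with governance intent"

for pillar in JBDOFPillar:
    print(f"{pillar.name}: {pillar.value}")
```

Representing the pillars as an enumeration lets downstream documentation records tag each observation with the pillar it evidences.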
How is governance documentation different from algorithm auditing?
Algorithm auditing examines whether an AI system's outputs are biased, accurate, or fair. Governance documentation examines what humans do with those outputs. They are complementary but distinct: an algorithm audit tells you whether the AI got it right; governance documentation tells you whether the human engaged meaningfully with the AI's recommendation before making a decision. JBS operates at the human layer, not the algorithmic layer.
What is a Deployment Context Risk Review (DCRR™)?
The DCRR™ is a pre-deployment governance documentation instrument. Before an AI or automated system goes live in a justice environment, the DCRR documents the context it will enter: the decision-making workflows it will affect, the staff who will interact with its outputs, the institutional pressures that may shape their responses, and the governance structures intended to ensure meaningful human engagement.
What is a Critical Event Governance Review (CEGR™)?
The CEGR™ is a post-incident governance documentation engagement. When a significant event occurs involving an AI-informed decision (such as a wrongful detention, a sentencing anomaly, or a supervision failure), the CEGR reconstructs the human-AI interaction that preceded it: what the system recommended, what the human did, what institutional factors shaped that response, and whether governance protocols were followed.
Who needs Justice Decision Observability?
Justice Decision Observability serves government agencies deploying AI in corrections, courts, and supervision; defense attorneys challenging AI-informed decisions; oversight bodies evaluating institutional AI use; academic researchers studying human-AI interaction; investigative journalists covering AI in criminal justice; and justice-impacted communities affected by AI-informed decisions.
How does JBS relate to NIST AI Risk Management Framework requirements?
The NIST AI Risk Management Framework (AI RMF) establishes standards for AI governance, including risk identification, assessment, and management. JBS operationalizes these standards specifically for criminal justice settings. Where NIST provides the governance architecture, JBS provides the documentation methodology: the structured process for recording whether human oversight actually occurs as the framework intends.
How does JBS relate to the DOJ's AI in Criminal Justice report?
The DOJ's December 2024 report on AI and Criminal Justice called for centralized AI records, staff expertise requirements, higher-risk safeguards, and public engagement mechanisms. The report specifies what agencies must do but offers no methodology for implementing those requirements. JBS's frameworks, particularly the JB-DOF and DCRR, directly address this implementation gap, providing the structured governance documentation the DOJ's recommendations require.
What happens between "AI produces a recommendation" and "human makes a decision"?
In most justice settings deploying AI, this space is completely undocumented. A risk assessment tool produces a score. A monitoring system flags an alert. A case management platform generates a recommendation. What happens next (whether a human reads the output carefully, glances at it, overrides it, rubber-stamps it, or ignores it entirely) is almost never recorded. This undocumented space is where wrongful detentions, sentencing disparities, and accountability failures originate. Justice Decision Observability exists to document it.
How is JBS different from AI governance platforms like FairNow or Credo AI?
AI governance platforms like FairNow, Credo AI, and OneTrust track the AI model lifecycle: model risk scoring, deployment compliance, and system-level documentation. They monitor the AI. JBS documents the human. These tools answer "Is the AI system being managed responsibly?" JBS answers "What do humans actually do when the AI system gives them an answer?" The approaches are complementary: governance platforms track the system, JBS tracks the human decision-maker.