Glossary of Terms
Defined terms within the Justice Decision Observability discipline.
Justice Decision Observability (JDO)
The governance discipline of documenting how human decision-making is shaped after automated and AI-supported systems are deployed in justice environments.
JDO addresses the critical gap between AI system deployment and human decision documentation. It is not algorithm auditing, AI certification, or investigation; it is structured observation and documentation of the human layer.
JB-DOF™ (Justice Beacon Decision Observability Framework)
A five-pillar governance architecture for documenting human-AI interaction in justice settings.
The five pillars are Signal Integrity, Reliance Behavior Mapping, Discretion Governance, Institutional Pressure Mapping, and Outcome Integrity Monitoring. Each pillar addresses a distinct dimension of the human-AI interaction in justice environments.
DCRR™ (Deployment Context Risk Review)
A pre-deployment governance documentation instrument that captures the decision-making context into which an AI system will be introduced.
Conducted before an AI or automated system goes live, the DCRR documents workflows, staff interactions, institutional pressures, and governance structures. It establishes a governance baseline against which post-deployment behavior can be measured.
CEGR™ (Critical Event Governance Review)
A post-incident governance documentation engagement that reconstructs the human-AI interaction preceding a significant event.
Activated following events such as wrongful detention, sentencing anomalies, or supervision failures. The CEGR documents what the AI recommended, what the human did, what institutional factors shaped that response, and whether governance protocols were followed.
Signal Integrity
The first pillar of JB-DOF™. Examines how AI outputs are received, displayed, and contextualized for human decision-makers.
Signal Integrity documents whether information from AI systems reaches decision-makers in a complete, accurate, and contextually appropriate form, or whether it is stripped of nuance, obscured by interface design, or distorted by selective presentation.
Reliance Behavior Mapping
The second pillar of JB-DOF™. Documents what decision-makers actually do with AI recommendations.
Tracks whether humans accept, modify, override, or ignore AI outputs, and identifies the patterns, frequencies, and conditions under which each response occurs. Includes analysis of institutional incentives that shape reliance behavior.
Discretion Governance
The third pillar of JB-DOF™. Documents how individual judgment interacts with institutional protocols when AI is involved.
Maps the boundary between structured decision-making and individual professional judgment, examining whether AI narrows, expands, or distorts the exercise of human discretion.
Institutional Pressure Mapping
The fourth pillar of JB-DOF™. Examines organizational dynamics that shape whether humans meaningfully engage with AI outputs.
Documents caseload demands, training adequacy, supervisory expectations, and cultural norms that determine whether human oversight is substantive practice or procedural formality.
Outcome Integrity Monitoring
The fifth pillar of JB-DOF™. Tracks whether AI-informed decisions produce outcomes consistent with documented governance intent.
Measures the downstream effects of human-AI interaction: not the algorithm's predictive accuracy, but whether human decisions made with algorithmic input achieve the institutional goals they were intended to serve.
Governance Documentation
Structured, systematic documentation of decision-making processes, institutional behaviors, and governance compliance in environments where AI influences human decisions.
Distinct from compliance reporting (which records that processes were followed) and audit documentation (which evaluates system performance). Governance documentation records what actually happens in the space between AI output and human action.
Decision-Risk
The risk created not by the AI system itself but by how humans interact with its outputs in consequential settings.
Decision-risk increases when humans over-rely on AI recommendations (automation bias), when institutional pressures incentivize rubber-stamping, or when governance documentation is absent. JBS frameworks are designed to make decision-risk visible and documentable.