When an AI system influences a justice decision (bail, sentencing, supervision, parole), the human decision layer is almost never documented. You cannot challenge what you cannot see. You cannot monitor what is not recorded. Justice Decision Observability produces the governance documentation that makes AI-informed decisions transparent, accountable, and challengeable.
Who This Is For
- Defense attorneys and public defenders who need evidentiary tools to understand and challenge AI-informed decisions that affected their clients
- Court-appointed monitors evaluating whether agencies' use of AI meets consent decree requirements
- Compliance officers responsible for demonstrating governance adherence to DOJ, OMB, and state mandates
- Oversight bodies evaluating institutional AI governance practices
The Evidentiary Problem
When an AI system influences a criminal justice decision, the documentation void is total:
- The AI system produced an output, but was it presented to the decision-maker in complete, unmodified form?
- The decision-maker received the output, but did they engage with it meaningfully, or did institutional pressure push them toward rubber-stamping?
- The decision was made, but what was the actual decision pathway from AI recommendation to human action?
- If 95% of decisions follow the AI score exactly, that pattern is evidence of automation bias, but without governance documentation the pattern is invisible.
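The concordance pattern described in the last point can be made concrete. The sketch below is a hypothetical illustration, not JBS's actual methodology: the record fields (`human_outcome`, `ai_recommendation`) and the 0.95 flag threshold are illustrative assumptions.

```python
# Hypothetical decision log: each record pairs an AI recommendation with
# the outcome the human decision-maker actually chose.

def concordance_rate(decisions):
    """Fraction of decisions where the human outcome matched the AI
    recommendation exactly."""
    if not decisions:
        raise ValueError("no decisions to analyze")
    matches = sum(
        1 for d in decisions if d["human_outcome"] == d["ai_recommendation"]
    )
    return matches / len(decisions)

def flag_automation_bias(decisions, threshold=0.95):
    """Flag a decision population whose concordance rate meets or exceeds
    an (assumed) threshold suggestive of automation bias."""
    rate = concordance_rate(decisions)
    return {"concordance_rate": rate, "flagged": rate >= threshold}
```

A population where 19 of 20 decisions track the score exactly would surface here as a flagged 95% concordance rate, turning an invisible pattern into a documented one.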
What JBS Documentation Provides
Signal Integrity Evidence
Was the AI output presented in its complete form, with confidence intervals, qualifying factors, and contextual information? Or was it reduced to a single score that stripped the nuance the system was designed to communicate?
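One way to picture a signal-integrity check is as a comparison between the fields the AI system produced and the fields actually shown to the decision-maker. This is a minimal sketch under assumed field names; it is not a JBS interface.

```python
def signal_integrity(produced_fields, presented_fields):
    """Report whether the AI output reached the decision-maker complete,
    and which parts (e.g. confidence interval, qualifying factors) were
    stripped along the way."""
    stripped = set(produced_fields) - set(presented_fields)
    return {"complete": not stripped, "stripped_fields": sorted(stripped)}
```

If the system produced a score, a confidence interval, and qualifying factors, but only the score was presented, the check records exactly which qualifying context was lost.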
Reliance Behavior Patterns
What are the documented patterns of human response to AI recommendations? Override rates, modification patterns, and systematic automation bias are all captured as descriptive evidence.
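A reliance-pattern summary might classify each decision by how the human responded to the recommendation. The categories and record fields below (including the `override_reason` field) are illustrative assumptions, not a documented JBS schema.

```python
from collections import Counter

def reliance_breakdown(decisions):
    """Tally decisions by response pattern: followed the AI recommendation,
    overrode it with a documented reason, or overrode it with no reason
    on record."""
    def classify(d):
        if d["human_outcome"] == d["ai_recommendation"]:
            return "followed"
        if d.get("override_reason"):
            return "overridden_with_reason"
        return "overridden_unexplained"
    return Counter(classify(d) for d in decisions)
```

A high share of `followed` with near-zero documented overrides is the kind of aggregate pattern this section describes as evidence of systematic automation bias.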
Institutional Pressure Documentation
Were caseload volumes so high that meaningful review was practically impossible? Were there supervisor expectations that encouraged following AI recommendations? Evidence that an individual's “choice” may have been structurally constrained.
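The caseload question above can be framed as simple arithmetic: minutes available per case versus a floor for meaningful review. The 480-minute workday and 15-minute floor below are illustrative assumptions, not established standards.

```python
def review_capacity_check(caseload_per_day, work_minutes=480,
                          minimum_review_minutes=15):
    """Compare the minutes available per case against an assumed minimum
    for meaningful review, to show whether meaningful review was
    structurally possible at a given caseload."""
    minutes_per_case = work_minutes / caseload_per_day
    return {
        "minutes_per_case": minutes_per_case,
        "meaningful_review_possible": minutes_per_case >= minimum_review_minutes,
    }
```

At 96 cases in an eight-hour day, five minutes per case remain; documentation of that ratio is evidence that the individual's "choice" was structurally constrained.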
Post-Event Reconstruction (CEGR™)
When a critical event has already occurred, the CEGR reconstructs the entire human-AI interaction chain, providing the evidentiary foundation for challenging the decision or evaluating governance compliance.
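CEGR's actual methodology is not described here; as a generic sketch, post-event reconstruction can be pictured as ordering timestamped interaction events into a single chain. The event fields and actor names below are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    timestamp: datetime
    actor: str       # e.g. "risk_tool", "clerk", "judge"
    action: str      # e.g. "score_generated", "decision_entered"
    detail: str = ""

def reconstruct_timeline(events):
    """Order interaction events chronologically into a readable chain,
    the kind of record a post-event review starts from."""
    chain = sorted(events, key=lambda e: e.timestamp)
    return [
        f"{e.timestamp.isoformat()} {e.actor}: {e.action}"
        + (f" ({e.detail})" if e.detail else "")
        for e in chain
    ]
```

Even this minimal ordering can surface gaps, such as a decision entered before the score was ever presented, that matter when challenging a decision.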
How Legal Teams Use JBS
JBS governance documentation is descriptive evidence. It records what happened, not what should have happened. This descriptive posture makes it suitable for multiple legal and oversight contexts:
- Pre-trial motions challenging the reliability of AI-informed bail or detention decisions
- Sentencing challenges where algorithmic assessments influenced judicial decisions
- Consent decree monitoring evaluating whether AI governance requirements are being met in practice
- Compliance assessments documenting whether institutional AI use meets DOJ, OMB, and state mandates
- Pattern evidence demonstrating systemic automation bias or institutional failure to exercise meaningful oversight
Discuss a Matter
JBS provides consultations for legal teams, oversight monitors, and compliance bodies evaluating governance documentation needs.
Contact JBS