Signal Integrity

JB-DOF™ Pillar 1: How AI outputs reach the human decision-maker.

Signal Integrity is the first pillar of the JB-DOF™ framework. It examines how AI outputs are received, displayed, and contextualized for human decision-makers, and whether the information that reaches the human is complete, accurate, and presented in a form that supports informed decision-making.

The Core Question

Before a human can meaningfully engage with an AI recommendation, they must first receive it. Signal Integrity asks: does the information that reaches the decision-maker faithfully represent what the AI system actually produced?

This is not a trivial question. The path between “AI system generates output” and “human sees result” is filled with opportunities for distortion.

What does Signal Integrity document?

Signal Integrity documentation captures the full information chain from AI output to human receipt (a sketch of one possible documentation record follows the list):

  • Output completeness. Does the human see the full AI output (including confidence intervals, qualifying factors, data limitations, and contextual information) or a reduced version? Many systems present a single score or color code that strips the nuance the underlying model was designed to communicate.
  • Interface design. How does the user interface present AI outputs? Are risk scores given visual prominence that implies certainty? Are qualifying statements buried in secondary screens? Does the interface design encourage engagement or rubber-stamping?
  • Information timing. When does the AI output reach the decision-maker relative to the decision point? Is it presented with adequate time for review, or delivered at a point in the workflow where meaningful engagement is impractical?
  • Contextual framing. Is the AI output presented alongside relevant case information that enables informed judgment, or is it isolated in a way that makes the score or recommendation appear authoritative and self-sufficient?
  • Selective presentation. Are some AI outputs shown while others are suppressed? Are confidence levels displayed for high scores but hidden for low ones? Is the presentation of information systematically shaped in ways the decision-maker may not recognize?
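To make these five dimensions concrete, a documentation record might be structured as in the following sketch. This is a minimal illustration in Python; the SignalIntegrityRecord class, its field names, and the example comments are hypothetical, not an official JB-DOF™ schema.

```python
from dataclasses import dataclass, field

@dataclass
class SignalIntegrityRecord:
    """Illustrative record of the signal path for one AI output.

    Hypothetical schema mirroring the five documentation dimensions
    above; not an official JB-DOF format.
    """
    # Output completeness: what the model produced vs. what was shown
    model_output_fields: list[str] = field(default_factory=list)  # e.g. score, CI, caveats
    displayed_fields: list[str] = field(default_factory=list)     # subset actually rendered

    # Interface design: how the output was presented
    display_format: str = ""                    # e.g. "numeric score", "color badge"
    qualifiers_visible_by_default: bool = False

    # Information timing: when the output reached the decision-maker
    seconds_before_decision_point: float | None = None

    # Contextual framing: what accompanied the output
    case_context_shown: bool = False

    # Selective presentation: whether display rules vary by output value
    suppression_rule: str | None = None         # e.g. "CI hidden when score > 7"

    def omitted_fields(self) -> list[str]:
        """Fields the model produced but the human never saw."""
        return [f for f in self.model_output_fields if f not in self.displayed_fields]
```

The value of recording the chain this way is that the gap between what the model produced and what was displayed becomes explicit and auditable, rather than being left implicit in interface code.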

Why Signal Integrity Comes First

Signal Integrity is the foundation of the entire JB-DOF™ framework because every subsequent pillar depends on it. Reliance behavior cannot be meaningfully assessed if the information the human received was incomplete or distorted. Discretion cannot be exercised on the basis of information the decision-maker never saw. Institutional pressures interact with information presentation in ways that amplify or mitigate their effects. And outcome integrity cannot be evaluated without understanding what information actually entered the decision process.

If the signal is corrupted, every downstream assessment is compromised. That is why Signal Integrity is Pillar 1.

What are examples of signal integrity failures?

Signal integrity failures take many forms in justice settings:

  • A risk assessment tool produces a score with a confidence interval of ±15%, but the interface displays only the point estimate, making a probabilistic output appear deterministic (see the sketch after this list).
  • An electronic monitoring system generates an alert with qualifying context (GPS drift, known dead zone, equipment malfunction history), but the alert displayed to the officer contains only “violation detected.”
  • A case management platform recommends a supervision level but presents it as a color-coded badge (red/yellow/green) that reduces a complex assessment to a traffic light, inviting snap judgment rather than informed review.
  • An AI system produces both a risk score and a set of mitigating factors, but the interface displays the score prominently and buries the mitigating factors in a secondary tab that few officers access.
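The first failure above can be made concrete with a short sketch. Everything here is hypothetical: the model_output payload, the render functions, and the values are illustrative, showing how a rendering layer can silently turn a probabilistic output into an apparently deterministic one.

```python
def model_output() -> dict:
    # What the risk tool actually produces (values are made up).
    return {"risk_score": 6.2, "ci_low": 5.3, "ci_high": 7.1,
            "caveats": ["sparse prior history"]}

def render_badge(output: dict) -> str:
    # Signal-integrity failure: the interval and caveats never reach the screen.
    return f"RISK: {output['risk_score']:.0f}"

def render_complete(output: dict) -> str:
    # Faithful rendering: the human sees the uncertainty the model reported.
    return (f"RISK: {output['risk_score']:.1f} "
            f"(range {output['ci_low']:.1f}-{output['ci_high']:.1f}; "
            f"caveats: {', '.join(output['caveats'])})")

print(render_badge(model_output()))     # RISK: 6
print(render_complete(model_output()))  # RISK: 6.2 (range 5.3-7.1; caveats: sparse prior history)
```

Both renderings draw on the same model output; the difference that matters for Signal Integrity is entirely in the signal path.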

In each case, the AI system may function exactly as designed. The failure is not in the algorithm. It is in the signal path between the algorithm and the human.

Regulatory Relevance

California's SB 524 (2025) requires disclosure of AI-authored police reports and retention of the AI-generated drafts, making it the first state-level signal integrity mandate. The DOJ's 2024 report calls for staff to have adequate expertise to interpret AI outputs, a requirement that presupposes those outputs are presented in interpretable form.

As regulatory attention to AI in criminal justice intensifies, signal integrity will become a central compliance concern. Agencies that cannot demonstrate that AI outputs reach decision-makers in complete, contextually appropriate form will face accountability gaps that no amount of algorithm auditing can close.


← Back to Framework Overview