Reliance Behavior Mapping

JB-DOF™ Pillar 2: What decision-makers actually do with AI recommendations.

Reliance Behavior Mapping is the second pillar of the JB-DOF™ framework. It documents what decision-makers actually do with AI recommendations (accept, modify, override, or ignore them) and identifies the patterns, frequencies, and conditions under which each response occurs.

The Core Question

Once the AI output reaches the decision-maker (documented by Signal Integrity), what happens next? Reliance Behavior Mapping asks: does the human engage with the AI recommendation as one input among many, or does the recommendation become the decision?

This question sits at the center of the “meaningful human oversight” debate. Every governance framework demands such oversight; Reliance Behavior Mapping is the methodology for measuring whether it actually occurs.

What does Reliance Behavior Mapping document?

Reliance Behavior Mapping captures the full spectrum of human responses to AI outputs (a brief analysis sketch follows this list):

  • Acceptance patterns. How frequently do decision-makers follow the AI recommendation without modification? Under what conditions? Is the acceptance rate uniform across case types, risk levels, and individual officers, or do patterns vary in ways that reveal institutional dynamics?
  • Modification behavior. When decision-makers adjust the AI recommendation, how do they adjust it? Do they consistently modify in one direction (e.g., always increasing supervision beyond what the AI recommends)? Do modifications correlate with specific case characteristics or officer profiles?
  • Override frequency and conditions. How often do decision-makers override the AI recommendation entirely? What triggers an override? Is there documentation of the reasoning? Are officers who override more frequently subject to supervisory scrutiny, and if so, does that scrutiny discourage future overrides?
  • Disengagement indicators. Are there signs that decision-makers are not engaging with the AI output at all: decision times too short for meaningful review, identical responses across diverse cases, patterns consistent with clicking through rather than considering?
  • Automation bias markers. Does the data show systematic over-reliance on AI recommendations? If 97% of decisions follow the AI score exactly, that pattern is not evidence of agreement. It is evidence of automation bias, a well-documented phenomenon in which humans defer to automated outputs even when their own judgment would produce a different result.
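
As a rough illustration of the kind of analysis this documentation supports, the Python sketch below summarizes a hypothetical decision log. The record fields, the 30-second review threshold, and the collapsing of modification and override into a single category are illustrative assumptions, not part of the JB-DOF™ methodology itself.

    # Hypothetical sketch: summarizing reliance behavior from a decision log.
    # Field names and the review-time threshold are illustrative assumptions.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class DecisionRecord:
        officer_id: str
        ai_recommendation: str   # e.g., supervision level suggested by the tool
        final_decision: str      # what the decision-maker actually chose
        override_reason: str     # documented justification; may be empty
        review_seconds: float    # time spent before the decision was entered

    def map_reliance(records: list[DecisionRecord],
                     min_review_seconds: float = 30.0) -> dict:
        """Classify decisions as accepted vs. modified/overridden and surface
        simple disengagement indicators."""
        outcomes = Counter()
        undocumented_overrides = 0
        rushed_reviews = 0
        for r in records:
            if r.final_decision == r.ai_recommendation:
                outcomes["accepted"] += 1
            else:
                # Modification and override are collapsed here for simplicity;
                # a fuller analysis would distinguish them and their direction.
                outcomes["modified_or_overridden"] += 1
                if not r.override_reason.strip():
                    undocumented_overrides += 1
            if r.review_seconds < min_review_seconds:
                rushed_reviews += 1
        total = len(records) or 1
        return {
            "acceptance_rate": outcomes["accepted"] / total,
            "override_or_modification_rate": outcomes["modified_or_overridden"] / total,
            "undocumented_override_count": undocumented_overrides,
            "rushed_review_rate": rushed_reviews / total,
        }

Computing the same metrics per officer or per case type, rather than agency-wide, is what surfaces the variation described in the bullets above.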

Automation Bias in Justice Settings

Automation bias is the tendency for humans to over-rely on automated outputs, particularly under conditions of high workload, time pressure, and cognitive fatigue, all of which are endemic to criminal justice settings. Research across domains from aviation to healthcare consistently demonstrates that when humans receive automated recommendations, they tend to follow them regardless of their quality.

In justice environments, automation bias carries unique dangers. The decisions are consequential (bail, sentencing, supervision, parole) and the stakes for error are measured in human liberty. When a parole officer follows a risk score reflexively rather than engaging with the full case context, the “human oversight” that every governance framework demands is functionally absent. Reliance Behavior Mapping makes this absence visible and documentable.

How does Reliance Behavior Mapping relate to accountability?

The accountability implications of reliance behavior documentation are profound. Without that documentation:

  • Agencies can claim “human oversight” without evidence that oversight is substantive rather than performative.
  • Defense attorneys cannot challenge AI-informed decisions because the human decision pathway is undocumented.
  • Oversight bodies cannot evaluate whether agency AI use meets governance standards because the relevant data does not exist.
  • Researchers cannot study human-AI interaction in justice settings because the interactions are not recorded.

Reliance Behavior Mapping produces the documentation that makes accountability operational rather than aspirational.

What Good Reliance Behavior Looks Like

Reliance Behavior Mapping is descriptive, not prescriptive. It does not dictate what the “right” override rate is. However, the data it produces enables informed governance judgment. A healthy reliance pattern typically shows variation: different responses to different case circumstances, evidence that decision-makers are exercising professional judgment rather than uniformly deferring to the algorithm. An unhealthy pattern shows uniformity: near-total acceptance rates, decision times too short for meaningful review, identical responses to diverse cases.
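
To make that distinction concrete, a check like the following could flag uniformity in the metrics produced by the earlier sketch. The thresholds are placeholders for illustration only; as noted above, the framework does not prescribe them, and setting them is a governance judgment.

    # Hypothetical sketch: flagging uniformity patterns in the metrics returned
    # by map_reliance() above. The thresholds are illustrative assumptions.
    def flag_uniformity(metrics: dict,
                        acceptance_threshold: float = 0.95,
                        rushed_threshold: float = 0.25) -> list[str]:
        flags = []
        if metrics["acceptance_rate"] >= acceptance_threshold:
            flags.append("near-total acceptance: possible automation bias")
        if metrics["rushed_review_rate"] >= rushed_threshold:
            flags.append("frequent short review times: possible disengagement")
        if metrics["undocumented_override_count"] > 0:
            flags.append("overrides recorded without documented reasoning")
        return flags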
