Human error in offender risk assessment can take many forms, including assessment staff bias and mistakes in official records. The effect of these human mistakes on outcomes depends on the “sensitivity of error” of the risk assessment instruments. Yet how these human errors influence risk classification outcomes has remained speculative. To disentangle this relationship and fill the research gap, simulated human error was injected into two risk datasets to determine how unreliability and invalidity affect classification validity. Two main conclusions are drawn. First, risk devices are highly sensitive to human error, and their use should be approached with caution. Second, new techniques are needed to measure and convey model validity. The findings and conclusions of this study are critical given that more accurate risk assessments give rise to higher levels of public safety. Methods of reducing the sensitivity of error in risk instruments are offered.
Aaron Ho, Amy Shlosberg & Eric Lesneskie
Justice System Journal, 15 Aug 2018