Developing an Effective Strategy to Review Correlation Results: A Lesson in Cost-Benefit Analysis

Ron Nichols

I’ve been associated with IBIS® since its introduction into the U.S. market. Since its inception, there have been continuous hardware and software improvements that have made IBIS ever more efficient and effective. Not only are the images better, but, combined with the algorithm, IBIS’s powers of discrimination are the best in the ballistic-imaging market. This is possible because IBIS maximizes the information that can be gained from imaging ballistic evidence. For cartridge cases, this can pose a challenge to users because there are many ways in which the correlation results can be analyzed and viewed.

For cartridge case correlations, results can be ranked and sorted in several different ways:

  • two- and three-dimensional firing pin
  • two- and three-dimensional breechface
  • two-dimensional with sidelight
  • three-dimensional ejector marks

NOTE: Some of these are due to the need to ensure IBIS is backward-compatible with images acquired by earlier versions of IBIS.

Or you can use the Rank Sort feature.

Unlike the previously mentioned features, each of which evaluates a single region of interest (ROI), the overall Rank Score is based on the exhibit’s relative placement within the ranked series for each ROI on each exhibit.
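The exact Rank Score computation is not spelled out here, so the following is only a minimal sketch of how a composite, rank-based score could combine an exhibit’s per-ROI placements. The ROI field names and the averaging rule are illustrative assumptions, not the IBIS algorithm.

```python
# Hypothetical illustration only: this is NOT the actual IBIS Rank Score algorithm.
# It shows one way a composite, rank-based score could combine an exhibit's
# relative placement (rank, where 1 = best) across several regions of interest.

from typing import Dict

ROIS = [
    "firing_pin_2d", "firing_pin_3d",
    "breechface_2d", "breechface_3d",
    "sidelight_2d", "ejector_3d",
]

def composite_rank(ranks: Dict[str, int]) -> float:
    """Combine per-ROI ranks into a single sortable score (lower is better).

    Here we simply average the ranks that are available; missing ROIs are ignored.
    """
    available = [ranks[roi] for roi in ROIS if roi in ranks]
    return sum(available) / len(available) if available else float("inf")

# Example: an exhibit that ranks highly on breechface but poorly on firing pin.
exhibit_ranks = {"breechface_3d": 1, "breechface_2d": 4, "firing_pin_3d": 35}
print(composite_rank(exhibit_ranks))  # ~13.3, used to sort the candidate list
```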

Nevertheless, the list of correlation results can be rather lengthy (its length varies with caliber), and the task of performing a correlation review to determine whether there are any potential leads can be daunting. There are two keys to developing an effective and efficient strategy for attacking this list:

  1. Have a well-defined goal of what the user agency wants to accomplish.
  2. Develop a correlation review scheme that best meets the needs of achieving the goal.

Define the goal

It’s important to make the distinction that IBIS is not an examination tool; it’s a screening tool.

A screening tool provides investigators with reliable information in a timely fashion. It helps investigators be more efficient by narrowing the scope of their investigations so that they can identify individuals who may have been involved.

In the context of forensics, an examination tool is designed for laboratories to provide the examiner with the best opportunity for objective, unequivocal information and data from which to draw conclusions regarding the submitted evidence. This helps courts to have confidence in the work being presented before them when deciding the guilt or innocence of a defendant.

You might say that examination tools are meant to be as close to perfect as possible, while screening tools are meant to be as timely as possible while maintaining an acceptable level of reliability, recognizing and accepting that they will not be perfect.

Let me help you define the appropriate goal: IBIS is a screening tool that provides investigators with timely, reliable information to link shootings. While IBIS can be used as a tool to aid examiners in performing bullet and cartridge case comparisons, IBIS is not an examination tool. It is designed to perform hundreds of comparisons in an open case file—something examiners have little time to do—and to highlight a few potential exhibits that the examiner should focus on.

Define the correlation review scheme

If you accept the premise that IBIS is a screening tool and not an examination tool, it’s important to define a correlation review scheme that maximizes its potential for providing timely, reliable information while minimizing the potential for a missed lead. At its core, the development of a correlation review scheme is a cost-benefit analysis.

To assist agencies in defining an effective correlation review scheme, I recently published a paper in the AFTE Journal (the journal of the Association of Firearm and Toolmark Examiners) that provides detailed information on the efficiency of the algorithm in discriminating potential leads from non-leads in the current NIBIN (National Integrated Ballistic Information Network) database.

Published in the Winter 2019 edition of the AFTE Journal (Volume 51, Number 1, pages 20 through 24), the article highlights the value of the Rank Sort feature provided in MATCHPOINT™. A total of 2,548 published leads over a seven-month period were reviewed to determine where each ranked in each of the six ROI categories and in the Rank Sort category. The data show that 50% of the leads were in Rank Sort position 1 and that 95% were within positions 1 through 20. When Rank Sort and all six ROI categories were considered, all but two of the leads fell within positions 1 through 20 in at least one of those seven categories.
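To make such a review scheme concrete, the sketch below flags any exhibit that falls within a chosen cutoff (here, position 20) in Rank Sort or in any of the six ROI categories. The data structure and field names are hypothetical; this is an illustration of the scheme, not MATCHPOINT functionality.

```python
# Illustrative sketch of a top-20 review scheme; field names are hypothetical
# and not part of IBIS or MATCHPOINT.

from typing import Dict, List

CATEGORIES = [
    "rank_sort",
    "firing_pin_2d", "firing_pin_3d",
    "breechface_2d", "breechface_3d",
    "sidelight_2d", "ejector_3d",
]

def exhibits_to_review(results: List[Dict], cutoff: int = 20) -> List[Dict]:
    """Return exhibits that place within `cutoff` in at least one category."""
    return [
        r for r in results
        if any(r.get(cat, float("inf")) <= cutoff for cat in CATEGORIES)
    ]

# Example: only exhibit "A" would be flagged for examiner review.
candidates = [
    {"exhibit": "A", "rank_sort": 3, "breechface_3d": 57},
    {"exhibit": "B", "rank_sort": 41, "firing_pin_2d": 88},
]
print(exhibits_to_review(candidates))
```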

Conclusion

By focusing on the Rank Sort feature, agencies can achieve tremendous cost savings in terms of time spent performing correlation reviews while sacrificing little, if anything, in terms of reliability.
