See also the instructional videos on "Fingerprint Comparison"
In this video, Brian Cerchiai discusses an NIJ-supported study conducted by the Miami-Dade Police Department on the accuracy of fingerprint examiners. The study found that fingerprint examiners make extremely few errors. Even when examiners did not get an independent second opinion about their decisions, they were remarkably accurate. But when decisions were verified by an independent reviewer, examiners had a 0% false positive, or incorrect identification, rate and a 3% false negative, or missed identification, rate.
Research Conducted by the Miami-Dade Police Department. Speaking in this video: Brian Cerchiai, CLPE, Latent Fingerprint Examiner, Miami-Dade Police Department.
The goal of the research was to determine whether latent fingerprint examiners can properly make identifications and exclusions on prints not visible to the naked eye. In this case, we had 13 volunteers leave over 2,000 prints on different objects that were round, flat, or smooth, and we developed them with black powder and tape lifts.
We did ACE, which is analyze, compare, evaluate, where we gave latent examiners - 109 latent examiners - unknown fingerprints or palm prints, latents, to look at and compare to three known sources. So essentially: compare this latent to one of these 30 fingers or one of these six palms.
So as participants were looking at the latent lifts and comparing them to the subjects, we asked them if they could identify any of those three subjects as the source of that latent print. In that case, they would call it an identification. When we asked them to exclude, we were basically asking them to tell us that none of those three standards made that latent, that they were not the source of that latent print.
That is how the ACE verification (ACE-V) process works: a second examiner looks at that comparison, does their own analysis, comparison, and evaluation, and renders a decision on it. We found that under normal conditions, where one examiner made an identification and a second examiner verified it, no erroneous identification got past that second latent examiner. So the process had a false positive rate of zero.
So when we looked at ACE comparisons, where a single latent examiner analyzed, compared, and evaluated a print and came up with a decision, there was a false positive rate: basically an erroneous identification, where they identified the wrong source.
Without verification, there was a three percent error rate for that type of identification. We also tracked a false negative rate, where, given those three standards, examiners erroneously excluded the true source: you are given the source, told to check one of these three people, and you conclude that the latent print does not come from one of those three people, even though it did. That would be a false negative, and that false negative rate was 7.5 percent.
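As a quick illustration of the arithmetic behind these figures: a rate here is simply erroneous decisions divided by total decisions of that type. The short Python sketch below uses hypothetical tallies, not the study's actual counts, chosen only to reproduce the reported 3% and 7.5% figures.

# Illustrative sketch only: hypothetical tallies, not the study's data.
# An error rate is erroneous decisions / total decisions of that type.

def error_rate(errors: int, decisions: int) -> float:
    """Fraction of decisions of a given type that were erroneous."""
    return errors / decisions

# Hypothetical counts chosen to reproduce the reported percentages.
fp_rate = error_rate(errors=3, decisions=100)    # unverified identifications
fn_rate = error_rate(errors=75, decisions=1000)  # exclusions of the true source

print(f"False positive rate (unverified): {fp_rate:.1%}")  # 3.0%
print(f"False negative rate: {fn_rate:.1%}")               # 7.5%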
What we did during the third phase of this test was check for repeatability and reproducibility. After six months, we sent participants back their own answers, and we also gave them answers from other participants. But all of those answers were presented as if they were verifying somebody else's answers.
Under normal conditions, we'd give them the source and the latent number, and they'd basically answer agree, disagree, or inconclusive. Under biased conditions, we'd give them the identification answer that someone had made along with the answer of a verifier, so it had already been verified and we were now asking them for a second verification. When those erroneous identifications were sent out to other examiners under the regular verification process, not one latent examiner repeated the identification; they caught all of those errors. That brought the reported error rate down to zero.
We maintained our regular caseload; this was done in the gaps in between, after hours. The hardest part of doing this was not being dedicated researchers, and that's why it took us quite a long time to get it done. Now that it's finally out here and we're doing things like this, giving presentations this year, we really hope to expand on this research. The results from this study are fairly consistent with those of other studies.
Miami-Dade Police Department
Photography provided by Matthew Douglass and Janice Gaitan
The Crime Scene Investigator Network gratefully acknowledges the U.S. Department of Justice, Office of Justice Programs, National Institute of Justice, for allowing us to reproduce, in part or in whole, the video and transcript "How Reliable Are Latent Fingerprint Examiners?" The opinions, findings, and conclusions or recommendations expressed in this video and transcript are those of the speaker(s) and do not necessarily represent the official position or policies of the U.S. Department of Justice.