The report proposes further research to determine whether latent fingerprint analysis is valid enough to be used as evidence. Also warranted, the report says, are studies on how police officers, judges, lawyers, and jurors evaluate and understand fingerprint evidence. Yet, it concludes, there are insufficient scientific data on the potential pool of all human fingerprints to prove that a set of fingerprints constitutes a unique identifier of a single person, and not enough data to determine how many people might display similar features.
The report points out that research evaluating the accuracy of fingerprint analysts has found that well-trained examiners can effectively compare a fingerprint lifted from a crime scene against a known fingerprint. Yet such results may be skewed because, in many instances, the examiners knew they were being tested, the report notes.
In calling for additional research to ensure that latent print examinations are valid, the AAAS and PCAST reports propose that such studies be conducted without the examiners' knowledge. The FBI Laboratory, for instance, has adopted workflow procedures, known as context management procedures, that limit the information examiners receive at different stages of their analysis, to reduce the influence that information can have on the conclusions they draw.
Such an approach presents challenges, the report concedes, requiring, for instance, cooperation from the police to produce simulated, or false, prints and enter them into the normal workflow for examiners to study.
Another way to get around cognitive bias, the report states, is to improve the capabilities of automated fingerprint systems. The systems, however, are unable to match a fingerprint lifted from a crime scene to one gathered earlier by authorities from a known source, nor can they determine whether a comparison of two prints (one from a crime scene, the other from police records) is valid, the report says.
Either way, there is no generally agreed-on standard for determining precisely when to declare a match. Although fingerprint experts insist that a qualified expert can infallibly know when two fingerprints match, there is, in fact, no carefully articulated protocol for ensuring that different experts reach the same conclusion.
Although it is known that different individuals can share certain ridge characteristics, the chance of two individuals sharing any given number of identifying characteristics is not known. How likely is it that two people could have four points of resemblance, or five, or eight? Are the odds of two partial prints from different people matching one in a thousand, one in a hundred thousand, or one in a billion?
No fingerprint examiner can honestly answer such questions, even though the answers are critical to evaluating the probative value of the evidence of a match. Moreover, with the partial, potentially smudged fingerprints typical of forensic identification, the chance that two prints will appear to share similar characteristics remains equally uncertain. The potential error rate for fingerprint identification in actual practice has received virtually no systematic study.
How often do real-life fingerprint examiners find a match when none exists? How often do experts erroneously declare two prints to come from a common source? We lack credible answers to these questions. Although some FBI proficiency tests show examiners making few or no errors, these tests have been criticized, even by other fingerprint examiners, as unrealistically easy. Other proficiency tests show more disturbing results: In one test, 34 percent of test-takers made an erroneous identification.
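To see why test design and sample size matter, here is a minimal Python sketch of how one might put an uncertainty band around an observed proficiency-test error rate. The assumption of 100 test-takers is hypothetical and used only for illustration; the 34 percent figure is the only number taken from the discussion above, and the calculation says nothing about how realistic or representative the test was.

```python
# Illustrative sketch: a 95% Wilson score interval around an observed
# proficiency-test error rate. The trial count below is hypothetical;
# only the 34 percent figure appears in the text above.
from math import sqrt

def wilson_interval(errors: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Return the approximate 95% Wilson score interval for a binomial rate."""
    p = errors / trials
    denom = 1 + z ** 2 / trials
    center = (p + z ** 2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2)) / denom
    return center - half, center + half

if __name__ == "__main__":
    # Hypothetical: 34 erroneous identifications among 100 test-takers.
    low, high = wilson_interval(errors=34, trials=100)
    print(f"observed error rate: 34.0%, 95% CI roughly {low:.1%} to {high:.1%}")
```

Even under these invented assumptions the interval spans a wide range, which is one reason a handful of proficiency tests cannot settle the error-rate question.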
Especially when an examiner evaluates a partial latent print—a print that may be smudged, distorted, and incomplete—it is impossible on the basis of our current knowledge to have any real idea of how likely she is to make an honest mistake.
Indeed, it is a violation of fingerprint examiners' professional norms to testify about a match in probabilistic terms. This is truly strange, for fingerprint identification must inherently be probabilistic.
The right question for fingerprint examiners to answer is: How likely is it that any two people might share a given number of fingerprint characteristics? However, a valid statistical model of fingerprint variation does not exist. Without either a plausible statistical model of fingerprinting or careful empirical testing of the frequency of different ridge characteristics, a satisfying answer to this question is simply not possible.
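To make concrete what such a model would have to supply, here is a deliberately naive Python sketch. It assumes that each of n compared ridge characteristics coincides between two unrelated prints independently, with some fixed probability q; both n and q are invented for illustration, and real ridge features are neither independent nor equally probable, which is precisely the gap the passage describes.

```python
# Toy "random match" model, for illustration only: it assumes n compared
# characteristics agree independently with probability q between two
# unrelated prints. Neither assumption is validated for real fingerprints.
from math import comb

def prob_at_least_k_matches(n: int, k: int, q: float) -> float:
    """P(at least k of n independent characteristics coincide by chance)."""
    return sum(comb(n, j) * q ** j * (1 - q) ** (n - j) for j in range(k, n + 1))

if __name__ == "__main__":
    # Hypothetical numbers: 12 observable characteristics, each with a
    # 10 percent chance of coinciding by accident.
    for k in (4, 5, 8):
        p = prob_at_least_k_matches(n=12, k=k, q=0.10)
        print(f"chance of >= {k} coincidental points under the toy model: {p:.2e}")
```

The point of the sketch is not its numbers, which are arbitrary, but that answering the question at all requires empirically validated values for the frequency and independence of ridge characteristics, and those do not yet exist.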
Thus, when fingerprint experts claim certainty, they are clearly overreaching, making a claim that is not scientifically grounded. Even if we assume that all people have unique fingerprints (an inductive claim, itself impossible to prove), this does not mean that the partial fragments on which identifications are based cannot sometimes be, or appear to be, identical.
Defenders of fingerprint identification emphasize that the technique has been used, to all appearances successfully, for nearly a century by police and prosecutors alike. If it did not work, how could it have done so well in court? Even if certain kinds of scientific testing have never been done, they argue, the technique has been subject to a full century of adversarial testing in the courtroom.
But the history of fingerprinting suggests that, without genuine adversarial testing, limitations in the research and problematic assumptions may long escape the notice of experts and judges alike. First, until very recently, fingerprinting was challenged in court only infrequently. Though adversarial testing was available in theory, in practice defense experts in fingerprint identification were almost never used.
Most of the time, experts did not even receive vigorous cross-examination; instead, the accuracy of the identification was typically taken for granted by prosecutor and defendant alike.
Second, as Judge Pollak recognized in his first opinion in Llera Plaza, adversarial testing through cross-examination is not the right criterion for judges to use in deciding whether a technique has been tested under Daubert. The point is that we cannot say, on the basis of what is presently known, whether fingerprint identification is reliable, except to say that its reliability is surprisingly untested. It is possible, perhaps even probable, that meaningful proficiency tests that actually challenge examiners with difficult identifications, more sophisticated efforts to develop a sound statistical basis for fingerprinting, and additional empirical study will combine to reveal that latent fingerprinting is indeed a reliable identification method.
But until this careful study is done, we ought, at a minimum, to treat fingerprint identification with greater skepticism, for the gold standard could turn out to be tarnished brass. Recognizing how much we simply do not know about the reliability of fingerprint identification raises a number of additional questions.
First, given the lack of information about the validity of fingerprint identification, why and how did it come to be accepted as a form of legal evidence? Second, why is it being challenged now?
Fingerprint evidence was accepted as a legitimate form of legal evidence very rapidly, and with strikingly little careful scrutiny. Consider, for example, the first case in the United States in which fingerprints were introduced in evidence: the trial of Thomas Jennings for the murder of Clarence Hiller.
The defendant was linked to the crime by some suspicious circumstantial evidence, but there was nothing definitive against him. However, the Hiller family had just finished painting their house, and on the railing of their back porch, four fingers of a left hand had been imprinted in the still-wet paint. The prosecution wanted to introduce expert testimony concluding that these fingerprints belonged to none other than Thomas Jennings.
The judge allowed the expert testimony, and Jennings was convicted. The defendant argued unsuccessfully on appeal that the prints were improperly admitted. What was striking in Jennings, as well as in the cases that followed it, is that courts largely failed to ask any difficult questions of the new identification technique.
Just how confident could fingerprint identification experts be that no two fingerprints were really alike? How often might examiners make mistakes? How reliable was their technique for determining whether two prints actually matched?
How was forensic use of fingerprints different from police use? The Jennings decision proved quite influential. In the years following, courts in other states admitted fingerprints without any substantial analysis at all, relying instead on Jennings and other cases as precedent.
From the beginning, fingerprinting greatly impressed judges and jurors alike. Experts showed juries blown-up visual representations of the fingerprints themselves, carefully marked to emphasize the points of similarity, inviting jurors to look down at the ridges of their own fingers with new-found respect. The jurors saw, or at least seemed to see, nature speaking directly. Moreover, even in the very first cases, fingerprint experts attempted to distinguish their knowledge from other forms of expert testimony by declaring that they offered not opinion but fact, claiming that their knowledge was special, more certain than other claims of knowledge.
But they never established conclusively that all fingerprints are unique or that their technique was infallible even with the less-than-perfect fingerprints found at crime scenes. In all events, just a few years after Jennings was decided, the evidential legitimacy of fingerprints was deeply entrenched, taken for granted as accepted doctrine. The judicial habit of relying on precedent created a snowballing effect: once a number of courts accepted fingerprinting as evidence, later courts simply followed their lead rather than investigating the merits of the technique for themselves. Why, though, was fingerprinting accepted so rapidly and with so little skepticism?
First, fingerprinting, with its claim that individual distinctiveness was marked on the tips of the fingers, had inherent cultural plausibility. The notion that identity and even character could be read from the physical body was widely shared, both in popular culture and in certain more professional and scientific arenas. Similarly, Lombrosian criminology and criminal anthropology, influential around the turn of the century, held as a basic tenet that born criminals differed from normal, law-abiding citizens in physically identifiable ways.
The idea that the tips of the fingers bore minute patterns, fixed from birth and unique to their carrier, made cultural sense; it fit with the order of things. One could argue, from the vantage point of a century of experience, that the reason fingerprinting seemed so plausible at the time was that its claims were true, rather than that it fit within a particular cultural paradigm or ideology. Fingerprinting also offers a benefit outside criminal justice contexts: it provides an emergency means of identification.
In many unfortunate cases in which a body is discovered, fingerprints have confirmed the deceased's identity. Evidence collectors also hunt for fingerprints at crime scenes to identify potential suspects.
Depending on the crime and the scene, thousands of fingerprints may be present, and most may be degraded and unusable. Police, detectives, and prosecutors may claim that a suspect's fingerprints match fingerprints found at a crime scene, but defense attorneys know that a "match" may not be absolute. That uncommon knowledge is why suspects should remain silent and obtain representation.
As formidable lawyers who protect the rights of the accused, we step in when pressure makes the accused feel guilty before anything has been proven.