A Leading Cause for Wrongful Convictions: Experts Overstating Forensic Results

More than 150 men and women in American prisons were exonerated in 2018, according to a recent report by a registry that tracks wrongful convictions. Combined, these individuals spent more than 1,600 years in prison, a record for the database, which has data back to 1989.

The leading culprit in convicting innocent people was official misconduct, according to the report by the National Registry of Exonerations. Nearly one third of these cases involved a police corruption scheme in Chicago through which a police officer framed individuals on drug charges.

Another prominent factor in wrongful convictions across the country was misleading forensic evidence. A close look at these cases reveals how experts in fields like hair analysis, bite marks and DNA analysis have used exaggerated statistical claims to bolster unscientific assertions.

Once experts meet the qualifications to take the stand in a courtroom, there are few limits on the words that come out of their mouths.

“An expert can say whatever they want,” said Simon Cole, the director of the registry and a professor of Criminology, Law and Society at the University of California, Irvine.

That includes offering up invented odds like “one in a million” or “1 in 129,600,” the registry says.

“A lot of the problem with forensic testimony is that the diagnosticity is overstated,” said Barbara O’Brien, a professor at the Michigan State University College of Law and author of the report. A hair sample at the crime scene that resembles a suspect’s hair “gets dressed up with this scientific certainty that isn’t justified,” she said.

Here are three examples from the study’s case files.

The tool: microscopic hair comparison

In 2013, the F.B.I. reported that testimony asserting that microscopic hair comparison could produce a “match” between two hairs was scientifically invalid.

Four years later, a man named Glenn Payne was still grappling with the consequences of three sets of misleading odds. In 1990, when he was 28, he was charged with sexually abusing his 2-year-old neighbor. Upon arrest, Mr. Payne was asked to disrobe over a sheet of butcher paper; a hair was left behind on it. Investigators located a second hair on a tablecloth draped over the girl.

In court, a lab analyst testified that the hair on the butcher paper had a 1 in 2,700 chance of matching someone other than the victim, and the hair on the tablecloth had a 1 in 48 chance of belonging to someone other than Mr. Payne. He then multiplied these figures together to arrive at a “1 in 129,600” chance that the combined match was a random occurrence.
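
The analyst's arithmetic itself checks out, but combining the two figures this way silently assumes the two hair matches were statistically independent events, an assumption the testimony never justified. A minimal sketch, using only the numbers from the testimony:

```python
# Reproduce the analyst's combined figure from the two separate odds.
# Multiplying per-event probabilities is valid ONLY if the events are
# statistically independent -- the unstated assumption in the testimony.
p_paper = 1 / 2_700  # claimed chance the butcher-paper hair matched someone other than the victim
p_cloth = 1 / 48     # claimed chance the tablecloth hair matched someone other than Mr. Payne

combined = p_paper * p_cloth
print(f"1 in {round(1 / combined):,}")  # -> 1 in 129,600
```

The multiplication reproduces the courtroom number exactly, which is the point: the figure looks precise while resting on an unexamined independence assumption and on the two underlying odds, which the analyst later conceded were invalid.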

In 2017, lawyers who were reinvestigating the case reached out to the analyst. He acknowledged that the statistical evidence was invalid. He said he should have indicated “that the hair sample found on the defendant could have come from the victim, and the hair sample found on the tablecloth used to cover the victim could have come from the defendant.”

A new medical report also suggested that the charges were a product of a misunderstanding. The little girl wasn’t suffering from abuse, it concluded: She had a strep infection.

The tool: bite mark matching

Ms. O’Brien said bite mark analysis was even more bogus than hair comparisons. Often you can’t even tell if a wound is a bite mark, she said. “It doesn’t even get past the barest suggestion of scientific reality.”

This pseudoscience cost Steven Chaney decades of his life. In 1987, Mr. Chaney was charged with murdering a couple who sold him drugs.

At trial, a medical consultant testified that he’d compared a wax model of Mr. Chaney’s mouth to a mark on the male victim’s arm. Mr. Chaney’s upper and lower arches “matched” the bite, he said, adding that “only one in a million” people could have made that impression.

In 2018, an appeals judge concluded that “scientific knowledge underlying the field of bite mark comparisons has evolved” since Mr. Chaney’s trial “in a way that contradicts the scientific evidence relied on by the State at trial.”

Though this was an extreme example, Mr. Cole said, exaggerated odds are common. “Often they are just saying this person is the source of the bite mark or it’s practically impossible that they are not the source of the bite mark,” he said.

He remains concerned that, even though this type of analysis has been widely disavowed by forensic scientists, “not one court in the entire United States has said that bite mark evidence shouldn’t be admissible in court.”

The tool: touch DNA amplification

DNA evidence analysis continues to be far more scientifically respected than the older methods of matching hair samples and bite marks, but the case of Mayer Herskovic is a reminder of how testimony about genetic odds can be misleading in court.

In 2013, as a group of men was attacking a victim, an assailant grabbed the victim’s shoe and flung it onto a nearby roof. The genetic sample collected from the shoe was too small to be useful.

But the Office of the Chief Medical Examiner in New York had developed software that it claimed could amplify samples. At trial, an expert testified that the probability that the shoe sample contained Mr. Herskovic’s DNA was 133 times greater than the likelihood that it came from an unknown person.
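
The “133 times greater” figure is a likelihood ratio, and it is easily misheard as the odds that the DNA was Mr. Herskovic’s. Under Bayes’ rule, a likelihood ratio only scales whatever prior odds one starts with. A sketch with a purely hypothetical prior (our assumption for illustration, not a figure from the case) shows how modest the resulting probability can be:

```python
# A likelihood ratio (LR) updates prior odds; it is not itself a probability
# of guilt. Odds form of Bayes' rule: posterior_odds = LR * prior_odds.
lr = 133.0  # the ratio testified to at trial

# Hypothetical prior: suppose 1 in 10,000 people could plausibly have
# touched the shoe (illustrative assumption only, not from the case).
prior_odds = 1 / 10_000

posterior_odds = lr * prior_odds
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability: {posterior_prob:.1%}")  # about 1.3% under this prior
```

Treating the likelihood ratio itself as the odds of a match is a version of the prosecutor’s fallacy; the same ratio of 133 can imply near-certainty or near-irrelevance depending entirely on the prior.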

He was convicted. Two years later, a higher court concluded that the expert witness had oversold this newfangled technique. Mr. Herskovic was exonerated. And the medical examiner’s office abandoned the amplification tool.
