This web site gives the opinions of Dr. Greg Kane. Everything you read here is expressed only as my personal opinion.

© 2008 Nothing here may be reproduced without written permission; Trial Talk articles and raw study data excepted.

"[I]nformation underlying an officer’s decision is not documented and cannot be examined"
Colorado SFST Validation Study, official report

"Using" Threats?
Elsewhere:

Read NHTSA contract scientist Dr. Stuster's defense against my analysis of NHTSA validation science's "using" flaw.

NHTSA SFST VALIDATION THEORY imagines that SFST validation studies prove SFSTs are "extremely accurate" measures of Blood Alcohol Concentration. The elements of SFST validation theory are...

1. Fact: Validation study officers "using SFSTs" were about 90% accurate in their arrest and BAC estimate decisions.

2. Assumption: Officer accuracy is assumed to be entirely due to the SFSTs the officers "used."

(San Diego validation study author Dr. Stuster's threatening (as I read it) email to me indicates he believes officers using breathalyzers should also be counted as "using SFSTs". Read Dr. Stuster on this point.)

3. Conclusion: The SFSTs done on defendants in current DUI prosecutions reproduce the 90% accuracy of the validation study officers.

The trouble is that element two, the assumption, is silly. Every one of the NHTSA's own validation study reports admits it is false:

Colorado Validation Study
"[D]ecisions to arrest or release were based on performance of those tests together with observations of the driving pattern and the driver’s behavior and appearance. Some of the information underlying an officer’s decision is not documented and cannot be examined..."[emphasis mine]

Marcelline Burns and Ellen Anderson, A Colorado Validation Study of the Standardized Field Sobriety Test (SFST) Battery, 1995, Section V-E

San Diego Validation Study
"It is unknown why the officers did not follow the test interpretation guidelines in these two cases.... Similarly, in seven of the false positive cases listed previously in Table 6 officers apparently [!!??] did not follow the test interpretation guidelines..." [emphasis mine]

Jack Stuster and Marcelline Burns, Validation of the Standardized Field Sobriety Test Battery at BACs Below 0.10 Percent, 1998, Page 20

Florida Validation Study
"These incorrect releases included one person with a nystagmus score of six, one with a score of five, and five with a score of four. Why these individuals were released is unknown…"[emphasis mine]

Marcelline Burns, A Florida Validation Study of the Standardized Field Sobriety Test (S.F.S.T.) Battery, 1997, Page 17

So, every validation study admits officers did not base their decisions entirely on SFSTs, and every validation study fails to measure how much difference that makes to the study's results. In fact, validation studies fail to measure whether officers actually "use SFSTs" at all.

Using the previously unpublished San Diego validation study data, I've done the calculation. Officer accuracy and SFST accuracy are different. Officers were 90% accurate. SFSTs were only 78% accurate. (A coin toss is 50% accurate.)

The accuracy statistic is flimflam. Let's look at the real scientific accuracies. Using the standard scientific accuracy called specificity (accuracy on innocent people), officers were 71% accurate. On innocent people the SFST was only 29% accurate, worse than a coin toss.
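For readers who want the arithmetic behind those terms, here is a minimal sketch of how the two statistics are computed from a 2x2 contingency table. The code is mine, written for illustration; only the percentages quoted on this page come from the study data.

```python
# Standard definitions, sketched in Python for illustration (not study code).
# For a 2x2 table of decisions vs. true BAC:
#   tp = impaired drivers correctly called impaired  (true positives)
#   fp = innocent drivers wrongly called impaired    (false positives)
#   tn = innocent drivers correctly called innocent  (true negatives)
#   fn = impaired drivers wrongly called innocent    (false negatives)

def overall_accuracy(tp, fp, tn, fn):
    """The 'accuracy' statistic the validation studies report."""
    return (tp + tn) / (tp + fp + tn + fn)

def specificity(tp, fp, tn, fn):
    """Accuracy on innocent people: the statistic the studies leave out."""
    return tn / (tn + fp)
```

When most drivers in a sample really are over the limit, overall accuracy is dominated by easy true positives. Specificity isolates the cases that matter to an innocent defendant.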

Here are the contingency tables for officer and SFST results.

[Contingency tables: Officer Accuracy (these results were released by the study) and SFST Accuracy (these results were not released).]

Officer decisions were 90% "accurate." The validation study released this irrelevant and misleading datum.

But the accuracy of the officer decisions on innocent people (aka "specificity") was only 71%. If juries rely on officer decisions, they will wrongly convict 29% of the innocent people who go to trial. The SFST did much worse: its innocent-driver accuracy was only 29%, leading to a false conviction rate of 71%. The SFST study did not release these SFST results.
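To see how a 90% overall accuracy can coexist with a 71% specificity, run the arithmetic on a guilty-heavy sample. The counts below are hypothetical, chosen by me only so the percentages match the ones quoted above; they are not the San Diego cell counts.

```python
# Hypothetical counts (mine, for illustration): 300 drivers, mostly impaired.
tp, fp, tn, fn = 220, 20, 50, 10

overall = (tp + tn) / (tp + fp + tn + fn)  # 270/300 = 0.90 -> "90% accurate"
spec = tn / (tn + fp)                      # 50/70  ~ 0.71 -> 71% on innocent drivers
false_conviction = 1 - spec                # ~0.29 -> 29% of innocents wrongly called impaired

print(f"overall accuracy {overall:.0%}, specificity {spec:.0%}, "
      f"false conviction rate {false_conviction:.0%}")
```

Because most drivers in the sample really are over the limit, the overall figure is dominated by easy true positives; the innocent minority barely moves it. That is why the released 90% number misleads.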


"Authors are expected to provide detailed information about all relevant financial interests and relationships or financial conflicts within the past 5 years and for the foreseeable future (eg, employment/affiliation, grants or funding, consultancies, honoraria, stock ownership or options, expert testimony, royalties, or patents filed, received, or pending), particularly those present at the time the research was conducted and through publication, as well as other financial interests (such as patent applications in preparation), that represent potential future financial gain."

Journal of the American Medical Association, Instructions for Authors, 2008, pg 2; also in JAMA, July 2, 2008, Vol 300, No. 1

I don't know why NHTSA SFST validation studies did not include this damaging information. Dr. Stuster's threatening (as I read it) email to me says:

San Diego validation study author Dr. Stuster responds:
"I was NOT paid to discover that the SFSTs were accurate and I am offended by your libelous statements. I was paid to conduct a field study and to analyze and present the results. I have reported unwelcome results on many occasions when the data do not support a hypothesis and was under no obligation to perform my work differently during this study. I am angered by your unfounded accusations concerning my integrity."

I replied to Dr. Stuster asking why his SFST validation study report does not reveal the accuracy of the SFST itself. So far Dr. Stuster hasn't responded.

I never have and am not now saying the NHTSA or its contractors are deliberately deceptive. I never have and am not now saying all the money NHTSA paid its contractors, year after year, study after study, in any way influenced those studies' never-peer-reviewed, always-SFST-favorable reports. I do not know, I do not care, and I do not have an opinion about what they knew or didn't know, or did or did not intend. Nothing here is a statement about the knowledge or intentions of the NHTSA or its contractors. This web site is not about the NHTSA or its contractors. This web site is about the science of SFST validation theory.

Data prove validation study officers did SFSTs, but did not use SFSTs.
A quick look at the raw validation study data proves that officers did not, and could not even if they had wanted to, base their BAC estimates on standardized SFST interpretation criteria. Here's how we know...

1. Officers' estimates were more precise than SFST criteria allow.
In the San Diego study thirteen drivers failed the HGN test and passed both the OLS and WAT tests. These are their SFST results, and the officer's estimate of each driver's BAC.

Notice these thirteen drivers had identical SFST scores. According to the standardized FST interpretation criteria, each driver should have had a BAC estimate of ">=0.08". Instead, officers came up with nine different BAC estimates.

What's more, instead of the SFST's standardized BAC estimates, "<0.08" or ">=0.08", officers were somehow able to estimate BAC levels to 1 part in 100. There were then, and are now, no standardized FST interpretation criteria for estimating BAC to 1 part in 100.

Officers did not, and could not even if they had wanted to, rely on these identical SFST scores to come up with their nuanced, 1-part-in-100 BAC estimates. What's more, the officers somehow knew almost exactly which SFST results to throw out. All these drivers failed the SFST. Yet officers estimated that six of them had BACs in the legal range, flatly contradicting the SFST. Five of those six in-the-legal-range BAC estimates were correct. How'd officers do that? How did officers know almost exactly which SFSTs to ignore?
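How unlikely is that? Here's a back-of-the-envelope check. Suppose, consistent with five of the six legal-range estimates being correct, that 6 of the 13 identically scoring drivers really were in the legal range (the report does not give this split, so the 6/7 division is my assumption). An officer rejecting SFST results at random would match the officers' performance only rarely:

```python
from scipy.stats import hypergeom

# Assumed split (illustration only): of the 13 identically scoring drivers,
# 6 were truly in the legal range and 7 were truly over the limit.
M, n, N = 13, 6, 6  # population, truly-legal drivers, legal-range calls made

# Probability that 6 randomly chosen legal-range calls hit at least 5
# of the 6 truly legal drivers: P(X >= 5) = sf(4).
p = hypergeom.sf(4, M, n, N)
print(f"P(at least 5 of 6 legal-range calls correct by chance) = {p:.3f}")  # ~0.025
```

Under this assumed split, blind rejection gets five or more of six calls right only about 2.5% of the time, and that is for a single cluster of thirteen drivers.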

A detailed look at the SFST / BAC estimate results for one study officer

Seven officers assessed drivers for the San Diego SFST study. This table shows results for one officer, identified in the study as Officer 3661. These are the results for every driver this officer assessed.

Column 1 gives the SFST's BAC prediction, based on standardized SFST interpretation criteria, for drivers assessed by this one officer. The SFST said every driver tested was impaired, regardless of actual BAC.

Column 2 gives the actual driver BAC, simplified to Hi and Lo at BAC 0.04%.

Column 3 gives the officer's BAC estimate, simplified to Hi and Lo at BAC 0.04%.

NOTICE
Every single driver given the SFST failed the SFST.

Every single time the SFST gave the wrong answer, Officer 3661 rejected that answer and correctly estimated the BAC as low.

Officer 3661 never once rejected the SFST when the SFST gave the correct answer.

The probability that this distribution of rejections was random is vanishingly small. Officer 3661 must have used some method other than the SFST for determining BAC level in every case, for every driver.

Same officer, now for BAC 0.08%

NOTICE
Officer 3661's predictions were perfect.

Every single time the SFST gave the wrong answer, Officer 3661 rejected that answer.

Officer 3661 never once rejected the SFST when the SFST gave the correct answer.

The probability that this perfect distribution of rejections was random is vanishingly small. Officer 3661 must have used some method other than the SFST for determining BAC level in every case, for every driver.
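The study published Officer 3661's tables only as images, so the exact cell counts aren't reproduced here, but the "vanishingly small" claim is plain combinatorics: if an officer assessed n drivers, w of whom got a wrong SFST answer, and rejected exactly w results at random, the chance that every rejection lands on a wrong answer is 1 in C(n, w). A sketch with placeholder counts (replace them with the real values from the study tables):

```python
from math import comb

# Placeholder counts (assumptions for illustration); substitute the real
# values from Officer 3661's published tables.
n = 20  # drivers this officer assessed
w = 8   # drivers whose SFST answer was wrong

# If the officer rejected exactly w SFST results at random, only one of the
# comb(n, w) possible rejection patterns lands every rejection on a wrong answer.
p_random = 1 / comb(n, w)
print(f"P(perfect rejection pattern by chance) = {p_random:.2e}")  # ~7.94e-06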

2. Officer 3661 was not alone. The truth is, officers systematically ignored SFST results. Correct results were accepted; incorrect results were rejected.

This graph shows when officer BAC estimates and SFST results agreed and disagreed.
Data are from the San Diego Field Sobriety Test validation study.

EXPLAINING THE GRAPH
This graph shows which drivers' SFST scores were ignored by police officers. Each point represents one driver: FST score on the x-axis, BAC on the y-axis. Drivers above the dark 0.08 line were impaired as a matter of law. Drivers below the dark line were innocent. Open dots and open squares represent drivers whose SFST result, pass or fail, agreed with the officer's BAC estimate.

Every dark square represents a driver whose SFST result was rejected by the officer. Dark squares below the 0.08 line are drivers who failed the SFST but whom the officer correctly assessed as innocent. Dark squares above the line are impaired drivers who failed the SFST but whom the officer incorrectly assessed as innocent. (Squares stack, so you can't count visible squares to get totals. Of 59 false-positive SFSTs, officers rejected 35, or 59%.)

Dark squares below the 0.08 line represent SFST mistakes corrected by the officer.
Dark squares above the 0.08 line represent SFST correct calls mistakenly rejected by the officer.

WHAT THE GRAPH SHOWS
Officers ignored the SFST when it gave the wrong answer, but not when it gave the correct answer. When the SFST gave the wrong answer, officers rejected that wrong answer a whopping 59% of the time. When the SFST gave the correct answer, officers ignored that answer only 2% of the time. This distribution of rejections cannot have happened randomly. Officers systematically ignored the SFST.
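A quick significance check makes "cannot have happened randomly" concrete. The 35-of-59 figure comes from the text above; the report gives only the 2% rejection rate for correct calls, so the 5-of-240 counts below are my assumption for illustration:

```python
from scipy.stats import fisher_exact

# Rejections of wrong vs. correct SFST answers.
table = [
    [35, 59 - 35],   # wrong SFST answers: rejected, accepted (from the text)
    [5, 240 - 5],    # correct SFST answers: rejected, accepted (assumed, ~2%)
]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio ~{odds_ratio:.0f}, one-sided p = {p_value:.1e}")
```

With any counts consistent with the quoted rates, the p-value is astronomically small: officers' rejections tracked the truth, not the SFST.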

The only way officers could have known which SFSTs to ignore and which to accept was to use some other method to assess driver impairment in every case. The data prove FSTs are extremely inaccurate, so inaccurate that officers in the NHTSA's own validation studies simply ignored the test's results.

How SFST validation theory misleads juries
Juries are told SFSTs have the same accuracy as officers in validation studies, because study officers were "using SFSTs." The truth is, officers weren't using SFSTs to make their BAC estimates. The truth is, officer accuracy and SFST accuracy are very different. Here's the truth about how SFST validation theory works:
