
the coming world of automated mass anti-terror false positives

Man sues RMV after driver’s license mistakenly revoked by automated anti-terror false positive:

John H. Gass hadn’t had a traffic ticket in years, so the Natick resident was surprised this spring when he received a letter from the Massachusetts Registry of Motor Vehicles informing him to cease driving because his license had been revoked. […] After frantic calls and a hearing with Registry officials, Gass learned the problem: An antiterrorism computerized facial recognition system that scans a database of millions of state driver’s license images had picked his as a possible fraud. “We send out 1,500 suspension letters every day,” said Registrar Rachel Kaprielian. […] “There are mistakes that can be made.”

See also this New Scientist story, which notes that such systems are pretty widespread:

Massachusetts bought the system with a $1.5 million grant from the Department of Homeland Security. At least 34 states use such systems, which law enforcement officials say help prevent identity theft and ID fraud.

In my opinion, this kind of trial by inaccurate, false-positive-prone algorithm is one of the most worrying things about the post-PRISM world.

When we created SpamAssassin, we were well aware of the risk of automated misclassification. Any machine-learning classifier will make mistakes. The key is to carefully calibrate the false-positive/false-negative trade-off, so that the rate of each kind of error is proportionate to the damage it causes: the more harmful a false positive is, the rarer it needs to be.
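To make that calibration step concrete, here is a minimal Python sketch. It is not SpamAssassin's actual implementation; the function name, the score convention and the 0.1% default budget are all assumptions invented for this example. It picks a decision threshold on a labelled validation set so that the false-positive rate stays within a chosen budget:

    def pick_threshold(scores, labels, max_fp_rate=0.001):
        """Choose a decision threshold whose validation false-positive
        rate stays within max_fp_rate.

        scores: classifier scores, higher means "more likely spam/fraud"
        labels: True for genuine positives, False for genuine negatives
        """
        # Scores of the genuine negatives (the innocent/ham cases), highest first.
        negative_scores = sorted(
            (s for s, is_positive in zip(scores, labels) if not is_positive),
            reverse=True,
        )
        # At most this many negatives may be wrongly flagged.
        allowed_fps = int(max_fp_rate * len(negative_scores))
        if allowed_fps >= len(negative_scores):
            return min(scores)  # budget so loose that any threshold will do
        # Flag only items scoring strictly above this value, so at most
        # allowed_fps negatives can exceed it.
        return negative_scores[allowed_fps]

    # Tiny usage example with made-up scores and labels.
    scores = [0.1, 0.2, 0.35, 0.4, 0.8, 0.9, 0.95]
    labels = [False, False, False, False, True, True, True]
    threshold = pick_threshold(scores, labels, max_fp_rate=0.25)
    flagged = [s > threshold for s in scores]

The point of the exercise is that the false-positive budget is a choice you make up front, based on how bad a false positive is for the person on the receiving end.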

These anti-terrorism machine learning systems are calibrated to catch as many potential cases as possible, but by driving false negatives down that far, they become wildly prone to false positives. And when they're applied as a dragnet across all citizens' interactions with the state — or even, in the case of PRISM, all citizens' interactions that can be surveilled en masse — they're going to create buckets of bureaucratic false-positive horror stories, as innocent citizens are incorrectly tagged as criminals thanks to software bugs and poor calibration.
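To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The population size, the number of real fraud cases and the error rates below are invented purely for illustration; they are not figures from the Registry's system.

    # All figures here are assumptions chosen for illustration only.
    population = 4_500_000        # licence photos scanned by the dragnet
    real_fraud_cases = 100        # actual fraudsters hiding in that population
    false_positive_rate = 0.0003  # 0.03% of innocent people wrongly flagged
    false_negative_rate = 0.05    # 5% of real fraudsters missed

    innocent_flagged = (population - real_fraud_cases) * false_positive_rate
    fraudsters_caught = real_fraud_cases * (1 - false_negative_rate)

    print(f"innocent people flagged: {innocent_flagged:,.0f}")   # ~1,350
    print(f"real fraudsters caught:  {fraudsters_caught:.0f}")   # ~95
    print(f"flags that are wrong:    "
          f"{innocent_flagged / (innocent_flagged + fraudsters_caught):.0%}")  # ~93%

With a dragnet that large, even a seemingly tiny per-person error rate means the overwhelming majority of the people flagged are innocent.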
