If there is a human in the loop, how can it be worse than the status quo in accuracy? And since policing is overall a benefit and necessary for a functioning safe society, making it cheaper while maintaining the same accuracy rate seems like a win. A nonzero rate of false positives or rare human mistakes isn’t a reason to reject this technology.
Yes, and the problem didn't start with facial scanning.
> This has become a growing problem for decision making as intensive care units, nuclear power plants, and aircraft cockpits have increasingly integrated computerized system monitors and decision aids
There was an incident in 1988 where a US Navy cruiser, the USS Vincennes, shot down an Iranian airliner (Iran Air Flight 655) even though the screen showed it was gaining altitude and squawking a commercial transponder code; the sailors fired anyway, assuming it was going to fire missiles at them.
I’m very familiar with humans in the loop from my naval experience and agree fully that it’s hard to question the computer.
There is also no incentive for the human in the loop to question the computer. If the computer was right and the human overrides it as a false positive, the human will be blamed later.
Whereas if you just rubber-stamp what the computer said, you're in the clear no matter what.
> Other than a photo lineup, the detective did no other investigation...he relied on [face recognition], believing it must be right. That's the automation bias that has been referenced in these sessions.
...not just a problem in law enforcement. Pilots may rely too heavily on a plane's gauges, for example.