By Shuangming Pang
In early June, thousands of Google employees signed a petition urging the company not to renew Project Maven, a Pentagon contract that uses artificial intelligence and image-recognition technology to improve drone strike accuracy.
As a result, Google will not renew the contract when the current deal expires next year. The success of the Google workers' campaign led to similar efforts at Amazon, Microsoft and Salesforce. On July 11, civil libertarians gathered at MIT to debate privacy and facial recognition technology as they relate to tech workers.
Kade Crockford, director of the Technology for Liberty program at the American Civil Liberties Union of Massachusetts, was on the panel. Other speakers included Sasha Costanza-Chock, an author of a recent open letter calling on Microsoft to drop its contract with Immigration and Customs Enforcement; Valeria Do Vale, an undergraduate at Northeastern University; and Ben Tarnoff, a founding editor of Logic magazine.
The speakers argued that law enforcement's use of these technologies, in collaboration with large companies such as Google and Amazon, violates human rights.
Amazon’s facial recognition misidentified 28 members of Congress in ACLU test
In July, the ACLU conducted an experiment with Amazon’s facial recognition program, Rekognition, comparing images of all members of the House and Senate with a public database of 25,000 mugshots. The test incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for a crime, according to an ACLU post.
“To conduct our test,” Jacob Snow, a civil liberties attorney at ACLU wrote in the post, “we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces.”
The ACLU said it used the default threshold setting of 80 percent, meaning the system reports a match whenever it judges two faces to be at least 80 percent similar. “These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance,” Snow wrote.
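The role of the threshold can be seen in a short sketch. This is an illustration of threshold-based match filtering in general, not Amazon's actual Rekognition API; the names and similarity scores below are made up.

```python
# Illustrative sketch: how a similarity threshold filters candidate face matches.
# Real systems such as Rekognition return a similarity/confidence score per
# candidate pair; the scores and identifiers here are hypothetical.

def matches(candidates, threshold):
    """Return the candidates whose similarity score meets the threshold."""
    return [name for name, score in candidates if score >= threshold]

# Hypothetical scores between one probe photo and three database mugshots.
candidates = [("mugshot_041", 96.2), ("mugshot_187", 88.5), ("mugshot_302", 81.0)]

print(matches(candidates, 95))  # only the strongest candidate survives
print(matches(candidates, 80))  # the lower default admits weaker matches too
```

The same probe photo yields one candidate at a 95 percent threshold but three at 80 percent, which is the crux of the dispute over the ACLU's settings.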
However, David Kaye, a law professor at Penn State University, argued that the false-match rate means little, since the ACLU used a low similarity threshold in its test.
Amazon recommends that law enforcement use a threshold of 95 percent or above, and has indicated that a low figure like 80 percent ensures there will be more false matches among so many comparisons.
“The ACLU apparently neglected to adjust the level,” Kaye said, “or, worse, it tried the system at the higher level and chose not to report an outcome that probably would have had fewer ‘causes for concern.’ ”
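The "so many comparisons" point can be made concrete with a back-of-the-envelope calculation. The per-comparison false-positive rate below is hypothetical, chosen only to illustrate the scale effect; the member and mugshot counts come from the ACLU test as described above.

```python
# Back-of-the-envelope: why a large number of comparisons inflates false matches.

members = 535        # House + Senate, as in the ACLU test
mugshots = 25_000    # size of the public mugshot database
comparisons = members * mugshots  # every member against every mugshot

fp_rate = 2e-6       # assumed false positives per comparison (illustrative only)
expected_false = comparisons * fp_rate

print(comparisons)     # 13375000 pairwise comparisons
print(expected_false)  # 26.75 expected false matches even at a tiny rate
```

Even a rate of two in a million per comparison yields dozens of expected false matches across 13 million comparisons, which is why both sides treat the threshold choice as central to interpreting the result.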
The ACLU said the false matches disproportionately affected people of color, including six members of the Congressional Black Caucus.
Kaye added that his observations do not negate the privacy concerns raised by applying facial recognition software to public surveillance systems. “Moreover, I have not discussed the ACLU’s statistics on differences in false-positive rates by race,” Kaye said. “There are important issues of privacy and equality at stake.”
The privacy concerns of using advanced technology
As the capture of a suspect in the decades-old Golden State Killer case using cutting-edge DNA technology signals a trend of wider adoption by law enforcement, civil libertarians argue that such methods violate Fourth Amendment search and seizure rights.
The technique appears scientifically valid in some cases, and to some extent courts have begun to rely on it, Kaye said. “Do we have an expectation of privacy? I think society is probably going to have some struggle adjusting to this idea,” said Frederick Bieber, a Harvard professor of pathology and an expert in DNA analysis.
Crockford argued that, for the moment, people still have privacy.
“The real issue here is the possibility to be free, the possibility to have control of your own life,” Crockford said.