Privacy groups are demanding transparency after news that ID.me, the biometric authentication system used by the IRS and more than 27 states, has not been fully forthcoming about how its face recognition technology works.
In a LinkedIn post published Wednesday, ID.me founder and CEO Blake Hall said the company checks the selfies of newly enrolling users against a facial database in an effort to minimize identity theft. That contrasts with how ID.me has marketed its biometric products in the past, and it has drawn the attention of critics who argue that members of the public forced to use ID.me for basic government tasks have been given only vague information.
On the company’s website and in white papers shared with Gizmodo, ID.me suggests that its services rely on 1:1 face-match systems, which compare a user’s biometrics against a single document. That stands in contrast to so-called 1:many face recognition systems (the type used by now-notorious companies like Clearview AI), which compare a user against a database of many faces.
Privacy experts generally agree that 1:many systems are more prone to error and bias (though groups like the Electronic Frontier Foundation have raised concerns about 1:1 as well). However, while ID.me primarily presents itself as a 1:1 face-match service, the new comments from the company’s founder show that, at least in some scenarios, the company compares some users’ faces against a database rather than a single document. That potentially affects the millions of Americans being told by federal and state governments to log in through the site to view their taxes online or claim unemployment benefits.
Specifically, ID.me told Gizmodo that it uses 1:many face recognition when users first enroll in its system in order to prevent identity theft, in addition to the 1:1 checks it uses to verify someone’s identity. In other words, ID.me uses 1:1 to make sure you’re you, and 1:many to make sure you’re not also someone else.
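To make the distinction concrete, here is a minimal sketch of the two modes using face embeddings and cosine similarity. This is a generic illustration, not ID.me’s actual pipeline; the function names, the embedding representation, and the threshold value are all assumptions for the example.

```python
# Illustrative sketch of 1:1 verification vs. 1:N identification.
# Faces are represented as embedding vectors; the threshold is arbitrary
# and does NOT reflect ID.me's real system.
import numpy as np

THRESHOLD = 0.8  # similarity cutoff (illustrative only)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(selfie: np.ndarray, document_photo: np.ndarray) -> bool:
    """1:1 -- does the selfie match the single photo on an ID document?"""
    return cosine_similarity(selfie, document_photo) >= THRESHOLD

def identify_1_to_many(selfie: np.ndarray, database: dict) -> list:
    """1:N -- which enrolled faces, if any, does the selfie resemble?"""
    return [user_id for user_id, emb in database.items()
            if cosine_similarity(selfie, emb) >= THRESHOLD]
```

A 1:1 check answers one narrow question per enrollment, while a 1:N search necessarily scans every face already in the database, which is why the latter raises broader accuracy and privacy concerns.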
The disclosure of ID.me’s use of 1:many face recognition has drawn immediate criticism from a range of privacy groups. One of them, the nonprofit digital rights organization Fight for the Future, released a statement accusing the company of “lying about the scope of its face recognition surveillance.” In an emailed statement, Fight for the Future campaign director Caitlin Seeley George said the findings should force government agencies to reconsider their partnerships with ID.me.
“The IRS must immediately halt its plan to use face recognition verification, and all government agencies should end their contracts with ID.me,” Seeley George wrote. “We also believe Congress should investigate how this company managed to win these government contracts, and what other lies it may be promoting.”
They were not alone. In an interview with Gizmodo, ACLU senior policy analyst Jay Stanley expressed deep concern over what he described as ID.me’s lack of transparency, especially given its close relationship with government services.
“The fact that they [ID.me] were not transparent about it is just another sign that we are making important policy about how Americans interact with their government by letting private companies invent things in secret,” Stanley said. “If this company were a government agency, it would be subject to FOIA and the Privacy Act and the other checks and balances that have evolved over the decades to prevent exactly these kinds of problems.”
Stanley also expressed concern about the database ID.me maintains to prevent fraud, and about whose faces might end up in it.
Meanwhile, in an email to Gizmodo, the Surveillance Technology Oversight Project (STOP), which previously raised concerns about ID.me’s relationship with the IRS, echoed Stanley’s concerns about transparency and warned that news of ID.me using 1:many face recognition means the system could be more vulnerable to bias than previously known.
“This dramatically expands the risk of racial and gender bias on the platform,” STOP executive director Albert Fox Cahn told Gizmodo. “More importantly, we need to ask why Americans should entrust our data to this company if it isn’t honest about how our data is used. The IRS should not give any company this much power to decide how our biometric data is stored.”
In subsequent statements, ID.me reiterated that it checks newly enrolling users against its own selfie database “to check for prolific attackers and members of organized crime who are stealing multiple identities.” The company says fewer than 0.1% of all users are flagged as potential identity thieves. If a user is flagged by the face recognition system, they are not blocked outright but are redirected to a video chat verification with one of the company’s team members.
“Without this control to detect repeat attackers, criminals would victimize thousands of innocent people a day,” ID.me said. “Given the threat environment, the alternative is either to accept enormous amounts of fraud or to simply shut the programs down altogether.”
News of ID.me’s face recognition database comes a week after Gizmodo and other outlets wrote about the IRS’s decision to require ID.me’s verification process for anyone trying to access their IRS.gov account. Since then, a number of advocacy groups, including the ACLU and STOP, have publicly spoken out against the requirement.
The issue also caught the attention of Democratic Senator Ron Wyden. In a tweet, Wyden said he was “very disturbed” that some taxpayers may feel they need to undergo a face recognition scan. “While e-filing remains unchanged, I am asking the IRS to be more transparent about this plan.”
Although this particular controversy is narrowly focused on ID.me, Stanley, the ACLU analyst, said the transparency issues it highlights are evidence of a broader system in need of top-down reform.
“Having the infrastructure here be a for-profit company doing what is arguably an essential function of government [verifying identities] is a broken way to build this kind of authentication system,” Stanley said.