The Federal Trade Commission (FTC) warned that the increasing use of consumers’ biometric information and related technologies, including those powered by artificial intelligence/machine learning, significantly elevates consumer privacy and data security concerns as well as the potential for bias and discrimination, and could be illegal. False or unsubstantiated claims about the accuracy or efficacy of biometric information technologies, or about the collection and use of biometric information, could violate the FTC Act, the agency says.
The notice comes amid a renewed bipartisan push on Capitol Hill to set up a federal commission to regulate the fast-moving AI technology field, and as a progressive policy group calls for President Joe Biden to sign an executive order increasing government oversight of AI development.
“In recent years, biometric surveillance has grown more sophisticated and pervasive, posing new threats to privacy and civil rights,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Today’s policy statement makes clear that companies must comply with the law regardless of the technology they are using.”
Consumers face new risks associated with biometric data, according to the FTC. For example, using biometric data to identify consumers at certain locations could reveal sensitive personal information, such as that they accessed certain types of health care services or attended political meetings. Large databases of biometric data could also be targeted by malicious actors who could misuse the information, and some biometric technologies may have higher rates of error for certain populations than others.
FTC said it is committed to combating deceptive acts and practices related to the collection and use of consumers’ biometric information, including the marketing and use of biometric technologies. There are several factors the agency considers when determining whether biometric companies have violated the FTC Act, including failing to assess foreseeable harms to consumers before collecting biometric information and to address known or foreseeable risks.
Meanwhile, Democratic Sens. Michael Bennet (CO) and Peter Welch (VT) introduced the Digital Platform Commission Act to create an expert federal entity that would provide comprehensive regulation of digital platforms to protect consumers, promote competition and defend the public interest. The new commission would help regulate AI/ML technologies and social media, with tools to develop and enforce rules in the technology sector.
“Big Tech has enormous influence on every aspect of our society, from the way we work and the media we consume to our mental health and wellbeing,” Welch said in a press release. “For far too long, these companies have largely escaped regulatory scrutiny, but that can’t continue.”
Currently, the Department of Justice and the FTC largely regulate technology but, according to Welch, lack the expert staff and resources necessary for robust oversight. Both agencies are also limited by existing statutes to reacting to case-specific challenges raised by digital platforms, when long-term rules would be required to manage these technologies.
Biometric information is data that depicts physical, biological or behavioral traits relating to an identified or identifiable person’s body. For example, facial, iris or fingerprint recognition technologies collect and process biometric information to identify individuals on devices like smartphones.
The FTC says it will consider several factors to determine whether a business’s use of biometric information or biometric information technology could be unfair in violation of the FTC Act, including:
- Failing to assess foreseeable harms to consumers before collecting biometric information.
- Failing to promptly address known or foreseeable risks and identify and implement tools for reducing or eliminating those risks.
- Engaging in surreptitious and unexpected collection or use of biometric information.
- Failing to evaluate the practices and capabilities of third parties, including affiliates, vendors, and end users, who will be given access to consumers’ biometric information or will be charged with operating biometric information technologies.
- Failing to provide appropriate training for employees and contractors whose job duties involve interacting with biometric information or technologies that use such information.
- Failing to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses, in connection with biometric information to ensure that the technologies are functioning as anticipated and that the technologies are not likely to harm consumers.
In addition to regulatory and legislative efforts from government entities, the Center for American Progress is urging the Biden administration to issue an executive order that requires federal agencies to implement the Blueprint for an AI Bill of Rights.
Specifically, CAP is urging the administration to:
- Require federal agencies to adopt and implement the Blueprint for an AI Bill of Rights for their own use of AI and create a White House Council on AI.
- Direct federal agencies and contractors to assess their own automated systems under the National Institute of Standards and Technology’s AI Risk Management Framework.
- Require federal agencies to assess the use of AI in enforcement of existing regulation and address AI in future rulemaking.
- Direct the federal government to prepare a national plan to address the economic impacts from AI, especially job losses.
- Order the national security apparatus to prepare for potential artificial intelligence systems that may pose a threat to the safety of the American people.
“Artificial intelligence (AI) will not change everything overnight, but its public availability is already setting in motion potentially large shifts in many areas of society,” CAP wrote. “Once again, there is a sense of deja vu as a new technology is poised for introduction to a society unprepared for its attendant consequences and without an adequate comprehensive response from the government.”