The Benefits and Risks of Facial Recognition
Who has your photo and how are they using it?
All levels of government operate facial recognition systems, which is a growing concern because no consistent policy exists to regulate the activity. And to cut to the chase: these systems can be unintentionally biased and carry other serious risks.
A category of biometric security, facial recognition uses artificial intelligence to detect faces, analyze them, convert the image to data and find a match. The information can be applied in various ways, such as recognizing an approved user, finding a missing person, identifying a criminal suspect, tracking attendance, monitoring health conditions, creating more targeted marketing and more.
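The matching step described above can be sketched in code. This is a minimal illustration, not any vendor's actual implementation: it assumes a real system has already detected and aligned a face and converted it to a numeric "embedding" vector, and that matching means finding the most similar stored embedding above a confidence threshold. All names and values here are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Return the label of the gallery embedding most similar to the
    probe, or None if no similarity clears the threshold."""
    best_label, best_score = None, threshold
    for label, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

The threshold is the crux: set it low and the system returns more wrong matches (a stranger identified as you); set it high and it rejects more correct ones (you locked out of your own device).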
Here are a few examples in use today. Some local law enforcement agencies use facial recognition to identify suspects. The technology is widespread in the federal government too. The Department of Homeland Security uses biometric identification to evaluate people at border crossings. The FBI allows for broad matching capability in criminal investigations. The Department of Energy uses facial recognition to control entry to and exit from secure locations. The Department of Justice uses it to prevent exploitation of children and to address human trafficking.
Those are good purposes, right?
The problem is, algorithms have more difficulty discerning darker-skinned faces than lighter-skinned ones. At the extremes, algorithms can recognize a white adult male far more easily than a Black adult female. This may be just a minor annoyance if it compels someone to enter a backup password when a device does not recognize their face. It gets more serious when law enforcement matches an identity incorrectly and someone is wrongfully detained, or a person is barred from entering a secure area they actually have authorized access to.
Despite growing opposition, nearly all areas of government are already using facial recognition, and plan to increase their use, according to a report released in August 2021 from the U.S. Government Accountability Office.
This is despite a strong request from the U.S. Technology Policy Committee of the Association for Computing Machinery to stop using facial recognition systems immediately. However, the advice of the world's largest educational and scientific computing society, along with that of other notable groups such as the ACLU and the American Library Association, has largely been ignored.
Here are the key issues:
- A lack of standards and regulations for how facial recognition technology can be used. A bill has been introduced in Congress, and some cities and states have created their own limiting legislation, but nothing comprehensive or widespread has been put in place.
- Wildly inadequate privacy laws to prevent collection of our personal images in the first place.
- A proven difference in accuracy for facial recognition systems in identifying persons of different ethnic groups, causing racial disparity. Systems are less accurate at identifying darker-skinned persons than lighter-skinned ones.
- Varying degrees of accuracy between different vendors' facial recognition systems. For every statistic touting 99-point-something percent accuracy, there are systems delivering half that. Amazon, IBM and Microsoft stopped selling facial recognition systems to police departments, but lower-tier and sometimes far less accurate systems remain available.
- Other human characteristics, such as age, disease and disability, that studies show also produce inconsistent results, adding bias and risk for certain people.
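To see why the accuracy gaps listed above matter, it helps to look at how auditors quantify them: error rates are computed separately for each demographic group and then compared. The sketch below uses invented numbers purely for illustration; real audits, such as NIST's vendor tests, report rates measured on large labeled datasets.

```python
# Hypothetical audit results: how often the system failed to match a
# person to their own enrolled photo, broken out by group.
results = {
    "group_a": {"trials": 1000, "false_non_matches": 5},
    "group_b": {"trials": 1000, "false_non_matches": 48},
}

def false_non_match_rate(stats: dict) -> float:
    """Fraction of genuine comparisons the system wrongly rejected."""
    return stats["false_non_matches"] / stats["trials"]

rates = {group: false_non_match_rate(s) for group, s in results.items()}

# Disparity ratio: how many times more often the worst-served group
# experiences an error than the best-served group.
disparity = max(rates.values()) / min(rates.values())
```

A system can advertise high overall accuracy while one group experiences errors many times more often than another; the headline number hides the disparity ratio.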
No one asked our permission
Granted, I am still bothered by someone calling me by name when I phone a business because they’ve read my caller ID. So, you can imagine the concern when realizing that companies are actively scouring social media, Venmo, news articles and other sources, cataloging a file of images of me (and you) without notice.
Last year, a girlfriend sent me a photo of someone she saw in a restaurant who looked just like me. In fact, she was so convinced it was me that she introduced herself and asked to take the woman's picture. It wasn't me, but she did look just like me—same approximate age, body size, hair color, even from the Pacific Northwest like me (but I would never have worn that dress, lol). It was like looking at my own face. If my doppelganger commits a heinous crime, I could be in trouble!
When we are so rightly concerned about our privacy online, where are the rules regarding collection and use of personal data to be used for facial recognition systems?
Clearview AI is one such company that has collected a massive amount of photo data from many different applications, vendors and systems. You have no way of knowing whether Clearview, or another vendor, has a file on you. You don't know how they obtained the data, how it is stored, whether it is secured against hacks, or with whom it is shared.
It is not just the end user—such as law enforcement—that needs regulation in their use of the technology, but also the companies collecting and holding the data. How do we hold them accountable when the system is wrong and causes harm?
The public interest
Getting back to the Association for Computing Machinery’s report, the contributors sum up that facial recognition systems have the potential to cause significant harm to people through errors and racial bias.
It is not likely in our Information Age that we can stop forward progress on development of facial recognition and other invasive technology. In the words of the Center for Strategic and International Studies on the matter, “Government action should be calculated to address the risks that come from where the technology is going, not where it is currently.”
Until the technology is better understood and safeguards are in place, halting government use of facial recognition systems, especially by law enforcement, is gaining widespread support.