Wichita State researcher working on alternative to facial recognition technology

“Did you run facial recognition yet?” Tony Stark asks the artificial intelligence powering his Iron Man suit as he receives a photograph from police in the Marvel superhero blockbuster “Captain America: Civil War.”

These days, Stark is not the only one asking that. You can use facial recognition to unlock your smartphone or computer. Airport security can use it to verify the identity of travelers. Police can use it to identify suspects.

But with common use come concerns that range from misidentification to fabricated media that creates false perceptions.

“Facial recognition is a tool that, if used properly, can greatly enhance law enforcement capabilities and protect public safety, but if used carelessly and improperly, may negatively impact privacy and civil liberties,” Kimberly J. Del Greco, deputy assistant director of the FBI’s Criminal Justice Information Services Division, told a House oversight committee in 2019.

As the market for such technology continues to grow — it’s expected to be worth $7 billion by 2024 — various efforts seek to create laws, squelch misleading uses and find more accurate ways to identify people.

One such effort is at Wichita State University, where a researcher is working on a biometric alternative intended to address possible bias from facial recognition.

Ajita Rattani, an assistant professor of electrical engineering and computer science at WSU, recently won a $200,000 grant from the National Science Foundation to investigate whether ocular recognition — or eye scans — can identify people equally well, regardless of race or gender.

How facial recognition works

Although other biometric methods such as ocular recognition or fingerprint recognition might be more accurate, facial recognition is of more interest because it is easy for humans to check the computer’s work, Rattani said.

“If your system is processing my image, there can be [a] human supervisor as well. He could also verify [my image], but he could not verify two fingerprint patterns by looking at that,” Rattani said.

To recognize a face, the computer algorithm first needs to find any faces in the image and then determine whose face it is, said Vir Phoha, a professor of electrical engineering and computer science at Syracuse University.

There are two main methods for doing that. The first measures distances between key parts of the face. When the computer encounters a new face, it records those measurements and searches its database of known faces to try to find a match.

Alternatively, the computer can use a standard face and record variations from that standard face. It then uses those variations to recognize individual faces.

Rattani described the process as storing a “template” of your face and the computer matching the faces it sees to the templates it has.
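
To make that concrete, here is a minimal Python sketch of template matching, assuming some upstream step has already detected each face and reduced it to a fixed-length numeric template. The names, the 128-number template size and the 0.8 threshold are illustrative, not taken from any particular system; the template itself could hold either the distance measurements or the variations from a standard face described above.

```python
# A minimal sketch of face-template matching. All values are toy stand-ins.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two templates are (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Search the database of known templates for the best match.

    Returns the matching identity, or None if nothing is similar enough;
    the threshold trades false matches against false rejections.
    """
    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy vectors standing in for real face templates.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(identify(probe, gallery))  # prints "alice"
```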

One benefit of the technology is that it is more secure than a password.

“It is more secure. You don’t need to learn a password, because your face is your password, your fingerprint is your password,” Rattani said. “It has already been proven that this technology is unique to each individual, [meaning] that my facial image or my fingerprint pattern doesn’t match with anybody else.”

How facial recognition can compromise privacy

However, there are concerns that it can compromise people’s privacy.

“I think on one level, it’s just a massive invasion of privacy,” said Caitlin Seeley George, the campaign director at Fight for the Future, a nonprofit digital rights advocacy group. “People probably are encountering it in their day-to-day life without realizing it.

“We know police are using it across the country in order to track and identify people, and folks might think that, well, they’re just trying to catch bad guys, but we actually know that police are using it for other reasons, too, like to track people attending protests and exercising their First Amendment rights,” George said. “We also know that most government, law enforcement, FBI, ICE, Department of Homeland Security all use it as well.”

Even if you have never committed a crime, law enforcement might still have your face on record. Half of American adults are in a law enforcement face recognition network, a 2016 study by the Georgetown Law Center on Privacy & Technology found.

The ocular technology Rattani studies might be able to minimize the amount of privacy lost while maintaining the benefits.

“People have already proven that ocular technology is actually at par with face technology when it comes to accuracy and security, and it offers more privacy,” Rattani said. “[Some people] do not like to share their face images with others, but people are OK with sharing ocular images.”

How bias can enter biometrics technology

Because biometric systems learn from and are compared against data sets of example images, the technology can be susceptible to bias, identifying some people, such as white men, more accurately than others.

“Let’s say in a data set of white guys, if facial recognition is 98% accurate, it may not be 98% accurate for Asian [Americans], or let’s say maybe for African Americans,” Rattani said.

Facial recognition algorithms were 10 to 100 times more likely to incorrectly identify Asian and African American faces than they were to misidentify Caucasian faces, a 2019 study conducted by the National Institute of Standards and Technology found.
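
To see how a disparity like the one NIST measured can be exposed, consider this hedged sketch: run the same matcher over comparison trials labeled by demographic group and compare the groups' error rates. The handful of trials below is invented for illustration; NIST's audit drew on millions of real photo pairs.

```python
# Invented trial records: (group, truly_same_person, system_said_match).
from collections import defaultdict

trials = [
    ("group_a", False, True),   # different people, wrongly matched
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, False),
]

def false_match_rate(trials):
    """Per-group rate at which different people are wrongly declared a match."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, same_person, said_match in trials:
        if not same_person:              # only impostor comparisons count
            totals[group] += 1
            errors[group] += said_match  # True counts as 1
    return {g: errors[g] / totals[g] for g in totals}

print(false_match_rate(trials))  # {'group_a': 0.5, 'group_b': 0.0}
```

A tenfold to hundredfold gap between those per-group rates is exactly the kind of result the NIST study reported.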

As government agencies such as U.S. Customs and Border Protection use biometric identification to screen people entering the country through airports such as Kansas City International, and as law enforcement uses the technology to identify suspects, a misidentification can have serious consequences for individuals.

“We’ve seen multiple cases where specifically Black men have been misidentified by facial recognition,” George said. “These people have been misidentified and arrested for crimes that they didn’t commit.”

According to the New York Times, at least three Black men have been wrongly arrested based on facial recognition technology.

So far, no such cases have been documented in Kansas.

The Wichita Police Department did not return requests for comment about whether it uses facial recognition technology. In response to a 2019 Freedom of Information Act request, the department said at the time that it did not use such technology.

Kansas agencies are using facial recognition in other ways. A spokesperson from the Kansas Department of Revenue wrote in an email that the department uses “facial technology to the extent necessary to verify a person presenting a driver’s license or identification credential is the same person on the credential during the renewal process.”

Citing the need to protect security protocols and limit efforts to circumvent those protocols, the spokesperson did not provide further details.

How your children might be affected

Concerns around facial recognition often focus on adults, but children are increasingly encountering it as well. Schools have begun using it to track students and adults in buildings for security, to perform temperature checks and, during the pandemic, to proctor students taking tests.

For example, Topeka Public Schools rolled out facial recognition technology last year as a way to track staff members’ temperatures and to keep track of who was in the building. The U.S. Army is testing facial recognition to help monitor children in a child development center at one of its bases.

Although facial recognition might have benefits for schools, Shobita Parthasarathy, a professor of public policy and the director of the science, technology and public policy program at the University of Michigan, warned that it could normalize surveillance and narrow the range of acceptable behaviors among students.

“You have to behave in a certain way, wear a certain set of clothing in order to not raise alarms from the technology, and the technology is used to surveil and increase the kind of restrictions on how one should behave,” she said. “That has a real psychological impact. You always feel like you’re being watched. You feel like you can’t do much. You feel powerless.

“Those kinds of implications are likely to be even more enhanced among communities that are already marginalized because you have to kind of manipulate yourself essentially to be acceptable to the technology,” Parthasarathy said. “That’s the kind of a future that I think is likely if we continue down this road.”

If a school district were to consider implementing facial recognition, she recommended that parents and guardians ask administrators how the technology will be used, how it will be maintained, how accurate it is and what happens if there is an error.

Under Board of Education policy 5502 regarding student privacy, the Wichita school district is not allowed to collect student biometric data unless the adult student or a parent gives written consent.

“Our student devices do not use any facial recognition or other biometric data,” a district spokesperson wrote in an email. “We don’t use biometric data for test security nor building security.”

More sinister uses for facial recognition technology

Loss of privacy and normalization of surveillance aren’t the only concerns brought about by facial recognition technology. The underlying principles that allow for facial recognition can also be used to create photos of people who don’t exist and videos of events that didn’t happen.

“[It] also gives us a method of creating artificial faces. That means if you have the base face and you look at variations and you combine these variations and generate a new face,” Phoha said. “If I have a picture of you and I have a picture of myself and in a video I can transfer, for example, while you’re talking, I can replace your face by my face. In the video it would be hard to distinguish to the human eye. This can create serious challenges because social manipulation, elections, many things can be affected by this.”

As an example, Phoha said, someone could take a video, place an influential person’s face into it and make him say something he did not say. In fact, American actor and director Jordan Peele did just that to make it appear as though former President Barack Obama gave a warning about the dangers of false information and fake news.

These synthetic digital media are collectively referred to as “deepfakes.”
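
For a sense of the mechanics, here is a minimal Python sketch of the “base face plus variations” idea Phoha describes, using the classic eigenface construction. Modern deepfake generators use neural networks instead, but the principle of recombining learned variations to produce a face nobody owns is similar; the random arrays below merely stand in for real face images.

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.normal(size=(200, 64 * 64))  # stand-ins for 200 flattened images

base_face = faces.mean(axis=0)           # the "standard" (average) face
centered = faces - base_face
# Principal directions of variation between faces ("eigenfaces").
_, _, components = np.linalg.svd(centered, full_matrices=False)

def synthesize(weights: np.ndarray) -> np.ndarray:
    """Combine the base face with weighted variations to make a new face."""
    return base_face + weights @ components[: len(weights)]

new_face = synthesize(rng.normal(scale=5.0, size=10))  # ten variation knobs
print(new_face.shape)  # (4096,), a 64x64 image of a person who never existed
```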

A 2021 report from the National Security Commission on Artificial Intelligence included deepfakes among artificial intelligence-related threats that “have been or soon will be developed and used against the United States.”

Foreign powers have already used such techniques to spread their narratives. In one case, a Reuters and Graphika investigation found that Russian operatives used artificial intelligence to generate profile pictures to create fake editorial personas to promote pro-Donald Trump, anti-Joe Biden messaging in 2020.

In a private industry notification, the FBI wrote that “[machine learning]-generated profile images may help malicious actors spread their narratives, increasing the likelihood they will be more widely shared, making the message and messenger appear more authentic to consumers.”

The bureau added: “Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft.”

In response to the growing threat of deepfakes, Sens. Rob Portman, R-Ohio, and Gary Peters, D-Mich., introduced the Deepfake Task Force Act in July. A press release said the legislation would create a Department of Homeland Security task force charged with exploring how to reduce the spread of deepfakes, developing tools to authenticate content and increasing trustworthy communication about deepfakes.

“Deepfakes represent a unique threat to our national security and our democracy,” Portman said in a statement. “For most of human history seeing meant believing, but now that is becoming less and less true thanks to deepfakes.” The aim of the legislation, he said, is “to develop standards so that companies, tech platforms, journalists, and all Americans can track and authenticate content so we can better separate the truth from the lies.”

The new act builds on the Deepfake Report Act, which passed the Senate last year and directs the Department of Homeland Security to conduct annual studies of deepfakes and countermeasures.

As legislators and officials try to prepare for future deepfakes, other groups are testing how well current detection methods hold up against increasingly sophisticated fakes. Earlier this year, researchers at the University of California San Diego demonstrated that a bad actor with some knowledge of how a deepfake detector works could adjust a deepfake to slip past it.

To keep from being misled by potential deepfakes, the FBI recommends the SIFT method when consuming information online: stop, investigate the source, find trusted coverage, and trace the original content.

A patchwork of laws

Currently, restrictions on facial recognition are at the state or local level and vary greatly.

“There are basically piecemeal regulations on different areas. For example, both Illinois and California have laws that are related to data privacy and security,” Parthasarathy said. “There is nothing at the national level.”

In 2008, Illinois became the first state to regulate the collection of biometric information with the passage of the Biometric Information Privacy Act. Under the act, companies collecting and storing biometric information must tell people what data is being collected and how it will be used and stored, and must obtain written consent before collecting it. Earlier this year, Facebook was ordered to pay $650 million to Illinois residents for violating the act.

Kansas has no such law.

In February, Kansas Sen. David Haley, D-Kansas City, introduced SB 198 regarding law enforcement use of body cameras.

As part of that bill, law enforcement would not be allowed to use a “computerized facial recognition program or application” with a body camera unless authorized by a warrant.

The bill was referred to the Committee on Judiciary but has not advanced.

At the federal level, a group of Democratic senators reintroduced legislation in June that would impose limits on biometric surveillance systems, including facial recognition, by federal and state government entities.

The Facial Recognition and Biometric Technology Moratorium Act of 2021 has been referred to the Committee on the Judiciary but has not advanced. The act also has been introduced in the House.

The act “is an important step to halt government use of face recognition technology,” Kate Ruane, senior legislative counsel for the American Civil Liberties Union, said in a statement.

Trade associations have said they do not support such bans.

“Facial recognition technology makes our country safer and brings value to our everyday lives when used effectively and responsibly,” the Security Industry Association wrote in a statement. “Greater transparency and accountability measures are the best ways to address concerns and ensure responsible use of the technology, without unreasonably restricting tools that have become essential to public safety. We do not support a moratorium or ban on the use of this critical technology.”

For companies that do use facial recognition technology and want to be transparent about their algorithms, the National Institute of Standards and Technology offers its Face Recognition Vendor Test as a way to see how well their technology works and whether it may contain bias.

It is up to companies to decide whether to undergo the test.

“It’s voluntary, so lots of these new companies aren’t submitting their data … and so we have no idea about the accuracy,” Parthasarathy said.

Some companies have instituted their own bans on the technology to prevent possible misuse.

In the wake of protests after the death of George Floyd last year, IBM, Microsoft and Amazon said they would not allow police departments access to their facial recognition technology.

“We think it’s important to push these companies to not do it, but then ultimately, without policy in place, they can do whatever they want, and so that’s why we think the legislative piece is really critical,” George said.

“People can engage in that work and push their local legislators, their City Council, their mayors, their state legislators and their federal legislators to support legislation that addresses this issue.”

This story was originally published September 19, 2021 at 4:00 AM.
