Facial recognition technology: UMass Amherst professor calls for creation of federal office to regulate contr - MassLive.com

A Massachusetts professor is arguing for the creation of a federal agency, akin to the U.S. Food and Drug Administration, to regulate the controversial use of facial recognition technology, software that continues to be lambasted for its potential civil liberties violations and racial biases.

The proposal was put forward by University of Massachusetts Amherst computer science Professor Erik Learned-Miller during a livestreamed discussion Wednesday about the hotly debated software, its ramifications and even its possible positive uses.

Despite the technology’s controversies and criticisms, facial recognition remains largely unregulated at the state and national levels.

A growing number of communities across the country, including several in Massachusetts, have taken it upon themselves to pass municipal restrictions on the use of the software in the absence of more sweeping legislation.

But many advocates, politicians and researchers do not think regulating the software town by town or city by city is enough. The technology is moving far more quickly than policymakers can act and is sometimes being used in the shadows without the public’s knowledge, they have argued.

Individual measures implemented by towns or cities are not sufficient to guarantee the “consistent protection of people’s rights,” Learned-Miller said in a statement earlier this summer.

“We should seriously be thinking about a new federal office to regulate face recognition, because there are so many complexities and so many issues that it entails,” the UMass professor said during Wednesday’s online discussion. “We need a coordinated effort to do this.”

Learned-Miller’s question-and-answer session comes as citizens across the country continue to speak out strongly against systemic racism, calling for an end to police brutality and demanding a demilitarization of American law enforcement agencies.

Facial recognition technology has not been exempt from such discussions, and Massachusetts has been at the forefront of debates on the software.

Seven communities throughout the state have banned municipal use of the technology, either outright or temporarily, with Boston becoming the second-biggest U.S. city to restrict the software. San Francisco was the first and largest community in the country to ban the technology, doing so in May 2019.

The roughly hourlong event Wednesday was part of the UMass College of Information and Computer Sciences’ series of webinars called “Technology and Social Justice.” Each discussion focuses on a different technology whose impact on society is significant in both visible and invisible ways.

The rapid adoption of new software and the quick pace of innovation amplify the positive and negative impacts of computer science research, Laura Haas, the dean of the college, pointed out.

“As computer scientists, we must grapple with research challenges while critically assessing the potential for misuse, hidden bias, lack of transparency and unintended consequences of our technical innovations,” she said.

University of Massachusetts Amherst computer science Professor Erik Learned-Miller, during a livestreamed discussion, called for an independent federal agency to regulate the use of facial recognition technology, a controversial piece of software criticized for its potential to violate citizens' privacy and for its racial biases. (Jackson Cote/MassLive via Zoom)

An award-winning expert on this form of biometric surveillance, Learned-Miller cowrote a report titled “Facial Recognition Technologies in the Wild: A Call for a Federal Office” that provides actionable recommendations for overseeing the use of the software.

The project, funded by a grant from the MacArthur Foundation, was coauthored by computer scientist Vicente Ordóñez of the University of Virginia, Jamie Morgenstern of the University of Washington and Joy Buolamwini, a Massachusetts Institute of Technology researcher and founder of the Algorithmic Justice League.

Buolamwini’s work has been central to the dialogue on facial recognition, and she even testified in favor of a statewide moratorium on the software’s use in Massachusetts in October.

In 2018, the researcher conducted a study called the Gender Shades project that put the software’s racial bias problems on full display.

She ran more than 1,200 faces through recognition programs offered by Face++, IBM and Microsoft and found the technology frequently misidentified women of color.
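As an illustration only, not the Gender Shades code or data, the kind of disaggregated accounting the study performed can be sketched as follows, using hypothetical per-face predictions, labels and group names:

```python
# Illustrative sketch: error rates disaggregated by demographic group,
# given hypothetical per-face predictions from a face-analysis service.
from collections import defaultdict

def disaggregated_error_rates(records):
    """records: iterable of dicts with hypothetical keys
    'group', 'true_label' and 'predicted_label'."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted_label"] != r["true_label"]:
            errors[r["group"]] += 1
    # Per-group error rate: misclassified faces / total faces in that group.
    return {group: errors[group] / totals[group] for group in totals}

# Example with made-up records, not the study's results:
sample = [
    {"group": "lighter-skinned male", "true_label": "male", "predicted_label": "male"},
    {"group": "darker-skinned female", "true_label": "female", "predicted_label": "male"},
    {"group": "darker-skinned female", "true_label": "female", "predicted_label": "female"},
]
print(disaggregated_error_rates(sample))
```

Comparing error rates across groups in this way, rather than reporting a single overall accuracy, is what exposed the disparities the study described.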

The paper Buolamwini coauthored with Learned-Miller details what could go wrong with the software and urges officials to establish a federal office to manage the technology, according to the UMass professor.

A slew of issues plague facial recognition, from its potential to breach people’s privacy to its track record of racial profiling, Learned-Miller noted.

During Wednesday’s online discussion, the computer science instructor brought up the recent case of Robert Williams, a Black man living in a suburb of Detroit, Michigan, who was wrongfully identified by the software and taken into custody for a crime he did not commit, according to the American Civil Liberties Union of Massachusetts.

Williams’ case, advocates and experts like Learned-Miller have pointed out, is a prime example of the real-world consequences of facial recognition technology.

“He was arrested in front of his family and detained for 30 hours,” Learned-Miller said of Williams. “Also, he was humiliated and intimidated, and his kids went through trauma.”

Learned-Miller added, “These are really serious problems, and we want to try to figure out how we can avoid these things.”

The professor’s report, according to a statement, specifically calls for a governmental model, inspired by some of the FDA’s offices that regulate medical devices and pharmaceuticals, that would categorize facial recognition technologies by degrees of risk and issue corresponding restrictions.

“The FDA provides a model or precedent of centralized regulation for managing complex technologies with major societal implications,” the statement said. “Such an independent agency would encourage addressing the facial recognition technologies ecosystem as a whole.”
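To make the idea of risk-tiered categorization concrete, here is a loose sketch with invented tier names, example applications and oversight requirements that are illustrative rather than drawn from the report:

```python
# Loose illustrative sketch of a risk-tiered categorization scheme.
# Tier names, example applications and oversight levels are invented
# for illustration; the report's actual scheme may differ.
RISK_TIERS = {
    "low": {
        "examples": ["unlocking a personal device", "sorting personal photos"],
        "oversight": "lightweight registration",
    },
    "high": {
        "examples": ["law-enforcement identification", "mass surveillance"],
        "oversight": "pre-market review, audited trials, ongoing monitoring",
    },
}

def required_oversight(application, tiers=RISK_TIERS):
    """Return the risk tier and oversight level for a named application."""
    for tier, info in tiers.items():
        if application in info["examples"]:
            return tier, info["oversight"]
    return "unclassified", "case-by-case review"

print(required_oversight("sorting personal photos"))
```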

Fielding a question from MassLive on Wednesday about how technology companies that sell facial recognition software would be monitored, Learned-Miller answered, “The way it would work is just the way the FDA works.”

“Let’s say you’re a medical device company and you want to make a new implantable pacemaker, then what you do is you show the design of the pacemaker to the FDA,” he said. “The design, the tests, the engineering practices, the quality-control procedure results, you submit 1,000 pages of stuff to them that says, ‘Look, we really know what we’re doing.’”

Such products also have to go through clinical trials, where they are tried out on volunteers, and the companies that create the devices must demonstrate that they are safe and effective as well.

“I would like to see all those same kinds of ideas implemented so that a company that wants to get into the business has to provide this level of support,” the UMass professor said of the facial recognition technology industry.

Aside from its potential dangers, facial recognition technology offers some positive applications, Learned-Miller noted during the discussion.

It could be used to diagnose certain medical conditions, unlock personal devices and even find minors who have disappeared or been abducted, he claimed.

“It’s currently used that way by the FBI and has been responsible for helping to find a large number of missing and abused children,” the professor said.

The software, Learned-Miller emphasized, has beneficial applications that should not be thrown away.

“That raises the follow-up question, ‘Can we find a way to allow the reasonable applications while regulating the dangerous ones?’” he said.

At one point during the online discussion, Learned-Miller was asked by fellow facial recognition technology expert Buolamwini what approaches the UMass professor would recommend for addressing the harmful aspects of the software in the absence of an FDA-style federal office.

Noting that Buolamwini’s question was nuanced, Learned-Miller answered that a temporary ban on certain facial recognition technologies would be a “pretty reasonable option right now.”

However, such restrictions would outlaw certain uses of the software that people believe to be benign, like sorting through photographs on a personal computer.

“I’d like to be able to move as quickly as we can towards allowing the low-risk applications and holding off on the high-risk applications,” Learned-Miller said, “which means I would support a temporary ban on applications deemed to be high-risk.”
