2020-07-29

Face masks break facial recognition algorithms, says new government research

Article edited by Jhon N


According to a new study from NIST, the US National Institute of Standards and Technology, face masks defeat some of the most widely used facial recognition algorithms. For some algorithms, error rates rose to between 5% and 50%.

Face masks are one of the best defenses against the spread of COVID-19, but their growing adoption is having a second, unintended effect: breaking facial recognition algorithms. A study from the US National Institute of Standards and Technology (NIST) found that face masks worn to properly cover the mouth and nose cause the error rates of some of the most widely used facial recognition algorithms to spike to between 5% and 50%. Black masks caused more errors than blue masks, and the more of the nose a mask covered, the harder the algorithms found it to identify the face.

"With the pandemic coming into being, we need to understand how face recognition technology deals with masked faces," said Mei Ngan, a report writer and IT scientist at NIST. "We started by concentrating on how subjects wearing face masks could affect an algorithm developed prior to the pandemic. We will test the exactness of algorithms which have been deliberately developed with masked faces in mind later this summer.

Facial recognition algorithms like those tested by NIST work by measuring the distances between features in a target's face. Masks reduce the accuracy of these algorithms by removing most of those features, although some still remain. This is slightly different from how facial recognition works on, say, iPhones, which use depth sensors for extra security so that the algorithms can't be fooled by showing the camera a picture (a danger not present in the scenarios NIST tested).
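To make the general principle concrete, here is a minimal Python sketch, not NIST's actual methodology: facial features are reduced to pairwise distances between landmarks, and a mask that hides the nose and mouth removes most of them. The landmark names and coordinates are hypothetical.

```python
import numpy as np

# Hypothetical 2D landmark positions (x, y) for one face; a real system would
# detect many more landmarks automatically. These values are made up.
LANDMARKS = {
    "left_eye":    (30.0, 40.0),
    "right_eye":   (70.0, 40.0),
    "nose_tip":    (50.0, 60.0),
    "mouth_left":  (38.0, 80.0),
    "mouth_right": (62.0, 80.0),
    "chin":        (50.0, 95.0),
}

# Features typically hidden by a mask covering the nose and mouth.
MASKED = {"nose_tip", "mouth_left", "mouth_right", "chin"}

def feature_vector(landmarks, visible):
    """Describe a face as the pairwise distances between its visible landmarks."""
    names = sorted(n for n in landmarks if n in visible)
    pts = np.array([landmarks[n] for n in names])
    return np.array([
        np.linalg.norm(pts[i] - pts[j])
        for i in range(len(pts))
        for j in range(i + 1, len(pts))
    ])

unmasked = feature_vector(LANDMARKS, set(LANDMARKS))
masked = feature_vector(LANDMARKS, set(LANDMARKS) - MASKED)

# Six visible landmarks give 15 pairwise distances to compare;
# with a mask only the eyes remain, leaving a single distance.
print(len(unmasked), len(masked))  # 15 1
```

Commercial systems use learned embeddings rather than hand-measured distances, but the underlying issue is the same: the less of the face is visible, the less information the matcher has to distinguish one person from another.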

Although there has been plenty of anecdotal evidence that face masks thwart facial recognition, the NIST study is particularly definitive. NIST is the government agency tasked with assessing the accuracy of these algorithms (along with many other systems) for the federal government, and its rankings of different vendors are extremely influential. Notably, the NIST report tested only one-to-one facial recognition. This is the procedure used in border crossing and passport control scenarios, where an algorithm checks whether the target's face matches the face on their ID. It differs from the kind of facial recognition used for mass surveillance, where a crowd is scanned to find matches against faces in a database. That is known as a one-to-many system.
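The distinction between the two modes can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's implementation; the embeddings, names, and threshold are all hypothetical.

```python
import numpy as np

# Hypothetical face embeddings (stand-ins for the vectors a recognition model
# would produce); real systems compare learned embeddings, not random ones.
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}
probe = gallery["bob"] + rng.normal(scale=0.05, size=128)  # a noisy capture of Bob

THRESHOLD = 2.0  # hypothetical; real systems tune this to balance error rates

def verify(probe_vec, claimed_id):
    """One-to-one: does the probe match the single template it claims to be?
    This is the passport/border-control scenario the NIST report covers."""
    return np.linalg.norm(probe_vec - gallery[claimed_id]) < THRESHOLD

def identify(probe_vec):
    """One-to-many: search the whole gallery for the closest match, the mode
    used for mass surveillance, which this NIST report did not test."""
    dists = {name: np.linalg.norm(probe_vec - vec) for name, vec in gallery.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < THRESHOLD else None

print(verify(probe, "bob"))  # True
print(identify(probe))       # bob
```

Masked faces tend to sit further from their enrolled templates in this kind of distance space, so with a fixed threshold more genuine comparisons are rejected, which is the sort of error-rate increase NIST measured.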

Although the NIST report does not cover one-to-many systems, they are generally considered more error-prone than one-to-one algorithms. Picking faces out of a crowd is harder because the angle and lighting of the face can't be controlled, and the resolution is usually lower. That suggests that if face masks are breaking one-to-one systems, they are likely breaking one-to-many algorithms at least as often, and probably more. This matches reports we have already heard from government sources: earlier this year, The Intercept reported that an internal US Department of Homeland Security bulletin expressed concern about the potential impact that widespread use of protective masks could have on security operations that incorporate facial recognition systems.

For privacy advocates, this is welcome news. Many have warned about governments around the world rushing to adopt facial recognition systems despite the chilling effects the technology has on civil liberties, and despite the widely documented racial and gender biases of these systems, which tend to perform worse on anyone who isn't a white man.

Meanwhile, companies that build facial recognition technology are quickly adapting to this new world, developing algorithms that identify faces using only the area around the eyes. Some vendors, such as the leading Russian firm NtechLab, say their new algorithms can identify individuals even when they are wearing a balaclava. Such claims are hard to take at face value, however: they usually rest on internal data, which can be cherry-picked to produce flattering results. That is why standardized testing from third-party agencies like NIST is so valuable.

In addition to testing one-to-many systems, NIST plans to test facial recognition algorithms specifically tailored for mask wearers later this year. The agency expects the technology to keep improving despite the problems caused by masks. "With respect to accuracy with face masks, we expect the technology to continue to improve," Ngan said.

"Internet would abandon what has been its principal source of competitive advantage for 50 years, presumably through the outsourcing of leading edge technology to TSMC," Caso of Raymond James said.