Microsoft’s Major Move Concerning Facial Recognition Technology

A recent news article published by NBC News states that Microsoft is removing emotion recognition features from its facial recognition technology.

The science of emotion is far from settled

When Microsoft announced last week that it would remove several features from its facial recognition technology that deal with emotion, the head of its responsible artificial intelligence efforts included a warning: The science of emotion is far from settled.

“Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” Natasha Crampton, Microsoft’s chief responsible AI officer, wrote in a blog post.

Microsoft’s move, which came as part of a broader announcement about its “Responsible AI Standard” initiative, immediately became the most high-profile example of a company moving away from emotion recognition AI, a relatively small piece of technology that has been the focus of intense criticism, particularly in the academic community.

Live analysis of emotions

Emotion recognition technology typically relies on software to look at any number of qualities — facial expressions, tone of voice or word choice — in an effort to automatically detect emotional state. Many technology companies have released software that claims to be able to read, recognize or measure emotions for use in business, education and customer service. One such system claims to provide live analysis of the emotions of callers to customer service lines, so that employees in call centers can alter their behavior accordingly. Another service tracks the emotions of students during classroom video calls so that teachers can measure their performance, interest and engagement.
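To make concrete what these systems output, here is a purely illustrative toy sketch in Python of the kind of per-emotion confidence scores they produce from facial-expression features. The feature names, weights, and labels below are invented for illustration and do not reflect any real product described in this article.

```python
import math

# Hypothetical facial-expression measurements (invented for illustration only).
# Real systems derive features like these from face detection and landmark models.
features = {"smile_intensity": 0.1, "brow_furrow": 0.7, "eye_openness": 0.4}

# Invented per-emotion weights; a deployed system would learn these from labeled data.
weights = {
    "happiness": {"smile_intensity": 2.0, "brow_furrow": -1.0, "eye_openness": 0.3},
    "anger":     {"smile_intensity": -1.5, "brow_furrow": 2.5, "eye_openness": 0.2},
    "sadness":   {"smile_intensity": -1.0, "brow_furrow": 0.5, "eye_openness": -0.8},
    "neutral":   {"smile_intensity": 0.0, "brow_furrow": 0.0, "eye_openness": 0.0},
}

def emotion_scores(feats: dict[str, float]) -> dict[str, float]:
    """Return a softmax-normalized score per emotion label."""
    logits = {
        label: sum(w.get(name, 0.0) * value for name, value in feats.items())
        for label, w in weights.items()
    }
    # Softmax so the scores sum to 1, mimicking the confidence-style output
    # that commercial emotion APIs typically return.
    max_logit = max(logits.values())
    exps = {label: math.exp(v - max_logit) for label, v in logits.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

print(emotion_scores(features))
```

Critics' point is that scores like these look precise while resting on contested assumptions about whether outward expressions reliably map to inner emotional states at all.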

The technology has drawn skepticism for several reasons, not least of which is its disputed efficacy. Sandra Wachter, an associate professor and senior research fellow at the University of Oxford, said that emotion AI has “at its best no proven basis in science and at its worst is absolute pseudoscience.” Its application in the private sector, she said, is “deeply troubling.”

Inaccuracy of emotion AI

Like Crampton, she emphasized that the inaccuracy of emotion AI is far from its only issue.

“Even if we were to find evidence that AI is reliably able to infer emotions, that alone would still not justify its use,” she said. “Our thoughts and emotions are the most intimate parts of our personality and are protected by human rights such as the right to privacy.”

It’s not entirely clear just how many major tech companies are using systems meant to read human emotions. In May, more than 25 human rights groups published a letter urging Zoom CEO Eric Yuan not to employ emotion AI technology. The letter came after a report from the tech news website Protocol indicated that Zoom may be adopting such technology, citing the company’s recent research in the area. Zoom has not responded to a request for comment.

In addition to critiquing the scientific basis of emotion AI, the human rights groups also asserted that emotion AI is manipulative and discriminatory. A study by Lauren Rhue, an assistant professor of information systems at the University of Maryland’s Robert H. Smith School of Business, found that across two different facial recognition services (including Microsoft’s), emotion AI consistently interpreted Black subjects as having more negative emotions than white subjects. One service read Black subjects as angrier than white subjects, while Microsoft’s read Black subjects as showing more contempt.

Azure targeted

Microsoft’s policy changes are primarily targeted at Azure, its cloud platform that markets software and other services to businesses and organizations. Azure’s emotion recognition AI was announced in 2016, and was purported to detect emotions such as “happiness, sadness, fear, anger, and more.”
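For context, here is a minimal sketch of what a request for the now-retired emotion attribute of Azure’s Face REST API looked like. The endpoint, key, and image URL are placeholders, and the exact request shape may have varied across API versions; this is an assumption-laden illustration, not Microsoft’s documented current interface.

```python
import requests

# Placeholder values; a real call required an Azure Cognitive Services
# resource endpoint and subscription key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-key>"

def detect_emotion(image_url: str) -> list[dict]:
    """Ask the Face API to detect faces and return the (retired) emotion attribute."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "emotion"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    # Each detected face carried per-emotion confidence scores such as
    # anger, contempt, fear, happiness, sadness, and surprise.
    return response.json()

# Hypothetical usage (image URL is a placeholder):
# for face in detect_emotion("https://example.com/photo.jpg"):
#     print(face["faceAttributes"]["emotion"])
```

It is these per-emotion scores that Microsoft is withdrawing for new customers under the updated policy.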

Microsoft has also promised to reassess emotion recognition AI across all its systems to determine the risks and benefits of the technology in different areas. One application it hopes to continue is in Seeing AI, which assists vision-impaired people through verbal descriptions of the surrounding world.

Andrew McStay, professor of digital life at Bangor University and leader of the Emotional AI Lab, said in a written statement that he would rather have seen Microsoft stop all development of emotion AI. Because he regards emotion AI as fundamentally flawed, he said he sees no point in continuing to use it in products.

“I would be very interested to know whether Microsoft will pull all forms of emotion and related psycho-physiological sensing from their entire suite of operations,” he wrote. “This would be a slam-dunk.”

Other changes in the new standard include a commitment to improve fairness in speech-to-text technology, which one study has shown to have nearly twice the error rate for Black users as for white users. Microsoft has also restricted the use of its Custom Neural Voice, which allows for near-exact impersonation of a user’s voice, because of concerns about its potential use as a tool for deception.

Crampton noted the changes were necessary in part because there is little government oversight of AI systems.

“AI is becoming more and more a part of our lives, and yet, our laws are lagging behind,” she said. “They have not caught up with AI’s unique risks or society’s needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act. We believe that we need to work towards ensuring AI systems are responsible by design.”

Source: NBC News