Published on Jun 29, 2022
The head of Microsoft’s responsible artificial intelligence effort warned last week that the science of emotion is far from settled.
It is difficult to make generalizations across use cases, regions, and demographics, according to Microsoft’s chief responsible AI officer, Natasha Crampton.
Emotion recognition AI is a relatively small piece of technology, yet it has drawn intense criticism, especially among academics, since Microsoft announced its “Responsible AI Standard” initiative this week.
Automated emotion detection software analyzes facial expressions, tone of voice, or word choice, and is used in business, education, and customer service products to read, recognize, and measure emotions. One example is software that adjusts call center behavior in real time based on a caller’s detected emotions. Another service tracks students’ emotions during classroom video calls so that teachers can measure their performance, interest, and engagement.
For several reasons, including its disputed effectiveness, the technology has drawn skepticism. A professor at the University of Oxford, Sandra Wachter, stated that emotion AI has “at best no real evidence of its effectiveness in science, and at worst, it is a form of absolute pseudoscience.” She also said that its application in the private sector is “deeply troubling.”
Crampton also pointed out that emotion AI’s inaccuracy is not its only problem.