
AI To Interpret Human Emotions: Researcher Calls For Regulatory Oversight For Such Tools Being Pushed In Schools And Workplaces

Predicting workers' emotional states is of such interest to employers that the emotion-recognition industry is expected to grow manifold by 2026


Photo Credit: Pixabay

Allowing AI technology without auditing its effectiveness could lead to unfair results

Highlights
  • AI that maps facial features to emotions is becoming more popular
  • It is being used for things ranging from schooling to HR to policing
  • Professor Kate Crawford of Microsoft Research says regulation is needed

While the pandemic has shifted the focus of people and authorities to fighting the coronavirus, some technology companies are using the situation as a pretext to push “unproven” artificial intelligence (AI) tools into workplaces and schools, according to a report in the journal Nature. Amid serious debate over the potential for misuse of these technologies, several emotion-reading tools are being marketed for the remote surveillance of children and workers, to predict their emotions and performance. Their makers claim the tools capture emotions in real time and give organisations and schools a much better understanding of their employees and students, respectively.

For example, one such tool decodes facial expressions and places them in categories such as happiness, sadness, anger, disgust, surprise, and fear.

This program, called 4 Little Trees, was developed in Hong Kong and claims to assess children's emotions while they do their classwork. Kate Crawford, academic researcher and author of the book 'The Atlas of AI', writes in Nature that such technology needs to be regulated for better policymaking and public trust.
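The Nature report does not describe the internals of 4 Little Trees, but the general pattern behind such tools is a standard image classifier: a cropped face image goes in, and a score for each emotion label comes out. Below is a minimal, illustrative sketch of that pattern in Python using PyTorch; the model file, label order, and preprocessing here are hypothetical placeholders, not the vendor's actual system.

```python
# Illustrative sketch only: a six-way facial-expression classifier of
# the kind described above. This is NOT the actual 4 Little Trees
# system; "emotion_cnn.pt" and the label order are hypothetical.
import torch
from torchvision import transforms
from PIL import Image

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "surprise", "fear"]

# Preprocess the face crop the way most vision models expect:
# resize, convert to a tensor, and normalise pixel values.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_emotion(image_path: str, model: torch.nn.Module) -> str:
    """Return the highest-scoring emotion label for one face image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)               # shape: (1, len(EMOTIONS))
    probs = torch.softmax(logits, dim=1)
    return EMOTIONS[int(probs.argmax())]

# Hypothetical usage: load a saved six-way classifier and run it.
# model = torch.load("emotion_cnn.pt")
# print(classify_emotion("student_face.jpg", model))
```

The step Crawford's criticism targets is exactly the last line of that function: the leap from pixel scores to a label like "anger", whose scientific validity is contested.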

A precedent that could be used to build a case against unregulated AI is the polygraph test, commonly known as the “lie detector test”, which was invented in the 1920s. The US Federal Bureau of Investigation (FBI) and the US military used the method for decades until it was finally banned.

Any use of AI for random surveillance of the general public should be preceded by credible regulatory oversight. “It could also help in establishing norms to counter over-reach by corporations and governments,” Crawford writes.

The Nature article also cites a system built on the work of psychologist Paul Ekman, who standardised six basic human emotions into a form suited to computer vision. After the 9/11 attacks in 2001, Ekman sold his system to US authorities to identify airline passengers showing fear or stress, so that they could be probed for involvement in terrorist acts. The system was severely criticised for being racially biased and lacking credibility.

Allowing these technologies without independently auditing their effectiveness would be unfair: job applicants could be judged because their facial expressions don't match those of existing employees, and students could be flagged at school because a machine found them angry. Crawford calls for legislative protection from unproven uses of these tools.
