Sense and sensibility: Do we want AI to master emotions?
Imagine you come home one day in a bad mood, shout at the door for not opening quickly enough and at the light bulb because it burned out — and the smart speaker immediately starts playing chill music, and the coffee machine pours you a mocha. Or, as you walk into a store, the robot assistant that was about to approach sees your unhappy face, backs off, and helps another customer instead. Sound like science fiction?
In fact, emotion recognition technologies are already being introduced into many areas of life, and in the near future our mood could well be under the watchful eye of gadgets, household appliances, cars, you name it. In this post, we explore how such technologies work, and how useful — and sometimes dangerous — they might be.
Most existing emotion-recognition systems analyze an individual’s facial expression and voice, as well as any words they speak or write. For example, if the corners of a person’s mouth are raised, the machine might conclude that the person is in a good mood, whereas a wrinkled nose suggests anger or disgust. A high, trembling voice and hurried speech can indicate fear, and if somebody shouts the word “cheers!” they are probably happy.
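At its simplest, this kind of mapping can be thought of as a set of rules from observed features to emotion labels. The sketch below is purely illustrative — the feature names and rules are invented for this post, not taken from any real product:

```python
# Hypothetical sketch: mapping simple facial/voice features to a coarse
# emotion label, mirroring the examples in the text above.

def classify_emotion(features: dict) -> str:
    """Return a coarse emotion label for a dict of observed features."""
    if features.get("mouth_corners_raised"):
        return "happy"                      # raised mouth corners -> good mood
    if features.get("nose_wrinkled"):
        return "anger/disgust"              # wrinkled nose -> anger or disgust
    if features.get("pitch_high") and features.get("speech_rate_fast"):
        return "fear"                       # high, hurried voice -> fear
    return "neutral"                        # no known cue detected

print(classify_emotion({"mouth_corners_raised": True}))  # happy
```

Real systems replace these hand-written rules with trained statistical models, but the input-features-to-label shape of the problem is the same.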
More-complex systems also analyze gestures and even take into consideration the surrounding environment along with facial expressions and speech. Such a system recognizes that a person being forced to smile at gunpoint is perhaps not overjoyed.
Emotion-recognition systems usually learn to determine the link between an emotion and its outward manifestation from large arrays of labeled data. The data may include audio or video recordings of TV shows, interviews, and experiments involving real people, clips of theatrical performances or movies, and dialogues acted out by professional actors.
Simpler systems can be trained on photographs or text corpora, depending on the purpose. For example, this Microsoft project tries to guess people’s emotions, gender, and approximate age based on photographs.
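To make the idea of learning from labeled data concrete, here is a toy sketch of the supervised principle applied to text. The tiny corpus and the counting approach are invented for illustration; production systems use far larger datasets and real machine-learning models:

```python
# Hypothetical sketch: learning word-to-emotion associations from a tiny
# labeled text corpus, then using those counts to label new text.
from collections import Counter, defaultdict

# Toy labeled dataset (invented examples).
labeled = [
    ("what a wonderful day", "joy"),
    ("i am so happy today", "joy"),
    ("this is terrible news", "sadness"),
    ("i feel awful and sad", "sadness"),
]

# "Training": count how often each word appears under each label.
word_counts = defaultdict(Counter)
for text, emotion in labeled:
    for word in text.split():
        word_counts[word][emotion] += 1

def predict(text: str) -> str:
    """Label new text by summing the per-emotion counts of its words."""
    score = Counter()
    for word in text.split():
        score.update(word_counts.get(word, Counter()))
    return score.most_common(1)[0][0] if score else "unknown"

print(predict("such a happy wonderful day"))  # joy
```

The key point is that the system never receives a definition of "joy" — it only learns which observable cues co-occur with the label in the training data, which is also why biased or acted-out training data skews its judgments.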
What’s emotion recognition for?
Gartner predicts that by 2022 one in ten gadgets will be fitted with emotion-recognition technologies. However, some organizations are already using them. For example, when stepping into an office, bank, or restaurant, customers might be greeted by a friendly robot. Here are just a few areas in which such systems might prove beneficial.
Emotion recognition can be used to prevent violence — domestic and otherwise. Numerous scientific articles have touched on this issue, and entrepreneurs are already selling such systems to schools and other institutions.
Some companies deploy AI capable of emotion recognition as HR assistants. The system evaluates keywords, intonations, and facial expressions of applicants at the initial — and most time-consuming — stage of the selection process, and compiles a report for the human recruiters on whether the candidate is genuinely interested in the position, honest, and more.
The Roads and Transport Authority in Dubai launched an interesting system this year at its customer service centers, with AI-equipped cameras comparing people’s emotions when they enter and leave the building to determine their level of satisfaction. If the calculated score falls below a certain value, the system advises center employees to take measures to improve the quality of service. (Photos of visitors are not saved, for privacy reasons.)
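The entry-versus-exit comparison described above boils down to simple arithmetic once each face has been given an emotion label. The scoring scale and threshold below are invented for illustration — the RTA has not published its actual formula:

```python
# Hypothetical sketch of an entry-vs-exit satisfaction check.
# Emotion scores, visits, and threshold are all invented for illustration.

EMOTION_SCORES = {"happy": 1.0, "neutral": 0.0, "unhappy": -1.0}
SATISFACTION_THRESHOLD = 0.0  # alert staff if the average change drops below this

def satisfaction_delta(entry_emotion: str, exit_emotion: str) -> float:
    """Positive = visitor left happier than they arrived."""
    return EMOTION_SCORES[exit_emotion] - EMOTION_SCORES[entry_emotion]

# One (entry, exit) emotion pair per visitor over some period.
visits = [("neutral", "happy"), ("happy", "neutral"), ("neutral", "unhappy")]
avg = sum(satisfaction_delta(a, b) for a, b in visits) / len(visits)

if avg < SATISFACTION_THRESHOLD:
    print("Advise staff to improve service quality")
```

Note that no images need to be kept for this to work — only the per-visit emotion labels — which is consistent with the privacy claim in the text.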
Socialization of children with special needs
Another project aims to help autistic children interpret the feelings of those around them. The system runs on Google Glass smart glasses. When the child interacts with another person, the glasses use graphics and sound to give clues about that person’s emotions. Tests have shown that children socialize faster with this virtual helper.
How effective are emotion detectors?
Emotion-recognition technologies are far from perfect. A case in point is the aggression-detection technology deployed in many US schools. As it turns out, the system considers a cough more alarming than a bloodcurdling scream.
Researchers at the University of Southern California have found that facial-expression recognition is also easy to dupe. The machine automatically associates certain facial expressions with particular emotions, but it fails to distinguish, for example, malicious or gloating smiles from genuine ones.
As such, emotion-recognition systems that take context into account are more accurate, but they are also more complex and far less common.
What matters is not only what the machine is looking at, but also what it was trained on. For example, a system trained on acted-out emotions might struggle with real-life ones.
Emotions as private data
The spread of emotion-recognition technologies raises another significant issue. Regardless of how effective they are, such systems invade people’s private space. Consider, for example, the following scenario: You take a fancy to a random passerby’s outfit, and before you know it, you’re being bombarded with ads for clothes from the same brand. Or you frown disapprovingly during a meeting and subsequently get passed over for a promotion.
According to Gartner, more than half of all US and British residents do not want AI to interpret their feelings and moods. In some places, meanwhile, emotion- and facial-recognition technologies are prohibited by law. In October, for example, California introduced legislation banning law enforcement officers from recording, collecting, and analyzing biometric information using body-worn cameras, including facial expressions and gestures.
According to the drafters of the bill, the use of facial-recognition technology is tantamount to constantly demanding that passersby show their passports. It violates citizens’ rights, and it can cause people guilty of minor misconduct, such as having an unpaid parking ticket, to be wary of reporting more serious crimes to the police.
The problem of privacy is so acute that the fooling of emotion detectors is even the subject of scientific research. For example, scientists at Imperial College London have developed a privacy-preserving technology that removes feelings from the human voice. The result is that a voice assistant fitted with emotion-recognition technology can understand the meaning of what is said, but not interpret the speaker’s mood.
Placing limits on AI will surely complicate the development of empathy in AI systems, which even now are prone to error. But it’s good to have a safeguard in case our world turns into Black Mirror and the machines start poking too deeply into our subconscious. After all, we shouldn’t count on emotion recognition being scrapped, especially given that in some areas the technology actually does do good.
Posted by: David A. Hill