Affectiva's Series D Shows Great Value Ahead for More Emotional Robots

June 01, 2016
By: Steve Anderson

While many companies contemplate the move from paying human employees to deploying a slave-labor battalion of machines, the shortcomings of said machines become all the clearer, and other companies are working to fix them. One such company is Affectiva, which recently raised $14 million in a Series D funding round to pursue its work of giving robots a better understanding of human emotion and how it's displayed.

The funding round, led by Fenox Venture Capital, will give Affectiva the means to build on its work of helping machines understand emotion. Affectiva currently uses a combination of computer vision and deep learning, according to co-founder Rana el Kaliouby, and aims to be the primary means by which customers add both empathy and emotional intelligence to interactive products. The system takes in digital images from video chat sessions and the like and analyzes the facial expressions in them, starting with simple emotions like happiness, sorrow, and anger before moving on to more subtle and complex ones.
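To make that pipeline concrete, here is a minimal sketch of the general approach described above: detect a face in a frame, then score the crop with a deep-learning classifier. This assumes a conventional OpenCV-plus-PyTorch setup; it is not Affectiva's actual system, and the TinyEmotionNet model, the classify_frame helper, and the three emotion labels are illustrative placeholders only.

```python
# Minimal sketch (not Affectiva's pipeline): find a face in a video frame with
# OpenCV, then score the crop with a small, untrained CNN emotion classifier.
import cv2
import numpy as np
import torch
import torch.nn as nn

EMOTIONS = ["happiness", "sorrow", "anger"]  # the "simple" emotions named above

class TinyEmotionNet(nn.Module):
    """A deliberately small stand-in for a production deep-learning model."""
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 12 * 12, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def classify_frame(frame_bgr, model):
    """Return (label, confidence) for the largest face in a frame, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype(np.float32) / 255.0
    with torch.no_grad():
        logits = model(torch.from_numpy(crop)[None, None])  # shape (1, 1, 48, 48)
        probs = torch.softmax(logits, dim=1)[0]
    idx = int(probs.argmax())
    return EMOTIONS[idx], float(probs[idx])

if __name__ == "__main__":
    model = TinyEmotionNet().eval()  # untrained, so outputs are placeholders
    blank = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video-chat frame
    print(classify_frame(blank, model))  # prints None: no face in a blank frame
```

A real system would train such a model on labeled face data and add the subtler expression categories; the sketch only shows where the computer-vision and deep-learning pieces fit together.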

Affectiva's systems draw on a library of 4.25 million videos collected so far, encompassing users from 75 different countries and yielding more than 50 billion emotion-related data points. The system is reportedly capable of distinguishing a happy smile from a polite smile, a distinction commonly seen in India and other Asian countries. The "politeness smile," as it's known, is generally not seen in Europe, Africa, or either North or South America, making it a regionally specific cue and a key signal for sales reps and marketers.

Some might find this intrusive, but Affectiva's el Kaliouby made it clear that the company is not, and will not be, using its systems for lie detection. Affectiva also reportedly requires its customers to get explicit opt-in consent from the people being analyzed as a condition of sale.

Regular users might not be exactly happy about the notion that their video calls may end up as fodder to teach artificial intelligences what humans look like when they're happy. It's almost like loading the gun Skynet will use to kill us all, or at least take our jobs. While it's not a bad idea to have machines better understand what humans are thinking—this could be especially useful as part of an omnichannel effort involving video calling—the approach has clear limits. Nonverbal cues work only where they're visible, as in a video chat or a face-to-face interaction; they're lost in phone calls, emails, social media chatting, and Web chat operations.

Still, slightly smarter machines could be useful in allowing businesses to cover more of the waterfront in an omnichannel environment, freeing up humans for work elsewhere. As long as the humans remain employed, much of the downside to this concept disappears, and Affectiva's work becomes valuable rather than destructive.

Edited by Maurice Nagle
