
Emotion recognition technologies can advance researchers’ ability to gain deep insights into consumer behavior, but they could also be applied in the security field to recognize intent, for example to help identify a terrorist waiting to act. Emotion recognition methods could also be useful for screening mental status in a military context, such as detecting emotional change under stress, as suggested by published research.

Video insight specialist Voxpopme has partnered with Affectiva, which specializes in emotion recognition software, according to a company statement. Affectiva’s Emotion AI will be integrated into the Voxpopme platform to enable meticulous analysis of facial expressions within video feedback, instantly coding them into powerful emotion data. The integration means researchers using Voxpopme’s video insight platform will be able to accurately measure and quantify human expressions of emotion in new and existing video feedback.

The integration will make video research fast, accessible and scalable. The patented technology behind Affectiva’s emotion detection delivers emotion metrics and captures the subtleties of facial expressions across any number of Voxpopme projects.

Affectiva claims to have built the largest emotion data repository, with nearly 4 million faces analyzed from over 75 countries, amounting to more than 40 billion emotion data points, according to its website. This data fuels the training and testing of the company’s classifiers, and provides unique insights and analytics on which it builds robust and relevant norms and benchmarks.

Affectiva’s emotion-sensing and analytics technology began with groundbreaking research at MIT’s Media Lab. Using computer vision and deep learning methodologies, the company developed face and emotion algorithms (“classifiers”) that are trained and tested for broad coverage of nuanced emotional expressions while achieving high accuracy.

How does Affectiva’s technology work? The Affdex emotion-sensing and analytics technology requires only an optical sensor, typically a standard webcam or device camera. Computer vision algorithms identify key landmarks on the face, such as the tip of the nose and the corners of the eyes and mouth. Machine learning algorithms analyze pixels in those regions to classify facial expressions (“action units”) based on pixel color, texture and gradient. Combinations of facial expressions are mapped to emotions. The moment-by-moment emotion data can be processed in real time through SDKs, made available as a data feed through APIs, or visualized in a dashboard.
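To make the last step of that pipeline concrete, here is a minimal illustrative sketch in Python of how combinations of detected action units (AUs, from the widely used Facial Action Coding System) might be mapped to emotion labels. The rule table below uses a few well-known FACS pairings (e.g. AU6 + AU12 for joy); Affectiva’s actual mapping is learned from data and proprietary, so this is only a toy stand-in for the “combinations of facial expressions are mapped to emotions” stage.

```python
# Toy mapping from sets of detected action units to emotion labels.
# AU numbers follow the Facial Action Coding System (FACS); the
# pairings below are common textbook examples, not Affectiva's model.
AU_TO_EMOTION = {
    frozenset({6, 12}): "joy",            # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",     # inner/outer brow action + lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise", # brow raisers + upper lid raiser + jaw drop
    frozenset({4, 5, 7, 23}): "anger",    # brow lowerer + lid tighteners + lip tightener
}

def classify_emotion(active_aus):
    """Return the first emotion whose AU combination is fully present."""
    active = set(active_aus)
    for combo, emotion in AU_TO_EMOTION.items():
        if combo <= active:  # all AUs in the combination were detected
            return emotion
    return "neutral"

# Example: a frame where the detector fired on AU6 and AU12
print(classify_emotion([6, 12]))  # joy
print(classify_emotion([17]))     # neutral
```

In a real system this lookup would be replaced by per-frame classifier scores, producing the moment-by-moment emotion feed the article describes.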