Sensum present their latest emotion AI research at the AISB Annual Convention 2017

Sensum AI

A couple of weeks ago, Sensumite Damien Dupre attended the Symposium on “Computational Modelling of Emotion: Theory and Applications”, held in association with the AISB Annual Convention 2017, to present some of our latest research on emotion AI.

With a strong focus on AI, the two-day conference aimed to “facilitate movement towards a mature integrated field with a deeper and richer understanding of biological minds by setting out interrelationships between emotion models more clearly”. The event brought together a community interested in exploiting emotion modelling technology, including Sensum’s Data Scientist, Damien Dupre, who shared findings from our latest work, “Dynamic Analysis of Automatic Emotion Recognition Using Generalized Additive Mixed Models”.

The first point Damien made to the audience was the importance of implementing facial expression recognition as a component of artificial intelligence, since facial expressions are central to how we communicate and recognise emotions. If we want robots that display and recognise human emotions, they will need to analyse facial expressions, which is why investigating the automatic analysis of facial expression is so important.

In our latest research, we conducted a study using an online automatic recognition system. Damien explained to the audience that we wanted to know whether a specific piece of media triggers the expected emotion; in this case, the expected emotion was sadness.

The online study involved 836 participants, who were asked to watch a short (1 min 30 sec) clip from the Creative Commons sci-fi film ‘Tears of Steel’. While they were watching the clip, our online system ran ‘FacioMetrics’ recognition to score the following emotions (a sketch of the resulting frame-level data follows the list):

- Happy

- Surprise

- Sadness

- Disgust

- Focus

- Attention
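
To make the setup concrete, here is a minimal sketch of what the frame-level output of such a system might look like, and how it could be averaged per participant. The field names, the 0–1 score range, and the sampling rate are illustrative assumptions, not FacioMetrics’ actual API:

```python
from statistics import mean

# Emotion channels scored by the recognition system (from the list above).
EMOTIONS = ["happy", "surprise", "sadness", "disgust", "focus", "attention"]

# Hypothetical per-frame records: a timestamp (seconds into the clip) and a
# 0-1 confidence score per channel. Real systems emit many frames per second.
frames = [
    {"t": 0.0, "happy": 0.02, "surprise": 0.05, "sadness": 0.61,
     "disgust": 0.01, "focus": 0.70, "attention": 0.83},
    {"t": 0.5, "happy": 0.03, "surprise": 0.04, "sadness": 0.55,
     "disgust": 0.02, "focus": 0.72, "attention": 0.81},
]

def mean_scores(frames):
    """Average each emotion channel over one participant's session."""
    return {e: mean(f[e] for f in frames) for e in EMOTIONS}

print(mean_scores(frames))
# e.g. {'happy': 0.025, 'surprise': 0.045, 'sadness': 0.58, ...}
```

A key point in the findings below is that a flat average like this discards exactly the temporal dynamics the study cares about.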

Before running the analysis, we expected a major spike in sadness expression near the beginning of the clip. Overall, the results were very interesting:

- On average, participants mainly showed a neutral face while watching the sci-fi clip. This wasn’t an overly surprising finding because, as humans, we tend to express emotion only at key moments rather than throughout a whole video.

- The most frequently expressed emotion was sadness, which is exactly what we had expected.

- Due to participants’ differing natural facial expressions, an overall analysis of emotion recognition can miss important characteristics.

- Natural facial expressions are dynamic, so it was essential to perform a dynamic analysis. This allowed us to take each individual’s pattern of expressions into account.

- Expressive patterns differ from participant to participant. In addition, these expressions are subtle and quick, and most of the time people showed a neutral expression.

- Even if unexpected emotions such as happiness are sometimes recognised, a dynamic analysis is able to distinguish expected from unexpected emotions.

- Looking at individuals’ patterns, we found a lot of sadness expressed at the beginning of the media, which is exactly what we had originally expected. However, at the level of individual recognition patterns it is no longer possible to identify a common expressive pattern, which highlights the importance of taking participants’ individual expression patterns into account.

- To analyse the significance of these individual patterns we used Generalized Additive Mixed Models (GAMM) together with Significant Zero Crossings of the Derivatives (SiZer) analysis, which can test the data against either an assumed model or an estimated model and error structure.
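
The study itself fitted GAMMs and ran SiZer in dedicated statistical software (GAMM fitting is typically done in R, e.g. with the mgcv package). The Python sketch below is only a simplified illustration of the SiZer idea: smooth the group-level sadness curve, estimate its derivative, and flag time windows where a confidence band on that derivative excludes zero. The synthetic data, the moving-average smoother standing in for a fitted GAMM, and the participant-level bootstrap are all assumptions for demonstration, not the study’s actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_frames = 50, 180          # e.g. one frame per 0.5 s over 90 s
t = np.linspace(0, 90, n_frames)

# Synthetic sadness scores: a shared early peak plus per-participant
# baseline shifts (the "individual patterns" a mixed model absorbs).
shared = 0.4 * np.exp(-((t - 15) ** 2) / (2 * 8.0 ** 2))
baseline = rng.normal(0.1, 0.05, size=(n_participants, 1))
scores = shared + baseline + rng.normal(0, 0.05, (n_participants, n_frames))

def smooth_derivative(curve, t, width=9):
    """Moving-average smooth, then finite-difference derivative."""
    kernel = np.ones(width) / width
    smoothed = np.convolve(curve, kernel, mode="same")
    return np.gradient(smoothed, t)

# Bootstrap over participants to get a confidence band on the group derivative.
boots = np.array([
    smooth_derivative(
        scores[rng.integers(0, n_participants, n_participants)].mean(axis=0), t)
    for _ in range(500)
])
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)

# SiZer-style flags: where the whole band sits above/below zero, the curve
# is significantly rising/falling (e.g. the onset of the sadness spike).
rising, falling = lo > 0, hi < 0
print("significantly rising at t =", t[rising].round(1))
print("significantly falling at t =", t[falling].round(1))
```

The design choice this illustrates is the one the bullet describes: instead of testing raw averages, the derivative of a fitted smooth is tested against zero, so significant rises and falls in expression can be located in time even when most frames are neutral.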

Ultimately, our research shows that a dynamic analysis is essential for understanding the individual characteristics of human emotion. Using statistical analysis, it is possible to analyse these dynamic individual patterns and take natural facial expressions into account, leading to more accurate facial emotion recognition. This not only allows robots to better analyse and interpret human emotions, but also paves the way for emotion AI.