Choose Your Sensors for Empathic AI

Equipment 003

One of the most common questions we encounter is, 'What sensors do I need to measure emotions?' So here's a quick overview of the main sensor types, how we use them, and a few shortcomings to watch out for.

Our empathic AI engine Synsis works with the widest range of sensors and data types on the market. This 'sensor-agnostic' strategy evolved out of two main conditions:

  1. For software to reliably and accurately interpret how a person is feeling, the data you throw at it should be as diverse as possible (more on sensor fusion here).
  2. In order to measure human states in settings that have ranged from the Arctic Circle to racing cars, we have had to work with many different brands, modes and formats of sensors.

There is no universal sensor solution: each choice has its pros and cons. Furthermore, the market is developing fast. Over the last decade biometric sensors finally crawled out of the science lab and started to run, dropping in price and size so much that they have found their way into millions of consumer-grade devices such as fitness bands, smartphones and laptops. Now we are witnessing an explosion in sensor form factors and in the data they can collect. All the while, prices keep falling as accuracy and reliability rise.

Common Sensor Types

Cameras

...provide data from the user’s face & body (mainly for coding facial expressions) as well as the situation surrounding the user (for context). Cameras are also increasingly able to measure remote biometrics like heart rate (e.g. through changes in skin colouration; a rough sketch of this follows below).

+ only requires cameras, which are commonly available.

– signal is affected by facial hair, glasses, movement, etc., and the user's facial expression is often neutral, especially in tasks with high cognitive load and/or low social interaction.
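
To make the skin-colouration point concrete, here is a minimal sketch of how a remote heart-rate estimate can be derived from camera frames. It assumes a face region of interest has already been detected, that frames arrive as RGB arrays at a known frame rate, and that SciPy is available; the function and its parameters are illustrative, not part of Synsis.

```python
# Minimal sketch of remote heart-rate estimation (rPPG) from camera frames.
# Assumes a face region of interest (ROI) has already been located in each
# frame, that frames are RGB arrays, and that `fps` is known.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate_bpm(roi_frames, fps):
    """Estimate beats per minute from green-channel changes in a face ROI."""
    # 1. Average the green channel per frame; blood-volume changes modulate it most.
    trace = np.array([frame[..., 1].mean() for frame in roi_frames])
    trace -= trace.mean()

    # 2. Band-pass to the plausible pulse range (0.7-4.0 Hz, i.e. 42-240 bpm).
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 4.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, trace)

    # 3. Report the dominant frequency as the heart rate.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0
```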


Wearables

...provide nonconscious physiological signals from the user’s body, like heart rate or skin conductance (a simple example of processing such a signal follows below).

+ offers a constant feed of subtle & quick changes.

– requires the user to bring their own device.
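
To illustrate that 'constant feed of subtle & quick changes', here is a minimal sketch of one way to flag skin conductance responses in a wearable's electrodermal activity stream; the sampling rate and rise threshold are assumptions for illustration, not values from any particular device.

```python
# Minimal sketch of flagging skin conductance responses (SCRs) in an
# electrodermal activity (EDA) stream from a wearable. Sampling rate and
# rise threshold are illustrative assumptions.
import numpy as np

def detect_scr_onsets(eda_microsiemens, sample_rate_hz=4.0, min_rise_rate=0.05):
    """Return onset times (seconds) where conductance rises sharply,
    a rough proxy for quick changes in arousal."""
    eda = np.asarray(eda_microsiemens, dtype=float)
    # Light smoothing (~1 s moving average) to suppress sensor noise.
    width = max(int(sample_rate_hz), 1)
    smoothed = np.convolve(eda, np.ones(width) / width, mode="same")
    # Flag samples where conductance rises faster than the threshold (uS per second).
    rise = np.diff(smoothed) * sample_rate_hz
    onsets = np.where(rise > min_rise_rate)[0]
    return onsets / sample_rate_hz
```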


Microphones

...provide voice patterns, indicating physiological & emotional changes, while also giving further situational context (a simple feature-extraction example follows below).

+ only requires relatively cheap, low-fidelity sensor hardware.

– doesn’t provide any signal unless the user is speaking.
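
As a simple illustration, the sketch below pulls two basic prosodic cues, loudness and pitch, from a mono audio buffer; the frame length, sample rate and pitch range are assumed for the example, and real voice analysis uses far richer feature sets.

```python
# Minimal sketch of extracting two simple prosodic cues, loudness and pitch,
# from a mono audio buffer. Frame length, sample rate and pitch range are
# illustrative assumptions.
import numpy as np

def prosody_features(audio, sample_rate=16000, frame_len=1024):
    """Yield (rms_energy, pitch_hz) per frame; pitch is 0.0 for unvoiced frames."""
    for start in range(0, len(audio) - frame_len, frame_len):
        frame = np.asarray(audio[start:start + frame_len], dtype=float)
        rms = np.sqrt(np.mean(frame ** 2))  # crude loudness proxy
        # Crude pitch estimate: autocorrelation peak within a 60-400 Hz range.
        corr = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lo, hi = sample_rate // 400, sample_rate // 60
        lag = lo + int(np.argmax(corr[lo:hi]))
        pitch = sample_rate / lag if corr[lag] > 0.3 * corr[0] else 0.0
        yield rms, pitch
```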


Off-Body Biometric Sensors

...use technology such as radar, laser or infrared to measure biometrics like heart rate (e.g. through tiny body movements).

+ remote detection avoids the ‘intrusion’ of wearing a sensor.

– the technology is still at an early stage.

Multimodal Sensor Fusion For The Win

Our system works with various formats of data from all the above types of sensors. Taking this broad approach allows us to integrate with the choice of sensors that suit your specific product needs, and to adapt to the market as it changes. And it's changing fast.

Being able to take in many data types, fuse them together and cross-correlate the analysis we apply to them, is reflective of our overall view on human-state classification. Beyond trying to tease narrow, subjective emotional labels out of individual data streams, our algorithms model the user as a human being who experiences complex, ever-changing and highly contextual states of affect. What someone might consider fear in one context could be excitement in another.

Rather than fighting against the weird, contradictory world of human emotion, we have tried to work with it by designing a sensor-fusion pipeline around our empathic AI algorithms. The pipeline syncs, cleanses and tags the incoming data streams to feed the algorithms with a batch of signals from multiple modes of sensors. With this, the algorithms are able to produce a universal classification of the user's state from one moment to the next. From this universal modelling space, our customers can focus on the types and strengths of human affect that are most valuable to them and their users, then build tailored solutions to suit.
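
To ground the idea, here is a deliberately simplified sketch of such a fusion step: each stream is resampled onto a shared clock, normalised so no single sensor dominates, and concatenated into one feature vector for whatever classifier sits downstream. The stream names, window length and rates are hypothetical, and this is not Synsis code.

```python
# Deliberately simplified sketch of a multimodal fusion step: resample each
# incoming stream onto a shared clock, normalise it, and concatenate everything
# into one feature vector for a downstream classifier.
import numpy as np

def fuse_window(streams, window_s=5.0, rate_hz=10.0):
    """Sync heterogeneous sensor streams onto one timeline and fuse them.

    `streams` maps a name (e.g. 'heart_rate', 'voice_pitch') to a pair of
    arrays (timestamps_in_seconds, values) covering the analysis window."""
    t_common = np.arange(0.0, window_s, 1.0 / rate_hz)
    features = []
    for name, (timestamps, values) in sorted(streams.items()):
        # Interpolate onto the shared clock, then z-score so no sensor dominates.
        resampled = np.interp(t_common, timestamps, values)
        std = resampled.std() or 1.0
        features.append((resampled - resampled.mean()) / std)
    return np.concatenate(features)

# A fused window like this could then feed whatever affect classifier sits
# downstream, e.g. some hypothetical `model.predict(fuse_window(streams))`.
```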

For more on our approach to psychological modelling you might like to read A Primer on Emotion Science for our Autonomous Future.

Ben Bland

Chief Operations Officer