Empathic Technology: A Decade Past & Future
As the twenty-first century finally grows out of its teen years, we look back over the emergence of a whole new field of innovation and interaction, welcoming the coming age of empathic technology. With a near-decade’s experience of problem-solving in this new space, we also share some big changes we expect to see through the 2020s.
Back in 2011, I walked into the office of an innovative production agency called Filmtrip, which consisted of four people in one room on the top floor of a shared office building in Belfast. Filmtrip’s founder and CEO, Gawain Morrison, was a hell of a sight, sporting a vast crop of hair and a plaited beard that ran down to his belly. And he was a force of nature – full of drive and creativity. We hit it off immediately, and were soon riffing on various ways that technology could help us build a positive future for our world. At the time, Filmtrip was developing a portfolio of three or four major production projects, each incorporating a unique tech angle to its design.
One of the projects over which we strategised, in an endless mural of whiteboard scribbles and sticky notes, was called Biosuite. It was a collaboration with the world-class Sonic Arts Research Centre (SARC) at Queen’s University Belfast, literally a stone’s throw from the office windows at Filmtrip. The concept was very cool: a short horror movie, Unsound, that would play out differently depending on how the audience felt at the time. I attended the trial screening of Unsound in SARC’s unique 360º audio lab. Shortly after that, the movie premiered at SXSW in Austin, Texas.
Nobody could have predicted that Unsound’s creative media experience would evolve into Sensum – a standalone company pioneering empathic AI solutions for some of the world’s biggest brands. Around the time Unsound launched, I went off to work with other clients before coming back to a full-time role at Sensum in early 2016. But I recently dug up my old notes from those early days of consulting with Filmtrip, and was pleasantly surprised to see how well they stand up to this day. Some of the grand concepts we’re now dealing with in our work at Sensum were outlined in ink back then. At the heart of these big ideas was what Gawain called the ‘digital self’: the digital extension of a person’s existence as they interact with connected technologies, leaving a footprint of identity, behaviour and actions in their wake. Our thinking coalesced around a shared belief that data should belong to the human who generates it, and that a vibrant tech future could exist in which we are all empowered to protect or exploit our personal data however we wish. We still hold firm to that belief.
Measuring Human States Over the Years
With each project that Sensum tackled, there was always a new technical problem to be solved. New sensor and data types, new environments and scenarios, new visualisation and interaction methods. It was never simple, because the questions our clients were asking had typically never been answered before. Even in 2016, when we looked back over that work, we realised just how far our solutions had travelled.
It didn’t feel like a stretch, therefore, to claim ourselves ‘world leaders at measuring human states in the wild’.
Over the last couple of years, the tide has shifted. Across all relevant industries, the prevailing attitude towards empathic technology has changed. What had been generally viewed as an exciting but vague concept for the future is now widely considered to be an important inclusion in AI-enabled systems currently in development. And nowhere have we found this thinking to be more prevalent than in the mobility sector – automotive, in other words. This is now where the vast majority of our work resides. Automakers are sailing into a perfect storm of converging factors that are driving global disruption, transforming the market’s primary commodity from steel to services.
For a bit of background on this tectonic reshuffle within the mobility industry, read The Role of Human Emotions in the Future of Transport.
In the third quarter of this year, we completed a major drive to pack all that accrued experience into a single kit that would allow our customers to start measuring multiple modes of human data in minutes rather than months. And while the Synsis Developer Kit is far from our first product, it feels like we have finally reached a point where we can provide organisations with the tools to let them do what only a handful of pioneering empathic AI outfits have done until now: get out in the wild, measure how people feel, and design products that respond to those insights.
From a groundbreaking horror movie in 2011, to a comprehensive human-state prototyping kit in 2019, we’ve come far. That’s a long time for a startup to persist at launching a breakthrough innovation. So now, as we leave the second decade of the twenty-first century, and perhaps enter the maw of the fourth industrial revolution, it feels at last like the world is opening its arms to technologies that understand how we feel. I hope you’ll join us for the next part of the journey; it should be the best yet.
The Next Decade
Nobody can know the future, and hindsight is always twenty-twenty. But, looking ahead to the twenty-twenties, we will try here to lay out a few key developments that we expect to see in our space.
1) Empathic Tech will Move from Projects to Products
The next few years are likely to bring the widespread adoption of empathic AI into consumer technology. Until now, innovation in this space has sat mainly in the realm of experimental projects. There has been plenty of enthusiasm for the technology but little understanding of realistic business models and use cases. Now we are seeing many organisations taking empathic tech seriously, testing various outcomes in order to design optimal product and service implementations. In the first half of the coming decade we can expect to find cars, phones, computers and many other ubiquitous products imbued with varying levels of empathic measurement and interaction capabilities.
2) Adoption Will Start at the Edges and Move Inwards
Right now, for instance, many smartphones can detect their users’ faces and offer a range of silly selfie filters that track the users’ facial features. But consider that Apple bought the facial-coding startup Emotient back in 2016, so the same software that tracks your facial movements could feed emotion-measurement algorithms in your phone. So why doesn’t your iPhone respond to your feelings? Are Apple and others waiting for the ‘killer app’ that could introduce empathic AI gently and enjoyably, without freaking out their customers? Soon we expect to see many examples of soft, toe-dipping exercises, as brands cautiously roll out empathic features.
From the other end of the spectrum, look at perhaps the primary impetus for the growing interest in empathic tech in the automotive space: driver monitoring for safety. With the Euro NCAP Road Map 2025 endorsing the implementation of driver monitoring systems in vehicles this coming year, the automakers are seeking advanced monitoring capabilities to save lives. We predict that the human-state modelling features in vehicles will spread incrementally from life-saving safety functions to comfort and experience features.
3) The Technology will Disappear
We have seen increasing demand, and concurrent innovation, in ever less intrusive sensors for collecting human data. Heart rate, as a key example, can be measured accurately via a sensor strapped to the chest. Failing that, you might use a wrist-worn sensor. But now there is a growing set of contactless solutions such as ultra-wideband and millimetre-wave biometric radar, or biometric imaging techniques such as infrared and transdermal optical imaging. These increasingly accurate and inexpensive technologies, combined with clever product design and ergonomics, will continue to reduce the ‘footprint’ that the sensors impress on the user, e.g. by being embedded invisibly in your seat or keyboard.
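To give a feel for what these sensing pipelines do under the hood, here is a minimal sketch of one classic step: estimating heart rate from a pulse-like waveform (such as the brightness signal recovered by an optical imaging sensor) using simple peak detection. This is an illustrative toy, not any vendor’s actual algorithm; the function name, thresholds and synthetic signal are all our own assumptions.

```python
import math

def estimate_bpm(samples, fs):
    """Estimate heart rate (beats per minute) from a pulse-like waveform.

    samples: list of floats (e.g. brightness values from an optical sensor)
    fs: sampling rate in Hz
    Returns None if too few beats are found to make an estimate.
    """
    mean = sum(samples) / len(samples)
    peaks = []
    for i in range(1, len(samples) - 1):
        # A sample counts as a beat if it is a local maximum above the mean...
        if samples[i] > samples[i - 1] and samples[i] >= samples[i + 1] and samples[i] > mean:
            # ...and at least ~0.33 s after the previous beat (caps rate at ~180 BPM).
            if not peaks or (i - peaks[-1]) > fs / 3:
                peaks.append(i)
    if len(peaks) < 2:
        return None
    # Average interval between successive beats, converted to beats per minute.
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Simulate a clean 72 BPM (1.2 Hz) pulse sampled at 50 Hz for 10 seconds.
fs = 50
signal = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(fs * 10)]
print(round(estimate_bpm(signal, fs)))  # ≈ 72 for this synthetic signal
```

Real sensor data is, of course, far noisier than this synthetic sine wave – which is exactly why the band-specific filtering and modelling inside production systems matters so much.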
4) Empathic AI will be Infused in the Connected World
There is one major innovation that feels inevitable, though it is hard to predict when and how it might emerge. While organisations are currently developing empathic technologies largely in isolation from each other, there are obvious advantages to connecting with other services in an integrated digital ecosystem. As people move from, say, their car to their office and back to their house, they could be carrying a persistent digital profile that allows each device or interaction to perform better due to the rich information that is generated by the user’s ongoing activities.
Maybe this kind of cross-platform connectivity will be contained within separate brand ecosystems, like the way Apple or Samsung offer advantages to customers who purchase multiple products from their lines. Maybe instead there will be solutions that provide a handover capability, for the customer to move between multiple providers with a seamless continuation of service.
This kind of data ecosystem model, which is being explored seriously in projects such as smart cities, could provide a utopian technology environment that supports a raft of lifestyle advantages for the humans who use them. However, developing such solutions is not just a complex technical challenge; the ethical implications are pressing too. Potential abuse of this increased connectivity, whether by oppressive governments, aggressive marketing outfits, cyberterrorists or others, cannot be ignored.
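One way to picture such a user-owned, consent-gated profile – in the spirit of the ‘digital self’ idea above – is a small data structure that each service can only read from with the user’s explicit permission. This sketch is entirely hypothetical (the class and signal names are invented for illustration), but it captures the handover idea: the profile travels with the person, and each provider sees only what the user has granted.

```python
class StateProfile:
    """Hypothetical portable profile of a user's recently measured states."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.signals = {}   # latest measured states, e.g. {"stress": 0.4}
        self.consent = {}   # service name -> set of signal names it may read

    def update(self, signal, value):
        """Record a fresh measurement into the profile."""
        self.signals[signal] = value

    def grant(self, service, signal_names):
        """User grants a service read access to specific signals."""
        self.consent.setdefault(service, set()).update(signal_names)

    def handover(self, service):
        """Export only the signals the user has consented to share with `service`."""
        allowed = self.consent.get(service, set())
        return {k: v for k, v in self.signals.items() if k in allowed}

profile = StateProfile("user-123")
profile.update("stress", 0.7)
profile.update("heart_rate", 88)
profile.grant("car", {"stress"})

print(profile.handover("car"))     # {'stress': 0.7}
print(profile.handover("office"))  # {} - no consent granted, nothing shared
```

The design choice here is deliberate: the default is that a service sees nothing, and every disclosure is an explicit, revocable grant made by the data’s owner.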
5) Empathic AI Ethics Will Move into the Public Conversation
No sane person wants Black Mirror to become reality, or The Matrix, or The Terminator, or (insert your favourite sci-fi trope here). Artificial Intelligence could generate the greatest boost to human flourishing since our species spread out from Africa, or it could trigger our demise. There is a growing body of arguments for outcomes that lie along every point of that spectrum. Now, with the introduction of empathic processing, whereby AI-enabled systems can infer and respond to their user’s feelings, our interactions with machines will become more intimate than ever.
As these new sensing capabilities have been growing, there has been a parallel increase in public discourse, and backlash, against some of the technologies’ effects, both real and potential. For example, there has been a surge in news stories about facial recognition systems in the last year, urging caution about their misuse. And although facial recognition is very different to face-based emotion measurement, the two could understandably be conflated in the public mindset. In any case, similar levels of concern might be starting to arise for wider empathic processing methods, including facial coding, voice analysis, physiological signals and beyond.
As empathic AI starts to enter the public discussion, it feels like we are at a crucial balancing point. The potential problems that this technology could trigger need to be discussed, and governments and organisations alike have a responsibility to act preemptively to protect people. However, it would be a great loss to society if smart technology, boosted by empathic processing, were derailed by misguided assumptions or unethical actors.
We’re a small company but we are trying to get out ahead and take a strong ethical stand while our tech is still in a relatively contained, experimental stage of adoption. Our involvement in conferences and debates with the likes of RightsCon and Bangor University, as well as the launch of a new global standard for ethics in empathic technology with the IEEE, has been a great privilege and a step in the right direction. Expect more from us next year.
6) There will be Increased Sophistication in Understanding Human States
Following on from the AI ethics point above, we predict that there will be a better understanding – both in public and in organisations – of the latest thinking on what emotions and affect really are, and how empathic technology could work. While there are very reasonable concerns about the potential misuse of technology that can measure emotions and other human states, there is also widespread oversimplification and misunderstanding about what those states are, and how effective such technologies can realistically be. Take for instance the recent critique of facial coding and the six ‘basic emotions’, which attempts to unpick long-standing notions and clarify where the science of emotion is going.
Our industry, along with governments, academic institutions and others, has a responsibility to educate our stakeholders and the public to help them make more sophisticated judgements about this kind of technology. Psychology is a relatively young science, and the fields of emotion research and affective computing are still quite green. We must manage expectations if we are to avoid a well-deserved backlash.
See You in the Future
Now, as we enter a new decade and look forward to the widespread adoption of some amazing innovations, all that is left to say is: season’s greetings, and here’s to a fruitful few years!