Researchers’ Guide to Human-Generated Data & Biometric Sensors – for Human Factors, UX, HX & Psychology Insights


Studying humans is hard. You can’t set them on fire, drop them from platforms, cut them open or pour chemicals on them. Well, not legally anyway. So until now the tradition has simply been to watch them, and to ask them questions. However, observation and self-reporting can produce highly subjective data and be riddled with bias and confounding effects. New tools offer us a way forward.

Note: This is part of a four-part series on Human Data-Based R&D – see end for more.

Since the recent advent of low-cost sensors such as cameras and wearables, we now have the opportunity to measure continuous, real-time, nonconscious data from human participants. This innovation tempts us with the holy grail of human research: a live stream of objective data from naturalistic environments, all without an experimenter or survey form in sight. Sensor-led research is quietly revolutionising a wide range of psychological research, including but not limited to human factors, user experience, cognitive psychology, and human-computer interaction.

Ironically, just as we are granted the tools to take a huge step closer to the true feelings and experiences of our participants and users, we are simultaneously plunged into a new problem space. The challenge becomes less social and more technical.

And we think we’ve gone some way to solving it.

Build Your Own Multimodal Research Lab

If you’re looking to research human states using biometric sensors, you’re likely to face a journey similar to the one we have taken many times over a decade of setting up sensor-based labs, for ourselves and our customers, both in the lab and out in the wild. We feel your pain.

You can expect the hurdles on this path to include:

  1. Selecting hardware, e.g. sensors, and the providers who supply it.
  2. Selecting software solutions.
  3. Understanding, launching and customising the tools you have now selected – both physical and digital, which may require coding and engineering, and will certainly require testing.
  4. Connecting your sensors wirelessly to whatever data-capture systems they are paired with.
  5. Managing your hardware for optimal data quality, uninterrupted connectivity, etc.
  6. Recording, aligning and synchronising all those data streams together.
  7. Wrangling all that data from multiple sources, in different formats, at different frequencies, from different communications protocols, all into one consistent dataset.
  8. Tagging key events and features – either manually (when you directly observe something happening in the study) or automatically (when a data stream exhibits a particular behaviour). Steps 6 to 8 are sketched in code below.
  9. Managing, graphing and reporting your data – along with any associated media and contextual information, such as video, audio, location, etc. – through a single viewport, in order to gain a holistic understanding of the participants’ feelings and actions in the moment.
  10. Running statistical analysis on the data you have now acquired – including complex models such as machine learning-based human state models.

Not a lot, then.
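
To make that less abstract, here’s a minimal sketch of steps 6 to 8: aligning two sensor streams recorded at different frequencies onto one timeline, then auto-tagging an event. It uses Python with pandas, and every sensor name, sample rate and threshold in it is hypothetical – it illustrates the general technique, not the Synsis pipeline.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t0 = pd.Timestamp("2024-01-01 12:00:00")

# Two simulated streams at different rates, each with its own timestamps,
# as they would arrive from separate devices: heart rate at 1 Hz and
# skin conductance (EDA) at 4 Hz. Names and rates are hypothetical.
hr = pd.DataFrame({"hr_bpm": 70 + rng.normal(0, 3, 60)},
                  index=pd.date_range(t0, periods=60, freq="1s"))
eda = pd.DataFrame({"eda_us": 2.0 + rng.normal(0, 0.05, 240).cumsum()},
                   index=pd.date_range(t0, periods=240, freq="250ms"))

# Steps 6-7: resample both streams onto one common 250 ms grid, interpolating
# the slower stream so every row carries a value for every signal.
grid = eda.resample("250ms").mean()
grid["hr_bpm"] = hr.resample("250ms").mean()["hr_bpm"].interpolate()

# Step 8: automatic event tagging – flag samples where heart rate crosses
# a (hypothetical) threshold, mimicking a simple rule-based marker.
grid["hr_spike"] = grid["hr_bpm"] > 75

print(grid.head())
print(f"{int(grid['hr_spike'].sum())} samples auto-tagged as HR spikes")
```

The same pattern scales out: each extra stream is another resample-and-join onto the shared grid, and each tagging rule is another derived column – which is exactly why the wrangling stage balloons as sensors are added.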

Now, we’re not trying to put you off with our own horror stories, or claim that we’ve solved all this so you don’t have to go through it… Well, alright, maybe we are.

It is of course beneficial to try things for yourself and get your fingers burned at various points along the journey – you learn a lot by doing so. Indeed, our customers come to us having reached different milestones, bringing varied experience and issues with them. But as with any complex endeavour, you have to figure out how to apply your resources optimally. If your team has the skills available, you might prefer to code your own interfaces, create your own models and so on. However rich your pool of resources, though, you will need to weigh the benefits of the DIY, learn-on-the-job approach against the efficiency of being able to skip ahead to tackling the task at hand: measuring how people feel.

Participants in a Sensum field study.

Many researchers exploring psychological factors don’t want to have to put themselves through a self-taught comp-sci diploma just to view data from a sensor, let alone several different types of sensors running concurrently. They want to design robust experiments, study participants, and learn from the results – and then go again, in rapid iterations.

Synsis™ Is a Lab in Your Hand

With our new Synsis Empathic AI Kit, we have tried to automate all the steps listed above into one plug-and-play solution. You can just power on the kit and focus on what matters: running well-controlled studies. Synsis aims to give you a multimodal biometric sensor lab packed into a single box, granting new superpowers to any researcher who wants to measure human data:

  1. Multimodal research – body, face and voice data, along with other data & media sources for context – automatically time-synced and combined into a single dataset, to provide a holistic picture of the person.
  2. Custom ground truths – establish your own ground truth for any human state you want to measure, and test it against real-time synchronised sensor data (sketched below).
  3. Automated insights – a feed of objective data, both raw (e.g. heart rate and facial coding) and derived by our psychophysiological models – e.g. valence (positive-negative), arousal (excited-relaxed), and stress.
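
The second of those is, at its core, a joining problem: your ground-truth labels need to line up in time with the synced sensor data before you can test anything against them. Below is a toy illustration of that join in Python/pandas; the signals, ratings and column names are invented for the example and are not the Synsis API.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
t0 = pd.Timestamp("2024-01-01 12:00:00")

# A synced sensor dataset, e.g. the output of a pipeline like the earlier sketch.
sensors = pd.DataFrame({"hr_bpm": 70 + rng.normal(0, 3, 120)},
                       index=pd.date_range(t0, periods=120, freq="1s"))

# Hypothetical ground truth: a self-reported stress rating every 30 seconds.
labels = pd.DataFrame({"stress_1to5": [1, 2, 4, 3]},
                      index=pd.date_range(t0, periods=4, freq="30s"))

# Attach each sensor sample to the most recent rating, then check how well
# the raw signal tracks the labels.
merged = pd.merge_asof(sensors, labels, left_index=True, right_index=True)
print(merged.groupby("stress_1to5")["hr_bpm"].mean())
print("correlation:", merged["hr_bpm"].corr(merged["stress_1to5"]))
```

The same join works for observer annotations or stimulus logs in place of self-reports; once everything shares a timeline, testing a candidate ground truth becomes ordinary dataframe work.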

What’s Standing in Your Way?

Now that Synsis is proudly entering its second major iteration, after our first release last Autumn, we have packaged the end-to-end process described above into a single platform that requires no technical expertise and can be set up in a few minutes. But there is still much to address, such as onboarding new sensors and data types, optimising for cloud vs edge processing, providing more advanced analytics, and so on. And it’s your feedback that will drive our roadmap for delivering those features. So please tell us about any experience you have with this kind of research, what you’ve learned along the way, and what you would like to see in an empathic AI research kit.

We can also bring humans to the table. Having designed and managed complex studies for some of the world’s largest brands, in environments as awkward as a forested mountainside or a freezing wind tunnel, we offer expert support for your research and development. Our team’s capabilities include cognitive psychology, data science and software development, so we can consult with you through the entire process, from protocol design to product decisions. Here’s a bit more about our services – or just get in touch.

Read on…

In this series:


Ben Bland

Chief Operations Officer