
Why Emotional AI Is Not the Future of Workplace Wellbeing

Technology is rapidly changing how organisations approach wellbeing. From digital check-ins to AI-driven emotion tracking that draws on facial expressions, speech, and biometric data, employers are turning to tools that claim to understand how people feel at work.

Often introduced with good intentions, these systems aim to spot early signs of burnout and improve support. But their use raises tough questions: What happens when emotional data is collected out of context? Who controls it? And what if it’s wrong?


When Wellbeing Becomes a Data Point

Marketed as supportive, many of these tools track stress and sentiment through wearables and communication monitoring. But instead of tackling root causes, they often shift the burden onto individuals, encouraging them to self-regulate in unhealthy environments.

And their accuracy is questionable. A University College London study found that emotion recognition tools correctly interpreted real-world expressions only 48–62% of the time, compared with 72% accuracy for human observers. That’s a worrying gap, especially when wellbeing decisions may be based on flawed data.


What Psychology Tells Us

Occupational psychology highlights the importance of psychological safety: the belief that you can be open and honest at work without fear of negative consequences. But when people feel watched, or interpreted without their input, that safety starts to wear away.

Emotional monitoring can lead to impression management, not wellbeing. People smile through stress. They withhold how they really feel. They adapt to fit a version of ‘wellness’ that looks good on a dashboard.

And for those who are neurodivergent, disabled, or from diverse cultural backgrounds, the risk of misinterpretation is even higher. What gets flagged as disengagement or mood changes may simply be a difference in communication, but it can still affect how people are viewed.

The UCL research also flagged consistent racial bias in emotion AI tools. According to their findings, these systems often mislabel Black faces as angry or negative, even when no such emotion was present. The idea that facial expressions alone can reliably indicate feelings like fear or disgust is, as they noted, scientifically unsupported.


Even If Organisations Say They Will Not Use the Data Against Staff

Even when organisations say emotional data will only be used for support, that promise rests on an overly idealistic view of workplace behaviour. People do not always act in line with company values. Those overseeing the results may behave in ways that are self-serving, biased, or quietly punitive, especially when systems give them extra insight into how someone is feeling.

A manager, for example, may already view an employee as underperforming or difficult. This type of data can then be used to confirm that view. It can be brought into performance reviews, questioned in private conversations, or used as justification for decisions that have already been made. The employee has no real say in how their emotional patterns are interpreted or applied.


This is not just about trust. It is about how easily these systems can enable poor management and make it harder for staff to protect themselves.

Take the example of Network Rail, which trialled emotion-detecting AI via Amazon Rekognition at UK train stations. According to The Times, although the aim was to enhance public safety, observers criticised the system as "privacy invasive" and ultimately unreliable. Concerns were raised over the lack of transparency about how emotional data was collected and used, the absence of meaningful consent from the people being scanned, and the broader implications for civil liberties. Those reviewing the trial also noted that the emotion detection was inconsistent, with unreliable results often leading to unnecessary interventions, and warned that such technologies could normalise intrusive surveillance without public accountability.


And these issues go beyond isolated cases. MIT’s Gender Shades research, originally carried out in the US, found error rates of up to 35% for darker-skinned women, compared with under 1% for lighter-skinned men. Those findings are now shaping public debate in the UK: investigations by UK media and independent watchdogs have begun citing the research when assessing similar AI technologies used in policing, public services, and workplaces. The same tools being questioned abroad are already operating in our own public services and workplaces, often without sufficient scrutiny and despite clear warnings from experts and researchers. The dangers are not hypothetical: biased systems have consequences, especially when used without proper oversight.

 

Redesigning Support

Technology can play a role in workplace wellbeing, but it cannot replace relationships, leadership, and accountability. Systems must be designed to protect people, not expose them.

That means:

  • Transparency around what is being measured and why

  • A genuine opt-in process with no penalties for opting out

  • Keeping data separate from performance

  • Focusing on fixing systemic stressors, not flagging individual reactions

Support should not rely on everyone doing the right thing. It should be built to prevent harm when they do not.

 

Wellbeing cannot be outsourced to an algorithm. Support requires listening, not watching. Before using emotional AI, organisations must ask themselves:

Are we supporting our people, or just measuring how well they cope?

 

Every Wellbeing helps organisations take a practical, ethical approach to workplace wellbeing. We offer strategy audits, training, coaching and guidance focused on people, not just data.


Contact us to explore how we can support your team.


 

 

 
 