
We Wouldn’t Trust Bad Science. So Why Do We Trust Bad Wellbeing?

We hear it all the time: “Wellbeing matters.” And rightly so. But have you ever stopped to ask whether the wellbeing strategies used in the workplace are actually effective?


In occupational psychology, we are trained to carefully assess how strong and reliable research is before using it in practice. This focus on evidence became even more important after psychology faced a major challenge known as the replication crisis. Researchers found that many well-known studies could not be repeated with the same results, which raised concerns about how dependable some past findings really were.


Although the issue began in psychology, it has since spread across other areas of science, including medicine, neuroscience, and economics. In response, psychology and related fields have taken steps to improve the way research is designed, reported, and reviewed.


Personally, I believe workplace wellbeing is due for a similar wake-up call.


Are We Just Copying What Sounds Good?

Many organisations roll out wellbeing initiatives like lunchtime yoga, meditation apps, or resilience webinars. These can be helpful, but only if they fit the needs of the people using them. Too often, these ideas are copied from other companies without checking whether they actually work in the new environment.


This is a bit like publishing research without testing if the results can be repeated. In the end, we are left with something that sounds good, but might not do good.


What’s Missing? Evidence and Evaluation

In psychology, we use tools like pre-registered plans and open data to make our work more trustworthy. What if wellbeing programmes were held to similar standards?


Instead of relying on vague claims or popularity, we should be asking:

  • Were the goals of the programme clearly defined from the start?

  • Has it been tested in similar workplace environments, or is it a copy-paste solution?

  • Are the outcomes based on meaningful measures, not just participation rates or feedback forms?

  • Were the results made transparent, including what didn’t work?

  • Did employees genuinely find it useful, and did it lead to any lasting change?


Without proper evaluation, it is hard to know whether these programmes are making a difference, or simply giving the illusion of progress.
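

For readers who want to make that evaluation concrete, here is a minimal sketch in Python of the simplest honest check a team could run: compare the change in a wellbeing measure for staff who used a programme against staff who did not. Every score and group below is hypothetical, invented purely for illustration, and the sketch assumes numpy and scipy are available; a real evaluation would need a proper design, validated measures, and far larger samples.

    # Minimal sketch: did a wellbeing programme move a meaningful outcome?
    # All scores are hypothetical (imagine a 0-100 wellbeing survey),
    # invented purely for illustration.
    import numpy as np
    from scipy import stats

    # Pre/post scores for staff who joined the programme...
    participants_pre = np.array([52, 48, 60, 55, 47, 58, 50, 62])
    participants_post = np.array([58, 51, 63, 61, 50, 60, 55, 66])

    # ...and for a comparison group who did not.
    comparison_pre = np.array([54, 49, 57, 53, 51, 59, 48, 61])
    comparison_post = np.array([55, 48, 58, 55, 52, 58, 49, 62])

    # Compare the *change* in each group, not the raw post scores,
    # so pre-existing differences between groups do not mislead us.
    change_p = participants_post - participants_pre
    change_c = comparison_post - comparison_pre

    # Welch's t-test on the change scores, plus a simple effect size.
    t, p = stats.ttest_ind(change_p, change_c, equal_var=False)
    pooled_sd = np.sqrt((change_p.var(ddof=1) + change_c.var(ddof=1)) / 2)
    d = (change_p.mean() - change_c.mean()) / pooled_sd

    print(f"Mean change, participants: {change_p.mean():+.1f}")
    print(f"Mean change, comparison:   {change_c.mean():+.1f}")
    print(f"Welch t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")

Even a toy example like this makes the lesson visible: with only eight people per group, a genuine effect can easily fail to reach significance, which is exactly the low-power problem the replication crisis exposed.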


Why One-Size-Fits-All Wellbeing Strategies Fail

Workplace wellbeing is often reduced to yoga classes or mindfulness sessions. While these can support some employees, they rarely address the deeper, more complex issues affecting staff day to day.


A session in a quiet office might benefit one team but feel completely out of touch for others, such as night shift workers, frontline staff, or those under intense pressure from customers or management.


Stress at work is shaped by many factors: unrealistic workloads, poor management, unresolved conflict, stalled career growth, chronic health conditions, and pressures outside of work. During the pandemic, for example, even highly skilled NHS surgeons faced extreme burnout. Their experience shows that dedication and competence do not protect against poor workplace conditions.


That is why generic wellbeing strategies so often fail. Real wellbeing support must be layered, inclusive, and rooted in the actual working conditions people face, not just surface-level fixes.


What Can We Do Instead?

Let’s take inspiration from the way good science is done:

  • Test what works in your workplace rather than just copying what’s trendy.

  • Ask employees what they actually need and co-create solutions.

  • Measure outcomes honestly and be willing to adjust.

  • Be transparent about what’s working and what isn’t.


Time for a Replication Crisis in Wellbeing

The replication crisis pushed science to be better. Maybe it’s time workplace wellbeing had its own version.


It is time to move away from repeating feel-good fixes that do not deliver and start asking better questions about what really helps people at work.


Because wellbeing shouldn’t just look good on a slide. It should make a difference in people’s lives.




Key Moments:


  • 2005: John Ioannidis published the now-famous paper “Why Most Published Research Findings Are False,” which raised alarm bells about low statistical power and publication bias in science (the short calculation after this list shows how those two forces combine).

  • 2011: The field was shaken by Daryl Bem’s paper claiming evidence for precognition (ESP) using accepted scientific methods, which sparked widespread debate about methodological flaws in psychology.

  • 2015: The Open Science Collaboration published a major study in Science attempting to replicate 100 psychology experiments. Only about 36–39% of the studies were successfully replicated. This marked a tipping point.
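

To see why low statistical power and selective publication are such a toxic mix, here is a back-of-the-envelope version of Ioannidis’s positive-predictive-value argument in Python. The three input numbers are assumptions chosen for illustration, not measurements:

    # Back-of-the-envelope: what share of "significant" findings are true?
    # Follows the positive-predictive-value logic of Ioannidis (2005);
    # the three inputs are illustrative assumptions, not measured values.
    prior = 0.10   # assume 10% of tested hypotheses are actually true
    power = 0.30   # assume a 30% chance of detecting a true effect (low power)
    alpha = 0.05   # conventional 5% false-positive rate

    true_hits = prior * power            # true effects that come out significant
    false_hits = (1 - prior) * alpha     # null effects that come out significant

    ppv = true_hits / (true_hits + false_hits)
    print(f"Share of significant findings that are true: {ppv:.0%}")  # 40%

Under those assumptions, only about two in five “significant” results reflect a real effect. If journals then publish mostly the significant results, readers see that mix as if it were settled fact, and many of those findings will fail to replicate.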


