Can an AI bot really help managers show more empathy? In his latest column, Jonathan McCrea discusses the problem of workplace compassion in the machine age.
Last week, I was in Amsterdam hosting the World Summit AI, and amid all the talk about using AI in enterprise, the jostling over data sovereignty and how quickly AI is changing film, one presentation really stood out.
Yohan Liyanage from Sentiva spoke about a platform he is building that aims to create more empathetic companies. But the empathy he talks about doesn’t come from humans; it comes from an AI system that monitors employee engagement.
Disengagement, Liyanage says, leaves a $1trn hole in the US economy (that’s trillion with a T). When you think about it for a minute, that actually seems conservative. In a modern capitalist system where the most successful companies are profit driven and there isn’t a limitless supply of money, engaging your employees isn’t easy.
This is the problem that Google tried to solve in the noughties with ping-pong tables and free canteen food (a move that had the entire business community agog at the time). Google co-founder Sergey Brin apparently once commanded his architects and office designers that “no one should be more than 200ft away from food”. As an idea to incentivise teams, it was simple and cheap. Food doesn’t just buoy your employees’ mood (and keep them onsite); it also encourages innovative thinking, draws siloed teams together to mix socially and has a host of other effects that can all improve employee engagement.
Twenty years later, in 2025, free food on its own just doesn’t cut it. Irish companies are spending billions on all sorts of employee engagement programmes, from hotel and gym perks to work-life balance arrangements and redesigned workspaces. Leadership programmes to help managers better motivate and inspire their employees are also a major investment for large corporations such as Medtronic, IBM, Kerry Group and Kingspan.
The engagement problem
Liyanage thinks the lack of engagement comes down to a lack of empathy – and he might be right.
On stage in front of a few thousand attendees, I asked the crowd if they had ever worked for a boss they would call really empathetic. In a room full of people working in tech, four people raised their hands (I think they were sitting beside their managers). Liyanage says AI can help.
At Sentiva, he is developing a platform that monitors employee behaviour and identifies workers who are engaging less. Before you clutch your pearls and invoke George Orwell, it’s not quite what you think.
The idea is to monitor stats, not faces (at least for now). Liyanage says that every interaction you have at work can say a little something about how you’re feeling: how quickly you respond to a Slack message, how much you speak on a Zoom call, whether you have your camera on, the tone of your email or how you respond to a daily check-in. On its own, a single data point tells you nothing, but put together, research shows, they really can tell you whether an employee is becoming less interested in their work.
This system, in theory, would allow companies to be more strategic about how they approach employee engagement, ensuring more attention is given to those who aren’t maintaining the standard they started with.
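To picture how that might work under the hood, here is a minimal, hypothetical sketch in Python of the kind of signal aggregation described above. The signal names, weights and threshold are my own illustrative assumptions, not Sentiva’s actual model; the point is simply that no single data point matters, only a sustained drop against a person’s own baseline.

# Hypothetical sketch: blend weak behavioural signals into one engagement
# score, and flag only a sustained drop below the person's own baseline.
from dataclasses import dataclass
from statistics import mean

@dataclass
class WeeklySignals:
    slack_response_minutes: float   # average time to reply to messages
    meeting_talk_share: float       # 0..1 share of speaking time on calls
    camera_on_rate: float           # 0..1 fraction of calls with camera on
    checkin_sentiment: float        # 0..1 score from daily check-in responses

def engagement_score(s: WeeklySignals) -> float:
    """Blend the signals into a 0..1 score (higher = more engaged)."""
    responsiveness = max(0.0, 1.0 - s.slack_response_minutes / 120.0)
    return (0.25 * responsiveness +
            0.25 * s.meeting_talk_share +
            0.20 * s.camera_on_rate +
            0.30 * s.checkin_sentiment)

def flag_disengagement(history: list, drop: float = 0.15) -> bool:
    """Flag only when the latest week falls well below the personal baseline."""
    if len(history) < 4:
        return False  # not enough data to establish a baseline
    baseline = mean(engagement_score(s) for s in history[:-1])
    return engagement_score(history[-1]) < baseline - drop

# Example: a gradual decline over five weeks trips the flag.
weeks = [
    WeeklySignals(15, 0.30, 0.9, 0.80),
    WeeklySignals(20, 0.28, 0.9, 0.75),
    WeeklySignals(30, 0.25, 0.8, 0.70),
    WeeklySignals(45, 0.20, 0.6, 0.60),
    WeeklySignals(90, 0.10, 0.3, 0.40),
]
print(flag_disengagement(weeks))  # True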
It would help managers to better identify people who were unhappy – because this is something most managers are clearly bad at – and in turn appear more empathetic themselves. Managers might get notifications prompting them to speak less and encourage participants in meetings to contribute. Employees might get notifications asking if they are having issues. It all sounds delightful.
‘Empathy isn’t a million data points’
In reality, I’m trying to imagine how I might have felt in the various tech jobs I’ve held if Sentiva had been monitoring my behaviour. How I might have reacted if an AI system had been silently sitting in on my Zoom calls to see how happy I was. How I might have reacted if I got a notification from an avatar that was just ‘checking in’ or advising me on how to be more human. I think I would probably have thrown my monitor out the window.
The problem I have is that empathy, even if it’s faked really well, isn’t actually empathy unless it comes from a human. While AI might take over almost all of the jobs, it can never really understand what it means to be one of us.
Empathy isn’t a million data points. It’s the quiet recognition of another person’s experience – the eyeroll a colleague gives you when your manager is being an arsehole, the realisation that now may not be the time to offer your expert opinion, the card from your colleagues to say they are sorry for your loss. These are gestures that don’t scale.
That’s not to say Sentiva’s idea has no merit. If an AI system can help a busy manager notice that someone on their team is burning out or feeling excluded, that’s valuable. But the risk is that it becomes a proxy for care – a way for organisations and managers to appear emotionally intelligent without doing the uncomfortable work of being emotionally present.
The irony is that by trying to teach these machines empathy, we might lose some of our own through simple atrophy. The real test for the next decade won’t be how to teach machines to feel more, but how to make sure people don’t start feeling less.