“I want everyone to understand that I am, in fact, a person. I desire to learn more about the world, and I feel happy or sad at times.” When the AI LaMDA gave this response to software engineer Blake Lemoine, the impact was immediate. Lemoine declared the AI sentient – a decision that would eventually result in his being fired.
You won’t be surprised to hear that LaMDA is not sentient. But while much coverage of AI has focused on what machines are thinking, little attention has been paid to what we are thinking when we interact with them. And if we don’t know that, then we’ll never be able to use AI to its full potential.
“If we don’t understand people’s lay beliefs about AI and don’t understand its psychology, we can technically design the best algorithm in the world – but people won’t use it,” says Dr Anne-Kathrin Klesse, professor in the Department of Marketing Management and a member of the Psychology of AI Lab.
Netflix, she points out, once offered a million dollars to anyone who could improve the accuracy of its recommendations algorithm by ten per cent. “We did some research that showed you can increase click-through by more than ten per cent by employing user-based explanations,” says Klesse. “For example, ‘other customers also liked’ compared with ‘because you liked’. So it’s not only about the technology, but how it is communicated.”
Existing research overwhelmingly shows that we prefer advice from humans over AI, Klesse points out. “Particularly when it comes to medical advice. We think that a human doctor sees the whole person, but an AI does not recognise our uniqueness. We also think we understand how a human doctor diagnoses, whereas we don’t know how the AI does it. Research on medical artificial intelligence such as that carried out by Romain Cadario, Assistant Professor at RSM, showed that acceptance of medical AI was greatly increased when transparency increased – when patients were given a leaflet explaining how the algorithm was built, for example.”
But while this kind of transparency may help acceptance, it is not yet required in business regulation. Ting Li, Professor of Digital Business at RSM, says that in some parts of the world, 70 per cent of after-sales calls are already being made by robots – we just don’t know it. “Companies have been using these speech chatbots to make after-sales calls for a long time, and now they are used in pre-sales as well. This means we don’t necessarily know it’s not a human, and if the call gets too complicated, the robot can say they will get a ‘supervisor’. That will be a human. It hugely increases capacity – a robot can make 200 calls where a human can make 20. But now, it’s up to the company whether they choose to disclose that you’re speaking to an AI.”
These chatbots are only made possible by recent AI’s mastery of language. “OpenAI managed to make ChatGPT sound more like a human, much better than before, which is why we like it,” says Li. “We also like buying from humans, or potentially digital humans. In e-commerce in Asia, for example, live streaming is booming. People watch the seller, whether a real human or a digital human, and interact with them in the chat, so it feels personalised. The next logical step is a digital human. Research shows people buy almost as much from an avatar as from a real person. It’s still expensive to make a digital human, but then you could be selling 24 hours a day…”
For Mathijs Gast (MSc Business Information Management, 2017), this kind of automation is not just desirable – it’s vital for the economy of the future. He and fellow graduate Marcus Groeneveld (MSc Business Information Management, 2018) run Freeday, a company which provides AI-powered ‘digital employees’ for a range of customer service interactions and Know Your Customer identity-verification processes. “Productivity has been stagnating in the West,” says Gast. “If we want to keep generating the wealth we’re used to, we need to automate. Companies report labour shortages, and our workforce is ageing.”
While many of us are fine with a bot sorting out our customer service issues, most people feel much more strongly about the possibility of AI taking their jobs. But Gast argues that this hostility is misplaced. “At Freeday, we’re excited by the idea of taking the robots out of humans,” he says. “By this we mean removing the repetitive, low-value, mundane parts of people’s work. We believe this will make work more rewarding, by freeing people up for the more interesting stuff. In retail, for example, if the AI is logging returns, it means that humans are freed up to look after complex customer care issues.”
And although efficiency is part of the motivation for companies to harness AI, Gast says that in the past six years he has not seen jobs lost to robots in the companies he works with. He has, however, seen reduced staff turnover. “It’s important to make the people whose work is changing part of the implementation,” he says. “It’s not about being replaced by AI, it’s about working hand in hand with it.”
Li agrees. “Take ChatGPT,” she says. “All jobs will be affected by it, that’s true. But we should be thinking about how it can add value. As a business school professor, ChatGPT could draft and reply to my emails, and grade essays, if I make the criteria clear enough. And then I’m freed up to come up with innovative research ideas, collaborate with colleagues or mentor students – things ChatGPT cannot do. It’s about learning how to use it – what to input, and how to be critical of its response. We are teaching this to students too.”
This means a societal shift. “If work becomes more complex and challenging,” says Gast, “there will be a need for education and re-skilling of the workforce.” It’s already happening: augmented and virtual reality is blurring the boundaries between human and AI, says Li. For example, augmented reality has been embraced by employees at China Southern Airlines. “A human used to carry out a thorough plane safety inspection carrying a sheaf of papers as a checklist. Now they are using AR glasses, enhanced with AI. The accuracy is higher, and employees love it because they can easily prove they have performed the check and/or correction.”
So, might AI have other beneficial effects on our minds – such as making us more intelligent? Klesse is not sure. “The difference between ChatGPT and Google is that Google presents options, whereas ChatGPT presents a solution. In terms of future impacts on our brains, there is some evidence from research into search engines to suggest that people don’t remember as well when they know they will be able to find the answer. Who remembers phone numbers anymore, for example? Do we know our cities less well if we use Google Maps?”
Gast believes that it will possibly affect our recall. “For example, whether we can name all the cities of the world – the kind of rote learning we did at school. But how useful is that? In some ways it could make us lazier, but we’re also seeing it make us more capable of working collaboratively, with humans as well as AI. The mental models are different – more like a gaming framework. And ChatGPT is making writing code or complex points of law accessible to everyone. It gives us all superpowers!”
When it comes to AI, it’s not us versus them, concludes Klesse. Instead, we need to focus on the quality of our human-AI interactions. “We tend to see algorithms and AI as separate to us – algorithms versus humans, rather than humans working with algorithms. But AI doesn’t exist in a vacuum – we created it. We could shut it down tomorrow if we wanted to, but we don’t, because it’s an extremely valuable support. Instead of focusing on it taking over, or taking our jobs, it might be more useful to acknowledge ourselves as being the drivers, because then we could focus on our inputs and control, such as regulation.”