Perilous transitions: beware the chatbot that claims to solve all your problems
Chatbots. When you work in any part of digital service design and delivery, it’s hard to avoid hearing about them, and I think the current hype around generative AI models like GPT-4 will only increase the discussions about (much less sophisticated) chatbots for service delivery.
I am something of a chatbot sceptic, and I want to show you why. Not theoretically, but by taking you through a genuine interaction I’ve had with a chatbot. In this case, to close a savings account with Santander. I hope to show that the work of doing good service design is made more challenging, not less so, by choosing a chatbot. This is the real world of chatbot experience, not the happy path you might see in a demonstration.
I write this not to pick on Santander. I’ve had similar chatbot interactions with companies like NatWest, Curve and Moneybox, and it’s pretty standard for chatbots in my experience. But an example is the best way to show you what I mean.
What I needed to do
I set out with a pretty simple goal: to close a Santander savings account. All Santander needed to do was:
- Verify that it’s really me
- Find out which account I want to close
- Find out where I want the money in the account to go, if there are different options
Even though this is a simple task, organisations often require convoluted approaches to close accounts or cancel a service — letters, phone calls or even an in-person visit. This happens for a few reasons: some companies want to make it hard to close accounts, while in others cancellation is seen as an experience that doesn’t make a profit and so doesn’t warrant attention. However, this is starting to change. More and more service providers are recognising that cancelling a service is an important part of the experience for a customer who might well come back to using their service in the future.
Unfortunately, along with a move to self-service cancellation, there’s a new trend of companies allowing customers to cancel only through a chatbot. It’s seen as the easy way to “channel shift” this task online — a telephone call becomes a chatbot interaction. As Lou Downe says in Good Services, “the simple fact that our services weren’t designed for the channel they’re delivered in is one of the most common causes of service failure”. If service providers started by thinking about how best to let users close an account in a mobile app context, I doubt that this is where they’d end up. But I’ll let you judge that for yourself.
So, here goes: a description of my attempt to close my account.
Round 1
The interaction starts off apparently OK, with some expectation-setting.
Now the main questions start. First I’m asked why I am closing the account. Most of the options make sense, but “Regular eSaver upgrade” seems out of place. The language is also inconsistent — the account I want to close is called a “Regular eSaver”, but it’s described as “Monthly Saver” in the list of accounts in the app. Not a great start.
Now, the service is collecting the most important piece of information it needs to complete my request. It’s vital that it gets this right. But this question is cluttered with lots of unnecessary information:
- “💡You can only close one at a time” — I’d be surprised if users commonly expect to close more than one account at once, so this note adds little
- “If you submit a closure request before your account has matured, you might miss out on interest” — this brings unnecessary anxiety to the user. In my case, my account has not yet matured, but I won’t miss out on interest.
- The remaining information isn’t relevant to closing a savings account
Remember that I’m on a mobile phone, with limited screen space. This clutter means that I can’t see the question and the responses at the same time. The chatbot pattern actually reduces the available screen space further, as the messages need to be aligned left and right to make it clear which are mine and which are from the service. On my device, that just leaves 36mm of space to fill — not much more than the width of a Nokia 3310 display, which was around 33mm.
This is an example of a common problem in services: not using information that the service already has. The information about credit cards could be omitted, because I don’t have a Santander credit card. I’d sum this up like this:
In a transactional service, using the word “if” with your users is normally a sign your service is not well designed. Use the information you already have to avoid this and provide only the relevant information to your user.
Increasingly, services are facing requirements to better use the information they have already. At GDS’s Open Show and Tell as part of Services Week, Tom Read talked about Estonia’s once-only principle, a legal requirement that government doesn’t ask for the same information twice. And in the new WCAG 2.2 accessibility standard, which will become a legal obligation for public sector bodies later this year, the new success criterion Redundant Entry prohibits asking for the same information in the same process. When you’re addressing these problems, consider whether you can banish the word “if” from content in your transactions entirely.
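To make the principle concrete, here’s a minimal sketch (with hypothetical types and copy, not Santander’s actual system) of generating closure messages from what the service already knows about the customer, so that no user-facing “if” is needed:

```typescript
// Hypothetical customer record — the service already holds this information.
interface CustomerRecord {
  hasCreditCard: boolean;
  accountMatured: boolean;
  pushNotificationsEnabled: boolean;
}

// Build only the messages that apply to THIS customer,
// instead of sending every customer every conditional clause.
function closureMessages(c: CustomerRecord): string[] {
  const messages: string[] = [];

  // Mention maturity only when it is actually relevant.
  if (!c.accountMatured) {
    messages.push(
      "Your account has not yet matured, but you will not miss out on interest."
    );
  }

  // The service knows whether push notifications are enabled,
  // so tell the customer what will actually happen.
  messages.push(
    c.pushNotificationsEnabled
      ? "We'll send you a push notification when the closure is complete."
      : "We'll confirm the closure in your app inbox."
  );

  // Credit card information is omitted entirely for customers without a card.
  if (c.hasCreditCard) {
    messages.push("Closing this savings account does not affect your credit card.");
  }

  return messages;
}
```

The “if”s live in the code, where they belong, and the customer sees only the statements that apply to them.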
Alright, let’s get back to the interaction!
Here comes one of the worst interactions with this chatbot. I can’t see this wall of text on a screen in one go, and it’s full of flaws:
- We have business language creeping in — I’d never heard of “trailing interest” before
- The “if” word has crept in again: “If you don’t have any other instant access accounts” (Santander knows that I do), “you’ll receive a push notification if you have them enabled” (apps know if people have enabled or disabled push notifications).
- An important request (“In your message please confirm your full address”) is in the middle of this text, where it’s likely to be missed. Think about primacy and recency effects when designing messages, and don’t bury important things in the middle.
- More irrelevant information has appeared. I don’t have an ISA or Fixed Term Savings Account with Santander, but I’m told about both.
- Transitions out of the chatbot are managed poorly. “website” is an abysmal label for a link, and why ask a user to move out of the chatbot to find a phone number? There’s an easy opportunity here to include a link to call the phone number from right within the chatbot.
Right. So I’ve clicked on the correct link to “Leave message”, despite it once again being buried in the middle of a message. But why is this step even necessary? The response is not very helpful — good interfaces don’t need instructions. If I didn’t understand that I needed to “tap the arrow to send it”, I wouldn’t have got this far. Never mind the fact that there are three ‘arrow-like’ buttons on the screen, and the one that looks least like an arrow is the one that I need to press:
At least I finally get the chance to enter some free text describing my request:
Once I see the message saying “we’ll reply as soon as possible”, I immediately quit the app. Remember that I was told at the beginning, in expectation-setting, that the chatbot would “ask a few questions and pass [my] message to a colleague to close your account within two working days”.
Unfortunately, this meant I didn’t see the following messages until I went back into the app, an hour later.
Another wall of text appears. This time, with lots of information about how their customer services operation works. I am not really concerned with their “personal queue”, and I expect that a bank will “deal with multiple customers”. Look at the timings on these messages: it’s not necessary to say that “I will be with you shortly” or “thanks for your patience” when the next response comes 60 seconds later.
The fact that I even saw this message at all demonstrates another chatbot challenge. On websites, loading messages or interstitial pages only appear briefly. If you switch back to a tab after it’s loaded, the loading message won’t appear. But in chatbots, these messages tend to persist.
Erin also asks me to confirm my full address — I suspect they have to do this a lot, given the poor placement of this request in the earlier message.
Oh dear! I’m being asked to “get back in touch… if [I] need any further support.” But my query is far from resolved. The chatbot has poorly managed a transition between bot interaction and human interaction — by unexpectedly requiring the customer to interact live. Good chatbots won’t require a customer to interact live, but instead, allow the customer to respond later.
Now there is a flawed attempt to measure the success of this transaction — and Erin asks me to complete the survey while ignoring “system processes or procedure which [they] have no control over”. I feel the frustration, Erin. We’re in this together. Well, sadly, we’re not any more — our conversation is “resolved”. ☹️
Even so, let’s try to carry on…
Round 2
WTF? This is nothing to do with my query at all. The chatbot appears to have no awareness of what just happened!
I’m now left at a complete dead end. I resort to trying techniques I learned from rumours that saying a word like “help” or “agent” would allow you to escape tricky phone IVR systems.
I’m back to a familiar place. It’s not a great feeling — I’m anticipating having to go through this whole process again.
This time, probably more frustrated than before, and also thinking that I know my way around the process, I whizz through the questions, deciding to copy and paste my previous query. But remember the unnecessary “Leave message” button we talked about earlier? This time I didn’t press it, and look what happened:
We’re back to the start of the process yet again! This is another transition that hasn’t been handled well, this time the transition from structured to unstructured interaction. The chatbot is programmed to expect the “Leave message” button to be clicked, and anything else sends it back to the beginning.
Round 3
By this stage I’m just copying and pasting.
This really isn’t the moment to try and market new products to a customer.
Phew, I remembered to press Leave message this time!
Since I was asked for a sort code and account number earlier, I assume that they’re looking for a non-Santander account (they already have my Santander account details!). But this time, I’m told that they cannot send interest to an external account — and they don’t actually need those account details. Better this time round, though the inconsistency isn’t helpful.
But they can’t finish this off without another wall of text. Sure enough, all this information is irrelevant — the account has no cards or cheque books, and I don’t have a standing order to the account. All of this information is known to Santander.
Phew, that was hard work.
62 messages over and back.
62 messages.
It could have been done in just a couple of messages. Or a regular old interaction in an app could have handled this in a few clicks.
So what does this all mean?
If you’re thinking about building a chatbot, in most cases:
- Just don’t.
Experiences like this will be typical for your customers. If you really insist on going this route, make sure that you handle the three perilous transitions well:
- Transition between live and time-delayed interactions. Expect users to drop out in the middle of a live process, and don’t treat it like they hung up the phone on you.
- Transition between structured responses and free text responses. This includes receiving free text from users when you expect a structured response.
- Transition between bot interaction and human interaction. Make sure that the human has the information provided to the bot, and that the bot can act based on context of previous human interactions.
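The second transition is the one that sent me back to the start: the bot expected a button press and treated my free text as garbage. A minimal sketch (hypothetical steps and labels) of handling it more gracefully is to let each step accept free text as a valid answer, so the session always moves forward rather than resetting:

```typescript
// Hypothetical steps in a simple account-closure flow.
type Step = "askReason" | "collectMessage" | "done";

interface Session {
  step: Step;
  reason?: string;
  message?: string;
}

// Advance the session whatever form the input takes.
// Crucially, free text at a "structured" step is accepted as the answer,
// instead of being treated as an error that restarts the whole flow.
function handleInput(session: Session, input: string): Session {
  switch (session.step) {
    case "askReason":
      // Button label or free text — either way, record it and move on.
      return { ...session, step: "collectMessage", reason: input };
    case "collectMessage":
      // Free text here IS the message; no separate "Leave message" button needed.
      return { ...session, step: "done", message: input };
    default:
      return session;
  }
}
```

With this shape, pasting a query without first pressing a button still lands it in the right place.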
And remember that all the principles of good user experience design apply here, in particular:
- Cut out the rubbish. Be very sparing with how much content you share with your users. Take the good principles you’d apply to web content, and take them to the next level.
- Don’t assume or require specific behaviour from users. Expect the unexpected.
- Use the information you already have. (Part of doing the hard work to make things simple for your users).
If you think you can do all of that well, ask yourself: if a company with as large a budget as a major bank still makes such a mess of this, do you really have the capability to do better? You might well find that building a regular old interaction to complete common tasks is much easier and cheaper to do well.