Improving Customer Service and Security with Data Analytics
The advantages of analytics for customer service have already been demonstrated. Now the question becomes: How can analytics be used to improve security?
Organizations are collecting more and more data. And
while rich data allows personalized service, detailed data about real people
(rightly) often raises concerns. Just as this data is increasingly valuable to
organizations, it can be valuable to criminals as well, leading to an
ever-escalating series of data breaches. Data analytics exacerbates the trade-off between security and service: because much of marketing analytics tries to learn as much as possible about potential customers, its processes can, at a minimum, raise privacy concerns for individuals. These same processes are also becoming increasingly powerful at de-anonymizing people from their trace data.
Yet these same de-anonymization techniques are an example of how analytics can offer at least a partial solution to the problems it has exacerbated.
Consider, for example, placing a call to your bank
for help after losing your debit card. The core problem is that, before
providing customer service, the bank must authenticate that you are who you say
you are. This authentication process must begin with the assumption that the
caller is a malefactor impersonating the real customer — guilty until proven
innocent. The bank will help the caller only after being convinced of the
caller’s identity.
While this process is annoying when we’re customers
seeking help, we actually want and need this level of security. It is in our best interest that the bank verify we are who we say we are before continuing to assist us. After all, we don’t want the bank to hand out our
money (or our new debit card) willy-nilly to just anyone.
Historically, this telephone authentication process has involved answering a set of questions. What is your account number? What is
your personal identification number (PIN)? What is your Social Security number?
Can you verify the last three transactions in the account? What is your prior
address? The process continues, potentially escalating to security challenge
questions based on shared secrets, until the bank is convinced of our identity.
This process is adversarial by design. Even the name “security challenge question” evokes a combative stance. The caller is not trusted until they have run the gauntlet. For
banks, it is unfortunate that so many initial interactions with a customer are
adversarial in nature.
But data and machine learning, specifically speech
processing, offer a great example of an invisible way that analytics can
simultaneously help improve security and service. The technology itself isn’t
that new, but speech processing has progressed to the point where financial
services companies can match a caller’s voice to their prior calls, allowing
the authentication process to occur behind the scenes as the customer service
conversation progresses.
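To make the matching step concrete, here is a minimal Python sketch of voiceprint verification. It assumes a speaker-embedding model already exists and fakes the embeddings with random vectors; the verify_caller helper, the embedding size, and the 0.75 threshold are illustrative assumptions, not any bank’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(call_embedding: np.ndarray,
                  enrolled_embedding: np.ndarray,
                  threshold: float = 0.75) -> bool:
    """Accept the caller if this call's voiceprint is close enough to the
    voiceprint enrolled from prior calls."""
    return cosine_similarity(call_embedding, enrolled_embedding) >= threshold

# Illustration only: a real system would derive embeddings from live audio
# with a speaker-recognition model; here we simulate them.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                             # prior calls
same_speaker = enrolled + rng.normal(scale=0.3, size=256)   # similar voice
different_speaker = rng.normal(size=256)                    # unrelated voice

print(verify_caller(same_speaker, enrolled))       # True: likely a match
print(verify_caller(different_speaker, enrolled))  # False: likely a mismatch
```

In practice the threshold would be tuned against the bank’s tolerance for false accepts (fraud risk) versus false rejects (frustrated genuine customers).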
Fidelity Investments, for example, encourages the
use of voiceprints to confirm identity within the first moments of a
conversation. HSBC is beginning to do this not just for premier clients, but at
scale for retail clients as well. And the change doesn’t just help customers avoid yet another password or secret question: Barclays notes a 20-second reduction in time to authenticate, and those 20 seconds add up quickly to considerable savings in employee time for the bank.
The convenience and savings may be the initial
drivers of this change. However, perhaps a bigger effect, more elusive to
quantify, is the change in orientation. Data and machine learning can ensure
that the customer interaction begins by focusing on assistance rather than
challenge. Customer service can work with, not against, a caller who (in all
statistical likelihood) is a genuine customer, not a con artist — innocent
until proven guilty, in other words. Customer service doesn’t have to assume at the outset that callers might be nefarious; identity validation can occur in parallel while the conversation is getting started. This means that the unlikely (but potentially damaging) scenario of a security threat doesn’t have to poison the majority of interactions with valid customers, yet the threat is not left unaddressed. Organizations can relegate security checks to behind the scenes, where they belong. The authentication process is passive, churning along in the background; security becomes visible only if a problem is found. In this case, the artificial intelligence augments the human employee in ways that are not visible to the customers.
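This “innocent until proven guilty” orientation can also be pictured in code: service begins immediately while verification runs as a background task that surfaces only on failure. The sketch below is schematic, and passive_verification, get_score, and escalate are hypothetical names; a real call center would score streaming audio rather than a fixed value.

```python
import threading
import time

def passive_verification(get_score, threshold=0.75, on_failure=None):
    """Run identity verification in the background while the agent talks
    to the caller. Success stays silent; only a failure becomes visible."""
    def worker():
        time.sleep(1.0)  # stand-in for the latency of scoring live audio
        if get_score() < threshold and on_failure is not None:
            on_failure()  # escalate, e.g., fall back to challenge questions
    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread

def escalate():
    print("Voiceprint mismatch: fall back to security questions.")

# The conversation starts at once; authentication churns along behind it.
checker = passive_verification(get_score=lambda: 0.92, on_failure=escalate)
print("Agent: How can I help you today?")  # assistance, not challenge
checker.join()  # in production the check would simply run alongside the call
```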
As a result, valuable and expensive training time
for customer service employees can be spent more on banking and less on
security. While the direct result is more effective customer-service training,
the indirect result is scale. When a new security threat emerges, the bank can
deploy countermeasures quickly to all customer service interactions.
And more can likely come from this initial
application. For example, a customer may in fact be who they say they are, but may be acting under coercion, or may be suffering from some impairment. Speech
patterns that indicate these possibilities can be brought to the attention of
the customer service agent for further assessment.
Because the process is, by definition, invisible, examples like this may get far less attention than humanoid robots or chatbots. But analytics can help mitigate some of the trade-off between security and service that increased data collection exacerbates. These applications may have a far greater effect on organizations’ customer relationships than the ostentatious examples that are more effective at marketing than at managing.
Source: http://sloanreview.mit.edu
Regards!
Librarian
Rizvi Institute of Management