Question: What do Bill Clinton, state-of-the-art call centers, and NASA all have in common?
Answer: They all understand the importance of language and personality.
NASA recognized the value of personality more than three decades ago, using a linguistics-based model called the Process Communication Model to vet astronaut candidates and build successful teams.
“Bill Clinton studied this same model during his first presidential campaign in order to connect with voters, with phenomenally successful results,” says Mattersight CMO Jason Wesbecher. In fact, Clinton used it to connect with the woman to whom he famously replied, “I feel your pain.”
And now some call centers are doing language analysis and personality profiling to deliver a better customer experience.
“The call center is a microcosm of the world, with each phone call serving as a miniature relationship,” Wesbecher wrote in a recent TMCnet blog. “Except that in most cases, personality pairing is left to chance, not choice. Essentially, everyone’s stuck in a forced marriage. If they’re a mismatch, the two people on the call probably have a rocky conversation ahead of them. And with four primary personality types at play in a given call center, the odds of being a good match are not in their favor.”
Mattersight aims to address that by tuning in to grammar, tempo, tone, and syntax to provide context about conversations and identify the personality types of the speakers. The company, which has 10 million language and behavioral algorithms, acquired an exclusive license to the language-based behavior model developed in conjunction with NASA.
It enables customers who value facts and data, for example, to be paired with agents who are inclined to offer such information.
“Those who tend to need a little emotional support route to agents who are great at providing that,” says Wesbecher. “The old forced marriages are being pushed aside in favor of well-thought-out pairings, and every metric that stems from those pairings feels the love.”
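The pairing idea Wesbecher describes can be sketched in a few lines of code. Everything below is illustrative: the four type names and the compatibility table are invented for this example, not Mattersight's proprietary model.

```python
import random

# Hypothetical compatibility table: which agent types suit each caller type.
# Type names and pairings are invented for illustration only.
COMPATIBLE = {
    "thoughts": ["thoughts", "opinions"],   # fact-driven callers -> data-oriented agents
    "emotions": ["emotions"],               # callers needing empathy -> supportive agents
    "opinions": ["opinions", "thoughts"],
    "reactions": ["reactions", "emotions"],
}

def route_call(caller_type, available_agents):
    """Return the first available agent whose personality type is
    compatible with the caller's, falling back to any free agent."""
    for wanted in COMPATIBLE.get(caller_type, []):
        for agent in available_agents:
            if agent["type"] == wanted:
                return agent
    # No compatible agent free: fall back to chance, as in a traditional queue.
    return random.choice(available_agents)

agents = [
    {"name": "Ana", "type": "emotions"},
    {"name": "Raj", "type": "thoughts"},
]
print(route_call("thoughts", agents)["name"])  # -> Raj
```

The fallback branch is the "forced marriage" of the quote above: when no compatible agent is free, the pairing is left to chance.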
As discussed in the December issue of CUSTOMER magazine, Interactive Intelligence is another company with new applications that match customers with agents based on their personalities.
The company’s PureMatch solution leverages agent attributes from personality profiles, along with collected caller criteria, to dynamically match a customer with the best agent, Jason Alley, senior solutions marketing manager for Interactive Intelligence, explained in an interview with CUSTOMER late last year. This happens without the caller playing an active role in the matching process. A second application puts the power of selection in callers’ hands by presenting them with information about agents, such as personal characteristics and interests, service ratings, and wait times, and letting callers pick the agent they think is best.
“Our customer selection application lets callers shop for a service experience just like they would shop for a product online,” said Alley. “The caller in this case can decide what’s most important to them.”
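The "shop for a service experience" idea Alley describes boils down to sorting the available-agent list by whichever attribute the caller cares about most. A minimal sketch, using hypothetical agent fields (`rating`, `wait_minutes`, `interests`) that stand in for whatever the real application exposes:

```python
def present_agents(agents, sort_key="rating"):
    """Sort the available-agent list by whatever the caller says
    matters most: higher ratings first, shorter waits first."""
    reverse = sort_key == "rating"
    return sorted(agents, key=lambda a: a[sort_key], reverse=reverse)

agents = [
    {"name": "Lee", "rating": 4.2, "wait_minutes": 1, "interests": "photography"},
    {"name": "Sam", "rating": 4.9, "wait_minutes": 6, "interests": "gaming"},
]

# Caller cares most about service rating:
print([a["name"] for a in present_agents(agents, "rating")])        # -> ['Sam', 'Lee']
# Caller cares most about a short wait:
print([a["name"] for a in present_agents(agents, "wait_minutes")])  # -> ['Lee', 'Sam']
```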
Meanwhile, Nexidia offers a solution that enables organizations to capture and analyze vocal cues to more effectively understand what’s happening in conversations with customers so they can both improve the customer experience and meet their other goals.
For example, explains Nexidia CSO Ryan Pellet, one large wireless service provider leverages such technology to understand when a conversation is going south, so it can take steps to get things moving in the right direction. The provider applies sentiment analysis to every call coming into its contact center and outfits supervisors with iPads running an application that shows the sentiment of each agent’s calls. If a call gets into trouble, supervisors can see it in real time and then offer coaching, transfer the call to another agent, or take over the call themselves.
Nexidia in June of 2014 unveiled version 11 of its Interaction Analytics solution, which features a trademarked technology called Neural Phonetic Speech Analytics that leverages large-vocabulary continuous speech recognition and phonetic indexing. It produces word-level transcriptions, a phonetic index, and sentiment scores. The company built in machine learning that detects good or bad sentiment by analyzing factors such as whether a person is talking loudly or quickly, whether anyone is laughing, whether the speakers are interrupting each other, and the context of the words being said.
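At its simplest, that kind of sentiment flagging can be approximated with a rule over the same vocal cues the article lists. The feature names and thresholds below are arbitrary illustrations, not Nexidia's model, which combines them with word-level transcription and machine learning:

```python
def sentiment_flag(call_features):
    """Score a call segment from simple vocal cues and return a flag
    a supervisor dashboard could display. Thresholds are arbitrary."""
    score = 0
    if call_features.get("loud"):
        score -= 1
    if call_features.get("fast"):
        score -= 1
    if call_features.get("interruptions", 0) > 2:
        score -= 1
    if call_features.get("laughing"):
        score += 2
    if score >= 1:
        return "positive"
    if score <= -2:
        return "negative"
    return "neutral"

print(sentiment_flag({"loud": True, "fast": True, "interruptions": 4}))  # -> negative
print(sentiment_flag({"laughing": True}))                                # -> positive
```

A "negative" flag here corresponds to the supervisor alert in the wireless-provider example above: the signal to coach, transfer, or take over the call.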
Edited by Dominick Sorrentino