Art of the Customer Experience

Mobile Customers Need UC Choice for All Interactions

By Art Rosenberg  |  January 10, 2014

Customer service is moving away from legacy telephone call centers and IVR applications toward support for multimodal mobile consumers. This shift is particularly critical for BYOD users, who will be using a variety of smartphones and tablets, with different form factors and mobile operating systems, for all their mobile interactions with people and with online applications.

UC-enabled speech has now become part of the multimodal approach to online mobile apps and personal assistants (like Apple’s Siri), as well as an option for all forms of messaging between people and online applications.

I have long viewed multimodal unified communications as covering not only the choice of interface medium, but also the choice between synchronous and asynchronous connectivity. This factor will increasingly come into play for mobile customer services in the form of click-for-assistance options within self-service mobile apps. (The most notable recent development here is Amazon’s Mayday button, which connects users to live video agents for help in learning to use the many features of the latest Kindle Fire HDX tablet.)
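
To make the synchronous/asynchronous distinction concrete, here is a minimal TypeScript sketch of such a click-for-assistance handler. All of the names (AssistanceMode, HelpDesk, requestAssistance) are illustrative assumptions, not any vendor’s actual API.

```typescript
// Hypothetical sketch: the user's choice between asynchronous and
// synchronous assistance inside a self-service mobile app.
type AssistanceMode =
  | { kind: "message"; body: string }   // asynchronous: text/voice message
  | { kind: "chat" }                    // near-real-time: IM chat session
  | { kind: "call"; video: boolean };   // synchronous: voice/video connection

interface ChatSession { send(text: string): void; close(): void; }

interface HelpDesk {
  sendMessage(body: string): Promise<void>;     // queued, answered later
  openChat(): Promise<ChatSession>;             // live text session
  connectCall(video: boolean): Promise<void>;   // live voice/video (e.g., via WebRTC)
}

// A single "click for assistance" entry point lets the user pick the mode;
// the self-service application logic does not change per mode.
async function requestAssistance(mode: AssistanceMode, helpDesk: HelpDesk): Promise<void> {
  switch (mode.kind) {
    case "message": return helpDesk.sendMessage(mode.body);
    case "chat":    { await helpDesk.openChat(); return; }
    case "call":    return helpDesk.connectCall(mode.video);
  }
}
```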

Because consumers increasingly use multimodal mobile devices for online self-service applications, it is time to accommodate the practical combination of speech input with visual output (text, pictures, video), which makes interactions faster and easier for the user. When other modes are needed, e.g., voice only or visual only, there must be no impact on the basic application interface logic.
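
As a rough illustration of speech input paired with visual output, here is a minimal browser sketch assuming the (vendor-prefixed, in some browsers) Web Speech API for recognition; the lookupOrder function and /api/orders endpoint are hypothetical stand-ins for the actual application logic.

```typescript
// Minimal sketch: speech used as the input medium, result rendered visually.
// Assumes a browser exposing the Web Speech API (prefixed in some browsers).
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

// The application logic is medium-independent: it takes text and returns text.
async function lookupOrder(query: string): Promise<string> {
  const res = await fetch(`/api/orders?q=${encodeURIComponent(query)}`); // hypothetical endpoint
  return res.text();
}

function startVoiceQuery(outputEl: HTMLElement): void {
  const recognizer = new SpeechRecognitionImpl();
  recognizer.lang = "en-US";
  recognizer.onresult = async (event: any) => {
    const spoken = event.results[0][0].transcript;    // speech converted to text input
    outputEl.textContent = await lookupOrder(spoken); // visual (text) output
  };
  recognizer.start();
}
```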

The flexibility for a web-based mobile self-service application to use VUIs for input and GUIs for output means that such applications have to move away from their old development silos, which assumed either all-GUI online access from desktops or IVR telephone user interfaces from phones. What is implied is a new, separate layer of interface control driven by the end user, not by the application itself. The application will receive input and generate output through a standard data connection, but the input will be converted from whatever medium the end user created it in, and the application’s response will likewise be converted to whatever medium the end user dynamically needs.
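
One way to picture that separate interface layer is the sketch below, under assumed names (InputAdapter, OutputRenderer, and the /api/interact endpoint are all hypothetical): the application exchanges only canonical data, while pluggable adapters convert to and from whatever medium the user chose.

```typescript
// The application sees only canonical requests and responses.
interface CanonicalRequest { intent: string; text: string; }
interface CanonicalResponse { text: string; imageUrl?: string; }

// Input side: each medium normalizes to the same canonical request.
interface InputAdapter { capture(): Promise<CanonicalRequest>; }
// Output side: each medium renders the same canonical response.
interface OutputRenderer { render(response: CanonicalResponse): void; }

class TextInput implements InputAdapter {
  constructor(private field: HTMLInputElement) {}
  async capture(): Promise<CanonicalRequest> {
    return { intent: "query", text: this.field.value };
  }
}

class ScreenOutput implements OutputRenderer {
  constructor(private el: HTMLElement) {}
  render(r: CanonicalResponse): void { this.el.textContent = r.text; }
}

class SpokenOutput implements OutputRenderer {
  render(r: CanonicalResponse): void {
    // Browser speech synthesis: same response, different output medium.
    speechSynthesis.speak(new SpeechSynthesisUtterance(r.text));
  }
}

// The application logic itself never knows which media were used.
async function runInteraction(input: InputAdapter, output: OutputRenderer): Promise<void> {
  const request = await input.capture();
  const res = await fetch("/api/interact", {           // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  output.render(await res.json());
}
```

Swapping TextInput for a speech adapter, or ScreenOutput for SpokenOutput, changes nothing in runInteraction, which is the point of keeping interface control outside the application.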

An article I recently read on this subject suggests that new W3C standards will facilitate this coming change with an interaction manager that allows the dynamic use of different UIs, depending on the context of the end user’s device usage. Multimodal mobile devices will be able to accommodate dynamically which medium is used for input independently of which medium is used for output. In addition, as I discussed in a recent post, all end user needs for live assistance can be contextually and flexibly accessed in the user’s choice of modes (text/voice message, IM chat, voice/video connection). Such flexibility will be supported by the adoption of the new real-time connectivity capabilities provided by WebRTC.
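
For the live-assistance escalation, a minimal sketch using the standard browser WebRTC API looks like the following. The sendToAgentViaSignaling function is a placeholder: WebRTC standardizes the media connection, but leaves signaling (how offer and candidates reach the agent) to each service.

```typescript
// Placeholder signaling channel (e.g., over WebSockets) -- hypothetical.
declare function sendToAgentViaSignaling(msg: object): void;

async function startVideoAssistance(localVideo: HTMLVideoElement,
                                    remoteVideo: HTMLVideoElement): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();

  // Capture the customer's camera and microphone.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  localVideo.srcObject = stream;

  // Render the agent's media when it arrives.
  pc.ontrack = (event) => { remoteVideo.srcObject = event.streams[0]; };

  // Exchange connection candidates and the session offer via signaling.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToAgentViaSignaling({ candidate: event.candidate });
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToAgentViaSignaling({ offer });

  return pc;
}
```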

This approach will also support the need for consistency by designing applications to be functionally independent of the medium used for inputs and outputs. This applies to user control commands as well as to any form of informational content. Just as person-to-person communications have become multimodal at the contact initiator’s end, independently of the recipient’s, self-service applications must now support multimodal end users consistently and flexibly.




Edited by Blaise McNamee