Watching science fiction films back in the 1960s and 1970s, it was natural to assume that by 2014 we would all be talking to computers. (“Open the pod bay doors, HAL.”) 2014 has arrived, and not very many of us are talking to computers. We dabble with it … a little. We say “agent” to an interactive voice response unit on the phone, or ask Siri for the weather forecast. (Though fewer iPhone users seem to bother with Siri now that the novelty has worn off.) The truth of the matter, though, is that the technology really isn’t there yet.
Customer support centers make some of the broadest use of speech technology, but the true Holy Grail – natural language processing, or enabling a computer to understand conversational human speech – has stalled. Some companies did, and still do, offer natural language processing solutions for contact centers so callers can describe in their own words what they are calling about (“Yes, I’m looking for returns information on a product I ordered last month”), but these technologies are often prohibitively expensive and don’t deliver the results customers expect.
The technology isn’t much further along on the mobile front. Part of the reason is quality; another is that these systems are often too slow for people to tolerate. Cloud-based solutions may be to blame: the device needs to record your voice, compress it, send it to servers that might sit on the other side of the country or the globe, wait for them to interpret it and choose the right response, and then receive that response on the return trip. Even if all of this takes only a couple of seconds, any delay feels unacceptable in a conversational exchange.
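The round trip described above can be thought of as a latency budget that an on-device system simply skips. The sketch below illustrates the idea; every stage timing is an invented, order-of-magnitude placeholder, not a measurement of any real system:

```python
# Illustrative sketch of why a cloud speech round trip can feel slow
# in conversation. All stage timings are assumed placeholders.

def cloud_pipeline_latency_ms(audio_seconds: float) -> float:
    """Sum the stages of a hypothetical cloud speech round trip."""
    record = audio_seconds * 1000  # capturing the utterance itself
    compress = 50                  # assumed on-device audio encoding
    uplink = 150                   # assumed transit to a distant server
    recognize = 300                # assumed server-side interpretation
    downlink = 150                 # assumed return trip with the response
    return record + compress + uplink + recognize + downlink

def on_device_latency_ms(audio_seconds: float) -> float:
    """The same utterance handled locally: no network legs at all."""
    record = audio_seconds * 1000
    recognize = 400                # assumed local recognition time
    return record + recognize

if __name__ == "__main__":
    utterance = 2.0  # a two-second spoken command
    print(f"cloud:     {cloud_pipeline_latency_ms(utterance):.0f} ms")
    print(f"on-device: {on_device_latency_ms(utterance):.0f} ms")
```

Even with these generous made-up numbers, the network legs add a noticeable fraction of a second on top of the recognition itself, which is the gap an on-device approach like Intel’s aims to close.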
Intel recently announced that it may have a game-changing solution. The company’s head of wearable technology, Mike Bell, said in a recent interview with Quartz that Intel has partnered with an unnamed company to make speech technology software that can be loaded onto devices themselves, eliminating the delay that comes with a cloud-based solution. The result, according to Quartz, is a prototype wireless headset called “Jarvis” that sits in the wearer’s ears and connects to his or her smartphone.
“How annoying is it when you’re in Yosemite and your personal assistant doesn’t work because you can’t get a wireless connection?” said Bell. “It’s fine if [voice recognition systems] can’t make a dinner reservation because the phone can’t get to the cloud,” he added. “But why can’t it get me Google Maps on the phone or turn off the volume?”
The project is reportedly now at the stage where Intel is shopping the technology to mobile phone manufacturers. Should it come to full fruition, we might all carry a voice-enabled personal assistant with us wherever we go, regardless of how strong the wireless signal is.
Edited by Rory J. Thompson