I was asked the other day by one of my colleagues what's next for our tablet and smartphone screens. Will they be bigger? Smaller? Do more things? After I thought about it, I told her I thought the next thing wouldn't be screens, or anything visual at all. Instead, it will be all about screenless technology. Voice. Using our voices to control apps and do the things we now do with our fingers and eyes.
Lately a plethora of devices have appeared that are, or soon will be, controlled entirely by voice and touch, with no visual element. Not long ago Amazon came out with Echo, an entirely voice-controlled, screenless digital assistant. Echo can do many things: play music, call up an Uber, order from Amazon, even turn your lights and HVAC system off and on, all by voice activation alone. No screen.
And then Pebble, the first smartwatch maker, announced the Pebble Core. The Core is also a screenless device, one that at least for now is marketed primarily to fitness enthusiasts. But it can play music from Spotify, track your workout and store the data, order an Uber (if you get tired), and even use GPS to tell you where you are, all controlled by voice and touch. You can even program it to send an emergency text when you leave home. And since it connects to Alexa, Amazon's voice service, you can use it to do everything an Echo does. Get weather and traffic. Unlock your car. Again, no screen.
And now there are AirPods, Apple's serious entry into the Bluetooth wireless world. AirPods are cordless and designed to replace the ubiquitous white earbuds that all Apple phones and iPods come with. (Does anyone still buy iPods?) AirPods link to your iPhone and allow you to perform various functions wirelessly. Tap one and you can talk to Siri. Siri can then play music, make a call, even draft an email for you.
AirPods will have optical sensors and accelerometers that detect when they're in your ears and when you're talking. You'll be able to use one or both, and they'll activate only when you're ready to listen. Remove them from your ears to automatically pause your music, or take out just one to have a conversation; the music will resume when you put it back. And you never need to look at or touch a screen.
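For what it's worth, here's a rough sketch of the kind of simple state logic those sensors could drive. This is my own guess at the behavior described above, not Apple's actual implementation:

```python
# Hypothetical sketch of in-ear playback logic; NOT Apple's actual
# implementation, just the behavior described above as a tiny state
# machine: removing a bud pauses the music, reinserting it resumes.

class EarbudPlayer:
    def __init__(self):
        self.in_ear = {"left": False, "right": False}
        self.playing = False
        self.paused_by_removal = False

    def sensor_event(self, side, inserted):
        """Called when an optical sensor sees a bud inserted or removed."""
        self.in_ear[side] = inserted
        if not inserted and self.playing:
            # A bud came out: pause automatically.
            self.playing = False
            self.paused_by_removal = True
        elif inserted and self.paused_by_removal:
            # The bud went back in: pick up where we left off.
            self.playing = True
            self.paused_by_removal = False


player = EarbudPlayer()
player.sensor_event("left", True)    # put a bud in
player.playing = True                # start the music
player.sensor_event("left", False)   # take it out: music pauses
player.sensor_event("left", True)    # put it back: music resumes
print(player.playing)                # True
```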
Why is this important? Like it or not, Apple is a trendsetter. Like Amazon, Apple is now all in on the voice-activated world. And while the voice features of AirPods are relatively primitive, much like the apps on the original iPhone, they will improve dramatically as the technology matures. The average error rate of voice-activated and voice-controlled devices is about 7%, meaning Siri and Echo understand roughly 93% of your words. But that's an average. They do better with some people and worse with others. And they learn from use: the more you use them, the better they get at understanding you.
Seven percent is still too high. But when the error rate gets down to the 1% range or less, imagine the possibilities. Imagine a personal assistant that takes notes on your conversations, a helpful researcher, a companion who listens to your problems and may even suggest a course of action, silently consulting the collective knowledge of experts. With in-ear technology, that assistant could be connected to you at all times.
And combined with better voice recognition (VR) technology, the uses of voice-activated devices will explode. VR will become your password. Want to check your bank balance? Ask your device to do it. The bank will recognize your voice, which will serve, in effect, as your password, and will communicate with you directly, all without your touching or looking at a screen. Want to dictate an email on your way to work? VR will log you in and let you dictate it. It will then read the email back so you can correct any errors, again without touching a screen.
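To make the idea concrete, here's a minimal sketch of how voice-as-password could work, assuming a speaker-recognition model has already reduced each audio clip to a numeric "voiceprint." The class, the names, and the threshold below are my own inventions for illustration, not any real bank's system:

```python
# A rough sketch of the "voice as password" idea. Assume a
# speaker-recognition model has already turned each audio clip into a
# numeric embedding (a "voiceprint"); authentication is then just a
# similarity check against the voiceprint enrolled for that user.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

class VoiceAuth:
    THRESHOLD = 0.95  # tune to trade false accepts against false rejects

    def __init__(self):
        self.enrolled = {}  # user -> enrolled voiceprint

    def enroll(self, user, voiceprint):
        self.enrolled[user] = voiceprint

    def authenticate(self, user, voiceprint):
        """True if this voiceprint is close enough to the enrolled one."""
        known = self.enrolled.get(user)
        return known is not None and cosine_similarity(known, voiceprint) >= self.THRESHOLD


auth = VoiceAuth()
auth.enroll("jane", [0.12, 0.80, 0.33, 0.51])               # enrollment sample
print(auth.authenticate("jane", [0.13, 0.79, 0.35, 0.50]))  # True: same speaker
print(auth.authenticate("jane", [0.90, 0.10, 0.72, 0.05]))  # False: different voice
```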
Check your email and have it read back to you? No problem. Check your Twitter feed? Ask the device to pull it up and read you the highlights. Want to hear more? Ask the device to read the whole post. Want to prepare a bill? Enter your time via your device and have the bill read back to you to check for errors. Virtually anything you now do with your apps by touching and looking could be replaced by a voice-controlled function. And voice shortcuts could be created to make your apps work even more quickly and efficiently.
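A voice shortcut, at bottom, is just a table mapping a spoken phrase to an action. Here's a toy sketch of the idea; the phrases and function names are made up for illustration and don't correspond to any real assistant's API:

```python
# A toy sketch of "voice shortcuts": a lookup table from spoken phrases
# to app actions. All names here are invented for illustration.

def read_twitter_highlights():
    return "Reading your top tweets..."

def enter_billable_time():
    return "Which client, and how many hours?"

def read_unread_email():
    return "You have three unread messages..."

SHORTCUTS = {
    "twitter highlights": read_twitter_highlights,
    "bill my time": enter_billable_time,
    "read my email": read_unread_email,
}

def handle_command(transcribed_speech):
    """Dispatch a transcribed voice command to the matching action."""
    phrase = transcribed_speech.lower().strip()
    action = SHORTCUTS.get(phrase)
    return action() if action else "Sorry, I don't know that shortcut."


print(handle_command("Bill my time"))   # -> "Which client, and how many hours?"
print(handle_command("Call the moon"))  # -> fallback message
```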
And there is one more thing. VR and voice-activation technology could let you create a virtual transcript of all your conversations and call them up on demand. Creepy? Maybe, but virtual transcripts might not be such a bad thing. How often do disputes arise from "he said, she said" controversies? How often do we fight with other lawyers over what was said in a conversation? Wouldn't you like a transcript of the consultation with your doctor discussing a serious illness and what to do about it?
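Mechanically, a virtual transcript is nothing exotic: timestamped utterances stored as they're transcribed and searchable later. A minimal sketch, with no real speech-to-text service wired in and all names invented for illustration:

```python
# A minimal sketch of a searchable "virtual transcript": timestamped
# utterances stored as they're transcribed, retrievable by keyword.
# Purely illustrative; no real speech-to-text service is involved.

from datetime import datetime

class VirtualTranscript:
    def __init__(self):
        self.entries = []  # list of (timestamp, speaker, text)

    def record(self, speaker, text):
        """Store one transcribed utterance as it happens."""
        self.entries.append((datetime.now(), speaker, text))

    def recall(self, keyword):
        """Call up every line that mentions the keyword."""
        return [(ts, who, line) for ts, who, line in self.entries
                if keyword.lower() in line.lower()]


log = VirtualTranscript()
log.record("Dr. Smith", "I recommend we start treatment next month.")
log.record("Patient", "What are the side effects of the treatment?")

for ts, who, line in log.recall("treatment"):
    print(f"{ts:%Y-%m-%d %H:%M} {who}: {line}")
```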
But obviously there are a lot of privacy and ethical issues to consider. What would the discovery implications be? And how would we speak if we knew everything was being recorded? Would all our conversations sound like depositions? And think about how advertisers might use this. (For a thoughtful discussion of the pros and cons of the virtual transcript, check out the podcast "Note to Self"; its latest episode is devoted to the issue.)
Nevertheless, I'm convinced that touch-, VR-, and voice-controlled devices will be the next frontier. And instead of all of us staring at our smartphones every idle moment, soon we will all walk around talking to ourselves (so to speak). Wait… not sure that's an improvement.