It’s tough to do much of anything involving technology these days without running into a virtual assistant.
Pick up your Android phone or Chromebook, and there’s Google Assistant waiting for a chat. Power up any Amazon-made gadget, and Alexa’s standing by with an open ear. Apple’s got Siri, poor Samsung’s got Bixby, and even random companies like Bank of America are getting in on the action with their own woefully unnecessary A.I. personalities (sorry, “Erica”).
We’ve talked plenty about the reasons why everyone and their mother wants you to get friendly with their flavor of robot aid — and why that, in turn, has led to what I call the post-OS era, in which a device’s operating system is less important than the virtual assistant threaded throughout it. It’s no coincidence that Google is slowly expanding Assistant into a platform of its own, and what we’re seeing now is almost certainly just the tip of the iceberg.
Something we haven’t discussed much, though, is a painful reality that often gets overlooked in all the glowing coverage about this-or-that new virtual assistant gizmo or feature. And for anyone who ever tries to rely on this type of talking technology — be it for on-the-go answers from your phone, on-the-fly device control in your home, or hands-free help in your office — it’s a reality that’s all too apparent.
The truth is, for all of their progress and the many ways in which they can be handy, voice assistants still fail far too frequently to be dependable. And the more Google and other companies push their virtual assistants and expand the areas in which they operate, the more pressing the challenge to correct this problem becomes.
Here’s the really interesting part about this: By almost every measure, Google Assistant is consistently ahead of every other virtual assistant when it comes to its success rate — the percentage of the time it manages to understand what you’re asking and then provide an appropriate action or response.
In one test by an investment group called Loup Ventures, for instance, Assistant answered 88 percent of queries correctly. Siri followed at 75 percent, then Alexa at 72 percent, and finally Cortana at 63 percent.
More than a few other tests have reached similar conclusions. And sure, at a glance, seeing Google Assistant get 80-some-odd percent of requests right may seem impressive. But here’s the thing to remember: When it comes to technology, if a feature doesn’t nail it and do what it’s supposed to do almost flawlessly, it’s gonna get frustrating fast. Missing one out of every five or even one out of every 10 attempts is more than enough to make something annoying. And once the novelty wears off, you reach a point where you say, “Screw it — it’s quicker and easier just to do what I want myself rather than roll the dice and see if this thing will do it for me.”
It’s the same reason why tools like Google Now on Tap, Google Lens, and any number of Samsung phone features over the years have never been especially effective and why most of us stop using them after a while: They work some of the time but fail just enough to make them unreliable. And that, in turn, teaches us to avoid them when we’re really serious about getting stuff done — because they become more time-consumers than time-savers.
Anecdotal evidence aside, we’re seeing this start to manifest itself in some measurable ways. A report published in Wired this week highlights recent voice shopping research from market research firm Forrester, for instance. The study looked at the virtual assistants from Amazon, Apple, Google, and Microsoft and found that 65 percent of the time, the services failed to properly answer shopping-related queries.
“In one case, when asked where to buy diapers,” the report notes, “Alexa inexplicably directed the Forrester researchers to the town of Buy in Russia.”
Wired also dug up some research from e-commerce software firm Elastic Path that says only 6 percent of people have used a virtual assistant device — a Google Home, Smart Display, Amazon Echo, or whatever — to buy something over the past six months. The main reason folks gave for avoiding the process? You guessed it: “the high rate of miscommunication or errors.”
(Last August, the website The Information presented even bleaker stats specific to Alexa. According to “people briefed” on Amazon’s “internal figures,” the site reported, a mere 2 percent of Alexa users had bought anything by voice in the first seven months of 2018.)
Maybe it shouldn’t be surprising, then, that numerous analyses suggest most people are using virtual assistants primarily for shockingly simple stuff.
“There are kind of a cluster of features people are coming to expect for voice: a daily news summary, weather, timers and a random fact,” James Moar, a voice-software-focused analyst at Juniper Research, told Bloomberg earlier this year. (The quote was part of a broader story about how hardly anyone is taking advantage of all the “skills” and “apps” the companies behind these virtual assistants love to brag about.)
And you know what? Call me crazy, but maybe — just maybe — there’s a connection between the limited way so many of us use these tools and the limited consistency with which they perform.
It’s a daunting challenge for Google, Amazon, Apple, and the other virtual assistant hawkers to overcome — and as the features related to these services get increasingly ambitious (voice-guided car rentals, anyone?), the need to nail the basics will only grow more critical.
After all, having things work well in an on-stage demo is one thing. Having them work consistently well in the real world — and actually be useful, valuable tools we tech-loving land rovers can rely on — is something else entirely.