The Next Mobile Phone Revolution Is Coming – Bloomberg
For years, mobile phone owners have had access to just one digital assistant — Siri on an iPhone, Google Now/Google Assistant on an Android device, Cortana on a Windows one. Now that’s changing as multiple assistants proliferate across multiple phones. It sounds like an epidemic of multiple personality disorder, but it’s actually a step toward a future in which artificially intelligent entities take over our gadgets and make them more powerful and easier to use.
Google Assistant, the unimaginatively named software that pops up on Android phones when you say “OK Google,” is now available for iPhones. You can’t ask it to take a selfie or perform some other tasks — Apple keeps that functionality for its own Siri — but you are free to enjoy its superior speech-recognition technology.
With some technical skill, you can also run Google Assistant on Windows systems alongside the native digital assistant, Cortana (which, in turn, has been available on Android and iOS for a while). Millions of people are using Alexa, the helpful voice inside the Echo speaker, so smartphone maker HTC’s new flagship phone, the U11, responds to both “OK Google” and “Alexa.” Samsung’s Galaxy S8, for its part, includes both Google Assistant and the Korean company’s own Bixby.
In other words, the digital assistants are getting untethered from device makers and even operating systems. Soon, they will supersede them: The assistant becomes the primary interface, and the user doesn’t really care what’s under the hood unless he or she is a determined geek.
But that shift is predicated on improvements to voice interaction technology that have eluded developers so far. Whatever the makers of these systems claim about their voice recognition technology, it’s highly imperfect — there are lots of funny examples on Reddit’s r/SiriFail. So Google is adding keyboard input support to Assistant, and camera support will also come soon, the company announced at its developer conference (now under way). That means we’ll be able to train the phone’s camera on a flower or a building, and it’ll tell us what it is, or translate text from a foreign language without a special app.
The idea is that the digital assistant will do all the work for the user. If you want to post to Facebook, for example, you won’t need to open the app — the assistant will give you a window to do it. If you’re trying to figure out whether the restaurant in front of you is worth entering, you won’t need to google it or bring up the Yelp app; just train the camera on the sign, and the assistant will give you all the relevant information.
The benefits for the user are obvious. Eventually, it may become unnecessary to install apps on a phone — the assistant will just pull the necessary data from various services in the cloud.
Digital assistant developers, for their part, can point users toward their own services, something vitally important for Google and Amazon but also helpful for Microsoft and Apple. Sure, that will probably prompt antitrust complaints, but regulators are taking years to consider such cases even with easier-to-understand technology. In 2010, the European Commission started its investigation of how Google allegedly prioritizes its own shopping comparison service in search results; the case is still dragging on. By the time it’s over, new technology will be in place that may take watchdogs another decade to parse, and since the assistants are powered by artificial intelligence, developers will have a new defense: These algorithms have a life of their own, and it won’t always be possible to completely control their choices.
At the moment, people hardly ever use the digital assistants on desktop and laptop computers; they use them often on home speakers and not that frequently on phones. Talking to a gadget when you’re not driving a car remains a turn-off for many people (and for me, definitely). Only 18 percent of smartphone users call up the assistants daily. The speakers will be niche products for a long time: Not everyone can think of a use for them. Phones, however, are ubiquitous — and suddenly the whole market is up for grabs for the company that develops the perfect digital assistant and makes it the interface of choice. That’s why Samsung, for example, believes it can get in the game — and why Amazon is already in it.
The mission is far from trivial. Artificial intelligence needs training to get better, and that means more interactions with humans. Companies need to entice customers to use highly imperfect products. That takes a lot of hype and some quick improvement; otherwise, disappointed phone users will drift away. Google and Amazon appear to be the most determined developers at this point, but there’s still time (not that much) for Apple and Microsoft to catch up.
In the end, getting the digital assistants to take over our gadgets is a collaborative effort between developers, users and machine-learning algorithms. It’s not quite clear yet if a complete takeover is even possible; that depends on whether the algorithms ever get as good at giving us what we want as we are at getting it manually (an increasingly relative term). But given the enormous resources being invested into this play, it’s likely to bring about change in the way we use our communication devices even if it doesn’t succeed 100 percent.
It can even save your life. In a recent experiment, several Florida doctors tested the different digital assistants on asthma- and allergy-related queries such as “food allergy” or “I’m wheezing.” Google Assistant recognized the queries and provided clear instructions on what to do. Siri didn’t even get most of the questions.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
To contact the author of this story:
Leonid Bershidsky at firstname.lastname@example.org
To contact the editor responsible for this story:
Mike Nizza at email@example.com