Every decade or so, there comes a new technology (or more than one) that inspires us to think beyond our current limitations and ponder how far humanity can truly go. For our generation, that tech may be artificial intelligence, as applied to the world of digital assistants.
Digital assistants emerged somewhat stealthily, with the iPhone's Siri meeting a lukewarm reception upon its unveiling. But now, Siri and other assistants are pushing the boundaries of what we thought was possible.
This is perhaps best evidenced by Google's recent demo of its Duplex technology, applied to Google Assistant at I/O 2018. Google Assistant was shown making phone calls for basic tasks, like scheduling a hair appointment or making a dinner reservation; it held a conversation nearly indistinguishable from one between two human beings, and it reacted to unexpected shifts in conversational rhythm.
It invites the question: just how far can our digital assistants go? Could we one day live in a world where the vast majority of our tasks and problems are solved by digital assistants? Will we be able to tell digital assistants and human beings apart?
Where we stand today
Already, digital assistants are making impressive progress. More than 46 percent of Americans now use digital voice assistants, with the majority of that use happening on mobile devices. Siri, Alexa, Cortana, and Google Assistant are among the most popular. Some software companies are introducing their own digital assistants as well, such as Spiro's AI assistant, which can take instructions via email to update system records and generate reports.
In terms of natural language recognition, digital assistants are far ahead of where we thought they'd be. Microsoft's Cortana, for example, can recognize speech with a word error rate of just 5.1 percent, in line with trained human transcribers. Natural language processing is perhaps the most sophisticated AI element of virtual assistants, with search itself delegated to external engines; Siri, for example, now relies on Google Search to retrieve the content needed to answer a user's question.
The speech patterns of personal assistants have also grown more sophisticated. Early iterations sounded cold, robotic, and stilted; it was clear you were talking to a machine. Today's conversations are practically seamless, powered by the same pattern recognition that underpins their understanding of human speech.
For the most part, digital assistants remain limited to simple, objectively describable tasks. Even making a dinner reservation, the breakthrough Google demonstrated, is relatively straightforward; negotiating the price of a phone contract is a different story entirely.
The limits
As much as we like to think of AI as a technology with unlimited potential, there may be some fundamental limitations in what AI-driven assistants can accomplish.
Raw data
AI assistants are frequently driven by deep learning, a mathematical process that allows computer programs to recognize patterns, even abstract ones, given enough repetition. The problem is that we can't replicate the human brain's astonishing capacity to recognize patterns at a glance; even the best deep learning systems must churn through thousands of training examples before they grasp the basics. This makes deep learning entirely unfit for certain types of learning and task management, and hard to scale in other areas. In other words, if we want a digital assistant's AI to be capable of more complicated tasks, we may need to engineer an entirely different type of algorithm or find a way to increase its processing power.
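To make that repetition requirement concrete, here is a minimal, purely illustrative sketch in Python (the tiny network, the XOR task, the learning rate, and the epoch count are all assumptions chosen for demonstration; no real assistant uses a model remotely this small). Even XOR, a pattern a person grasps instantly, takes this network thousands of gradient updates to learn.

    # Illustrative sketch: a tiny network needs thousands of passes
    # to learn even XOR, a pattern a human recognizes at a glance.
    import numpy as np

    rng = np.random.default_rng(0)

    # The four XOR examples and their labels.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of three sigmoid units (sizes chosen arbitrarily).
    W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))
    W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(10_000):  # thousands of repetitions over just 4 examples
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backpropagation of the mean squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Plain gradient-descent updates.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    # After roughly 10,000 updates the outputs approach [0, 1, 1, 0];
    # a different random seed may need even more.
    print(np.round(out, 2))

A human sees the rule after one look at the four examples; the network only gets there by grinding through the same handful of cases thousands of times.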
Outlier situations
Assistants are also notorious for being unable to handle completely novel situations. If they encounter a word they've never heard before, or a situation that bears no resemblance to anything they've seen, they're practically unable to improvise. This may not seem like a big deal when it happens in less than 1 percent of cases, but for important tasks, like preparing a trip to the hospital or driving a car, these outlier situations are worth considering. To break free from this constraint, we would need to create a digital assistant with a more human-like neural network. Until then, it may not be wise, or even possible, to entrust assistants with automating everything.
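As a purely hypothetical illustration of this failure mode (the intents, phrases, and threshold below are invented for demonstration and do not resemble any real assistant's internals), consider a toy matcher that routes requests by word overlap with phrases it already knows. The moment a request shares nothing with its prior experience, all it can do is fall back:

    # Toy sketch of the outlier problem: a keyword-overlap "intent matcher".
    # Real assistants use far richer models, but the failure mode is similar.
    KNOWN_INTENTS = {
        "book a table for dinner": "make_reservation",
        "schedule a hair appointment": "book_appointment",
        "what is the weather today": "get_weather",
    }

    def overlap(a: str, b: str) -> float:
        """Fraction of the words in `a` that also appear in `b`."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa) if wa else 0.0

    def route(utterance: str, threshold: float = 0.5) -> str:
        best = max(KNOWN_INTENTS, key=lambda p: overlap(utterance, p))
        if overlap(utterance, best) < threshold:
            return "fallback: I'm sorry, I didn't understand that."
        return KNOWN_INTENTS[best]

    print(route("book a table for dinner tonight"))  # make_reservation
    print(route("haggle down my phone contract"))    # fallback: a novel request

The familiar request lands on the right intent, but the novel one produces only an apology: the system has no basis for improvisation, just a measure of similarity to what it has already seen.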
Security, privacy and regulation
The explosion in popularity of digital assistants, especially those in smart speakers, has also sparked a conversation about the importance of user privacy and security. These digital assistants are able to listen to everything you say, and everything that happens in your home. But should they be allowed to do so? If so, who's allowed to view that data, and how are they allowed to use it? The fact that hackers may be able to infiltrate these devices is reason enough to pause before projecting an optimistic future, and regulators are nervous about the repercussions. Unfortunately, lawmakers tend to move more slowly than the technologies they're trying to regulate, which means we may see innovation stifled on the path toward greater regulatory oversight.
So, will we one day have digital assistants that can do everything for us, from cooking breakfast to solving our toughest existential problems? It seems incredibly unlikely. Deep learning is a phenomenal technology that's already changing how we live, but it's fundamentally limited in some key ways. Accordingly, it's best to consider it one tool in humanity's toolbox: not suited to every job, but highly effective at the ones it fits. For assistants that can do even more, we'll have to wait for a better technology, one yet to be conceived.