When you watch a film about artificial intelligence or hear it discussed in public, it’s usually hyped up in some way. You might be excited about a new phone update that lets you search faster, or you might speculate about a new function for your digital assistant. Or you might be worried about some AI program developing consciousness and exterminating the human race.
But the realities of AI aren’t as sexy as we imagine them to be. Culturally, we’ve developed a sense of magic-like awe and wonder about robots and AI, but before we achieve the kind of general AI that could unlock the next phase of human development, there are some fairly unsexy, routine, technical challenges to address.
These are some of the biggest—yet least popularized—problems we’ll have to solve to keep moving forward in the realm of AI.
1. Computing power
First, we need to think about how much computing power it takes to fuel the average AI-driven program—and how much more a general AI program would take. Any data processing framework requires careful engineering to balance the sheer scale of its operations against the speed and performance expected of it. However the equation works out, it demands some seriously expensive hardware. AlphaGo, the system that beat Go champion Lee Sedol back in 2016, for example, required 1,202 CPUs and 176 GPUs to get the job done. The challenge can be addressed in a few different ways: we could engineer AI that operates far more efficiently, demanding less computing power in the first place, or we could build hardware and infrastructure capable of meeting AI’s enormous demands.
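To make the efficiency option concrete, here is a minimal sketch in Python (purely illustrative, not from the original article): plain minimax search versus the same search with alpha-beta pruning, the kind of algorithmic shortcut classic game-playing programs rely on. Both return the same answer for the same game tree; the pruned version simply visits far fewer nodes, which is exactly the sort of saving that reduces hardware demands.

```python
# A minimal, illustrative sketch (not from the article): the same game tree searched
# two ways. Plain minimax examines every node; alpha-beta pruning reaches the same
# answer while skipping branches that cannot affect the result.
import math
import random

random.seed(0)

def build_tree(depth, branching=5):
    """Build a synthetic game tree: nested lists with random payoffs at the leaves."""
    if depth == 0:
        return random.uniform(-1, 1)
    return [build_tree(depth - 1, branching) for _ in range(branching)]

visits = {"minimax": 0, "alphabeta": 0}

def minimax(node, maximizing):
    visits["minimax"] += 1
    if not isinstance(node, list):          # leaf: just a payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    visits["alphabeta"] += 1
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will never allow this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = build_tree(depth=6)
print("minimax:    value %.3f, nodes visited %d" % (minimax(tree, True), visits["minimax"]))
print("alpha-beta: value %.3f, nodes visited %d" % (alphabeta(tree, True), visits["alphabeta"]))
```

Neither version is smarter than the other; the pruned search simply wastes less hardware reaching the same answer, which is the spirit of the first option above.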
2. Understanding intelligence
Before we can master the creation of artificial intelligence, we need to define what intelligence actually is—and that’s a problem neither programmers nor philosophers are properly equipped to handle. While imprecise definitions can be useful in guiding our progress, it’s notoriously difficult to work on a problem that hasn’t been concretely defined. In the meantime, our AI efforts have focused on solving specific problems as efficiently as possible, with little regard for how they’re solved; this is useful in the short term, but it leaves us trapped in the realm of narrow AI rather than helping us make progress toward general AI. If you’re more concerned with results than with precise definitions, this becomes less of a problem—but it’s still one we’ll need to solve if we want to keep moving forward.
3. Enabling long-term learning
Another problem is the vast discrepancy between human learning and machine learning. From a very young age, people build broad assumptions and frameworks with which they can understand the world around them. Their brains are also highly adept at recognizing patterns and deciding whether new information is worth committing to long-term memory as yet another framework. Accordingly, people are good at making snap judgments and responding to novel situations. Machine learning isn’t so fortunate; general concepts are difficult to instill, so we’re all but forced to “teach” machines with thousands of examples, which makes the process inefficient and tedious.
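As a rough illustration, here is a minimal Python sketch using scikit-learn (not something from the original article): a standard classifier’s accuracy on handwritten digits as a function of how many labeled examples it has seen. A person can often generalize a new symbol from a handful of examples; the model below cannot.

```python
# A minimal, illustrative sketch (not from the article): accuracy of a standard
# classifier on handwritten digits as a function of how many labeled examples it sees.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y
)

for n in (20, 100, 500, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])   # train on only the first n examples
    print(f"{n:4d} training examples -> test accuracy {model.score(X_test, y_test):.2f}")
```

The exact numbers don’t matter; the point is the shape of the curve. The model only becomes reliable after it has seen many examples of every digit, where a person would need a handful.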
4. Ensuring sensible, understandable rules
No matter how we develop AI in the future, we’ll need to give it rules to follow, and those rules can get semantically tricky. You might teach your machine that it’s “always wrong to harm a human being,” but then, what counts as a “human being”? How will it weigh the “wrongness” of an action? And when you say “always,” does that mean a program is forbidden from harming a person even when doing so would save that person’s life? This is the key challenge illustrated by science fiction author Isaac Asimov’s Three Laws of Robotics, and one with many possible solutions—though none is foolproof.
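To see how slippery this gets in practice, here is a deliberately naive, hypothetical Python sketch (not drawn from Asimov or the article) of what encoding that single rule might look like. Every helper function is a stand-in for a judgment the programmer would have to pin down precisely before the rule means anything at all.

```python
# A hypothetical sketch (not from Asimov or the article) of encoding
# "it is always wrong to harm a human being." Every helper below is a stand-in
# for a judgment the programmer would have to specify precisely.

def is_human(entity) -> bool:
    # What counts as a human being? A patient on life support?
    # A convincing digital likeness? The rule is silent.
    raise NotImplementedError

def causes_harm(action, entity) -> bool:
    # Physical harm only? Emotional or economic harm? Harm through inaction?
    raise NotImplementedError

def is_permitted(action, affected_entities) -> bool:
    # The naive encoding of "always": the action is forbidden even when the
    # "harm" (surgery, restraint) is the only thing that would save the person.
    return not any(
        is_human(entity) and causes_harm(action, entity)
        for entity in affected_entities
    )
```

The logic itself is trivial; the hard part is that the rule’s meaning lives entirely in predicates nobody yet knows how to write.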
5. Developing an ethical framework
When you think about the ethics of AI, you might imagine a semiconscious robot deciding whether or not it’s okay to harm people. But the bigger ethical dilemmas in AI are less dramatic; we need to decide who’s responsible for making the rules for AI development, who’s responsible for enforcing them, and who gets to wield the power of AI in the first place. The Future of Life Institute is dedicated to working out some of these problems, but we haven’t made much progress; ethical questions, after all, rarely have clear and objectively provable answers. Even if we attain the computing power or logical basis for developing next-generation AI, we may spend years, or even decades, debating how it’s to be used before we see it in action.
Can we move forward?
Can we solve these problems and keep moving toward a general AI? Slowly but surely, the answer is yes. Some may resolve naturally as our access to and use of technology steadily improves. Others will require us to rethink the fundamental concepts we’ve relied on so far and make room for some as-yet-undiscovered methodology for perfecting AI.
Experts in the field are divided on when we might see the next major AI breakthrough. Futurist and technological optimist Ray Kurzweil estimates that we may see the technological singularity, the point at which general AI exceeds human capacity, by 2045. Kurzweil’s reasoning is based on the exponential progression of human inventions and achievements, however, not on when we might overcome the specific hurdles outlined above. Other futurists, including John Smart, put it closer to 2065.
In any case, the AI market is expected to grow from $2.4 billion in 2017 to more than $59.8 billion in 2025. With so much money—and functionality—at stake, the world’s greatest minds aren’t going to rest until these challenges are met.