Artificial intelligence is on everyone’s lips and everyone’s mind. I travel the world and speak to customers, colleagues, industry analysts, partners, and reporters. Invariably, every conversation these days turns at some point to AI: the opportunities, the threats, the limitations, and the future.
AI systems have been around since the 1950s. When overhyped expectations could not be met, the first AI winter set in; it briefly thawed with the success of rule-based expert systems in the 1980s. More freezing temperatures followed until the recent AI spring of this decade.
Artificial intelligence is unavoidable; amplifying and externalizing our abilities through technology is what we do. It is who we are. That we would not stop with the invention of tools to give us greater muscle power, but rather move on to create external intelligence, should not be a surprise.
AI is an extension and continuation of advanced analytics
Today we celebrate AI systems with incredible abilities—superhuman abilities in many ways. Systems that can operate vehicles autonomously, diagnose illnesses on medical images, translate accurately between languages, detect credit card fraud, guide customer relationships, and run entire data centers.
This new generation of AI writes another chapter in the long-running, age-old story of mankind: making the most of the information available to us. This new generation of AI is built on algorithms trained on data. It is based on analytics.
An algorithm is a methodical recipe for transforming one thing into another. It is how cooking ingredients turn into soup, or how light is processed in the visual cortex of the brain and transformed into images of the world. Algorithms can be expressed through if-then-else programming logic, through connections and weights in an artificial neural network, through biochemical reactions and neurological processes, or through other means.
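To make the point concrete, here is a toy illustration (not from the article, with made-up names and numbers): the same simple decision, approving an application based on income and debt, expressed first as if-then-else logic and then as connections and weights, as in a single artificial neuron.

```python
def approve_rules(income, debt):
    """The decision as explicit if-then-else programming logic."""
    if income > 50_000 and debt < 10_000:
        return True
    else:
        return False

def approve_weights(income, debt, w_income=1.0, w_debt=-5.0, threshold=0.0):
    """The same kind of decision as a weighted sum compared to a
    threshold -- the form a single artificial neuron takes."""
    return w_income * income + w_debt * debt > threshold

print(approve_rules(60_000, 5_000))    # True
print(approve_weights(60_000, 5_000))  # True
```

Both functions are algorithms in the paragraph's sense: recipes that transform inputs into an output. The difference is that the weights in the second form can be derived from data rather than written by hand.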
The algorithms that accelerate artificial intelligence today are derived from processing data—through analytics. Training a deep neural network means finding an acceptable solution in an overparameterized nonlinear system: iteratively improving from a set of starting values by reducing the discrepancy between the solution and the training data, while locating a solution that generalizes to other data. Fundamentally, this is no different from finding the solution to a logistic regression problem, or finding the optimal split values and pruning depth of decision trees in a random forest.
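The iterative improvement described above can be sketched for the simplest case: a one-variable logistic regression trained by gradient descent. This is a toy illustration under assumed values (the data, learning rate, and step count are made up), not the code of any particular system, but the loop is the same in spirit as deep network training: start from initial weights and repeatedly reduce the discrepancy between predictions and training data.

```python
import math

def sigmoid(z):
    """Squash a real number into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.1, steps=1000):
    """Gradient descent on log loss. The weight w and bias b are the
    'starting values' that each pass iteratively improves."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)      # current prediction
            # (p - y) is the discrepancy; step weights against its gradient
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Toy training data: the label is 1 whenever x is positive
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train(xs, ys)
print(sigmoid(w * 2.0 + b) > 0.5)   # the fitted model classifies x=2.0 as positive
```

A deep network replaces the single weight with millions of them and the hand-derived gradient with automatic differentiation, but the recipe—initialize, predict, measure discrepancy, adjust, repeat—is unchanged.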
While deep learning and reinforcement learning are the sharpest tools in the AI builder's analytic toolbox, analytics of all stripes are used to derive algorithms from data and to drive decisions. A study by McKinsey Global Institute of 400 AI use cases across 19 industries shows the breadth of analytics used to solve AI problems. The successful AI systems of today, and very likely of the future, are based on advanced analytics and its extensions, turning data points into decisions and actions.
Carbon- or silicon-based: May the best algorithm win
Carbon-based algorithms—such as people—have evolved over billions of years. The quality assurance system is natural selection. By comparison, silicon-based algorithms are a very recent invention. Yet they are getting increasingly sophisticated and complex, solving more and more difficult problems that were thought to lie in the domain of highly trained people.
Silicon-based, synthetic algorithms are also advancing much faster than carbon-based algorithms; progress is measured in weeks. Max Tegmark calls “the idea that you can only be smart if you are made of meat” carbon chauvinism. We have to accept that intelligence no longer requires a brain.
Human vision is much more complex than computer vision, but an algorithm trained to identify objects is still highly capable and can augment people because it sees things differently. AI algorithms are not as good as people at understanding language in context, but they are more flexible than we are in translating between languages. Algorithms do not have to beat people at everything to be impactful and transformational.
As a society, we are still working on the quality assurance system for silicon-based algorithms.
The QA system for artificial intelligence is not just about making sure the algorithms and software work. It is about making sure algorithmic decisions are understood, transparent, and unbiased. It is about developing the infrastructure and engineering discipline that supports autonomous decision making by machines. As Michael I. Jordan points out, realizing autonomous vehicles at scale will require a complex traffic control system, more complex than the current air-traffic control system. He argues that we need to conceive of a new, human-centric engineering discipline.
As Tegmark says, we tend to be reactive in our technology strategy. We invent first, then we learn from our mistakes: First the fire, then the fire extinguisher. First the automobile, then the seat belt and traffic lights. If artificial general intelligence is within our grasp in the next decades, we need to be proactive and make sure that we get this right the first time. Otherwise, who knows if we get another shot at it?
This article is published as part of the IDG Contributor Network.