IDG Contributor Network: The hidden transformation underlying AI transformations

As companies race toward “AI transformation,” applying machine learning and other techniques to optimize their processes, products, and services, numerous simultaneous infrastructure transformations are occurring throughout the AI stack. These transformations affect both suppliers and adopters, and they introduce yet more uncertainty to a market already driven by extensive “FOMO, FUD, and feuds.”

Put simply: We’re building the plane as we’re flying the plane

We’re building AI applications as we’re reconstructing the infrastructure and information architecture that will support them.  

New configurations are emerging to address core challenges across the stack

The billions of dollars poured into the AI market aren’t just about chasing the bright, shiny tech du jour; they are about solving fundamental problems for which current architectures are inadequate or too brittle. Consider a handful of examples illustrating how innovations across the stack are influencing AI’s development.

Transformation is occurring in the data pipeline itself

Because AI requires data, businesses that want to deploy AI inevitably find themselves in a deep evaluation of their data pipelines: the mechanisms for sourcing, collecting, cleansing, processing, storing, securing, and managing enterprise data. But existing big data lakes and warehousing tools such as Hadoop fall short when it comes to real-time analytics, standardization of data across silos, and rapid learning. In many traditional data warehousing tools, usable data is only the result of the pipeline rather than a mechanism to continuously train the infrastructure (the pipeline itself). For high-throughput environments such as real-time trading or fraud detection in financial markets, mission-critical IoT, or even personalized pricing in retail, it’s critical that the pipeline’s own models undergo continuous optimization as existing and new live streams are fed into the system. Companies such as Informatica, IBM, Oracle, and Neeve Research are working to support high-volume environments in which intelligence isn’t just applied to the output but sits in multiple layers of the pipeline.
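As a concrete illustration, here is a minimal Python sketch of a pipeline whose model learns continuously from a live stream rather than from a one-time training run. The read_stream_batches() source is hypothetical (standing in for a trading feed or IoT sensor stream), and scikit-learn’s SGDClassifier stands in for whatever online model a real pipeline would embed:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Hypothetical live source: yields (features, labels) micro-batches,
    # standing in for a trading feed or IoT sensor stream.
    def read_stream_batches(batch_size=100, n_features=8, seed=0):
        rng = np.random.default_rng(seed)
        while True:
            X = rng.normal(size=(batch_size, n_features))
            y = (X[:, 0] + rng.normal(size=batch_size) > 0).astype(int)
            yield X, y

    # An online model: partial_fit updates the weights on every micro-batch,
    # so the pipeline itself keeps learning as data flows through it.
    model = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])

    for i, (X, y) in enumerate(read_stream_batches()):
        if i > 0:
            print(f"batch {i}: accuracy on fresh data = {model.score(X, y):.2f}")
        model.partial_fit(X, y, classes=classes)
        if i >= 10:  # bounded here for illustration; a real pipeline runs indefinitely
            break

The point of the pattern is that each batch is scored before it is trained on, so the pipeline continuously reports how well its embedded model is tracking the live stream.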

Transformation is occurring in how AI is developed

Evolution is occurring across the stack, but software has the highest velocity of change. In the realm of machine learning itself, you find the emergence of automated machine learning, or auto-ML. This is an umbrella term for machine learning used in the development of AI itself, including coding, programming, design, parameter tuning, model selection, and more. Google’s AutoML, Amazon’s SageMaker, DataRobot, and H2O.ai’s Driverless AI product now support automated feature engineering and automated model selection. Informatica also recently announced a host of machine learning-driven features that enhance various aspects of its broader machine learning and data management offering, such as a data domain discovery engine, recommendation engines for data analysts and engineers, and the ability to simulate impacts on risk scoring before deploying a model. Innovations in auto-ML introduce new narratives for AI tool resources, and companies must monitor these developments closely.
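The core idea behind automated model selection can be shown in a few lines. The following toy Python sketch (scikit-learn on a bundled dataset) is in no way equivalent to products like AutoML or Driverless AI, but it illustrates the principle: candidate models are scored and selected with no human in the loop:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # Candidate models; a real auto-ML system would also search over features,
    # architectures, and hyperparameters.
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=5000),
        "random_forest": RandomForestClassifier(n_estimators=200),
        "gradient_boosting": GradientBoostingClassifier(),
    }

    # Automated model selection: cross-validate every candidate, keep the best.
    scores = {name: cross_val_score(est, X, y, cv=5).mean()
              for name, est in candidates.items()}
    best = max(scores, key=scores.get)
    print(scores)
    print("selected model:", best)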

Transformation is occurring in how we define edge computing

How much data processing, storage, analytics, and inferencing occurs locally, as opposed to in the cloud, is yet another realm of innovation that directly influences commercial application viability. With the intense focus on industrial and commercial applications like autonomous driving, real-time object detection, and others that require algorithms to change based on dynamic inputs, capabilities at the edge are shifting. While the market is still nascent, chips capable of running specialized neural nets or advanced processing, such as those developed by ARM, Nvidia, Intel, and startups like Graphcore, are once again reshuffling how we think about the what and where possible for AI.

Another example can be found in the external storage hardware used to support industrial IoT applications. Instead of simply storing large batches of raw data on “dumb” solid-state drives (SSDs), companies like Virtium are infusing software and intelligence into edge storage itself. Features such as endurance optimization, automated data pruning and structuring, and the ability to analyze and predict storage life based on actual usage bring new insight and control to industrial IoT data pipelines. Here, advancements at the edge help expedite and optimize deeper learning in the cloud.
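The storage-life prediction mentioned above can be as simple as fitting a drive’s observed wear rate and extrapolating to end of life. Here is an illustrative Python sketch using invented SMART-style telemetry numbers (not Virtium’s actual method or data):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Invented telemetry from an industrial SSD: cumulative terabytes written
    # vs. percentage of rated endurance consumed.
    tb_written = np.array([[10], [25], [50], [80], [120], [160]])
    wear_pct = np.array([1.2, 2.9, 6.1, 9.8, 14.7, 19.5])

    # Fit the observed wear rate, then extrapolate to end of life (100% wear).
    model = LinearRegression().fit(tb_written, wear_pct)
    pct_per_tb = model.coef_[0]
    remaining_tb = (100 - wear_pct[-1]) / pct_per_tb
    print(f"predicted remaining endurance: ~{remaining_tb:.0f} TB of writes")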

Why this matters

Certainly, the pace of technological advancement has always outpaced businesses’ abilities to keep up, so what’s different now? Aside from the fact that enterprise AI adoption already introduces a range of novel considerations, from fears of job displacement to robotic ethics, the shifting sands of AI infrastructure and tool options impact resource planning and potential outcomes.

  • Development resources: Many of the innovations above directly impact the time data scientists must spend preparing machine learning models. It is estimated that data scientists today spend between 50 and 80 percent of their time preparing data just to train such models. As these steps become increasingly automated, implications abound. First, data scientists will have much more time to do actual data science; second, automated model selection is specializing to support specific domains (e.g., sentiment analysis, language translation); third, potentially broader skill sets can contribute to machine learning development; and fourth, companies will have to reshuffle resources toward monitoring advancements, new risks, and management requirements as more AI (and thus fewer people) trains AI.
  • Compute resources: In addition to developers’ time, the compute required to test and select models, tune hyperparameters and network architectures, and train models comes at a very high cost. Techniques like reinforcement learning, evolutionary algorithms, and auto-ML change the amount of compute required for tuning and training, which directly affects the time and cost of deploying machine and deep learning; a rough sketch of this scaling appears after this list.
  • Cloud resources: When the volume, velocity, and sophistication of data processing that can occur locally or at the edge shifts, so does the overall spend required for cloud services. For many businesses, this will also reshuffle the proprietary versus third-party assets required for a given application, an effect compounded by the increasing speed of compute across the stack.
  • Governance resources: As alluded to previously, automation of both AI development and AI decision-making will require significant management effort and new process creation. This isn’t simply to reduce the risk of a bad customer experience, but to remain compliant, stay secure, and monitor for error. In the case of deep learning and other AI being pushed to the edge, companies must be aware of new classes of benefits (e.g., data privacy, bandwidth requirements) and risks (e.g., liability in the event of performance failure).
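To make the compute-resources point concrete, the sketch below uses scikit-learn’s RandomizedSearchCV on synthetic data. The search space here is arbitrary; the thing to notice is that the total number of training runs scales as n_iter × cv, which is the knob that drives tuning cost:

    from scipy.stats import randint
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Each sampled configuration is trained cv times, so this search
    # performs n_iter * cv = 20 * 5 = 100 model fits.
    search = RandomizedSearchCV(
        RandomForestClassifier(),
        param_distributions={"n_estimators": randint(50, 400),
                             "max_depth": randint(3, 20)},
        n_iter=20, cv=5, random_state=0)
    search.fit(X, y)
    print("best params:", search.best_params_)

Doubling either n_iter or cv doubles the number of fits, and techniques that search more aggressively (evolutionary search, neural architecture search) multiply it further.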

IA transformation underlies AI transformation

The overarching recommendation for enterprise adoption of AI is to closely monitor transformations in infrastructure and information architectures, as they are reshuffling the how, what, and where possible for AI.

This article is published as part of the IDG Contributor Network.
