
IDG Contributor Network: What in-memory computing means to IoT

Enterprises pursuing digital transformation or omnichannel customer experience initiatives are increasingly deploying IoT applications that collect and analyze massive amounts of data from new sources. From asset tracking for manufacturing and logistics, to asset utilization for commercial aircraft and trucks, to patient monitoring systems, the analysis of sensor data must take place in real time. To gain this capability, most enterprises are turning to in-memory computing.

There is little doubt that in-memory computing and IoT are vital technologies that, together, will reshape industries and how we live. Gartner predicts that by 2019, 75 percent of cloud-native application development will use in-memory computing, or services built on it, to enable mainstream developers to implement high-performance, massively scalable applications. Gartner also expects more than 20.8 billion devices to be connected to the internet of things (IoT) by 2020. According to the "Cisco Global Cloud Index: Forecast and Methodology, 2015–2020," the total amount of data created by devices, driven by IoT, will reach 600 ZB per year by 2020, up from 145 ZB per year in 2015. Analyzing all this data will require technology that most enterprises lack today.

The solution to this data analysis challenge is in-memory computing, which can deliver a 1,000x increase in speed along with the ability to scale out to handle petabytes of in-memory data for new and existing applications. Today's in-memory computing solutions encompass several distinct technologies.

  • In-memory data grids for existing applications: An in-memory data grid, deployed on a cluster of on-premises or cloud servers, can be easily inserted between the data and application layers of existing applications without the need to rip-and-replace the existing database. The in-memory data grid leverages the entire available memory and CPU power of its server cluster and can be scaled out simply by adding a new node.
  • In-memory databases for new applications: An in-memory database is a fully operational database that resides in memory. It is best suited for rearchitecting the database layer of existing applications or building new applications on a modern, scalable architecture.
  • Streaming analytics: A streaming analytics engine takes advantage of in-memory computing speed to manage all the complexity around dataflow and event processing, allowing users to query active data without any impact on performance.
  • Memory-centric architecture: A memory-centric architecture can be built using a distributed, ACID- and ANSI-99 SQL-compliant disk store deployed on spinning disks, solid-state drives (SSDs), 3D XPoint, or other storage-class memory technologies. With this strategy, the full operational data set remains on disk and only a user-defined subset of the data is held in memory, allowing organizations to strike an optimal tradeoff between infrastructure costs and application performance.
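
To make the data-grid pattern concrete, here is a minimal, hypothetical sketch (not any vendor's API) of the read-through/write-through caching a grid applies when inserted between the application and an unchanged database. SQLite stands in for the existing backing store; a real grid would partition the cache across a cluster of nodes:

```python
import sqlite3

class DataGridSketch:
    """Read-through/write-through cache sitting between the
    application and an existing database (SQLite stand-in)."""

    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
        self._cache = {}  # a real grid spreads this across many nodes

    def get(self, key):
        if key in self._cache:               # hit: answered from memory
            return self._cache[key]
        row = self._db.execute(
            "SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
        if row is not None:
            self._cache[key] = row[0]        # populate on miss (read-through)
            return row[0]
        return None

    def put(self, key, value):
        # write-through: keep the cache and the database consistent
        self._db.execute(
            "INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)",
            (key, value))
        self._db.commit()
        self._cache[key] = value

grid = DataGridSketch()
grid.put("sensor-1", "72.4")
print(grid.get("sensor-1"))   # served from memory after the first read
```

Because the database schema and the application's read/write calls are untouched, the cache layer can be added without the rip-and-replace the article describes.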

IoT and HTAP

These in-memory computing technologies can deliver massive speed and scalability gains across a variety of use cases. For IoT applications, they enable a new and essential data strategy: hybrid transactional/analytical processing (HTAP), which lets users perform real-time analytics directly on their operational data set. Most organizations today still rely on a model in which online transaction processing (OLTP) and online analytical processing (OLAP) take place in separate systems, requiring the data in the operational database to be extracted, transformed, and loaded (ETL'd) into the analytical database. This model is typically unworkable for real-time applications because the lag introduced by ETL means the data is stale before it is analyzed.
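
The HTAP idea can be illustrated with a small sketch (an in-memory SQLite database stands in for an in-memory computing platform; the table and threshold are invented for the example). The same live table serves both the transactional writes and the analytical query, with no ETL step, and therefore no staleness window, between them:

```python
import sqlite3

# SQLite's in-memory mode stands in for an in-memory HTAP store.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    device_id TEXT, ts INTEGER, temperature REAL)""")

# Transactional side (OLTP): sensor readings land as they arrive.
rows = [("pump-7", 1, 71.2), ("pump-7", 2, 95.8), ("pump-9", 1, 68.4)]
db.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
db.commit()

# Analytical side (OLAP): the same live table is queried directly,
# so the answer reflects every transaction committed so far.
hot = db.execute("""SELECT device_id, MAX(temperature)
                    FROM readings GROUP BY device_id
                    HAVING MAX(temperature) > 90""").fetchall()
print(hot)
```

In the separate-systems model, the analytical query would instead run against a copy of `readings` produced by a periodic ETL job, and any reading newer than the last ETL run would be invisible to it.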

In-memory computing provides the speed, at scale, to implement HTAP. Gartner agrees, writing that in-memory computing is ideal for HTAP because it supports real-time analytics and situational awareness on live transactional data rather than relying on after-the-fact analysis of stale data. In-memory computing can also significantly reduce the cost and complexity of the data layer architecture, enabling real-time, web-scale applications at a much lower cost than approaches that rely on separate OLTP and OLAP systems.

With HTAP, enterprises can now more cost-effectively implement a variety of IoT and Industrial IoT use cases, for example:

  • Patient monitoring: A health care system has hundreds or thousands of at-home patients, each monitored by multiple IoT devices. The system must collect the data from each patient and analyze it in real time so it can respond immediately to critical changes in any one patient's condition.
  • Large commercial aircraft maintenance: Particularly relevant given recent headlines, an airline deploys a variety of sensors on its engines and airframes so it can monitor conditions in real time to detect required maintenance and potential failures before they occur.
  • Maintenance of hard-to-service devices: A manufacturer of large-scale underwater pumps combines data collected from a large number of device sensors, feeds from other relevant data sources, and advanced analytics to digitally model the current state of its physical devices. These digital twin models can monitor the current state of each pump, determine maintenance requirements, and detect possible failures without the time and risk of a physical inspection.
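
The patient-monitoring case above reduces to a simple streaming pattern: evaluate each new sample against a sliding window and alert the moment a condition is met, rather than after a batch ETL cycle. The sketch below illustrates this; the window size, threshold, and sample values are illustrative assumptions, not clinical parameters:

```python
from collections import deque

def monitor(readings, window=5, threshold=100.0):
    """Sliding-window alarm over a stream of vital-sign samples.
    Returns the indices at which the windowed average first
    exceeds the threshold."""
    buf = deque(maxlen=window)   # holds only the most recent samples
    alerts = []
    for i, value in enumerate(readings):
        buf.append(value)        # process each sample as it arrives
        if len(buf) == window and sum(buf) / window > threshold:
            alerts.append(i)     # alert immediately, mid-stream
    return alerts

stream = [92, 95, 97, 99, 101, 108, 112, 115, 118, 120]
print(monitor(stream))           # indices where the alarm condition holds
```

A streaming analytics engine generalizes this loop across thousands of concurrent device streams, keeping the windows in memory so each new sample is evaluated in microseconds.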

Enterprises are already using in-memory computing and HTAP in areas such as mobile banking, which must support large numbers of devices and endpoints. For example, when a major bank with 135 million customers began offering 24/7 online and mobile banking, it was overwhelmed by the surge in requests, which increased its transaction volume from 30–40 per second to 3,000–4,000 per second. In-memory computing is enabling the bank to develop a web-scale architecture using a 2,000-node in-memory data grid that can handle up to 1.5PB of data and the required transaction volume. The data grid is also a highly available, distributed computing solution that offers ANSI-99 SQL and ACID transaction support to ensure monetary transactions are accurately tracked.

While we are just at the beginning of the IoT revolution, executives must immediately begin formulating a strategy for building an infrastructure capable of rapidly scaling a variety of new data-intensive applications. IT decision makers who aren’t currently looking at how they can take advantage of in-memory computing will soon find their organizations struggling to move forward with their data- and performance-intensive IoT strategies.

This article is published as part of the IDG Contributor Network.
