Open Compute Project and Azure: hardware meets software

Microsoft has been a member of the Open Compute Project since 2014, donating many of the specifications for its Azure data centers to the project. It's where Microsoft develops its Project Olympus servers and its SONiC networking software. So it's always fascinating to go along to the annual OCP Summit to see what's happening in the world of open hardware design, and to see what aspects of Azure's underlying infrastructure are being exposed to the world. Here's what I found this year.

Introducing Project Zipline

Public clouds like Azure have a very different set of problems from most on-premises systems. They have to move terabytes of data around their networks without hindering system performance. As more users adopt their services, they have to push ever more data over links that don't support higher-bandwidth connections. That's a big problem, with three possible solutions:

  • Microsoft could spend millions of dollars on putting new connectivity into its data centers.
  • It could take a performance hit on its services.
  • It could use software to solve the problem.

With the resources of Microsoft Research and Azure, Microsoft made the obvious choice: It came up with a new compression algorithm, Project Zipline. Currently in use in Azure, Project Zipline offers twice the compression ratio of the commonly used Zlib-L4 64KB algorithm. That's a significant boost, almost doubling bandwidth and storage capacity for little or no capital cost. Having proved the algorithm's worth on its own network and in its own hardware, Microsoft is donating Zipline to OCP for anyone to implement and use.
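
Zipline is being contributed for hardware implementation rather than as a software library, but the software baseline it's measured against is easy to reproduce. The sketch below is illustrative only and assumes nothing about Zipline itself: it uses Python's standard zlib module at compression level 4 on a 64KB block, mirroring the Zlib-L4 64KB configuration cited above, to show how a compression ratio is calculated. The sample data is hypothetical.

    import zlib

    def compression_ratio(data: bytes, level: int = 4) -> float:
        """Ratio of original size to compressed size; higher is better."""
        return len(data) / len(zlib.compress(data, level))

    # Hypothetical sample: a 64KB block of repetitive, log-like text,
    # standing in for the mixed cloud workload data described above.
    block = (b"ts=2019-03-14T10:00:00Z level=INFO msg=request-served node=42\n"
             * 1200)[:64 * 1024]

    print(f"zlib level-4 ratio on a 64KB block: {compression_ratio(block):.2f}x")
    # Microsoft's claim is that a Zipline implementation achieves roughly
    # double this ratio on comparable data, which translates almost directly
    # into doubled effective bandwidth and storage capacity.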

But Project Zipline is more than software. To run at the speeds Azure requires, it needs to be implemented in hardware. Kushagra Vaid, general manager of Azure Hardware Infrastructure, gave me details about Zipline and how it works. The project began by analyzing many internal data sets from across Azure, using data from a mix of workloads. Although the data was different, the underlying binary data showed similar patterns, letting Microsoft develop a common compression algorithm that could work across not only static data but also streamed data.
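
Microsoft hasn't published the details of that analysis, but the underlying idea, finding byte-level patterns that recur across otherwise unrelated data sets, is easy to illustrate. What follows is a minimal sketch and not Microsoft's method: it collects the most frequent four-byte substrings from a few hypothetical workload samples and intersects them, on the theory that patterns shared by every workload are candidates for a common compression strategy.

    from collections import Counter

    def top_ngrams(data: bytes, n: int = 4, k: int = 200) -> set:
        """Return the k most frequent n-byte substrings in a data set."""
        counts = Counter(data[i:i + n] for i in range(len(data) - n + 1))
        return {gram for gram, _ in counts.most_common(k)}

    # Hypothetical stand-ins for samples drawn from different Azure
    # workloads. Note the shared structure (timestamps, ASCII field
    # separators) despite the different formats.
    workloads = {
        "web logs":  b"2019-03-14 GET /api/items HTTP/1.1 200\n" * 400,
        "telemetry": b"2019-03-14 node=42 cpu=0.40 mem=0.73\n" * 400,
        "queue":     b"2019-03-14 msg=ack id=1532\n" * 400,
    }

    # Patterns common to every workload suggest that one algorithm (and one
    # hardware pipeline) can serve many kinds of traffic, static or streamed.
    shared = set.intersection(*(top_ngrams(d) for d in workloads.values()))
    print(f"{len(shared)} four-byte patterns appear in all samples")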
