
AWS adds blockchain and time-series databases

Amazon has spent more than a decade trying to free itself of Oracle’s “one database to rule them all” approach, something Amazon snidely calls “clunky” and “old guard.” Not content to unshackle just itself, Amazon has announced a series of database introductions and improvements to broaden choice for its customers. Yes, with Oracle you can get “fries” (MySQL) with your “burger” (Oracle DB). But with AWS you get a seemingly endless buffet of database options.

Well, you get 15 as of today, which feels like the equivalent of “endless” in old-school enterprise groupthink. At AWS re:Invent 2018 today, the company announced three major functional upgrades to existing databases and two brand-new databases, bringing the full complement of purpose-built AWS databases to 15.

At AWS, a database in every pot

As Amazon CTO Werner Vogels wrote earlier in 2018, AWS offers “so many database products” because “developers want their applications to be well architected and scale effectively. To do this, they need to be able to use multiple databases and data models within the same application.” He continues:

Seldom can one database fit the needs of multiple distinct use cases. The days of the one-size-fits-all monolithic database are behind us, and developers are now building highly distributed applications using a multitude of purpose-built databases. Developers are doing what they do best: breaking complex applications into smaller pieces and then picking the best tool to solve each problem. The best tool for a job usually differs by use case.

In his keynote address today, AWS chief executive officer Andy Jassy called out the depth and breadth of AWS, stressing that the company must deliver the broadest array of functionality if it wants customers to build their futures with AWS. Gartner analyst Lydia Leong then put this in perspective, noting that she has personally talked with “multiple” Fortune 500 executives in 2018 who are committing half a billion dollars each year to AWS, with “many” others negotiating $100 million commitments. You don’t get that kind of spend without ensuring enterprises can build applications and, in particular, park their data with you.

A blockchain database? But why?

For example, AWS announced the Amazon Quantum Ledger Database (QLDB), available in beta, a futuristic name to satisfy your inner blockchain nerd. Blockchain advocates have rightly been criticized for overhyping and misusing the “immutable database” technology, but AWS sees enough signal in the noise to release both a managed blockchain service and QLDB. QLDB is particularly interesting because it delivers core blockchain functionality two to three times as fast (it doesn’t need distributed consensus), without taking on the full burden of a blockchain, managed or otherwise.

Enterprises have largely been suspicious of some of blockchain’s core tenets, wanting to centralize its decentralized architecture while keeping its transparency and immutability. Amazon QLDB seems to deliver on that, while also packaging the service as serverless, so developers don’t have to bother with capacity provisioning or configuring read/write limits.
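To make that concrete, here is a minimal sketch (not AWS sample code) of standing up a QLDB ledger with Python’s boto3 SDK. The ledger name and region are placeholders, and notice what’s missing: there are no capacity or throughput settings to configure, because the service is serverless.

```python
import time

import boto3

# Ledger name and region below are hypothetical; swap in your own.
qldb = boto3.client("qldb", region_name="us-east-1")

# Create the ledger. ALLOW_ALL is the permissive mode, fine for experimenting;
# there are no read/write capacity settings because QLDB is serverless.
qldb.create_ledger(
    Name="orders-ledger",
    PermissionsMode="ALLOW_ALL",
    DeletionProtection=True,
)

# Ledger creation is asynchronous; wait until it is ACTIVE before pointing
# an application driver at it.
while qldb.describe_ledger(Name="orders-ledger")["State"] != "ACTIVE":
    time.sleep(5)
```

From there, applications read and write documents through QLDB’s transactional drivers, and every revision lands in the ledger’s immutable, verifiable journal, which is the part of blockchain enterprises actually wanted.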

Blockchain without the bother—cool.

A time-series database for IoT and ops

And then there’s Amazon Timestream, a new time-series database also released in beta. AWS doesn’t have anything against relational databases (in fact, it offers several), but it recognizes that “RDBMS” tends to equate to “rigid” and “inflexible,” neither of which works for IoT or operational applications that must keep up with torrential quantities of data arriving from diverse sources and changing over time.

Developers have been making do with the RDBMS to manage time-series data simply because better options weren’t readily at hand. Dedicated time-series databases, both open source and proprietary, do exist, but they tend to be hard to work with: they aren’t particularly easy to scale, and they ship separately from the streaming and ingestion software they depend on, among other things.

As with much of what AWS is doing now, Amazon Timestream is serverless, so it should scale up or down easily as demands shift. Amazon promises dramatically better performance than an RDBMS, but it almost doesn’t need to bother: the promise of cloud, and now serverless, is convenience. Having performance 1,000X faster than Oracle or MySQL won’t hurt, but mostly it makes an already straightforward developer decision around convenience that much clearer.
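As an illustration of what “serverless time series” looks like in practice, here is a minimal, hypothetical boto3 sketch of pushing one IoT reading into Timestream. The database, table, dimension, and measure names are all placeholders, and again there is no capacity to provision up front.

```python
import time

import boto3

# Hypothetical database and table names throughout this sketch.
writer = boto3.client("timestream-write", region_name="us-east-1")

writer.write_records(
    DatabaseName="iot-demo",
    TableName="sensor-readings",
    Records=[
        {
            # Dimensions describe where the measurement came from.
            "Dimensions": [
                {"Name": "device_id", "Value": "thermostat-42"},
                {"Name": "site", "Value": "warehouse-east"},
            ],
            # The measurement itself, tagged with a millisecond timestamp.
            "MeasureName": "temperature_c",
            "MeasureValue": "21.7",
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),
            "TimeUnit": "MILLISECONDS",
        }
    ],
)
```

Reads then go through a companion SQL-style query endpoint, which is where the convenience argument mostly plays out: ingestion, storage tiering, and querying live in one managed service rather than a stack of separate tools.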

AWS’s other database additions

AWS has also added:

  • Global database support for Amazon Aurora MySQL (so enterprises can update a database in one region and automatically replicate it globally across multiple regions).
  • On-demand (removing the need for capacity planning) and transactions (full ACID guarantees) for Amazon DynamoDB; both are sketched in code after this list.
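The two DynamoDB additions are easiest to see in code. Below is a minimal, hypothetical boto3 sketch: a table created in on-demand mode (no capacity planning) and a transactional write that either commits across two tables or not at all. Table, key, and item names are placeholders, and the “inventory” table is assumed to already exist.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# On-demand mode: no read/write capacity planning, billing is per request.
dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="orders")

# A transaction: the order is recorded and the stock decremented together,
# or neither write happens (full ACID guarantees).
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Put": {
                "TableName": "orders",
                "Item": {
                    "order_id": {"S": "o-1001"},
                    "status": {"S": "PLACED"},
                },
            }
        },
        {
            "Update": {
                "TableName": "inventory",
                "Key": {"sku": {"S": "widget-7"}},
                "UpdateExpression": "SET #qty = #qty - :one",
                "ConditionExpression": "#qty >= :one",
                "ExpressionAttributeNames": {"#qty": "quantity"},
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ]
)
```

The conditional update inside the transaction is what the ACID guarantee buys: if the inventory check fails, the order write is rolled back with it.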

AWS’s network effect

AWS now has orders of magnitude more enterprise data than any other cloud provider. This not only makes it harder for an enterprise to leave AWS, but it also means, as Expedia Vice President Subbu Allamaraju notes, “Customers keep feeding [AWS’] network with problems and use cases, and the network keeps growing bigger and bigger every year.” It’s the network effect gone wild, with AWS learning more about database use cases than its peers, paired with the financial ability to invest in meeting those needs.

It’s hard to see how anyone competes with that, in particular Oracle, whose “clunky,” old-world network is struggling to run in the cloud at all. Developers are the new kingmakers, and it looks like they’ve crowned AWS king.
