Cloud-native computing is about how you build applications, not where you run them. That means a global enterprise can run cloud-native applications in its own data center just as well as in the public cloud. Kubernetes is one of the key underlying technologies for this model, which explains its meteoric rise over the past few years.
Kubernetes lets global IT teams build and run applications faster by automating low-value operations tasks so that teams can focus on adding business value. Part of the core value of Kubernetes is its flexibility to run anywhere: in any data center or on any cloud.
I recently read an article that warns of the danger of having a cloud-native IT policy. The piece argues that “cloud-native means lock-in” and asserts that “you’re all in with a particular public cloud provider, the single provider of those cloud-native services, with the goal of making the most from your cloud computing investment.” This doesn’t align with my experience working with large enterprises that are deploying cloud-native technologies, which are typically open-source technologies (like Kubernetes). In fact, I believe adopting cloud-native practices is the single best way to avoid vendor lock-in.
This may simply be a case of definitional misalignment rather than a substantive disagreement. The Cloud Native Computing Foundation defines cloud native as “technologies [that] empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” (See also the CNCF FAQ.) The ability to build applications that can be deployed across multiple cloud environments is core to the cloud-native proposition. By designing applications to run in any environment, you protect yourself from vendors who would use lock-in to raise prices and reduce service.
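To make the portability point concrete, here is a minimal sketch of the kind of declarative, provider-neutral configuration Kubernetes uses. The application name and container image are hypothetical placeholders, but the field names follow the real Kubernetes apps/v1 Deployment API. The key observation is that nothing in the manifest refers to a particular cloud provider.

```python
import json

# A minimal Kubernetes Deployment manifest expressed as a plain Python dict.
# Field names follow the Kubernetes apps/v1 Deployment API; the app name and
# image ("hello-web", "hello-web:1.0") are hypothetical placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "hello-web"}},
        "template": {
            "metadata": {"labels": {"app": "hello-web"}},
            "spec": {
                "containers": [
                    {
                        "name": "hello-web",
                        "image": "hello-web:1.0",
                        "ports": [{"containerPort": 8080}],
                    }
                ]
            },
        },
    },
}

# Nothing above names a cloud provider: the same declaration can be applied
# unchanged to a cluster in a private data center or in any public cloud.
print(json.dumps(deployment, indent=2))
```

The same manifest can be handed to any conformant Kubernetes cluster, which is exactly the kind of environment-independence the CNCF definition describes.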