Hybrid cloud headlines have dominated tech news over the last year: Microsoft is investing in Azure Stack, AWS is building Outposts, and Google led the pack with the announcement of its on-premises Kubernetes offering last July. These are advancements in computing, but what about storage? Any truly scalable cloud architecture is built around an object storage interface that serves as a building block for many services, to name a few:
- repository of system images
- target for snapshots and backups of virtual machine images
- long-term repository of logs
- main storage location for user-generated content such as images, videos or document uploads, depending on the application
What about applications which need to store petabytes of data?
In a cloud native world, Kubernetes is the fundamental building block that makes running and managing applications in hybrid environments easier. The cloud native movement promised, and has delivered, improvements along many important axes:
- faster application development
- better quality and agility thanks to automation
- flexibility in deployment options with hardware and infrastructure abstraction
- fewer human errors in operations via automation and common practices
For example, today it’s possible to develop an application on a laptop with a self-contained Kubernetes environment that runs all its dependencies, build a Helm package, and deploy the exact same code on a public cloud environment in Google, AWS or Azure, or on premises with a local Kubernetes distribution like Scality MetalK8s or Red Hat OpenShift.
For stateless applications, like a web proxy or a static content server, that do not require storage capacity beyond what a given hyperconverged compute cluster can offer, this is a reality. On the other hand, if your application requires more capacity, for example to store user documents, photos or videos, the dream of deploy-anywhere, automated operations falls short.
In a previous post, I outlined the significant differences and gaps between the AWS S3 REST API and those of Google Cloud Storage and Azure Blob Storage. The picture gets even more complex with the multitude of smaller object storage services like Wasabi, Backblaze or DigitalOcean.
It’s not practical to develop an application against Azure Blob Storage features such as object snapshots and Append Blobs while keeping that application portable to AWS. Yet your service could go global and need to run in a region where Microsoft is not present. Do you really want to rewrite significant pieces of your code for each storage service?
There should be a way to stay independent of storage cloud interfaces and vendors, the same way it’s possible to abstract compute clouds thanks to solutions like Kubernetes.
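One way to picture such independence is to code the application against a minimal, vendor-neutral interface and choose the concrete backend at deployment time. Here is a hedged sketch in Python; the `ObjectStore` protocol and `InMemoryStore` backend are hypothetical illustrations of the idea, not Zenko’s actual API:

```python
from typing import Dict, Protocol, Tuple


class ObjectStore(Protocol):
    """Hypothetical vendor-neutral object storage interface."""

    def put(self, bucket: str, key: str, data: bytes) -> None: ...
    def get(self, bucket: str, key: str) -> bytes: ...


class InMemoryStore:
    """Toy backend standing in for S3, Azure Blob Storage, GCS, etc."""

    def __init__(self) -> None:
        self._objects: Dict[Tuple[str, str], bytes] = {}

    def put(self, bucket: str, key: str, data: bytes) -> None:
        self._objects[(bucket, key)] = data

    def get(self, bucket: str, key: str) -> bytes:
        return self._objects[(bucket, key)]


def upload_report(store: ObjectStore, payload: bytes) -> None:
    # Application code only sees the abstract interface;
    # the concrete backend is injected per environment.
    store.put("reports", "summary.txt", payload)
```

An application written this way can be pointed at a different backend per environment, which is the role a translation layer plays at the API level.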
Why is object storage a good fit for cloud native architectures?
For one, the two share a lot of similarities:
- Stateless: a great fit for the Kubernetes/Pod model, and greater flexibility
- Abstraction of location: container and data life cycles are very different, so separating them keeps operational concerns apart and capacity right-sized
- Predictable: no locking and no complex state management
- Optimised for heterogeneous networks: multi-cloud and geo-distributed by nature
- Rich metadata: metadata search is key, especially across multiple clusters and multiple geographies
What if you could access data easily and transparently wherever it’s stored, whether on premises, in a different data center, or in any public cloud storage service?
To my knowledge, Zenko is the only solution today that unifies the six leading public cloud storage providers behind a single API and supports lifecycle policies across these heterogeneous environments. But don’t take my word for it: you can try Zenko in a few minutes by creating a sandbox environment in Orbit, Zenko’s hosted management portal!
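At their core, lifecycle policies of the kind mentioned above boil down to simple age-based rules: objects older than a threshold get moved to another tier or cloud. A minimal sketch; the rule shape, object records, and target name are assumptions for illustration, not Zenko’s actual policy format:

```python
from datetime import datetime, timedelta, timezone


def objects_to_transition(objects, days, target):
    """Pair every object older than `days` with a destination location."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [
        (obj["key"], target)
        for obj in objects
        if obj["last_modified"] < cutoff
    ]
```

Running such a rule periodically against each site’s listing, then copying and deleting the selected objects, is one way to think about cross-cloud lifecycle management.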
And of course, feel free to disagree and comment in our forums over here. We’re eager to see object storage become part of the reference architectures.