It’s hard to go anywhere lately without hearing the term “multi-cloud.” But what does multi-cloud really mean for storage? Is it just a fancy new word for “hybrid cloud”? Stay with me while I try to answer these questions, share our definition of multi-cloud, and explain why we created Zenko, an open-source Multi-Cloud Data Controller.
In our vision, multi-cloud is an acknowledgment that the enterprise world is application-centric and that each application has its own infrastructure needs, which evolve over time. It’s only natural to want the freedom to leverage multiple, different cloud infrastructures, both at the same time and over time.
When we say multi-cloud, we mean both private clouds and public clouds. There’s a need to easily and transparently use different clouds based on their strengths, because in reality AWS, Azure, and Google Cloud each have their own areas of expertise.
Multi-cloud is different from “hybrid” because it takes into consideration that an enterprise runs hundreds of different applications. Hybrid is more focused on tiering old or lower-value data to the cloud, while multi-cloud is about optimizing workflows and using the right tool for the right job at the right time. What we hear is that customers still like to manage storage locally in their own data centers, but need to use public clouds to leverage the native services those clouds offer. This requires data mobility between clouds, whether private or public.
Multi-cloud also includes a notion of freedom: I am in front of customers frequently, and one of the recurring topics is not being locked into a specific cloud platform, whether public or private. True freedom and data mobility can only arise if different cloud platforms use the same communication protocols and share common abstractions to describe containers, objects, metadata, and authentication credentials.
We do not see any initiative from the large public cloud providers or the numerous software-defined storage vendors going in that direction. That is why we decided to start working on Zenko last year, and we are happy to announce that we are making its source code available today as a set of open community projects on GitHub under the Apache 2.0 license.
Zenko is a Multi-Cloud Data Controller and focuses on four pillars:
- AWS S3 API — single API set and 360° access to any cloud
Gives developers an abstraction layer that enables the freedom to use any cloud at any time
A single unifying interface using the S3 API, supporting multiple storage backends, both on-premises (Scality RING and Docker) and in the public cloud with AWS S3 and Microsoft Azure Blob Storage (Google Cloud Storage coming soon)
- Native Format
Data written through Zenko is stored in the native format of the target storage and can be read directly, without the need to go through Zenko.
Therefore, data written to Azure Blob Storage or Amazon S3 can leverage the advanced native services of those public clouds.
- Data workflow
A policy-based data management engine for seamless data replication, data migration, and extended cloud workflows such as cloud analytics and content distribution (available in September)
- Metadata search
Provides the ability to subset data based on key attributes, so you can interpret petabyte-scale data and easily manipulate it on any cloud to separate high-value information from data noise
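To make the single-API pillar concrete, here is a minimal sketch of how a controller can expose one S3-style interface while routing each bucket to a different backend via a per-bucket location constraint. This is an illustration of the concept, not Zenko’s actual implementation; the `LocationRouter` class and the backend names are hypothetical.

```python
# Hypothetical sketch of per-bucket backend routing behind one S3-style API.
# Class and backend names are illustrative, not Zenko's real internals.

BACKENDS = {
    "us-east-1": "aws_s3",         # objects land in AWS S3, native format
    "azure-blob": "azure_blob",    # objects land in Azure Blob Storage
    "ring-local": "scality_ring",  # objects stay on-premises
}

class LocationRouter:
    """Maps each bucket to a storage backend via its location constraint."""

    def __init__(self):
        self.bucket_locations = {}
        self.store = {}  # (backend, bucket, key) -> bytes

    def create_bucket(self, bucket, location="us-east-1"):
        if location not in BACKENDS:
            raise ValueError(f"unknown location constraint: {location}")
        self.bucket_locations[bucket] = location

    def put_object(self, bucket, key, body: bytes):
        backend = BACKENDS[self.bucket_locations[bucket]]
        # A real controller would call the backend's native API here,
        # writing the object in that cloud's native format.
        self.store[(backend, bucket, key)] = body
        return backend

router = LocationRouter()
router.create_bucket("reports", location="azure-blob")
backend = router.put_object("reports", "2017/q3.csv", b"revenue,42")
print(backend)  # prints "azure_blob"
```

From the client’s point of view, the call is always the same S3-style `put_object`; only the bucket’s location constraint decides where the bytes physically land, which is what allows the data to remain readable in each cloud’s native format.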
Zenko focuses on ease of use and operation and relies on Docker Swarm for deployment and high availability. It runs as a set of containers, either locally or in the cloud — anywhere Docker can run, be it a laptop, physical servers, or any existing cloud provider.
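A Swarm-based deployment like this typically revolves around a Compose v3 stack file, deployed with `docker stack deploy -c docker-stack.yml zenko`. The fragment below is a hypothetical, heavily simplified sketch of the general shape; the service and image names are placeholders, not Zenko’s actual deployment file.

```yaml
# Hypothetical, simplified Compose v3 stack fragment; service and image
# names are placeholders, not Zenko's actual deployment file.
version: "3"
services:
  s3-api:
    image: example/zenko-s3-api   # placeholder image name
    ports:
      - "8000:8000"               # S3-compatible endpoint
    deploy:
      replicas: 2                 # Swarm keeps two instances running
      restart_policy:
        condition: on-failure     # restart failed containers automatically
```

Once deployed, Swarm schedules the containers across the nodes and restarts them on failure, which is what gives the controller its high availability.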
Please head to our community website, zenko.io, to learn more about Zenko and its architecture, learn how to contribute, or download and use this new open-source Multi-Cloud Data Controller today.