One S3 Endpoint
One gateway to AWS, Google Cloud, Microsoft Azure and many more!
Each cloud has its own advantages and limitations. Picking one usually means accepting all of its limitations and constraining how your projects can evolve.
Cloud storage solutions should be interoperable but they’re not.
For software developers, supporting multiple cloud storage systems means managing increased complexity: multiple APIs, multiple endpoints, opaque workflows. Data gets scattered across services, increasing security risks. Zenko addresses these issues by providing:
- a unified API endpoint compatible with S3,
- support for multiple storage backends,
- full metadata search across all clouds,
- a policy-based replication/workflow engine.
Zenko’s Four Pillars
Why Support Multiple Clouds
Each cloud storage platform has advantages and limitations.
Amazon, Google, Azure and others all have great features, but they also try to lock you into their platforms.
Complex Cost Structure
Costs vary widely between clouds. Some are better for long-term archival; some are optimized for small, frequent transfers.
Each cloud has a unique selling proposition: better developer experience, wider ecosystem, faster networks.
Different Data Security
Providers of different sizes place different priorities on data availability and resiliency.
Google Cloud for example does a terrific job at Artificial Intelligence (AI) tools like video/image analysis. Microsoft Azure is well integrated with enterprise-class tools and .NET framework. Amazon’s cloud offering is vast and its ecosystem deep. Choosing only one tool for data storage translates into marrying the whole platform.
While Amazon Glacier is great for storing rarely accessed objects, using Google Machine Learning to analyze them can be expensive because of Amazon's egress costs.
Gain Freedom To Choose And Keep Control Of Your Data
With Zenko you don't have to commit to a single cloud solution: your application can use any of the major cloud storage providers. Your application gains flexibility, and you save yourself headaches.
Zenko can automatically replicate data between Amazon Glacier and Google Cloud Storage, keeping costs down and maximizing flexibility.
Data managed by Zenko is available in each cloud un-mangled, in its native format, so it can be accessed by other applications. For example, a media management system that only supports the Microsoft Azure storage API can access data directly in the native blob format while Zenko replicates the same data to Amazon for distribution.
Step 1: Deploy Zenko
The easiest way to get started is to get a Zenko sandbox in Orbit.
Sign up now and follow the on-screen tutorial. The sandbox lasts around 48 hours, plenty of time for a quick demonstration of Zenko's S3-compatible API and replication capabilities.
Once you have created your Zenko instance, connect it to Orbit and create a storage account from the 'Settings' page. Copy the access key and secret ID (the secret ID will not be shown again). On the same Settings page, also note the endpoint of your Zenko instance (if you used the sandbox, it will be something like https://1d393129-7325-11e8-a153-0242ac110002.sandbox.zenko.io).
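With the credentials in hand, you can store them in a named AWS CLI profile so that later commands can reference them. A minimal sketch, assuming the profile is called zenkotest; the key values below are placeholders for the ones you copied from the Settings page:

```shell
# Add a "zenkotest" profile to the AWS CLI credentials file.
# The two values below are placeholders -- substitute the access key
# and secret ID copied from the Orbit 'Settings' page.
mkdir -p ~/.aws
cat >> ~/.aws/credentials <<'EOF'
[zenkotest]
aws_access_key_id = YOUR_ZENKO_ACCESS_KEY
aws_secret_access_key = YOUR_ZENKO_SECRET_ID
EOF
```

You could also run `aws configure --profile zenkotest` interactively; the result is the same credentials file.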
Step 2: Write Code
Once you have a Zenko instance up and running, you can use it as you would use any S3-compatible endpoint. For example, create a bucket on the Orbit sandbox using the AWS-CLI tool:
$ aws --profile zenkotest s3 mb s3://zenkotest --endpoint-url https://1d393129-7325-11e8-a153-0242ac110002.sandbox.zenko.io
Since Zenko is compatible with Amazon’s S3 API, you can use any of the standard S3 tools to manage the instance.
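Everyday S3 operations work the same way. A quick sketch, assuming a zenkotest profile and the sandbox endpoint shown earlier (your instance's endpoint URL will differ):

```shell
# Endpoint of the Zenko instance (sandbox URL shown as an example).
ENDPOINT=https://1d393129-7325-11e8-a153-0242ac110002.sandbox.zenko.io

# Create a small test file, upload it, then list the bucket contents.
echo "hello zenko" > hello.txt
aws --profile zenkotest s3 cp hello.txt "s3://zenkotest/hello.txt" --endpoint-url "$ENDPOINT"
aws --profile zenkotest s3 ls "s3://zenkotest/" --endpoint-url "$ENDPOINT"
```

The same commands work against any bucket, whether its data physically lives on a local volume or in one of the public clouds behind Zenko.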
Each component lives in its own GitHub repository and has versioned images readily available in its own DockerHub repository.
Zenko’s multi-cloud goal is to provide a unified namespace, access API, and search capabilities for data stored locally (on Docker volumes or a Scality RING) or in public cloud storage services like Amazon S3, Google Cloud Storage, Microsoft Azure Blob storage, or any other S3-compatible storage system like Scality S3.
The micro-services are deployed and orchestrated on a Kubernetes cluster. The main components are:
- Cloudserver, which provides the basic S3-compatible API endpoint.
- Backbeat, the core engine for asynchronous replication, optimized for queuing metadata updates and dispatching work to long-running tasks in the background.
- UtAPI, an API service for tracking resource utilization and reporting metrics.
Other open source components like nginx, Kafka, ZooKeeper, Redis, and MongoDB are also part of Zenko. The system is designed to be extensible so that other storage protocols can be added: follow Zenko Cloudserver’s Developer’s Bootstrap Guide.