Cooking up the bright future of the multi-cloud data controller

Written By Vianney Rancurel

On September 19, 2018
"

Read more


Just like you want to keep control of your kitchen and your ingredients while cooking, we believe that everybody needs to keep control of their data. Zenko was designed from the ground up to be the chef’s choice for those who create value with data. Like picking the best ingredients from the shelf, Zenko’s multi-cloud support for S3, Azure and Google APIs makes it easy to prepare cloud applications with the best storage for the task.

We’re happy to share what’s in Zenko 1.0 and a brief view of what’s cooking for the rest of 2018.

Multi-Cloud Storage Backends

Zenko is the Esperanto of cloud storage APIs. At the moment, Zenko supports Amazon S3, Google Cloud Storage (GCS), Microsoft Azure, Scality S3Connector RING, Scality sproxyd RING, DigitalOcean Spaces and Wasabi Storage. Other S3-compatible backends are easy to add by extending the open source CloudServer component, with Backblaze support already under development.
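To an application, Zenko simply looks like one more S3 endpoint. Here is a minimal sketch of that idea with boto3, assuming a local CloudServer deployment listening on http://localhost:8000 with its default development credentials; the endpoint, credentials and bucket name are placeholders for your own setup:

```python
# Minimal sketch: talking to Zenko through the standard S3 API with boto3.
# Endpoint and credentials are assumptions for a local CloudServer dev
# deployment; substitute the values of your own Zenko instance.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8000",   # assumed local CloudServer endpoint
    aws_access_key_id="accessKey1",          # assumed dev credentials
    aws_secret_access_key="verySecretKey1",
)

# Buckets created through Zenko can map to any configured backend
# (AWS S3, GCS, Azure, RING, ...); the client code stays the same.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, multi-cloud")
print(s3.list_objects_v2(Bucket="demo-bucket")["KeyCount"])
```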

One-to-Many Replication

One-to-many cross-region replication (CRR) is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different clouds, private or public. This style of replication works particularly well for securing data in different locations: for example, copying video files to the closest distribution networks, or keeping multiple copies for backups. Our favorite CRR use case, though, is using replication to protect against rising cloud egress costs. As others have painfully discovered, once multiple terabytes of data are stored in one cloud, moving away incurs a very high bill. By starting with multiple copies of the data, most of the control goes back to the user: switching from AWS to Azure won’t be as expensive or time-consuming if the same data is uploaded to both.
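Under the hood, this rides on the standard S3 replication API. The sketch below is a hedged illustration rather than the definitive setup: the location names ("aws-target", "azure-target"), the comma-separated StorageClass convention for naming multiple destinations, and the role ARN are assumptions about one particular Zenko configuration; in practice Orbit can configure all of this from the UI.

```python
# Hedged sketch: a one-to-many CRR rule via the S3 replication API.
# Location names and the multi-destination StorageClass convention are
# assumptions -- check how your Zenko/Orbit deployment names its locations.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8000",
                  aws_access_key_id="accessKey1",
                  aws_secret_access_key="verySecretKey1")

# Versioning must be enabled on the source bucket before replication.
s3.put_bucket_versioning(
    Bucket="demo-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="demo-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::root:role/s3-replication-role",  # placeholder
        "Rules": [{
            "Prefix": "",            # replicate every object in the bucket
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::demo-bucket",
                # Assumed convention: fan out to two cloud locations at once.
                "StorageClass": "aws-target,azure-target",
            },
        }],
    },
)
```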

Zenko Orbit offers an easy UI to track the progress of all replication processes across each target cloud. Replication jobs can be paused when a target cloud is known to be down, then resumed (manually or on a schedule) when it’s back up. Failed replication jobs are automatically retried until they succeed.

Aside from replication-related data, like status and throughput, Orbit also provides other useful statistics: total data managed, Zenko capacity, memory usage, the number of objects and object versions managed, and the amount of data managed per cloud destination.

Local Cache (w/ Storage Limit)

Without a local cache, each object would be stored immediately in a bucket on a public cloud. In that case, any replication incurs egress costs, because each object has to travel from the first cloud destination to the next ones, like a daisy chain. With a local cache, objects are replicated by going straight to each destination, like a broadcast to the clouds.

Overview of Zenko’s replication engine.

Zenko’s local cache avoids those charges by queuing objects in a user-defined local cache before they’re uploaded to each target cloud. Users can set the local cache location on their preferred storage backend and set a storage limit for how large the cache can grow. All objects are automatically cleared from the cache once the replication process completes.

Metadata Search

Objects are usually stored with metadata that describes the object itself. For a video production company, for example, metadata can include details like “production ready”, the department that produced the file, or the rockstar featured in a video.

With metadata search, users can search the metadata of any object written through Zenko: simply add tags and metadata to the object (in Orbit), write the object to the target cloud, and search for it later.
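As a sketch, writing searchable metadata goes through the ordinary S3 put. The metadata keys below are made up for illustration, and the query in the trailing comment assumes Zenko’s SQL-like search extension to GET Bucket; treat both as assumptions to adapt to your own deployment:

```python
# Hedged sketch: attaching user metadata at write time. User metadata is
# stored as x-amz-meta-* headers alongside the object, which is what
# Zenko's metadata search queries against.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8000",
                  aws_access_key_id="accessKey1",
                  aws_secret_access_key="verySecretKey1")

s3.put_object(
    Bucket="demo-bucket",
    Key="clips/launch.mp4",
    Body=b"...",
    # Illustrative metadata keys for the video-production example.
    Metadata={"status": "production-ready", "department": "marketing"},
)

# Later, the object can be found with a metadata query along the lines of:
#   GET /demo-bucket?search=x-amz-meta-status="production-ready"
# sent as a signed S3 request to the Zenko endpoint (assumed search syntax).
```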

Future versions will allow users to also ingest metadata for objects that are written natively to the clouds, so that Zenko users will be able to easily import existing catalogues.

Lifecycle: Expiration Policies

Reclaim storage space by applying lifecycle expiration policies (as specified in Amazon’s S3 API) to any bucket managed through Zenko. The policies can be applied to both versioned and non-versioned buckets, and are triggered after a user-defined number of days have passed since the object’s creation.
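Since these are standard S3 lifecycle policies, they can be set with any S3 client. A minimal sketch with boto3 follows; the bucket name and the 30-day window are placeholders:

```python
# Minimal sketch: a standard S3 lifecycle expiration rule applied through
# Zenko. Bucket name and expiration window are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8000",
                  aws_access_key_id="accessKey1",
                  aws_secret_access_key="verySecretKey1")

s3.put_bucket_lifecycle_configuration(
    Bucket="demo-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-objects",
            "Filter": {"Prefix": ""},     # apply to every object in the bucket
            "Status": "Enabled",
            "Expiration": {"Days": 30},   # delete 30 days after creation
        }],
    },
)
```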

Together, these features let users personalize data workflows based on metadata, treating Zenko as an S3-compatible target that replicates data to remote servers in the cloud.

Future versions will add storage targets based on network filesystems like NFS and SMB, and there are plans to radically improve Zenko’s workflow-management capabilities with a powerful API and a graphical UI. Get involved with Zenko development by following Zenko on GitHub, and get on Zenko Orbit to test all of Zenko’s features.
