We have been working hard and are thrilled to give you a sneak peek of the Zenko 1.1 release. Here is the list of new treats for you:
Control over your data
The new release brings you more tools to control your data workflows in a multi-cloud environment.
What is a lifecycle policy, anyway? Basically, you can create rules that specify actions to be taken on objects in Zenko after a certain period of time. The previous Zenko release already let you set an expiration date on objects, meaning you could simply tell Zenko to delete certain objects after a specified period of time.
With this release we are adding another powerful policy to lifecycle management: transition. Transition policies automatically move object data from one location to another instead of deleting it. Typically, you would move old or infrequently accessed data to a slower but cheaper location. On versioned buckets, you can apply this policy to the current version of an object as well as to its noncurrent versions.
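To make the two policy types concrete, here is a sketch of a lifecycle configuration that combines an expiration rule with the new transition rule. The rule shape follows the AWS S3 lifecycle API that Zenko's S3 interface is modeled on; the bucket name, prefixes, and the `"cold-storage-location"` storage class are invented for illustration.

```python
# Sketch of a lifecycle configuration: one transition rule (new in
# Zenko 1.1) and one expiration rule (available previously). All
# names below are illustrative placeholders.
lifecycle_configuration = {
    "Rules": [
        {
            # Move objects under "logs/" to a cheaper location 30 days
            # after creation instead of deleting them.
            "ID": "transition-cold-data",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "cold-storage-location"}
            ],
            # On versioned buckets, noncurrent versions can be
            # transitioned as well.
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 7, "StorageClass": "cold-storage-location"}
            ],
        },
        {
            # Delete objects under "tmp/" 365 days after creation.
            "ID": "expire-old-data",
            "Filter": {"Prefix": "tmp/"},
            "Status": "Enabled",
            "Expiration": {"Days": 365},
        },
    ]
}

# With an S3 client (e.g. boto3) pointed at your Zenko endpoint, a
# configuration like this would be applied with something like:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-zenko-bucket",
#       LifecycleConfiguration=lifecycle_configuration)
```

You can also manage lifecycle rules through the Orbit UI, so building the configuration by hand is only one option.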
Support for Ceph-based storage
Freedom of choice
Zenko strives to keep up with all current cloud storage offerings on the market. Our goal is to provide as much flexibility as possible and to avoid vendor lock-in.
Ceph is open-source software that brings together highly scalable object, block, and file-based storage in one unified system.
Zenko 1.1 adds Ceph to the list of supported storage locations. Just go to Orbit -> Storage Locations -> Add and select Ceph from the dropdown menu.
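For a rough idea of what that form asks for: Ceph exposes its S3-compatible API through the RADOS Gateway (RGW), and a Ceph location is essentially an RGW endpoint plus credentials and a target bucket. The sketch below is purely illustrative; the field names and values are invented for the example and are not an actual Zenko/Orbit API payload.

```python
# Illustrative sketch of the details you supply when adding a Ceph
# location in Orbit. Every name and value here is a placeholder, not
# a real Zenko/Orbit API schema.
ceph_location = {
    "name": "my-ceph-location",
    "locationType": "Ceph RADOS Gateway",
    "endpoint": "https://rgw.example.com:8443",  # your RGW endpoint
    "bucketName": "zenko-target-bucket",         # existing RGW bucket
    "accessKey": "CEPH_ACCESS_KEY",              # RGW credentials
    "secretKey": "CEPH_SECRET_KEY",
}
```

Once the location is saved, it can be used like any other Zenko storage location, for example as a transition target in a lifecycle rule.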
Out-of-band (OOB) updates from Scality RING to Zenko
If you are already using our on-prem object storage software, RING, we are excited to let you know that you can now go multi-cloud with all your stored data. Previously you could use Zenko's capabilities to manage data across different clouds, but only for the inbound data stream. What about all the data you had already stored in the RING?
That’s where out-of-band updates come in to save the day.
Our team created an extensible framework (a.k.a. Cosmos) that allows Zenko to manage data stored on various kinds of backends, such as file systems, block storage devices, and other storage platforms. Pre-existing data on these storage systems, including data not created through Zenko, is chronologically ingested and synchronized.
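To illustrate what "chronologically ingested" means in practice, here is a conceptual sketch (not actual Cosmos code): out-of-band events observed on the backing store are replayed in timestamp order, so Zenko's metadata view converges on the current state of the file system.

```python
# Conceptual sketch of chronological out-of-band ingestion. Events
# are applied oldest-first so later updates and deletes win over
# earlier creates. The event shape is invented for this example.
def ingest(events):
    """Apply create/update/delete events to a metadata map, oldest first."""
    metadata = {}
    for event in sorted(events, key=lambda e: e["ts"]):
        if event["op"] == "delete":
            metadata.pop(event["key"], None)
        else:  # "create" and "update" both upsert the entry
            metadata[event["key"]] = event["meta"]
    return metadata

# Example: a file created before Zenko was deployed and later updated,
# plus one that was created and then removed.
events = [
    {"ts": 3, "op": "update", "key": "docs/a.txt", "meta": {"size": 2048}},
    {"ts": 1, "op": "create", "key": "docs/a.txt", "meta": {"size": 1024}},
    {"ts": 2, "op": "create", "key": "tmp/b.txt", "meta": {"size": 10}},
    {"ts": 4, "op": "delete", "key": "tmp/b.txt"},
]
print(ingest(events))  # {'docs/a.txt': {'size': 2048}}
```

The real framework handles far more (connectors per backend, failure recovery, continuous synchronization), but the ordering guarantee is the key idea.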
Enter SOFS (Scale-Out-File-System)
To accommodate a variety of architectures and use cases, RING offers native file system access to RING storage through the integrated SOFS, with NFS, SMB, and FUSE connectors for access over these well-known file protocols. More precisely, SOFS is a virtual file system on top of the RING’s storage services. It is a commercial product from Scality, and you can learn more about it in this whitepaper.
These updates essentially enable Zenko to discover and import metadata from files stored on existing Scality RING file system(s), and to receive ongoing asynchronous (out-of-band) updates for changes to the target file systems, such as file creation, deletion, and metadata updates. Once the metadata is imported into Zenko, key Zenko functionality can be used.
To name a few key features:
- Cross-region replication
- Lifecycle management
- Metadata search
OOB updates from the RING S3 Connector (S3C) to Zenko
The Scality S3 Connector provides a modern S3-compatible application interface to the Scality RING. The AWS S3 API has become the industry’s default cloud storage API and has furthermore emerged as the standard RESTful dialect for object storage.
Zenko 1.1 adds a new service (ingestion-populator and ingestion-processor) to the Backbeat component to discover and import metadata from Scality S3C, and to receive ongoing asynchronous (out-of-band) updates for changes to the target bucket, such as object creation, deletion, and metadata updates. Once the metadata is imported into Zenko, key Zenko functionality can be used on the associated RING object data.
Let us know
We would love to hear your thoughts on these updates to Zenko. If you want to contribute to the Zenko roadmap, check out our GitHub repository and leave your comments on the forum.