Use Zenko CloudServer as a backend for Docker Registry

We welcome the following contributed tutorial:

Rom Freiman


It is always a victory for an open source project when a contributor takes the time to provide a substantial addition to it. Today, we’re very happy to introduce a Docker Registry – Zenko tutorial by GitHub user rom-stratoscale, also known as Rom Freiman, R&D Director at Stratoscale.

Thank you very much for making the Zenko community stronger, Rom! — Laure

Introduction

A Docker registry is a service for storing and distributing private Docker images among authorized users. Although public offerings exist for this service, many organizations prefer to host a private registry for internal use.

Private Docker registries have multiple options for storage configuration. One of the most useful is Zenko’s CloudServer (formerly known as S3 Server), by Scality.
Zenko’s CloudServer is an open-source, standalone implementation of the S3 API deployed as a Docker container. In other words, it allows you to have an S3-compatible store on premises (and even on your laptop).

In this tutorial, I’ll demonstrate how to deploy a private Docker registry with Zenko CloudServer as your private on-premises S3 backend.
We’ll use containers to run both services (the registry and Zenko CloudServer).

Prerequisites and assumptions

Prerequisites:

  1. Docker daemon (v1.10.3 or above)
  2. python (v2.7.12 or above)
  3. AWS CLI (v1.11.123 or above)
    $> pip install awscli==1.11.123
  4. Configure aws credentials by:
    $> aws configure
    AWS Access Key ID []: {{YOUR_ACCESS_KEY_ID}}
    AWS Secret Access Key []: {{YOUR_SECRET_KEY}}
    Default region name []: us-east-1
    Default output format[]: json

Assumptions:

  1. In this tutorial, both the S3 service and the registry run on the same physical server, in order to avoid dealing with networking. If you choose to run them on different servers, verify that you have routing from the registry server towards CloudServer (TCP, port 8000)
  2. The S3 service will use the container’s storage, which is lost once the container is stopped. For persistence, you should create appropriate directories on the host and mount them as Docker volumes for the CloudServer container
  3. The Docker registry is consumed from localhost, hence we won’t dive into SSL and certificate generation (but you can if you want to).
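For assumption 2, here is a sketch of a persistent launch. The in-container localData/localMetaData paths are taken from the CloudServer documentation; adjust them if your image version differs:

```shell
# Persist CloudServer data and metadata on the host instead of inside
# the container. Host directories are created first, then mounted as
# Docker volumes.
DATA_DIR="$(pwd)/data"
META_DIR="$(pwd)/metadata"
mkdir -p "$DATA_DIR" "$META_DIR"

docker run -d --name cloudserver -p 8000:8000 \
  -v "$DATA_DIR":/usr/src/app/localData \
  -v "$META_DIR":/usr/src/app/localMetaData \
  -e SCALITY_ACCESS_KEY_ID={{YOUR_ACCESS_KEY_ID}} \
  -e SCALITY_SECRET_ACCESS_KEY={{YOUR_SECRET_KEY}} \
  zenko/cloudserver
```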

Run Zenko CloudServer container

Use the docker run utility to start Zenko CloudServer:

$> docker run -d --name cloudserver -p 8000:8000 -e SCALITY_ACCESS_KEY_ID={{YOUR_ACCESS_KEY_ID}} -e SCALITY_SECRET_ACCESS_KEY={{YOUR_SECRET_KEY}} zenko/cloudserver

Sample output (the last line is the cloudserver container ID):

Unable to find image 'zenko/cloudserver:latest' locally
latest: Pulling from zenko/cloudserver
Digest: sha256:8f640c3399879925809bd703facf128a395444e2d4855334614e58027bcac436
Status: Downloaded newer image for zenko/cloudserver:latest
4f08be7e77fe8e47b8c09036e9f5fa4e29d7a42b23bd1327a2a4f682c5c48413

Use AWS CLI to create a dedicated bucket in Zenko CloudServer:

$> aws s3api --endpoint-url=http://127.0.0.1:8000 create-bucket --bucket docker-registry

Sample output:

{
    "Location": "/docker-registry"
}

Verify it exists:

aws s3 --endpoint-url=http://127.0.0.1:8000 ls s3://

Sample output:

2017-08-01 23:05:44 docker-registry

Configure and run your private Docker Registry hosted in your local CloudServer container

Create a config file like the one below, located at /home/{{YOUR_WORKDIR}}/config.yml:

version: 0.1
log:
  level: debug
  formatter: text
  fields:
    service: registry
    environment: staging
storage:
  s3:
    accesskey: {{YOUR_ACCESS_KEY_ID}}
    secretkey: {{YOUR_SECRET_KEY}}
    region: us-east-1
    regionendpoint: http://127.0.0.1:8000
    bucket: docker-registry
    encrypt: false
    secure: false
    v4auth: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health: 
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3

Important fields:

  1. http: the registry will listen on port 5000;
  2. s3: the actual S3 backend configuration, including endpoint, credentials, and the bucket name (these credentials must match the ones passed to the CloudServer container);
  3. health: the registry will actively probe the S3 backend (the CloudServer container spawned above) to check that it’s alive.

Spawn the registry, while mounting the configuration into the container:

docker run -d --name=zenkoregistry --net=host -v /home/{{YOUR_WORKDIR}}/config.yml:/etc/docker/registry/config.yml:z registry:2

Sample output (the zenkoregistry container ID):

4f08be7e77fe8e47b8c09036e9f5fa4e29d7a42b23bd1327a2a4f682c5c48413

Now it’s time for testing… Let’s pull an alpine container, push it to the registry, and check that it was stored:

docker pull alpine
docker tag alpine:latest 127.0.0.1:5000/alpine:latest
docker push 127.0.0.1:5000/alpine:latest  # the actual submission to the newly spawned registry
docker rmi alpine:latest; docker rmi 127.0.0.1:5000/alpine:latest # local cleanup before pulling from the registry.
docker pull 127.0.0.1:5000/alpine:latest

And… voilà!!!
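Beyond pulling the image back, you can also ask the registry itself what it stores, via the standard Docker Registry HTTP API v2 (assuming the registry from this tutorial is listening on localhost:5000):

```shell
# Query the registry's HTTP API v2 for its contents.
REGISTRY_URL="http://127.0.0.1:5000"
curl -s "$REGISTRY_URL/v2/_catalog"          # repositories, e.g. {"repositories":["alpine"]}
curl -s "$REGISTRY_URL/v2/alpine/tags/list"  # tags for the alpine repository
```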

Now, you can list what is inside your CloudServer-hosted docker-registry bucket to check how zenkoregistry actually saved the data:

aws s3 --endpoint-url=http://127.0.0.1:8000 ls --recursive s3://docker-registry

Sample output:

2017-08-02 12:21:41        528 docker/registry/v2/blobs/sha256/09/0930dd4cc97ed5771ebe9be9caf3e8dc5341e0b5e32e8fb143394d7dfdfa100e/data
2017-08-02 12:21:40    1990402 docker/registry/v2/blobs/sha256/6d/6d987f6f42797d81a318c40d442369ba3dc124883a0964d40b0c8f4f7561d913/data
2017-08-02 12:21:40       1520 docker/registry/v2/blobs/sha256/73/7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560/data
2017-08-02 12:21:40         71 docker/registry/v2/repositories/alpine/_layers/sha256/6d987f6f42797d81a318c40d442369ba3dc124883a0964d40b0c8f4f7561d913/link
2017-08-02 12:21:41         71 docker/registry/v2/repositories/alpine/_layers/sha256/7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560/link
2017-08-02 12:21:41         71 docker/registry/v2/repositories/alpine/_manifests/revisions/sha256/0930dd4cc97ed5771ebe9be9caf3e8dc5341e0b5e32e8fb143394d7dfdfa100e/link
2017-08-02 12:21:41         71 docker/registry/v2/repositories/alpine/_manifests/tags/latest/current/link
2017-08-02 12:21:41         71 docker/registry/v2/repositories/alpine/_manifests/tags/latest/index/sha256/0930dd4cc97ed5771ebe9be9caf3e8dc5341e0b5e32e8fb143394d7dfdfa100e/link
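This layout is the registry’s standard S3 storage-driver schema: each blob is stored under a two-character prefix taken from the start of its digest. A purely illustrative sketch of the mapping (not part of the registry itself):

```shell
# Illustrative only: derive the S3 key the registry uses for a blob digest.
digest="0930dd4cc97ed5771ebe9be9caf3e8dc5341e0b5e32e8fb143394d7dfdfa100e"
prefix="${digest:0:2}"  # first two hex characters of the digest
key="docker/registry/v2/blobs/sha256/${prefix}/${digest}/data"
echo "$key"
```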

Troubleshooting

Not working? No problem! Here are a few places to check:

  1. AWS credentials:
    cat ~/.aws/credentials

    Compare them to the ones you used to spawn the cloudserver container, and to those written in the registry config file;

  2. cloudserver logs:
    docker logs cloudserver

    Check whether the healthcheck is received every 10 seconds and whether it returns a 200 response code;

  3. zenkoregistry logs:
    docker logs zenkoregistry

    Check the permissions of the config file, the port it uses (5000), try to run the container in privileged mode, etc.

  4. Google 🙂

Still not working? Still no problem! Reach out to the Zenko team on their forum: they’ll be very happy to help!

Enjoy,

Rom Freiman
Director, R&D,
Stratoscale

AWS Summits London and Chicago

Laure Vergeron

I was lucky to attend both the London and Chicago AWS Summits for Scality recently. We were promoting Zenko, our multi-cloud data controller, and the Scality RING, our flagship software-defined solution for on-premises storage. As an engineer, I was there to nurture technical conversations whenever possible, and what struck me was how many of those happened over the course of these two events.

AWS Summit attendees have very diverse backgrounds, and it keeps you on your toes: anyone from a bachelor undergraduate student to the CTO of a global company could be the next person stopping by your booth. I had great conversations with junior software engineers, senior infrastructure engineers, architects… and not all of them were AWS experts! In fact, a lot of attendees stated being new to AWS. But all of them had a massive spark of interest when we talked about an open-source implementation of the S3 protocol.

My three main takeaways from this event are:

  • Developing an application that will eventually run against S3 can have huge hidden costs: if you have been playing with AWS S3 for over a year, your free tier trial is over, and you start having to pay for every operation you perform against AWS S3. At this early stage of a project, such costs can be very hard to bear, as no revenue is generated yet, and load/performance tests usually imply running a ton of operations every time you run them.
  • A lot of big companies are trying to move from on-premises storage to cloud storage, but there is a conflict between the vision of top executives, who want to move everything regardless of the costs, and top architects, who realize that some data carries compliance requirements that are hard to meet in the cloud, along with sensitive data whose safety would be better controlled on premises. This discussion needs to happen and, from our experience, we believe that large global companies will realize the need for “hybrid” solutions (on-premises and cloud storage working side by side)
  • When moving to the Cloud, whether for costs, tiering, or flexibility reasons, most large companies wish to split their assets across different Cloud Storage providers.

These takeaways are very positive for Scality and Zenko, as it means we already have answers to a problem people are just starting to grasp. Indeed, following the same structure as above:

  • CloudServer, our open-source implementation of the S3 API running locally in a Docker container, enables developers and start-ups to take the time they need to test their applications against a fully compatible S3 frontend before moving to AWS S3 as they go into production.
  • One way to connect to the RING is the “S3 Connector”, the Enterprise Edition of CloudServer; it comes with a few more features, especially an extensive support of IAM entities and calls.
    Deploying a RING alongside an S3 Connector allows control of your on-premise data and your AWS S3 hosted data via a single API that has become the de facto standard for object storage: the S3 API. As the world moves towards having both kinds of storage available, this kind of ease of use is crucial to have.
  • Zenko, which grows the CloudServer and S3 Connector ecosystem, currently enables on-premises high-availability deployment architectures. It is scheduled to include an S3-to-Azure translator and a cross-cloud aggregated metadata search by November 2017. If you are currently using multiple clouds, you know how much easier your life will be with these…

    Finally, it comes with a management portal enabling visual monitoring and configuration of the service. The main monitored aspects are disk usage, CPU usage, and various utilization stats (S3 API calls). The main configurable variables are credentials for all of an organization’s members across cloud providers, and endpoints for all cloud providers.

    Of course, all these functionalities will be available in exactly the same terms with your on premise storage provided it runs on a Scality RING. Moving (some of your data) to the Cloud has never been easier.

I strongly believe the multi-cloud approach is really what lies ahead of us, and these AWS Summits have done nothing but reinforce that impression. I encourage you to attend the next one happening close to you: these events are a great opportunity to peek at the future of storage, and to meet future partners! By the way, Scality (and thus Zenko) will attend NYC AWS Summit on August 14th, 2017… just in case 😉

Laure

Why did we call it Zenko?

Story time! In Asian mythologies (Japanese, Chinese, Thai, and Indian at least), foxes have held a strong role since the 11th century. The Konjaku Monogatarishu, a collection of Ancient Tales from Asia, is the first written reference we have to zenkos.

In these tales, a fox gains tails as it matures and, after 100 years, when reaching its final stage, it has nine tails and may then become a zenko. (Fun fact: did you know Tails, the character from Sonic the Hedgehog, is a zenko?)

Zenkos are celestial, benevolent foxes, who attached themselves to Samurai families, protecting them in exchange for housing. And Zenko strives to be a role-model in the open source world, by attaching itself to the open source contributors and users, and protecting their technical endeavors.

Foxes are, among other things, universal. We talked about zenko as a Japanese word, but there are divine occurrences of the fox globally: in Peru, there was a Fox god for the Andean people; in Egypt, foxes were good omens; in Persia, foxes were the equivalent of Charon; for a number of Native People from North America, the fox spirit was a strong one… And Zenko is global. Its core engineering team is split between France, the USA, and New Zealand, and its contributors come from very diverse backgrounds… object (and file) storage is a universal need, Zenko is aiming at providing a universal answer.

S3 Server Hackathon – October, 2016

Finally, foxes are smart, witty, artful, aware, swift, playful, brave… you name it.

Zenko is all of that. And even more so every time one of you joins our community. Welcome to Zenko, zenkos!

Export your buckets as a filesystem with s3fs on top of s3server

s3fs is an open-source tool that allows you to mount an S3 bucket as a local filesystem. It is available on both Debian and Red Hat distributions. For this tutorial, we used an Ubuntu 14.04 host to deploy and use s3fs over Scality’s S3 Server.

Deploying S3 Server with SSL


First, you need to deploy S3 Server. This can be done very easily via our DockerHub page (you want to run it with a file backend).

Note: If you don’t have Docker installed on your machine, here are the instructions to install it for your distribution

You also have to set up SSL with S3 Server in order to use s3fs. We have a nice tutorial to help you do it.

s3fs setup

Installing s3fs

s3fs has quite a few dependencies. As explained in their README, the following commands should install everything for Ubuntu 14.04:

$> sudo apt-get install automake autotools-dev g++ git libcurl4-gnutls-dev
$> sudo apt-get install libfuse-dev libssl-dev libxml2-dev make pkg-config

Now you want to install s3fs per se:

$> git clone https://github.com/s3fs-fuse/s3fs-fuse.git
$> cd s3fs-fuse
$> ./autogen.sh
$> ./configure
$> make
$> sudo make install

Check that s3fs is properly installed by checking its version; it should answer as below:

$> s3fs --version

Amazon Simple Storage Service File System V1.80(commit:d40da2c) with OpenSSL

Configuring s3fs

s3fs expects you to provide it with a password file. Our file is /etc/passwd-s3fs. The structure for this file is ACCESSKEYID:SECRETKEYID, so, for S3Server, you can run:

$> echo 'accessKey1:verySecretKey1' > /etc/passwd-s3fs
$> chmod 600 /etc/passwd-s3fs

Using S3Server with s3fs


First, you’re going to need a mountpoint; we chose /mnt/tests3fs:

$> mkdir /mnt/tests3fs

Then, you want to create a bucket on your local S3Server; we named it tests3fs:

$> s3cmd mb s3://tests3fs

Note: If you’ve never used s3cmd with our S3 Server, our README provides you with a recommended config

Now you can mount your bucket to your mountpoint with s3fs:

$> s3fs tests3fs /mnt/tests3fs -o passwd_file=/etc/passwd-s3fs -o url="https://s3.scality.test:8000/" -o use_path_request_style

If you’re curious, the structure of this command is s3fs BUCKET_NAME PATH/TO/MOUNTPOINT -o OPTIONS. The options are mandatory and serve the following purposes:

  • passwd_file: specify the path to the password file;
  • url: specify the hostname used by your SSL provider;
  • use_path_request_style: force path-style requests (by default, s3fs uses subdomains, i.e. DNS style).
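If you want the mount to come back after a reboot, s3fs also supports /etc/fstab entries. A sketch reusing this tutorial’s bucket and options (the fuse.s3fs type and _netdev option are described in the s3fs README):

```
tests3fs /mnt/tests3fs fuse.s3fs _netdev,passwd_file=/etc/passwd-s3fs,url=https://s3.scality.test:8000/,use_path_request_style 0 0
```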

From now on, you can either add files to your mountpoint or add objects to your bucket, and they’ll show up in the other.
For example, let’s create two files, and then a directory with a file, in our mountpoint:

$> touch /mnt/tests3fs/file1 /mnt/tests3fs/file2
$> mkdir /mnt/tests3fs/dir1
$> touch /mnt/tests3fs/dir1/file3

Now we can use s3cmd to see what is actually in S3 Server:

$> s3cmd ls -r s3://tests3fs

2017-02-28 17:28         0   s3://tests3fs/dir1/
2017-02-28 17:29         0   s3://tests3fs/dir1/file3
2017-02-28 17:28         0   s3://tests3fs/file1
2017-02-28 17:28         0   s3://tests3fs/file2

Now you can enjoy a filesystem view on your local S3Server!
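When you’re done, the bucket unmounts like any other FUSE filesystem:

```shell
# Unmount the s3fs mountpoint (standard FUSE unmount; may require sudo).
MOUNTPOINT=/mnt/tests3fs
fusermount -u "$MOUNTPOINT"
```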

First Things First: Getting Started with Scality S3 Server

Let’s explore how to write a simple Node.js application that uses the S3 API to write data to the Scality S3 Server. If you do not have the S3 Server up and running yet, please visit the Docker Hub page to run it easily on your laptop. First we need to create a list of the libraries needed in a file called package.json. When the node package manager (npm) is run, it will download each library for the application. For this simple application, we will only need the aws-sdk library.

Save the following contents in package.json

{
  "name": "myAPP",
  "version": "0.0.1",
  "dependencies": {
    "aws-sdk": ""
  }
}

Now let’s begin coding the main application in a file called app.js with the following contents:

var aws = require('aws-sdk');
var ACCESS_KEY = process.env.ACCESS_KEY;
var SECRET_KEY = process.env.SECRET_KEY;
var ENDPOINT = process.env.ENDPOINT;
var BUCKET = process.env.BUCKET;

aws.config.update({
    accessKeyId: ACCESS_KEY,
    secretAccessKey: SECRET_KEY
});

var s3 = new aws.S3({
    endpoint: ENDPOINT,
    s3ForcePathStyle: true,
});

function upload() {
    var params = {
        Bucket: BUCKET,
        Key: process.argv[2],
        Body: process.argv[3]
    };

    s3.putObject(params, function(err, data) {
        if (err) {
            console.log('Error uploading data: ', err);
        } else {
            console.log("Successfully uploaded data to: " + BUCKET);
        }
    });

}

if (ACCESS_KEY && SECRET_KEY && ENDPOINT && BUCKET && process.argv[2] && process.argv[3]) {
    console.log('Creating File: '  + process.argv[2] + ' with the following contents:\n\n' + process.argv[3] + '\n\n');
    upload();
} else {
    console.log('\nError: Missing S3 credentials or arguments!\n');
}

This simple application will accept two arguments on the command-line. The first argument is for the file name and the second one is for the contents of the file. Think of it as a simple note taking application.

Now that the application is written, we can install the required libraries with npm.

npm install

Before the application is started, we need to set the S3 credentials, bucket, and endpoint in environment variables.

export ACCESS_KEY='accessKey1'
export SECRET_KEY='verySecretKey1'
export BUCKET='test'
export ENDPOINT='http://127.0.0.1:8000'

Please ensure that the bucket specified in the BUCKET argument exists on the S3 Server. If it does not, please create it.
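One way to create it is with the AWS CLI against the local S3 Server endpoint (this assumes the CLI is configured with the same accessKey1/verySecretKey1 credentials the application uses):

```shell
# Create the bucket the app expects on the local S3 Server.
BUCKET=test
ENDPOINT=http://127.0.0.1:8000
aws s3api --endpoint-url="$ENDPOINT" create-bucket --bucket "$BUCKET"
```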

Now we can run the application to create a simple file called “my-message” with the contents “Take out the trash at 1pm PST”:

node app.js 'my-message' 'Take out the trash at 1pm PST'

You should now see the file on the S3 Server using your favorite S3 client.

I hope this tutorial helps you get started quickly creating wonderful applications that use the S3 API to store data on the Scality S3 Server. For more code samples for different SDKs, please visit the Scality S3 Server GitHub.