Use Zenko CloudServer as a backend for Docker Registry


Written By Laure Vergeron

On August 21, 2017
"

Read more

Solve the challenges of large-scale data, once and for all.

We welcome the following contributed tutorial:

Rom Freiman


It is always a victory for an open source project when a contributor takes the time to provide a substantial addition to it. Today, we’re very happy to introduce a Docker Registry – Zenko tutorial by GitHub user rom-stratoscale, also known as Rom Freiman, R&D Director at Stratoscale.

Thank you very much for making the Zenko community stronger, Rom! — Laure

Introduction

Docker Registry is a service for storing and distributing private Docker images among authorized users. Although public hosted offerings exist, many organizations prefer to run a private registry for internal use.

A private Docker Registry can be configured with several storage backends. One of the most useful is Zenko’s CloudServer (formerly known as S3 Server), by Scality.
Zenko’s CloudServer is an open-source, standalone S3 API deployed as a Docker container. In other words, it gives you an S3-compatible store on premises (even on your laptop).

In this tutorial, I’ll demonstrate how to deploy a private Docker Registry with Zenko CloudServer (by Scality) as your private, on-premises S3 backend.
We’ll run both services (the registry and Zenko CloudServer) as containers.

Prerequisites and assumptions

Prerequisites:

  1. Docker daemon (v1.10.3 or above)
  2. Python (v2.7.12 or above)
  3. AWS CLI (v1.11.123 or above)
    $> pip install awscli==1.11.123
  4. AWS credentials configured:
    $> aws configure
    AWS Access Key ID []: {{YOUR_ACCESS_KEY_ID}}
    AWS Secret Access Key []: {{YOUR_SECRET_KEY}}
    Default region name []: us-east-1
    Default output format []: json

Assumptions:

  1. In this tutorial, both CloudServer and the registry run on the same physical server, to avoid dealing with networking. If you choose to run them on different servers, verify that the registry server can reach CloudServer (TCP, port 8000).
  2. The S3 data will live inside the container’s storage and will be lost once the container is stopped. For persistence, create appropriate directories on the host and mount them as Docker volumes in the CloudServer container (see the sketch after this list).
  3. The registry will only be consumed from localhost, so we won’t dive into SSL and certificate generation (but you can if you want to).
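If you do want the data to survive a container restart, the sketch below shows the extra -v flags to add to the docker run command used in the next step. The host paths are arbitrary examples; /usr/src/app/localData and /usr/src/app/localMetadata are the in-container paths documented for CloudServer, so verify them against the image version you pull:

$> mkdir -p /home/{{YOUR_WORKDIR}}/s3-data /home/{{YOUR_WORKDIR}}/s3-metadata
$> docker run -d --name cloudserver -p 8000:8000 \
     -v /home/{{YOUR_WORKDIR}}/s3-data:/usr/src/app/localData \
     -v /home/{{YOUR_WORKDIR}}/s3-metadata:/usr/src/app/localMetadata \
     -e SCALITY_ACCESS_KEY_ID={{YOUR_ACCESS_KEY_ID}} \
     -e SCALITY_SECRET_ACCESS_KEY={{YOUR_SECRET_KEY}} \
     zenko/cloudserver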

Run Zenko CloudServer container

Use the docker run utility to start Zenko CloudServer:

$> docker run -d --name cloudserver -p 8000:8000 -e SCALITY_ACCESS_KEY_ID={{YOUR_ACCESS_KEY_ID}} -e SCALITY_SECRET_ACCESS_KEY={{YOUR_SECRET_KEY}} zenko/cloudserver

Sample output (ends with the cloudserver container ID):

Unable to find image 'zenko/cloudserver:latest' locally
latest: Pulling from zenko/cloudserver
Digest: sha256:8f640c3399879925809bd703facf128a395444e2d4855334614e58027bcac436
Status: Downloaded newer image for zenko/cloudserver:latest
4f08be7e77fe8e47b8c09036e9f5fa4e29d7a42b23bd1327a2a4f682c5c48413
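To confirm the container is actually up before moving on, a quick docker ps (plain Docker CLI, nothing CloudServer-specific) should show it running and publishing port 8000:

$> docker ps --filter name=cloudserver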

Use AWS CLI to create a dedicated bucket in Zenko CloudServer:

$> aws s3api --endpoint-url=http://127.0.0.1:8000 create-bucket --bucket docker-registry

Sample output:

{
    "Location": "/docker-registry"
}

Verify it exists:

aws s3 --endpoint-url=http://127.0.0.1:8000 ls s3://

Sample output:

2017-08-01 23:05:44 docker-registry

Configure and run your private Docker Registry backed by your local CloudServer container

Create a config file like the one below at /home/{{YOUR_WORKDIR}}/config.yml:

version: 0.1
log:
  level: debug
  formatter: text
  fields:
    service: registry
    environment: staging
storage:
  s3:
    accesskey: {{YOUR_ACCESS_KEY_ID}}
    secretkey: {{YOUR_SECRET_KEY}}
    region: us-east-1
    regionendpoint: http://127.0.0.1:8000
    bucket: docker-registry
    encrypt: false
    secure: false
    v4auth: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health: 
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3

Important fields:

  1. http : the registry will listen on port 5000;
  2. s3 : the actual S3 backend configuration, including endpoint, credentials, and the bucket name;
  3. health : the registry will actively probe the S3 backend (the cloudserver container spawned above) to check that it’s alive.

Spawn the registry, while mounting the configuration into the container:

docker run -d --name=zenkoregistry --net=host -v /home/{{YOUR_WORKDIR}}/config.yml:/etc/docker/registry/config.yml:z registry:2

Sample output (the zenkoregistry container ID):

4f08be7e77fe8e47b8c09036e9f5fa4e29d7a42b23bd1327a2a4f682c5c48413
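Before pushing anything to it, you can verify the registry is answering on port 5000. The /v2/ base endpoint is part of the standard Docker Registry HTTP API and should return HTTP 200 with an empty JSON body:

curl -i http://127.0.0.1:5000/v2/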

Now it’s time for testing… Let’s pull an alpine image, push it into the registry, and check that it made it there:

docker pull alpine
docker tag alpine:latest 127.0.0.1:5000/alpine:latest
docker push 127.0.0.1:5000/alpine:latest  # the actual submission to the newly spawned registry
docker rmi alpine:latest; docker rmi 127.0.0.1:5000/alpine:latest # local cleanup before pulling from the registry.
docker pull 127.0.0.1:5000/alpine:latest

And… voilà!!!
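If you’d like to double-check from the registry’s side as well, the standard Registry v2 HTTP API exposes catalog and tag-listing endpoints (expected responses shown as comments; exact formatting may differ):

curl http://127.0.0.1:5000/v2/_catalog          # {"repositories":["alpine"]}
curl http://127.0.0.1:5000/v2/alpine/tags/list  # {"name":"alpine","tags":["latest"]}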

Now, you can list what is inside your CloudServer-hosted docker-registry bucket to check how zenkoregistry actually saved the data:

aws s3 --endpoint-url=http://127.0.0.1:8000 ls --recursive s3://docker-registry

Sample output:

2017-08-02 12:21:41        528 docker/registry/v2/blobs/sha256/09/0930dd4cc97ed5771ebe9be9caf3e8dc5341e0b5e32e8fb143394d7dfdfa100e/data
2017-08-02 12:21:40    1990402 docker/registry/v2/blobs/sha256/6d/6d987f6f42797d81a318c40d442369ba3dc124883a0964d40b0c8f4f7561d913/data
2017-08-02 12:21:40       1520 docker/registry/v2/blobs/sha256/73/7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560/data
2017-08-02 12:21:40         71 docker/registry/v2/repositories/alpine/_layers/sha256/6d987f6f42797d81a318c40d442369ba3dc124883a0964d40b0c8f4f7561d913/link
2017-08-02 12:21:41         71 docker/registry/v2/repositories/alpine/_layers/sha256/7328f6f8b41890597575cbaadc884e7386ae0acc53b747401ebce5cf0d624560/link
2017-08-02 12:21:41         71 docker/registry/v2/repositories/alpine/_manifests/revisions/sha256/0930dd4cc97ed5771ebe9be9caf3e8dc5341e0b5e32e8fb143394d7dfdfa100e/link
2017-08-02 12:21:41         71 docker/registry/v2/repositories/alpine/_manifests/tags/latest/current/link
2017-08-02 12:21:41         71 docker/registry/v2/repositories/alpine/_manifests/tags/latest/index/sha256/0930dd4cc97ed5771ebe9be9caf3e8dc5341e0b5e32e8fb143394d7dfdfa100e/link

Troubleshooting

Not working? No problem! Here are a few places to check:

  1. AWS credentials:
    cat ~/.aws/credentials

    Compare them to the ones you used to spawn the cloudserver container and to those written in the registry config file (a quick sanity check is shown after this list);

  2. cloudserver logs:
    docker logs cloudserver

    Check that the healthcheck request arrives every 10 seconds and that its response code is 200;

  3. zenkoregistry logs:
    docker logs zenkoregistry

    Check the permissions of the config file and the port it listens on (5000); if needed, try running the container in privileged mode;

  4. Google 🙂
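A quick sanity check that covers the first two items is to hit both endpoints directly, reusing only commands already shown in this tutorial:

# CloudServer reachable and the configured credentials accepted?
aws s3 --endpoint-url=http://127.0.0.1:8000 ls s3://docker-registry
# Registry reachable on port 5000?
curl -i http://127.0.0.1:5000/v2/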

Still not working? Still no problem! Reach out to the Zenko team on their forum: they’ll be very happy to help!

Enjoy,

Rom Freiman
Director, R&D,
Stratoscale
