How to deploy Zenko 1.1 GA on bare metal, private or public cloud

Written By Dasha Gurova

On June 25, 2019
"

Read more

Solve the challenges of large-scale data, once and for all.

We have been working hard on the Zenko 1.1 release, and finally, it is here! Thanks to the dedicated and tireless work of the Zenko team, our newest release comes with an array of useful new features. Now is a good time to try Zenko: you can deploy it on a managed Kubernetes service (Azure, Amazon, Google) or on Minikube for a quick test. But if you want to run Zenko on bare metal or in your own cloud, we suggest deploying it on MetalK8s, an open-source, opinionated distribution of Kubernetes focused on long-term on-premises deployments. MetalK8s is developed at Scality to provide great functionality while reducing complexity for users and delivering efficient access to local stateful storage.

This tutorial comes from our core engineering team; we use it daily to deploy and test Zenko. It was developed as a collective effort from contributions made in this forum post.

Here are the steps we use to deploy Zenko 1.1 on our OpenStack-based private cloud. Let’s do this!

Part 1: Deploying MetalK8s

This tutorial creates a Zenko instance distributed across three nodes, but you can always repurpose it for as many servers as you wish.

1. Create three instances with the following characteristics:

  • Operating system: CentOS-7.6
  • Size: 8 CPUs and 32GB of RAM

2. If you are deploying on a private cloud, create the following volumes (type: SSD):

  • one volume with a 280GB capacity
  • two volumes with a 180GB capacity

3. Attach one volume to each instance, giving the larger 280GB volume to the first node (an OpenStack example follows).
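
If your private cloud is OpenStack-based like ours, here is a minimal sketch of steps 2 and 3 with the OpenStack CLI (the volume and server names are hypothetical; use your own):

# Create one large and two smaller SSD volumes
$ openstack volume create --size 280 --type SSD zenko-vol-01
$ openstack volume create --size 180 --type SSD zenko-vol-02
$ openstack volume create --size 180 --type SSD zenko-vol-03

# Attach one volume per instance, the 280GB one to the first node
$ openstack server add volume node-01 zenko-vol-01
$ openstack server add volume node-02 zenko-vol-02
$ openstack server add volume node-03 zenko-vol-03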

4. SSH into a node:

$ ssh -A centos@<node-ip>

Pro-tip: using ssh -A from your computer into the first node forwards your authentication agent connection, allowing native SSH access to the remaining nodes in your cluster.
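
For example, once on node-01 you can hop straight to the other nodes (the IPs match the sample hosts file below):

# The forwarded agent authenticates these hops with your local key
$ ssh centos@10.200.3.164   # node-02
$ ssh centos@10.200.2.27    # node-03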

5. $ sudo yum install git vim -y
   $ git clone https://github.com/scality/metalk8s
   $ cd metalk8s/

6. Check out the current stable version of MetalK8s:

$ git checkout tags/1.1.0
$ mkdir -p inventory/zenko-cluster/group_vars
$ cd inventory/zenko-cluster/
7. $ vim hosts

Copy the following into your hosts file and update the IPs to match your instances:

# Floating IP addresses can be specified using the var `access_ip=<ip-address>` on the line corresponding to the attached server
node-01 ansible_host=10.200.3.179 ansible_user=centos # server with the larger volume attached
node-02 ansible_host=10.200.3.164 ansible_user=centos # server with the smaller volume attached
node-03 ansible_host=10.200.2.27  ansible_user=centos # server with the smaller volume attached

[bigserver]
node-01

[smallserver]
node-02
node-03

[kube-master]
node-01
node-02
node-03

[etcd]
node-01
node-02
node-03

[kube-node:children]
bigserver
smallserver

[k8s-cluster:children]
kube-node
kube-master
8. $ vim group_vars/bigserver.yml

Run this statement and copy the following into bigserver.yml (this is for the server that will provision the Zenko Local Filesystem):

metalk8s_lvm_drives_vg_metalk8s: ['/dev/vdb']
metalk8s_lvm_lvs_vg_metalk8s:
  lv01:
    size: 100G
  lv02:
    size: 54G
  lv03:
    size: 22G
  lv04:
    size: 12G
  lv05:
    size: 10G
  lv06:
    size: 6G

Note: /dev/vdb on the first line is the default location of a newly attached drive. If this location is already in use on your machine, change this value accordingly. Device names follow the pattern:

/dev/vda
/dev/vdb
/dev/vdc
etc...
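
Not sure which device name your volume received? Listing the block devices on the node makes the freshly attached, unmounted disk easy to spot by its size:

$ lsblk   # the new volume appears with its size (e.g. 280G) and no mountpoint
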
9. $ vim group_vars/smallserver.yml

Run this statement and copy the following into smallserver.yml:

metalk8s_lvm_drives_vg_metalk8s: ['/dev/vdb']
metalk8s_lvm_lvs_vg_metalk8s:
  lv01:
    size: 54G
  lv02:
    size: 22G
  lv03:
    size: 12G
  lv04:
    size: 10G
  lv05:
    size: 6G

10. This step is optional but highly recommended:

$ vim group_vars/all

Paste this into group_vars/all and save (note this is YAML, not a shell command):

kubelet_custom_flags:
  - --kube-reserved=cpu=1,memory=2Gi
  - --system-reserved=cpu=500m,memory=1Gi
  - --eviction-hard=memory.available<500Mi

This adds resource reservations for system processes and the Kubernetes control plane, along with a pod eviction threshold, preventing the out-of-memory issues that typically lead to node/system instability. For more info, see this issue.

11. Return to the metalk8s folder:

$ cd ~/metalk8s

12. And enter the virtual environment:

$ make shell

13. Make sure that you have SSH access to every other node in your cluster.
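
From inside the make shell environment, a quick sanity check is Ansible's ping module against the inventory we just wrote:

# Every node should answer with "pong"
$ ansible all -i inventory/zenko-cluster -m ping

Then run the deployment playbook: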

$ ansible-playbook -i inventory/zenko-cluster -b playbooks/deploy.yml

Deployment typically takes 15 to 30 minutes. Once it is done, you will see a URL for Kubernetes dashboard access, along with a username/password, in the output of the last task.

Notes

If you forget this password or need access to it again, it is saved under:

metalk8s/inventory/zenko-cluster/credentials/kube_user.creds

The MetalK8s installation created an admin.conf file:

metalk8s/inventory/zenko-cluster/artifacts/admin.conf

This file can be copied from your deployment machine to any other machine that requires access to the cluster (for example, if you did not deploy from your laptop).
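
For example, to control the cluster from another machine (the paths here are illustrative):

# Copy the kubeconfig over...
$ scp centos@<node-ip>:~/metalk8s/inventory/zenko-cluster/artifacts/admin.conf .
# ...then point kubectl at it
$ export KUBECONFIG=$PWD/admin.conf
$ kubectl get nodes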

MetalK8s 1.1 is now deployed!

Part 2: Deploying Zenko 1.1

1. Clone the Zenko repository:

$ git clone https://github.com/scality/zenko ~/zenko
$ cd zenko/

2. Grab the fresh Zenko 1.1 release:

$ git checkout tags/1.1.0
$ cd kubernetes/

3. The MetalK8s installation from Part 1 provides you with the latest version of Helm, so now it’s time to actually deploy the Zenko instance on the three nodes we have prepared.

Run this command:

$ helm install --name zenko --set ingress.enabled=true \
    --set ingress.hosts[0]=zenko.local \
    --set cloudserver.endpoint=zenko.local zenko
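
You can inspect the release at any time with Helm; zenko below is the release name set by the --name flag above:

$ helm status zenko   # lists the resources in the release and their state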

4. Wait about 15-20 minutes while the pods stabilize.

5. You can confirm that the Zenko instance is ready when all pods are in the Running state. To check:

$ kubectl get pods
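
To list only the pods that are not yet Running (empty output means everything is up), recent kubectl versions support field selectors:

$ kubectl get pods --field-selector=status.phase!=Running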

Note

It is expected that the queue-config pods will multiply until one succeeds. Any “Completed” or “Error” queue-config pods can be deleted.
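
A one-liner sketch for that cleanup, assuming the pod names contain queue-config:

$ kubectl get pods --no-headers | awk '/queue-config/ && (/Completed/ || /Error/) {print $1}' | xargs -r kubectl delete pod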

Zenko is now deployed!

Part 3: Registering your Zenko instance with Orbit

Orbit is a cloud-based GUI portal for managing the Zenko instance you deployed in the previous two parts. It gives you insight into metrics and lets you create policies and rules to manage your data and replicate it between different public clouds. Here are the steps to register Zenko with Orbit.

1. Find the cloudserver-manager pod:

$ kubectl get pods | grep cloudserver-manager

2. Use the pod name to find the Zenko instance ID:

$ kubectl logs zenko-cloudserver-manager-7f8c8846b-5gjxk | grep 'Instance ID'
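
Steps 1 and 2 can also be combined into one command, assuming a single cloudserver-manager pod (your pod name suffix will differ from ours above):

$ kubectl logs $(kubectl get pods --no-headers | awk '/cloudserver-manager/ {print $1}') | grep 'Instance ID'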

3. Head to Orbit and use the Instance ID you just found to register your Zenko instance.

Your Orbit instance is now registered!

If you successfully launched a Zenko 1.1 instance with MetalK8s and Orbit using this tutorial, let us know. If you get stuck or have any questions, visit the forum and we can troubleshoot any issues together. Your input also helps us refine and update this tutorial along the way, and we’re always looking for feedback on our features and tutorials.
