How to deploy Zenko 1.1 GA on bare metal, private or public cloud

We have been working hard on the Zenko 1.1 release, and it is finally here! Thanks to the dedicated and tireless work of the Zenko team, our newest release comes with an array of useful new features. Now is a good time to try Zenko: you can deploy it on a managed Kubernetes service (Azure, Amazon, Google) or on Minikube for a quick test. But if you want to run Zenko on bare metal or on your own cloud, we suggest you deploy it on MetalK8s, an open source, opinionated distribution of Kubernetes with a focus on long-term on-prem deployments. MetalK8s is developed at Scality to provide great functionality while reducing complexity for users and delivering efficient access to local stateful storage.

This tutorial comes from our core engineering team, and we use it on a daily basis to deploy and test Zenko. It has been developed as a collective effort from contributions made in this forum post.

Here are the steps we use to deploy Zenko 1.1 on our OpenStack-based private cloud. Let’s do this!

Part 1: Deploying MetalK8s

This tutorial creates a Zenko instance distributed over three nodes, but you can always repurpose it for as many servers as you wish.

1. Create three instances with the following characteristics:

  • Operating system: CentOS-7.6
  • Size: 8 CPUs and 32GB of RAM

2. If you are deploying on a private cloud, create the following volumes (type: SSD):

  • one volume with a 280GB capacity
  • two volumes with a 180GB capacity

3. Attach one volume to each instance: the 280GB volume goes to the first node, and a 180GB volume goes to each of the other two.
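If your private cloud exposes the OpenStack CLI, steps 2 and 3 can be scripted roughly as follows (the volume names, the “SSD” volume type, and the server names are placeholders for whatever your environment uses):

$ openstack volume create --size 280 --type SSD zenko-vol-01
$ openstack volume create --size 180 --type SSD zenko-vol-02
$ openstack volume create --size 180 --type SSD zenko-vol-03
$ openstack server add volume node-01 zenko-vol-01
$ openstack server add volume node-02 zenko-vol-02
$ openstack server add volume node-03 zenko-vol-03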

4. SSH into a node:

$ ssh -A centos@<node-ip>

Pro-tip: if you use ssh -A from your computer into the first node, this will forward your authentication agent connection and allow native ssh access to the remaining nodes in your cluster.
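For example, assuming your key is ~/.ssh/id_rsa (adjust to whichever key your nodes accept):

$ eval "$(ssh-agent -s)"        # start an agent locally if one is not already running
$ ssh-add ~/.ssh/id_rsa         # load the key the nodes accept
$ ssh -A centos@<node-01-ip>    # log in with agent forwarding enabled
$ ssh centos@<node-02-ip>       # run from node-01; works through the forwarded agent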

5. $ sudo yum install git vim -y
   $ git clone https://github.com/scality/metalk8s
   $ cd metalk8s/

6. Check out the current stable version of MetalK8s and create the cluster inventory directory:

$ git checkout tags/1.1.0
$ mkdir -p inventory/zenko-cluster/group_vars
$ cd inventory/zenko-cluster/
7. $ vim hosts

Copy the following into your hosts file and update the IPs to match your instances:

# Floating IP addresses can be specified using the var `access_ip=<ip-address>` on the line corresponding to the attached server
node-01 ansible_host=10.200.3.179 ansible_user=centos # server with the larger volume attached
node-02 ansible_host=10.200.3.164 ansible_user=centos # server with the smaller volume attached
node-03 ansible_host=10.200.2.27  ansible_user=centos # server with the smaller volume attached

[bigserver]
node-01

[smallserver]
node-02
node-03

[kube-master]
node-01
node-02
node-03

[etcd]
node-01
node-02
node-03

[kube-node:children]
bigserver
smallserver

[k8s-cluster:children]
kube-node
kube-master
8. $ vim group_vars/bigserver.yml

Run this statement and copy the following into bigserver.yml (this is for the server that will provision the Zenko local filesystem):

metalk8s_lvm_drives_vg_metalk8s: ['/dev/vdb']
metalk8s_lvm_lvs_vg_metalk8s:
  lv01:
    size: 100G
  lv02:
    size: 54G
  lv03:
    size: 22G
  lv04:
    size: 12G
  lv05:
    size: 10G
  lv06:
    size: 6G

Note: /dev/vdb on the first line is the default location of a newly attached drive. If this location is already in use on your machine, change this value to the next available device, for example:

/dev/vda
/dev/vdb
/dev/vdc
etc...
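If you are not sure which name the new volume received, a quick check on each node (device names vary between clouds) is:

$ lsblk -d -o NAME,SIZE,TYPE    # the unpartitioned disk matching the size you attached is the one to use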
9. $ vim group_vars/smallserver.yml

Run this statement and copy the following into smallserver.yml:

metalk8s_lvm_drives_vg_metalk8s: ['/dev/vdb']
metalk8s_lvm_lvs_vg_metalk8s:
  lv01:
    size: 54G
  lv02:
    size: 22G
  lv03:
    size: 12G
  lv04:
    size: 10G
  lv05:
    size: 6G

10. This step is optional but highly recommended:

$ vim group_vars/all

Paste this into group_vars/all and save:

kubelet_custom_flags:
  - --kube-reserved cpu=1,memory=2Gi
  - --system-reserved cpu=500m,memory=1Gi
  - --eviction-hard=memory.available<500Mi

This adds resource reservations for system processes and the k8s control plane, along with a pod eviction threshold, preventing out-of-memory issues that typically lead to node/system instability. For more info, see this issue.
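Once the cluster is deployed (step 13 below), you can check that the reservations took effect: Allocatable should be lower than Capacity by roughly the reserved amounts (node-01 here stands for any node name from your inventory):

$ kubectl describe node node-01 | grep -A 6 -E 'Capacity|Allocatable'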

11. Return to the metalk8s folder:

$ cd ~/metalk8s

12. Then start the virtual environment:

$ make shell

13. Make sure that you have ssh access to each of the other nodes in your cluster, then run the following:

$ ansible-playbook -i inventory/zenko-cluster -b playbooks/deploy.yml

Deployment typically takes 15-30 minutes. Once it is done, you will see a URL for Kubernetes dashboard access, along with a username/password, in the output of the last task.

Notes

If you forget this password or need to retrieve it again, it is saved under:

metalk8s/inventory/zenko-cluster/credentials/kube_user.creds

The MetalK8s installation created an admin.conf file:

metalk8s/inventory/zenko-cluster/artifacts/admin.conf

This file can be copied from your deployment machine to any other machine that requires access to the cluster (for example, if you did not deploy from your laptop).
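A quick sanity check using that file from the deployment machine (inside the make shell environment, or anywhere kubectl is installed):

$ export KUBECONFIG=~/metalk8s/inventory/zenko-cluster/artifacts/admin.conf
$ kubectl get nodes    # all three nodes should report a Ready status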

MetalK8s 1.1 is now deployed!

Part 2: Deploying Zenko 1.1

1. Clone Zenko repository:

$ git clone https://github.com/scality/zenko ~/zenko
$ cd zenko/

2. Grab the fresh Zenko 1.1 release:

$ git checkout tags/1.1.0
$ cd kubernetes/

3. The MetalK8s installation from part 1 provides you with the latest version of helm. Now it’s time to actually deploy the Zenko instance on the three nodes we have prepared.

Run this command:

$ helm install --name zenko --set ingress.enabled=true \
--set ingress.hosts[0]=zenko.local \
--set cloudserver.endpoint=zenko.local zenko

4. Wait about 15-20 minutes while the pods stabilize.

5. You can confirm that the Zenko instance is ready when all pods are in the Running state. To check:

$ kubectl get pods

Note

It is expected that the queue-config pods will multiply until one succeeds. Any “Completed” or “Error” queue-config pods can then be deleted.
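One way to clean them up, once everything else is Running, is to delete all finished pods in the namespace (note this removes every Succeeded/Failed pod, not only the queue-config ones):

$ kubectl delete pods --field-selector=status.phase=Succeeded
$ kubectl delete pods --field-selector=status.phase=Failed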

Zenko is now deployed!

Part 3: Registering your Zenko instance with Orbit

Orbit is a cloud-based GUI portal to manage the Zenko instance you deployed in the previous two parts. It gives you insight into metrics and lets you create policies and rules to manage the data and replicate it between different public clouds. Here are the steps to register Zenko with Orbit.

1. Find cloudserver manager pod:

$ kubectl get pods | grep cloudserver-manager

2. Use the pod name to find the Zenko instance ID:

$ kubectl logs zenko-cloudserver-manager-7f8c8846b-5gjxk | grep 'Instance ID'
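If you prefer not to copy the pod name by hand, the two steps can be combined into a single, illustrative one-liner:

$ kubectl logs $(kubectl get pods -o name | grep cloudserver-manager) | grep 'Instance ID'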

3. Take the Instance ID from the log output and head to Orbit to register your Zenko instance with it.

Your Orbit instance is now registered!

If you successfully launched a Zenko 1.1 instance with MetalK8s and Orbit using this tutorial, let us know. If you get stuck or have any questions, let us know too! Visit the forum and we can troubleshoot any issues together. Your input will also help us refine and update this tutorial along the way. We’re always looking for feedback on our features and tutorials.

How I made a Kubernetes cluster with five Raspberry Pis

Working as a DevOps engineer at Scality, I’m exposed to Kubernetes clusters and CI/CD pipelines across the major clouds. My day-to-day tasks include maintaining Zenko, so I typically have large amounts of compute and storage resources at my disposal to test and deploy new infrastructure.

I love Kubernetes and would try to deploy a cluster on anything from a couple of toasters to AWS. Then one day I heard the announcement from Rancher about their micro Kubernetes distribution called K3s (five less than K8s).

I immediately was hit with an undeniable desire to set up a small, physically portable cluster and test the guts out of K3s. Being a long-time Raspberry Pi enthusiast, naturally, I saw this as an opportunity for a passion project.

The idea is simple but interesting: take some Raspberry Pis and string them together as a Kubernetes cluster. It is far from a unique idea, as this has been done before; however, combining it with this lightweight Kubernetes would leave enough room to fit some workloads. I started to dream about Zenko on a remote edge device, where asynchronous replication to the cloud would thrive. I thought: “Let’s do this!”

The shopping list for a tiny Kubernetes cluster

Start with the shopping list:

  • Five Raspberry Pi 3B+ boards (plus memory cards)
  • C4 Labs “Cloudlet” 8-bay case
  • Portable TP-Link router
  • Anker 6-port 60-watt USB charger
  • 8-port switch

Operating System hustle

There are countless great guides on how to set up a Raspberry Pi with the various OSes available. On the initial setup, I started with just a basic Raspbian to test out and see if I could find or build ARM images for all the Zenko services. I was able to easily build key components – CloudServer and Backbeat images – with the ‘arm32v6/node’ Docker image as a base.

After that was successful, I decided to test MongoDB, which is the core database we use for our metadata engine. Here’s where I hit my first problem: I found out that MongoDB 3.x only supports 64-bit operating systems. This is something I’ve taken for granted for so long that I forgot it could be an issue. Fortunately, Raspberry Pis 2 and newer use 64-bit ARM chips, but I still had to find a new OS since Raspbian only comes in a 32-bit flavor.

While there is no definitive list, most distributions have an ‘aarch64’ version that typically works with the newer Raspberry Pis. I settled on Fedora 29, mostly because it has a CLI tool to load the image onto the SD card, add an SSH public key, and resize the root filesystem to fill the SD card. These are all manual configurations that typically need to be done after you first boot up your Pi. This also meant that I could set up all five of my Pis without hooking up a keyboard and monitor, and immediately have headless servers running.

Note: you can download Fedora from here.
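The tool in question is, to the best of my knowledge, arm-image-installer; an illustrative invocation could look like the following (the image file name, target board, SD card device, and key path are assumptions for your setup; double-check the flags with arm-image-installer --help before running):

$ sudo arm-image-installer --image=Fedora-Server-29-1.2.aarch64.raw.xz \
    --target=rpi3 --media=/dev/sdX \
    --addkey=$HOME/.ssh/id_rsa.pub --resizefs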

So with all my Pis set up, I was essentially left with just setting up the Kubernetes cluster. While I’ve deployed countless clusters on virtual machines and bare-metal servers, to the point that I feel like I could do it in my sleep, this time was completely unlike any I’ve done before. Thanks to the K3s installer, I had a cluster with four dedicated nodes and one master/node deployed in under five minutes (not including my RPi setup time). Their bootstrap script makes it as easy as this:

# On the control server node
curl -sfL https://get.k3s.io | sh -

# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
k3s kubectl get node

# To set up an agent node run the below. The value for K3S_TOKEN comes from /var/lib/rancher/k3s/server/node-token on your server
curl -sfL https://get.k3s.io | K3S_URL=https://master-node-hostname:6443 K3S_TOKEN=XXX sh -
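To drive the cluster from a laptop with a regular kubectl rather than the bundled k3s kubectl, one approach (master-node-hostname is the same placeholder as above, and the user you copy as depends on your setup) is to copy and adjust that kubeconfig:

$ scp root@master-node-hostname:/etc/rancher/k3s/k3s.yaml ./k3s.yaml
$ sed -i 's/127.0.0.1/master-node-hostname/' k3s.yaml   # point the config at the master instead of localhost
$ export KUBECONFIG=$PWD/k3s.yaml
$ kubectl get nodes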

Putting Kubernetes on a mini-rack

With the 5-node Pi cluster operational, it was time to set everything up in a portable format. The goals were to have only a single power cable for everything and to connect easily to WiFi wherever we take it, without the hassle of manually connecting each Raspberry Pi to the WiFi at every new location. The solution was simple: make the network itself equally portable with a small switch and a portable router.

The Cloudlet case from C4 Labs is very well thought out, with wire management in mind, and comes with straightforward instructions for installing all the Raspberry Pis.

In our case, I wanted to be sure to leave room for the portable router, switch, and power brick as well. Fortunately and purely by accident, the length of the switch we ordered fit the exact internal height of the case allowing us to mount the switch vertically. This left us room underneath the Pis for the power brick and allowed us to mount the portable TP-link router in one of the remaining bays.

With all the fans mounted, Pis plugged in, and wires managed, we still had one very obvious issue: both the 8-port switch and the USB power brick needed their own plugs. Looking over the switch, I quickly noticed that it ran off 5V, which means it could easily run off USB. But I had used up all six ports of the power brick for the five RPis and the portable router.

What’s next?

While this is it for me today, the goal now is to put this diminutive cluster through some workloads to gauge performance, and eventually turn the setup process into some simple Ansible playbooks to streamline the bootstrapping of multiple nodes. Let me know what you think or ask me anything on the forum.

Deploy Zenko on your laptop with Minikube

Since Kubernetes is Zenko’s default choice for deployment, Minikube is the quickest way to develop Zenko locally. Minikube is described in the official docs as

[…] a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

To install Minikube you’ll need VirtualBox (or another virtualization option); follow the instructions for your operating system. The video tutorial contains an installation walk-through.

Once Minikube, kubectl, and helm are installed, start Minikube with Kubernetes version 1.9 or newer and preferably at least 4GB of RAM. Then enable the Minikube ingress addon for communication.

$ minikube start --kubernetes-version=v1.9.0 --memory 4096
$ minikube addons enable ingress

Once Minikube has started, run the Helm initialization.

$ helm init --wait

With K8s now running, clone the Zenko repository and go into the Helm charts directory to retrieve all dependencies:

$ git clone https://github.com/scality/Zenko.git
$ cd ./Zenko/charts
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
"incubator" has been added to your repositories

$ helm dependency build zenko/
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 8 charts
Downloading prometheus from repo https://kubernetes-charts.storage.googleapis.com/
Downloading mongodb-replicaset from repo https://kubernetes-charts.storage.googleapis.com/
Downloading redis from repo https://kubernetes-charts.storage.googleapis.com/
Downloading kafka from repo http://storage.googleapis.com/kubernetes-charts-incubator
Downloading zookeeper from repo http://storage.googleapis.com/kubernetes-charts-incubator
Deleting outdated charts

With your dependencies built, you can run the following shell command to deploy a single node Zenko stack with Orbit enabled.

$ helm install --name zenko \
--set prometheus.rbac.create=false \
--set zenko-queue.rbac.enabled=false \
--set redis-ha.rbac.create=false \
-f single-node-values.yml zenko
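The pods take a few minutes to settle; you can keep an eye on them until everything is Running with:

$ kubectl get pods -w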

To view the K8s dashboard, type the following command, which will launch the dashboard in your default browser:

$ minikube dashboard

The endpoint can now be accessed via the K8s cluster IP (run minikube ip to display it). You now have a running Zenko instance in a mini Kubernetes cluster. To connect your instance to Orbit, find the instance ID:

$ kubectl logs $(kubectl get pods --no-headers=true -o \
custom-columns=:metadata.name | grep cloudserver-front) | \
grep Instance | tail -n 1

The output will look something like this:

{"name":"S3","time":1529101607249,"req_id":"9089628bad40b9a255fd","level":"info","message":"this deployment's Instance ID is 6075357a-b08d-419e-9af8-cc9f391ca8e2","hostname":"zenko-cloudserver-front-f74d8c48c-dt6fc","pid":23}

The Instance ID in this case is 6075357a-b08d-419e-9af8-cc9f391ca8e2. Log in to Orbit and register this instance. To test your Minikube deployment, assign a hostname to the cluster’s ingress IP address (as in the /etc/hosts example below) to make things easier, then test with s3cmd.

$ cat /etc/hosts
127.0.0.1   localhost
127.0.1.1   machine
192.168.99.100 minikube

By default, Minikube only exposes SSL port 443, so you’ll want your client/app to use SSL. However, since Minikube uses a self-signed certificate, you may get a security error. You can either configure Minikube to use a trusted certificate or simply ignore the certificate.
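As a quick connectivity check before configuring a client, you can hit the endpoint directly and skip certificate verification (assuming the minikube hostname entry from the previous step); getting back an S3-style XML error rather than a connection failure suggests the service is reachable:

$ curl -k https://minikube/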

$ cat ~/.s3cfg
[default]
access_key = OZN3QAFKIS7K90YTLYP4
secret_key = 1Tix1ZbvzDtckDZLlSAr4+4AOzxRGsOtQduV297p
host_base = minikube
host_bucket = %(bucket).minikube
signature_v2 = False
use_https = True

$ s3cmd ls --no-check-certificate
$

Head to Zenko forum to ask questions!