5 free tools to navigate through Docker containers’ security

In this day and age, you are either already using Docker containers or considering using them. Containers have made a huge impact on the way teams architect, develop and ship software. No wonder – they are lightweight and scalable, and help us create an extremely portable environment to run our applications anywhere.

The problem with containers

To understand the problem we need to get our basics down. A container is an instance of an executable package that includes everything needed to run an application: code, configuration files, runtime, libraries and packages, environment variables, etc.

A container is launched based on something called an image, which consists of a series of layers. For Docker, each layer represents an instruction in a text file called a Dockerfile. A parent image is the base on which your custom image is built; most Dockerfiles start from a parent image.
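
For example, you can list the layers of any locally available image with the standard Docker CLI (any public image works here; node:10-alpine is just an illustration):

docker pull node:10-alpine
docker history node:10-alpine

Each row in the output corresponds to an instruction that created a layer and shows the size that layer added to the image.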

When talking about container images, we often focus on one particular piece of software that we are interested in. However, an image includes the whole collection of software that plays a supporting role to the featured component. Even a developer who regularly works with a particular image may have only a superficial understanding of everything in the image.

It’s time-consuming to track all the libraries and packages included in an image once it’s built. Moreover, developers casually pull images from public repositories where it is impossible to know who built an image, what they used to build it and what exactly is included in it. But when you ship your application along with everything that is in the container, you are responsible for security. If there is a security breach, it is your reputation that could be destroyed.

Container Scanners

It is difficult to track what is going on under the hood of a container image. Image scanners have emerged to address this issue, giving users varying degrees of insight into Docker container images. Most of the tools execute the same set of actions:

  • Scan the Docker image binary, deconstruct it into layers and put together a detailed bill of materials of its contents.
  • Take a snapshot (index) of the OS and packages.
  • Compare the image’s bill of materials against a database of known vulnerabilities and report any matches.

Even though these tools are similar, they are not the same. When choosing one, you need to consider how effective it is:

  • How deep can the scan go? In other words, the scanner’s ability to see inside the image layers and their contents (packages and files).
  • How up-to-date the vulnerability lists are.
  • How the results of the scan are presented, in which form/format.
  • Capabilities to reduce noisy data (duplication).

5 tools to consider

Clair – a tool from the well-known and loved CoreOS. It is a scanning engine for static analysis of vulnerabilities in containers or clusters of containers (like Kubernetes). Static means that the actual container image doesn’t have to be executed, so you can catch security threats before they enter your system.

Clair maintains a comprehensive vulnerability database from configured CVE resources. It exposes APIs that clients invoke to perform scans of images. A scan indexes the features present in the image and stores them in the database. Clients can then use the Clair API to query the database for vulnerabilities of a particular image.
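
As an illustration, a common way to try Clair locally is with the community clair-scanner client and pre-built database images. The image names and flags below are one possible community setup, not part of Clair itself, so treat this as a sketch:

docker run -d --name clair-db arminc/clair-db:latest
docker run -d --name clair --link clair-db:postgres -p 6060:6060 arminc/clair-local-scan:latest
./clair-scanner --ip <your-host-ip> --clair=http://localhost:6060 node:10-alpine

The scanner pushes the image layers to the Clair API and prints any known vulnerabilities it finds.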

Anchore – a well-maintained and powerful automated scanning and policy enforcement engine that can be integrated into CI/CD pipelines to scan Docker images. Users can create whitelists and blacklists and enforce rules.

It is available as a free online SaaS navigator to scan public repositories, and as an open source engine for on-prem scans. The on-prem engine can be wired into your CI/CD through CLI or REST to automatically fail builds that don’t pass defined policies.

Below is an example of Anchore scan results for the Zenko CloudServer Docker image (the list of Node.js dependencies).

Anchore Engine ultimately provides a policy evaluation result for each image: pass/fail against policies defined by the user. Even though it comes with some predefined security and compliance policies, functions and decision gates, you can also write your own analysis modules and reports.
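
To give a feel for the workflow, a typical interaction with a running Anchore Engine through its CLI looks roughly like this (a sketch only; it assumes the engine is already deployed and the anchore-cli connection settings are configured):

anchore-cli image add docker.io/library/node:10-alpine
anchore-cli image wait docker.io/library/node:10-alpine
anchore-cli image vuln docker.io/library/node:10-alpine all
anchore-cli evaluate check docker.io/library/node:10-alpine

The last command returns the pass/fail policy evaluation mentioned above.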

Dagda – a tool that performs static analysis of known vulnerabilities in Docker images and containers. Dagda retrieves information about the software installed in your Docker images, such as OS packages and programming-language dependencies, and checks each product and version against vulnerability information previously stored in a MongoDB instance. This database includes known vulnerabilities as CVEs (Common Vulnerabilities and Exposures), BIDs (Bugtraq IDs), RHSAs (Red Hat Security Advisories) and RHBAs (Red Hat Bug Advisories), as well as known exploits from the Offensive Security database.

On top of that, it uses ClamAV to detect viruses and malware. All reports from scanning an image or container are stored in MongoDB, where the user can access them.
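
A rough idea of the Dagda workflow, assuming you have cloned the project and have MongoDB running (the exact commands may differ slightly between versions):

python3 dagda.py start                                 # start the Dagda server
python3 dagda.py vuln --init                           # populate the vulnerability database
python3 dagda.py check --docker_image node:10-alpine   # analyze an image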

Docker Bench for Security – the Center for Internet Security came up with a solid step-by-step guide on how to secure Docker. Based on it, the Docker team released a tool (a shell script) that runs as a small container and checks for these best practices around deploying Docker containers in production.
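
Running it is as simple as cloning the repository and executing the script on the Docker host, roughly as described in the project’s README:

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh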

OpenSCAP – a full ecosystem of tools that assist with the measurement and enforcement of a security baseline. It has a specific container-oriented tool, oscap-docker, that performs CVE scans of containers and checks them against predefined policies.

OSCAP Base is the NIST-certified command line scanner. OSCAP Workbench is a graphical user interface that presents the scanner’s results and aims to be intuitive and user-friendly.
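
For example, a CVE scan of an image with oscap-docker looks roughly like this (the exact arguments depend on your OpenSCAP version and the SCAP content available on your system):

oscap-docker image-cve registry.access.redhat.com/rhel7 --report report.html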

Wrap up

These tools appeared because Docker’s popularity has grown so fast. Only two years ago it would have been hard to trust them, as they were only starting to pop up. Today, they have matured alongside Docker containers and the challenges that came with the rise of this technology.

Next week, I will go through other tools and scanners that are more OSS compliance-oriented.

That’s it for today. Stay safe and let’s chat on the forum.

What are Kubernetes Operators and why you should use them

First, containers and microservices transformed the way we create and ship applications, shifting the challenge to orchestrating many moving pieces at scale. Then Kubernetes came to save us. But the salty “helmsman” needs a plan to steer a herd of microservices, and Operators are the best way to provide one.

Hello Operator, what are you exactly?

The most commonly used definition online is: “Operators are a way of packaging, deploying and managing your application that runs atop Kubernetes”. In other words, Operators help you build cloud-native applications by automating deployment, scaling, backup and restore – all while being Kubernetes-native applications themselves, and therefore almost completely independent from the platform they run on.

CoreOS (which originally proposed the Operator concept in 2016) suggests thinking of an Operator as an extension of the software vendor’s engineering team that watches over your Kubernetes environment and uses its current state to make decisions in milliseconds. An Operator is essentially codified knowledge of how to run a Kubernetes application.

Why Operators?

Kubernetes has been very good at managing stateless applications without any custom intervention.

But think of a stateful application, such as a database running on several nodes. If a majority of the nodes go down, you’ll need to restore the database from a specific point by following a specific set of steps. Scaling nodes up, upgrading or recovering from disaster – these kinds of operations require knowing what the right thing to do is. Operators help you bake those difficult patterns into a custom controller.

Some perks you get:

  • Less complexity: Operators simplify the processes of managing distributed applications. They take the Kubernetes promise of automation to its logical next step.
  • Transferring human knowledge to code: very often application management requires domain-specific knowledge. This knowledge can be transferred to the Operator.
  • Extended functionality: Kubernetes is extensible – it offers interfaces to plug in your own network, storage and runtime solutions. Operators make it possible to extend the K8s APIs with application-specific logic!
  • Useful in most modern settings: Operators can run wherever Kubernetes can run: in public, hybrid, private or multi-cloud environments, or on-premises.

Diving deeper

An Operator is basically a Kubernetes custom controller managing one or more custom resources. Kubernetes introduced custom resource definitions (CRDs) in version 1.7, which made the platform extensible. The application you want to watch is defined in K8s as a new object: a CRD that has its own YAML spec and an object type that the API server can understand. That way, you can define any specific criteria to watch for in the custom spec.

A CRD is a means to specify a configuration. The cluster then needs controllers to monitor its state and reconcile it with that configuration. Enter Operators. They extend K8s functionality by allowing you to declare a custom controller that keeps an eye on your application and performs custom tasks based on its state. The way an Operator works is very similar to native K8s controllers, but it mostly uses custom components that you define.
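
As an illustration, here is a minimal, hypothetical CRD for a “Database” application, together with a custom resource of that kind that an Operator’s controller would watch and reconcile (the names and fields are made up for the example):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Database
    plural: databases
    singular: database
---
apiVersion: example.com/v1alpha1
kind: Database
metadata:
  name: my-database
spec:
  replicas: 3
  backupSchedule: "0 2 * * *"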

This is a more specific list of what you need in order to create your custom operator:

  • A custom resource (CRD) spec that defines the application we want to watch, as well as an API for the CR
  • A custom controller to watch our application
  • Custom code within the new controller that dictates how to reconcile our CR against the spec
  • An operator to manage the custom controller
  • Deployment for the operator and custom resource

Where to start developing your Operator

Writing a CRD schema and its accompanying controller can be a daunting task. Currently, the most commonly used tool for creating Operators is the Operator SDK. It is an open-source toolkit that makes it easier to build and manage Kubernetes-native applications – Operators. The framework also includes the ability to monitor and collect metrics from operator-built clusters and to administer multiple Operators with the Operator Lifecycle Manager.
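
To give a feel for the workflow, scaffolding a Go-based Operator with an early Operator SDK release looked roughly like this (command names have evolved across SDK versions, and the API group, kind and image name here are just examples):

operator-sdk new app-operator
cd app-operator
operator-sdk add api --api-version=app.example.com/v1alpha1 --kind=AppService
operator-sdk add controller --api-version=app.example.com/v1alpha1 --kind=AppService
operator-sdk build quay.io/example/app-operator:v0.0.1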

You should also check this Kubernetes Operator Guidelines document on design, implementation, packaging, and documentation of a custom Operator.

Operator development usually starts by automating an application’s installation and then matures to perform more complex automation. So I would suggest starting small and getting your feet wet by creating a basic Operator that deploys an application or does something equally simple.

The framework has a maturity model for the provided tools that you can use to build an Operator. As you can see, using the Helm Operator Kit is probably the easiest way to get started, but it is not as powerful if you wish to build a more sophisticated tool.

Operator maturity model from Operator SDK

Explore other operators

The number of custom Operators for well-known applications is growing every day. In fact, Red Hat, in collaboration with AWS, Google Cloud and Microsoft, launched OperatorHub.io just a couple of months ago. It is a public registry for finding Kubernetes Operator-backed services. You might find one that is useful for some components of your application, or list your custom Operator there.

Wrapping up

Kubernetes coupled with Operators provides cloud-agnostic application deployment and management. It is so powerful that it might lead us to treat cloud providers almost like a commodity, as you will be able to migrate freely between them and offer your product on any possible platform.

But is it a step toward making Kubernetes easier, or does it actually add even more complexity? Is it yet another available tool that just makes things more complicated for someone new? Is it all just going to explode in our faces? So many questions…

If you have any thoughts or questions stop by the forum 🙂

How to deploy Zenko on Azure Kubernetes Service

In the spirit of the Deploy Zenko anywhere series, I would like to guide you through deploying Zenko on AKS (Azure Kubernetes Service) today. Azure is a constantly expanding worldwide network of data centers maintained by Microsoft.

You can find previous tutorials on how to deploy Zenko here:

Prerequisites

Initial VM

We are going to create an initial virtual machine on Azure that will be used to spin up and manage a Kubernetes cluster later. But first, create a resource group. Azure uses the concept of resource groups to group related resources together. We will create our computational resources within this resource group.

az group create \
  --name=<YourResourceGroupName> \
  --location=centralus

After that, you can follow this tutorial to create a virtual machine. It is pretty straightforward, and a rough CLI equivalent is sketched after the checklist below.

Things I want to mention:

  • choose your resource group within which the virtual machine is created
  • choose CentOS operating system
  • create a public IP address
  • expose SSH and HTTP ports at least
  • add your local computer’s SSH public keys (the computer you will use to connect to the VM), as we need a way to connect to the machine later
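
For reference, a rough CLI sketch of those steps (flag names vary slightly across Azure CLI versions; the VM name and size below are just examples, and --generate-ssh-keys reuses your local ~/.ssh keys if they exist):

az vm create \
  --resource-group <YourResourceGroupName> \
  --name <YourVmName> \
  --image CentOS \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys

az vm open-port \
  --resource-group <YourResourceGroupName> \
  --name <YourVmName> \
  --port 80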

Once the VM is created, you can connect to it through SSH and the public IP address from your local computer.

Azure CLI

To use the Kubernetes Service on Azure, we need a command line tool to interact with it. You can choose between the Azure interactive shell and installing the command line tool locally. In this case, I find the CLI far easier to work with.

Install the Azure CLI tool on the new VM and try to log in to Azure. This command will take you to a web browser page where you can confirm the login info.

az login

Create a Kubernetes cluster

To keep things neat, I suggest creating a directory inside the VM:

mkdir <ClusterName>
cd <ClusterName>

To secure your future cluster, generate an SSH key pair:

ssh-keygen -f ssh-key-<ClusterName>

It will prompt you to add a passphrase, which you can leave empty if you wish. This will create a public key named ssh-key-<ClusterName>.pub and a private key named ssh-key-<ClusterName> in the folder we created.

The following command will request a Kubernetes cluster within the resource group that we created earlier:

az aks create --name <ClusterName> \
              --resource-group <YourResourceGroupName> \
              --ssh-key-value ssh-key-<ClusterName>.pub \
              --node-count 3 \
              --node-vm-size Standard_D2s_v3

In the code above:

  • --name is the name you want to use to refer to your cluster
  • --resource-group is the resource group you created in the beginning
  • --ssh-key-value is the SSH public key created for this cluster
  • --node-count is the number of nodes you want in your Kubernetes cluster (I am using 3 for this tutorial)
  • --node-vm-size is the size of the nodes you want to use, which varies based on what you are using your cluster for and how much RAM/CPU each of your users needs. There is a list of all possible node sizes for you to choose from, but not all may be available in your location (see the commands after this list to check). If you get an error while creating the cluster, try changing either the region or the node size.
  • The command installs the default version of Kubernetes. You can pass --kubernetes-version to install a different version.
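
To check which node sizes and Kubernetes versions are available in your region before creating the cluster, you can, for example, run:

az vm list-sizes --location centralus --output table
az aks get-versions --location centralus --output table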

Creating the cluster might take some time. Once it is ready, you will see information about the new Kubernetes cluster printed in the terminal.

Install Kubernetes CLI

To work with the cluster we need to install kubectl, the Kubernetes command line tool. Run the following command:

az aks install-cli

The next step is to get credentials from Azure:

az aks get-credentials \
  --name <ClusterName> \
  --resource-group <YourResourceGroupName>

Now, running kubectl get nodes shows all the nodes and their status. It looks good, so we can move on.

Install Helm

Helm is the first application package manager running atop Kubernetes, and we can use the official Zenko Helm charts to deploy Zenko to our cluster. Helm lets you describe the application structure through convenient charts and manage it with simple commands.

1. Download helm v2.13.1

2. Unpack it and move it to its desired destination:

tar -zxvf helm-v2.13.1-linux-386.tar.gz
sudo mv linux-386/helm /usr/local/bin/helm
helm version

The first service we need is Tiller. It runs inside your Kubernetes cluster and manages releases (installations) of your charts. Create a serviceaccount for Tiller:

kubectl create serviceaccount tiller --namespace kube-system

Create an rbac-config.yaml file that will configure the Tiller service:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Lastly, apply the rbac-config.yaml file:

kubectl apply -f rbac-config.yaml
helm init --service-account tiller

Install Zenko

Get the latest release of Zenko (or whichever release you prefer) from the releases page:

wget https://github.com/scality/Zenko/archive/1.0.2-hotfix.1.zip
unzip 1.0.2-hotfix.1.zip

Go to the kubernetes folder and run the following commands that will deploy Zenko:

cd Zenko-1.0.2-hotfix.1/kubernetes
helm init
helm install --name zenko --set ingress.enabled=true \
  --set ingress.hosts[0]=zenko.local \
  --set cloudserver.endpoint=zenko.local zenko

This step may take up to 10 minutes. After the setup is done, you can run this command to see all Zenko pods and their availability:

kubectl get pods

Wait a few more minutes for all the services to start, then run this command to get your Instance ID; you will need it to connect to Orbit:

kubectl logs $(kubectl get pods --no-headers=true -o custom-columns=:metadata.name | grep cloudserver-manager) | grep Instance | tail -n 1

Connect your instance to Orbit

Once you have the Instance ID, copy it and go to the Orbit signup page. After signing up, you will have the choice to start a sandbox or to connect an existing instance – the latter is what you need. Enter your ID and create a name for your Zenko cluster. Done! Start managing your data.


				
					
Deploy Zenko on Amazon EKS in 30 minutes

Do you have half an hour and an AWS account? If so, you can install Zenko and use Orbit to manage your data. Below is a step-by-step guide with time estimates to get started.

If you are an AWS user with appropriate permissions or policies to create EC2 instances and EKS clusters, you can dive into this tutorial. Otherwise, contact your administrator, who can add permissions (full documentation).

Initial Machine Setup (estimated time: 10 minutes):

For this tutorial, we use a jumper EC2 instance with Amazon Linux to deploy and manage our Kubernetes cluster. A power user can use their own workstation or laptop to manage the Kubernetes cluster.

Follow this guide to set up your EC2 instance and connect to your new instance using the information there. Once connected to the instance, install the applications that will help set up the Kubernetes cluster.

Install Kubectl, a command-line tool for running commands against Kubernetes clusters.

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

Verify that kubectl is installed (expect a similar output):

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}

Download aws-iam-authenticator, a tool to use AWS IAM credentials to authenticate to a Kubernetes cluster.

$ curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator

$ chmod +x ./aws-iam-authenticator
$ mkdir bin
$ cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH

Install eksctl, a simple CLI tool for creating clusters on EKS – Amazon’s managed Kubernetes service for EC2.

$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

$ sudo mv /tmp/eksctl /usr/local/bin

Configure AWS credentials:

$ mkdir ~/.aws
$ vim ~/.aws/credentials
$ cat ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
$ export AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials

Verify credentials work. If the output looks similar, you are ready to launch your Kubernetes cluster:

$ eksctl get clusters
No clusters found

Deploy a Three-Node Kubernetes Cluster for Zenko (estimated time: 10–15 minutes):

$ eksctl create cluster --name=zenko-eks-cluster --nodes=3 --region=us-west-2

Once you get the line below, your cluster is ready:

[✔]  EKS cluster "zenko-eks-cluster" in "us-west-2" region is ready
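
At this point, eksctl has already written the cluster credentials to your kubeconfig, so you can optionally verify that kubectl sees the three worker nodes:

$ kubectl get nodes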

Install Helm:

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh 
$ bash ./get_helm.sh
$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

EKS requires role-based access control to be set up. The first step is to create a service account for Tiller:

$ kubectl create serviceaccount tiller --namespace kube-system

Next, bind the Tiller service account to the cluster-admin role: make an rbac-config.yaml file and apply it.

$ cat rbac-config.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-role-binding
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

$ kubectl apply -f rbac-config.yaml
$ helm init --service-account tiller

Deploy Zenko (estimated time: 10 minutes)

Install Git:

$ sudo yum install git

Clone Zenko:

$ git clone https://github.com/scality/Zenko/

Go to the kubernetes folder and deploy Zenko. This will take about 10 minutes.

$ cd Zenko/kubernetes/
$ helm init
$ helm install --name zenko --set ingress.enabled=true \
--set ingress.hosts[0]=zenko.local \
--set cloudserver.endpoint=zenko.local zenko

Connect EKS Zenko to Orbit

Find the Instance ID to use for registering your instance:

$ kubectl logs $(kubectl get pods --no-headers=true -o \
custom-columns=:metadata.name | grep cloudserver-manager) | grep Instance | tail -n 1

{"name":"S3","time":1548793280888,"req_id":"a67edf37254381fc4781","level":"info","message":"this deployment's Instance ID is fb3c8811-88c6-468c-a2f4-aebd309707ef","hostname":"zenko-cloudserver-manager-8568c85497-5k5zp","pid":17}

Copy the ID and head to Orbit to paste it in the Settings page. Once the Zenko instance is connected to Orbit you’ll be able to attach cloud storage from different providers.

If you have any questions or want to show off a faster time than 30 minutes, join us at the Zenko forum.

Photo by chuttersnap on Unsplash

Official Helm charts to install CloudServer on Kubernetes

CloudServer can now be deployed on a Kubernetes cluster through Helm. The CloudServer Helm chart makes it easy to add an S3-compatible storage system to a K8s cluster. CloudServer can store data locally or can be used with existing S3-compatible backends by supplying credentials in the chart’s values.yaml file. See the full documentation in the Helm charts GitHub repository.

To start using CloudServer on an existing Kubernetes cluster, run:

$ helm install stable/cloudserver

To connect CloudServer to S3-compatible services, fill in the cloud backend credentials in the values.yaml file. An AWS configuration may look like this:

api:
  locationConstraints:
    awsbackend:
      type: aws_s3
      legacyAwsBehavior: true
      details:
        bucketMatch: true
        awsEndpoint: s3.amazonaws.com
        bucketName: my-bucket-name
        credentials:
          accessKey: my-access-key
          secretKey: my-secret-key
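
Once values.yaml is filled in, you can pass it at install time, for example:

$ helm install stable/cloudserver -f values.yaml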

A number of configurations are also available at install. Some common examples include:

  • Expose the Cloudserver service through ingress:
$ helm install stable/cloudserver --set api.ingress.enabled=true
  • Configure S3 credentials at install:
$ helm install stable/cloudserver --set api.credentials.accessKey="hello" --set api.credentials.secretKey="world"
  • Enable autoscaling:
$ helm install stable/cloudserver --set api.autoscaling.enabled=true
  • Disable data persistence:
$ helm install stable/cloudserver --set localdata.persistentVolume.enabled=false # disable persistence

Minikube and Docker for Mac Edge also support single-node Kubernetes for local testing; Docker has a step-by-step guide for such a setup. CloudServer’s full documentation covers other details. Try it out and let us know what you think on the Zenko forums.

How To Deploy Zenko On Google Kubernetes Engine

Zenko can be deployed on a managed Kubernetes cluster on Google Cloud (GKE) using the Helm charts distributed in its repository. There are many other ways to run Zenko on a Kubernetes cluster, including our favorite Kubernetes distribution, MetalK8s. The Helm charts are designed to isolate how Zenko is deployed from where it is deployed: any Kubernetes cluster is good enough to get started. In a way, this mirrors what Zenko itself does for developers – giving them the freedom to choose the best cloud storage system while abstracting away complex choices such as supporting multiple APIs or aggregating metadata. GKE is an easy way to quickly set up a cloud-agnostic storage platform.

The first step is to start a new cluster following the instructions in the Google Cloud documentation. For better performance, you’ll need a cluster with 3 nodes, each with 2 vCPUs and 7.5 GB of RAM. Once the cluster is running, connect to it and install Helm.
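
If you prefer the command line to the console, creating and connecting to such a cluster looks roughly like this with the gcloud CLI (the cluster name and zone are just examples):

$ gcloud container clusters create zenko-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type n1-standard-2
$ gcloud container clusters get-credentials zenko-cluster --zone us-central1-a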

Create Role For Tiller

Google Kubernetes Engine requires Role-Based Access Control to be set up. The first step is to create a serviceaccount for Tiller:

$ kubectl create serviceaccount tiller --namespace kube-system

Check that the correct context is set:

$ kubectl get nodes
NAME                                       STATUS   ROLES    AGE VERSION
gke-cluster-1-default-pool-9ad69bcf-4g2n   Ready    <none>   1m  v1.8.10-gke.0
gke-cluster-1-default-pool-9ad69bcf-frj5   Ready    <none>   1m  v1.8.10-gke.0
gke-cluster-1-default-pool-9ad69bcf-rsbt   Ready    <none>   1m  v1.8.10-gke.0

Install Helm on Kubernetes Cluster

Helm is not available by default on GKE and needs to be installed.

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ bash ./get_helm.sh

Once that’s completed, start Helm:

$ helm init --service-account tiller --wait

Deploy Zenko on GKE

Clone Zenko’s repo and go into the charts directory:

$ git clone https://github.com/scality/Zenko.git
$ cd ./Zenko/charts

Once you have the repo cloned you can retrieve all dependencies:

$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
"incubator" has been added to your repositories

$ helm dependency build zenko/
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 8 charts
Downloading prometheus from repo https://kubernetes-charts.storage.googleapis.com/
Downloading mongodb-replicaset from repo https://kubernetes-charts.storage.googleapis.com/
Downloading redis from repo https://kubernetes-charts.storage.googleapis.com/
Downloading kafka from repo http://storage.googleapis.com/kubernetes-charts-incubator
Downloading zookeeper from repo http://storage.googleapis.com/kubernetes-charts-incubator
Deleting outdated charts

With your dependencies built, you can run the following shell command to deploy a three-node Zenko stack with Orbit enabled.

$ helm install --name zenko --set ingress.enabled=true zenko

Connect GKE Zenko To Orbit

Find the Instance ID to use for registering your instance:

$ kubectl logs $(kubectl get pods --no-headers=true -o \
custom-columns=:metadata.name | grep cloudserver-front) | grep \
Instance | tail -n 1

The output will look something like this:

{"name":"S3",
"time":1529101607249,
"req_id":"9089628bad40b9a255fd",
"level":"info",
"message":"this deployment's Instance ID is 6075357a-b08d-419e-9af8-cc9f391ca8e2",
"hostname":"zenko-cloudserver-front-f74d8c48c-dt6fc",
"pid":23}

Copy the ID and head to Orbit to paste it in the Settings page. Once the Zenko instance is connected to Orbit you’ll be able to attach cloud storage from different providers.

Photo by Silvio Kundt on Unsplash