First, containers and microservices transformed the way we create and ship applications, shifting the challenge to orchestrating many moving pieces at scale. Then Kubernetes came to save us. But the salty “helmsman” still needs a plan to steer a herd of microservices, and Operators are the best way to provide one.
Hello Operator, what are you exactly?
The most commonly used definition online is: “Operators are a method of packaging, deploying and managing an application that runs atop Kubernetes.” In other words, Operators help you build cloud-native applications by automating deployment, scaling, backup and restore. And because an Operator is a Kubernetes-native application itself, it is almost entirely independent of the platform it runs on.
CoreOS (who originally proposed the Operators concept in 2016) suggests thinking of an Operator as an extension of the software vendor’s engineering team that watches over your Kubernetes environment and uses its current state to make decisions in milliseconds. An Operator is essentially codified knowledge of how to run a Kubernetes application.
Kubernetes has been very good at managing stateless applications without any custom intervention.
But think of a stateful application, such as a database running on several nodes. If a majority of the nodes go down, you’ll need to restore the database from a specific point by following a precise sequence of steps. Scaling nodes up, upgrading, or recovering from disaster – these kinds of operations require knowing the right thing to do. And Operators help you bake those difficult patterns into a custom controller.
Some perks you get:
- Less complexity: Operators simplify the processes of managing distributed applications. They take the Kubernetes promise of automation to its logical next step.
- Transferring human knowledge to code: application management very often requires domain-specific knowledge. This knowledge can be encoded in the Operator.
- Extended functionality: Kubernetes is extensible – it offers interfaces to plug in your network, storage, runtime solutions. Operators make it possible to extend K8s APIs with application specific logic!
- Useful in most modern settings: Operators can run wherever Kubernetes can run: in public, private, hybrid or multi-cloud environments, or on-premises.
An Operator is basically a Kubernetes custom controller managing one or more custom resources. Kubernetes introduced custom resource definitions (CRDs) in version 1.7, and the platform became extensible. The application you want to watch is defined in K8s as a new object type: a CRD with its own YAML spec that the API server can understand. That way, you can define any application-specific criteria in the custom spec to watch out for.
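As a rough sketch, a CRD for a hypothetical `MyApp` application could look like the manifest below (the group `example.com`, the kind `MyApp`, and the `replicas` field are all illustrative, not from any real project):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: myapps.example.com
spec:
  group: example.com
  names:
    kind: MyApp
    plural: myapps
    singular: myapp
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
```

Once this is applied, the API server understands `kind: MyApp` objects just like built-in ones, and `kubectl get myapps` works out of the box.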
A CRD is a means of specifying a configuration. The cluster needs controllers to monitor its state and match it with that configuration. Enter Operators. They extend K8s functionality by allowing you to declare a custom controller that keeps an eye on your application and performs custom tasks based on its state. The way an Operator works is very similar to native K8s controllers, but it mostly uses the custom components you defined.
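The reconcile idea at the heart of such a controller can be sketched without any Kubernetes machinery: compare the observed state against the desired spec and emit whatever actions converge them. Here is a minimal Go sketch of that loop – the `ClusterState` type and the string actions are illustrative stand-ins, not real K8s API objects:

```go
package main

import "fmt"

// ClusterState is a toy stand-in for the spec/status of a custom
// resource (hypothetical type, not part of the Kubernetes API).
type ClusterState struct {
	Replicas int
}

// reconcile compares the observed state with the desired spec and
// returns the actions needed to converge, mirroring the level-based
// loop a custom controller runs on every watch event.
func reconcile(desired, observed ClusterState) []string {
	var actions []string
	for i := observed.Replicas; i < desired.Replicas; i++ {
		actions = append(actions, "create pod")
	}
	for i := desired.Replicas; i < observed.Replicas; i++ {
		actions = append(actions, "delete pod")
	}
	return actions
}

func main() {
	desired := ClusterState{Replicas: 3}
	observed := ClusterState{Replicas: 1}
	fmt.Println(reconcile(desired, observed)) // [create pod create pod]
}
```

A real Operator does exactly this, except the "actions" are API calls against the cluster, and the loop is triggered by watch events on your custom resource.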
This is a more specific list of what you need in order to create your custom operator:
- A custom resource (CRD) spec that defines the application we want to watch, as well as an API for the CR
- A custom controller to watch our application
- Custom code within the new controller that dictates how to reconcile our CR against the spec
- An operator to manage the custom controller
- Deployment for the operator and custom resource
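To make the last item concrete: assuming a hypothetical `MyApp` CRD in an `example.com` group, the custom resource your operator watches is just a short manifest like this (all names and fields illustrative):

```yaml
apiVersion: example.com/v1alpha1
kind: MyApp
metadata:
  name: myapp-sample
spec:
  replicas: 3
```

The user declares the desired state here; the operator’s controller does everything else.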
Where to start developing your Operator
Writing a CRD schema and its accompanying controller can be a daunting task. Currently, the most commonly used tool for creating operators is the Operator SDK. It is an open-source toolkit that makes it easier to build and manage Kubernetes-native applications – Operators. The framework also includes the ability to monitor and collect metrics from operator-built clusters and to administer multiple operators with the Operator Lifecycle Manager.
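For a taste of the workflow, scaffolding a Go-based operator with the Operator SDK looks roughly like this (the domain, repo path, and `MyApp` kind are placeholders you would replace with your own):

```shell
# Scaffold a new operator project
operator-sdk init --domain example.com --repo github.com/example/myapp-operator

# Generate a CRD API and a controller stub for a hypothetical MyApp kind
operator-sdk create api --group apps --version v1alpha1 --kind MyApp --resource --controller
```

From there you fill in the generated types and the controller’s reconcile function, and the SDK handles the boilerplate of manifests and wiring.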
You should also check this Kubernetes Operator Guidelines document on design, implementation, packaging, and documentation of a custom Operator.
The creation of an operator usually starts with automating an application’s installation and then matures to perform more complex automation. So I would suggest starting small and getting your feet wet by creating a basic operator that deploys an application or does something equally simple.
The framework has a maturity model for the provided tools that you can use to build your Operator. Using the Helm Operator kit is probably the easiest way to get started, but it is not as powerful if you wish to build a more sophisticated tool.
Explore other operators
The number of custom operators for well-known applications is growing every day. In fact, Red Hat, in collaboration with AWS, Google Cloud and Microsoft, launched OperatorHub.io just a couple of months ago. It is a public registry for finding Kubernetes Operator-backed services. You might find one that is useful for some components of your application, or list your own custom operator there.
Kubernetes coupled with operators provides cloud-agnostic application deployment and management. It is so powerful that it might lead us to treat cloud providers almost like commodities, as you will be able to migrate freely between them and offer your product on any possible platform.
But is it a step toward making Kubernetes easier, or does it actually add even more complexity? Is it yet another tool that is available but just makes things more complicated for someone new? Is it all just going to explode in our faces? So many questions…
If you have any thoughts or questions stop by the forum 🙂