Kubespray Ansible Playbooks foster Collaborative Kubernetes Ops

Friday, May 19, 2017

Today’s guest post is by Rob Hirschfeld, co-founder of the open infrastructure automation project Digital Rebar and co-chair of SIG Cluster Ops.

Why Kubespray?

Making Kubernetes operationally strong is a widely held priority, and I track many deployment efforts around the project. The incubated Kubespray project is of particular interest to me because it uses the popular Ansible toolset to build robust, upgradable clusters on both cloud and physical targets. I believe using tools familiar to operators grows our community.
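
If you have not tried it yet, a Kubespray deployment is an ordinary Ansible run. Here is a minimal sketch, assuming the repository layout current as of this writing; the clone URL, inventory path, and flags may differ between releases:

```bash
# Fetch Kubespray and install its Python dependencies
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
pip install -r requirements.txt

# Run the main cluster playbook against your inventory of target
# machines; -b escalates to root on the hosts, -v adds verbosity
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v
```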

We’re excited to see the breadth of platforms enabled by Kubespray and how well it handles a wide range of options like integrating Ceph for StatefulSet persistence and Helm for easier application uploads. Those additions have allowed us to fully integrate the OpenStack Helm charts (demo video).
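
Many of those options are plain Ansible variable toggles on the same playbook run. For example, enabling the Helm add-on looks roughly like this (helm_enabled is a Kubespray variable as of this writing; treat the exact spelling as release-dependent):

```bash
# Kubespray options are ordinary Ansible variables, so a per-run
# -e override is enough to switch on the Helm add-on
ansible-playbook -i inventory/inventory.cfg cluster.yml -b \
  -e helm_enabled=true
```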

By working with the upstream source instead of creating different install scripts, we get the benefits of a larger community. This requires some extra development effort; however, we believe helping share operational practices makes the whole community stronger. That was also the motivation behind SIG Cluster Ops.

With Kubespray delivering robust installs, we can focus on broader operational concerns.

For example, we can now drive parallel deployments, so it’s possible to fully exercise the options enabled by Kubespray simultaneously for development and testing.  

That’s helpful for build-test-destroy cycles of coordinated Kubernetes installs on CentOS, Red Hat, and Ubuntu as part of an automation pipeline. We can also set up a full classroom environment from a single command using Digital Rebar’s providers, tenants, and cluster definition JSON.
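
As a rough sketch of what such a pipeline step can look like, each OS variant can be cycled in parallel. The per-OS deployment files and the run-smoke-tests.sh helper below are hypothetical stand-ins for whatever test harness you use:

```bash
#!/usr/bin/env bash
# Hypothetical parallel build-test-destroy loop; file names and
# script arguments are illustrative, not a canonical interface.
for os in centos redhat ubuntu; do
  (
    ./multideploy.sh "cluster-${os}.json" &&          # build the cluster
    ./run-smoke-tests.sh "cluster-${os}" &&           # exercise it
    ./multideploy.sh --destroy "cluster-${os}.json"   # tear it down
  ) &   # run each OS variant in its own background subshell
done
wait    # block until all three cycles finish
```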

Let’s explore the classroom example:

First, we define a student cluster in JSON, like the snippet below:

```json
{
  "attribs": {
    "k8s-version": "v1.6.0",
    "k8s-kube_network_plugin": "calico",
    "k8s-docker_version": "1.12"
  },
  "name": "cluster01",
  "tenant": "cluster01",
  "public_keys": {
    "cluster01": "ssh-rsa AAAAB….. user@example.com"
  },
  "provider": {
    "name": "google-provider"
  },
  "nodes": [
    {
      "roles": ["etcd", "k8s-addons", "k8s-master"],
      "count": 1
    },
    {
      "roles": ["k8s-worker"],
      "count": 3
    }
  ]
}
```

Then we run the Digital Rebar workload’s multideploy.sh reference script, which inspects the deployment files to pull out key information. Basically, it automates the following steps:

```
rebar provider create {"name": "google-provider", [secret stuff]}
rebar tenants create {"name": "cluster01"}
rebar deployments create [contents from cluster01 file]
```

The deployments create command will automatically request nodes from the provider. Since we’re using tenants and SSH key additions, each student only gets access to their own cluster. When we’re done, adding the --destroy flag will reverse the process for the nodes and deployments but leave the providers and tenants in place.
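
Putting it together, a classroom lifecycle reduces to two invocations; treat the script’s exact argument format here as illustrative:

```bash
# Stand up the classroom cluster from its JSON definition
./multideploy.sh cluster01.json

# Reverse the node and deployment creation when class is over;
# the provider and tenant definitions are left in place
./multideploy.sh --destroy cluster01.json
```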

We are invested in operational scripts like this example using Kubespray and Digital Rebar because, if we cannot manage variation in a consistent way, we’re doomed to operational fragmentation.

I am excited to see, and to be part of, the community’s progress towards enterprise-ready Kubernetes operations on both cloud and on-premises infrastructure. Concretely, that means reasonable patterns are emerging around sharable, reusable automation. I strongly recommend watching (or better, collaborating in) these efforts if you are deploying Kubernetes, even at experimental scale. Being part of the community requires more upfront effort, but it pays dividends through shared experience and improvement.

When deploying at scale, how do you set up a system to be both repeatable and multi-platform without compromising scale or security?

With Kubespray and Digital Rebar as a repeatable base, extensions get much faster and easier. Even better, using upstream directly allows improvements to be quickly cycled back into upstream. That means we’re closer to building a community focused on the operational side of Kubernetes with an SRE mindset.

If this is interesting, please engage with us in the Cluster Ops SIG, Kubespray, or Digital Rebar communities.

– Rob Hirschfeld, co-founder of RackN and co-chair of the Cluster Ops SIG

Get involved with the Kubernetes project on GitHub

Post questions (or answer questions) on Stack Overflow

Connect with the community on Slack

Follow us on Twitter @Kubernetesio for the latest updates