Installing Kubernetes with Digital Rebar Provision (DRP) via KRIB


This guide shows how to install a Kubernetes cluster on bare metal with Digital Rebar Provision, using only its Content packages and kubeadm.

Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While DRP can be used to invoke kubespray, it also offers a self-contained Kubernetes installation known as KRIB (Kubernetes Rebar Integrated Bootstrap).

Note: KRIB is not a stand-alone installer. Digital Rebar templates drive a standard kubeadm configuration and use the Digital Rebar cluster pattern to elect leaders without external supervision.

Creating a cluster

Review Digital Rebar documentation for details about installing the platform.

The Digital Rebar Provision Golang binary should be installed on a Linux-like system with 16 GB of RAM or larger (Tiny and Raspberry Pi are also acceptable).

(1/5) Discover servers

Following the Digital Rebar installation, allow one or more servers to boot through the Sledgehammer discovery process to register with the API. This automatically installs the Digital Rebar runner and enables the next steps.

(2/5) Install KRIB Content and Certificate Plugin

Upload the KRIB Content bundle (or build from source) and the Cert Plugin for your DRP platform (e.g.: amd64 Linux v2.4.0). Both are freely available via the RackN UX.
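If you prefer the command line over the RackN UX, the uploads can be sketched with drpcli roughly as follows. The file names are placeholders for whatever bundle and plugin binary you downloaded, and the exact upload syntax may vary between drpcli versions:

```shell
# Upload the KRIB Content bundle (placeholder file name).
drpcli contents upload krib.json

# Install the Cert Plugin provider binary (placeholder path;
# check your drpcli version's plugin_providers help for exact syntax).
drpcli plugin_providers upload certs from ./certs-v2.4.0-linux-amd64
```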

(3/5) Start your cluster deployment

Note: KRIB documentation is dynamically generated from the source and will be more up to date than this guide.

Following the KRIB documentation, create a Profile for your cluster and assign your target servers to the cluster Profile. The Profile must set the krib/cluster-name and etcd/cluster-name Params to the name of the Profile. Cluster configuration choices can be made by adding additional Params to the Profile; however, safe defaults are provided for all Params.
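As a sketch, the Profile setup above could be done from the command line like this; "krib" is an example Profile name, and the machine UUID is a placeholder you would look up from your own inventory:

```shell
# Create the cluster Profile; both cluster-name Params must equal the Profile name.
drpcli profiles create '{
  "Name": "krib",
  "Params": {
    "krib/cluster-name": "krib",
    "etcd/cluster-name": "krib"
  }
}'

# Assign each target server (by UUID) to the cluster Profile.
drpcli machines addprofile <machine-uuid> krib
```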

Once all target servers are assigned to the cluster Profile, start a KRIB installation Workflow by assigning one of the included Workflows to all cluster servers. For example, selecting krib-live-cluster will perform an immutable deployment into the Sledgehammer discovery operating system. You may use one of the pre-created read-only Workflows or choose to build your own custom variation.
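Assigning the Workflow can be sketched as below; the machine UUID is a placeholder, and you would repeat the command (or script it) for every server in the cluster:

```shell
# Start the immutable KRIB install on one cluster server.
drpcli machines workflow <machine-uuid> krib-live-cluster
```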

For basic installs, no further action is required. Advanced users may choose to assign the controllers, etcd servers or other configuration values in the relevant Params.

(4/5) Monitor your cluster deployment

Digital Rebar Provision provides detailed logging and live updates during the installation process. Workflow events are available via a websocket connection or monitoring the Jobs list.
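For command-line monitoring, the Jobs list can be inspected roughly as follows; the job UUID is a placeholder taken from the list output:

```shell
# Show recent Workflow jobs.
drpcli jobs list

# Inspect the log of a specific job (placeholder UUID).
drpcli jobs log <job-uuid>
```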

During the installation, KRIB writes cluster configuration data back into the cluster Profile.

(5/5) Access your cluster

The cluster is available for access via kubectl once the krib/cluster-admin-conf Param has been set. This Param contains the kubeconfig information necessary to access the cluster.

For example, if you named the cluster Profile krib then the following commands would allow you to connect to the installed cluster from your local terminal.


drpcli profiles get krib params krib/cluster-admin-conf > admin.conf
export KUBECONFIG=admin.conf
kubectl get nodes

After krib/cluster-admin-conf is set, the installation continues by installing the Kubernetes UI and Helm. You may interact with the cluster as soon as the admin.conf file is available.

Cluster operations

KRIB provides additional Workflows to manage your cluster. Please see the KRIB documentation for an updated list of advanced cluster operations.

Scale your cluster

You can add servers into your cluster by adding the cluster Profile to the server and running the appropriate Workflow.
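Continuing the earlier example where the cluster Profile is named "krib", adding a server could be sketched as below; the UUID is a placeholder for the newly discovered machine:

```shell
# Join a new server to the existing cluster.
drpcli machines addprofile <new-machine-uuid> krib
drpcli machines workflow <new-machine-uuid> krib-live-cluster
```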

Cleanup your cluster (for developers)

You can reset your cluster and wipe out all configuration and TLS certificates using the krib-reset-cluster Workflow on any of the servers in the cluster.
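A reset could be invoked as sketched below; the UUID is a placeholder for any server in the cluster you intend to wipe:

```shell
# Destructive: wipes all cluster configuration and TLS certificates.
# Verify the UUID belongs to the intended (non-production) cluster first.
drpcli machines workflow <machine-uuid> krib-reset-cluster
```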

Caution: When running the reset Workflow, be sure not to accidentally target your production cluster!