Kubernetes Blog

Friday, March 15, 2019

Kubernetes Setup Using Ansible and Vagrant

Author: Naresh L J (Infosys)

Objective

This blog post describes the steps required to set up a multi-node Kubernetes cluster for development purposes. This setup provides a production-like cluster that can be set up on your local machine.

Why do we require a multi-node cluster setup?

Multi-node Kubernetes clusters offer a production-like environment, which has various advantages. Even though Minikube provides an excellent platform for getting started, it doesn't provide the opportunity to work with multi-node clusters, which can help solve problems or bugs related to application design and architecture. For instance, Ops can reproduce an issue in a multi-node cluster environment, and testers can deploy multiple versions of an application for executing test cases and verifying changes. These benefits enable teams to resolve issues faster, which makes them more agile.

Why use Vagrant and Ansible?

Vagrant is a tool that allows us to create a virtual environment easily, and it eliminates the pitfalls that cause the works-on-my-machine phenomenon. It can be used with multiple providers such as Oracle VirtualBox, VMware, Docker, and so on. It allows us to create a disposable environment by making use of configuration files.

Ansible is an infrastructure automation engine that automates software configuration management. It is agentless and allows us to use SSH keys to connect to remote machines. Ansible playbooks are written in YAML, and inventory is managed in simple text files.
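For reference, a minimal sketch of such an inventory: a plain text file grouping hosts (the group names here are illustrative, and in this guide Vagrant's Ansible provisioner generates the inventory for us automatically):

[masters]
k8s-master ansible_host=192.168.50.10

[workers]
node-1 ansible_host=192.168.50.11
node-2 ansible_host=192.168.50.12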

Prerequisites

  • Vagrant should be installed on your machine. Installation binaries can be found on the Vagrant downloads page.
  • Oracle VirtualBox can be used as a Vagrant provider, or you can use a similar provider as described in Vagrant's official documentation.
  • Ansible should be installed on your machine. Refer to the Ansible installation guide for platform-specific installation.

Setup overview

We will be setting up a Kubernetes cluster that will consist of one master and two worker nodes. All the nodes will run the 64-bit Ubuntu Xenial OS, and Ansible playbooks will be used for provisioning.
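For reference, this is the cluster layout the following steps produce (the IP addresses are assigned in the Vagrantfile below):

Hostname     Role     IP address
k8s-master   master   192.168.50.10
node-1       worker   192.168.50.11
node-2       worker   192.168.50.12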

Step 1: Creating a Vagrantfile

Use the text editor of your choice and create a file named Vagrantfile, inserting the code below. The value of N denotes the number of worker nodes in the cluster; it can be modified accordingly. In the example below, we set N to 2.

# Base box and number of worker nodes
IMAGE_NAME = "bento/ubuntu-16.04"
N = 2

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    # Give every VM 1 GB of RAM and 2 CPUs
    config.vm.provider "virtualbox" do |v|
        v.memory = 1024
        v.cpus = 2
    end

    # Master node with a static private IP
    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.network "private_network", ip: "192.168.50.10"
        master.vm.hostname = "k8s-master"
        master.vm.provision "ansible" do |ansible|
            ansible.playbook = "kubernetes-setup/master-playbook.yml"
        end
    end

    # Worker nodes node-1..node-N on 192.168.50.11 and up
    (1..N).each do |i|
        config.vm.define "node-#{i}" do |node|
            node.vm.box = IMAGE_NAME
            node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
            node.vm.hostname = "node-#{i}"
            node.vm.provision "ansible" do |ansible|
                ansible.playbook = "kubernetes-setup/node-playbook.yml"
            end
        end
    end
end
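Before bringing anything up, you can optionally sanity-check the file; the vagrant validate subcommand checks the Vagrantfile for syntax errors:

$ vagrant validate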

Step 2: Create an Ansible playbook for Kubernetes master.

Create a directory named kubernetes-setup in the same directory as the Vagrantfile. Create two files named master-playbook.yml and node-playbook.yml in the directory kubernetes-setup.

In the file master-playbook.yml, add the code below.

Step 2.1: Install Docker and its dependent components.

We will be installing the following packages, and then adding a user named "vagrant" to the "docker" group:

  • docker-ce
  • docker-ce-cli
  • containerd.io

---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependencies
    apt: 
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce 
      - docker-ce-cli 
      - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker

Step 2.2: Kubelet will not start if the system has swap enabled, so we are disabling swap using the below code.

  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none

  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0

Step 2.3: Installing kubelet, kubeadm and kubectl using the below code.

  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list

  - name: Install Kubernetes binaries
    apt: 
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - kubelet 
        - kubeadm 
        - kubectl
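The task above installs whatever package versions are currently in the apt repository. For reproducible clusters you may want to pin versions instead; a sketch (our addition, not part of the original playbook, with an example version string matching the v1.13.3 cluster shown at the end of this post):

  - name: Install pinned Kubernetes binaries
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - kubelet=1.13.3-00
        - kubeadm=1.13.3-00
        - kubectl=1.13.3-00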

Step 2.4: Initialize the Kubernetes cluster with kubeadm using the below code (applicable only on the master node).

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10"  --node-name k8s-master --pod-network-cidr=192.168.0.0/16
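Note that kubeadm init is not idempotent: re-provisioning an already-initialized master will fail at this task. A common guard (our addition, not in the original playbook) is to skip it once kubeadm has written its admin config:

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
    args:
      creates: /etc/kubernetes/admin.conf   # skip if the cluster is already initialized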

Step 2.5: Set up the kube config file for the vagrant user to access the Kubernetes cluster using the below code.

  - name: Setup kubeconfig for vagrant user
    command: "{{ item }}"
    with_items:
     - mkdir -p /home/vagrant/.kube
     - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
     - chown vagrant:vagrant /home/vagrant/.kube/config
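Chaining shell commands through the command module works, but it is not idempotent. An equivalent sketch using Ansible's file and copy modules (our variant, not the original's):

  - name: Create .kube directory for the vagrant user
    file:
      path: /home/vagrant/.kube
      state: directory
      owner: vagrant
      group: vagrant

  - name: Copy admin.conf to the vagrant user's kubeconfig
    copy:
      src: /etc/kubernetes/admin.conf
      dest: /home/vagrant/.kube/config
      remote_src: yes    # copy from the master itself, not from the Ansible host
      owner: vagrant
      group: vagrant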

Step 2.6: Set up the container networking provider and the network policy engine using the below code.

  - name: Install calico pod network
    become: false
    command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml

Step 2.7: Generate the kubeadm join command for joining nodes to the Kubernetes cluster and store it in a file named join-command.

  - name: Generate join command
    command: kubeadm token create --print-join-command
    register: join_command

  - name: Copy join command to local file
    local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
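local_action runs the copy on the machine executing ansible-playbook, so join-command is written to the directory you run Vagrant from, which is where Step 3.2 expects to find it. The same task written with the more explicit delegate_to form, and with become disabled so Ansible does not try to sudo on your workstation (a variant of ours, not the original's):

  - name: Copy join command to local file
    become: false
    copy:
      content: "{{ join_command.stdout_lines[0] }}"
      dest: "./join-command"
    delegate_to: localhost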

Step 2.8: Set up a handler for checking the Docker daemon using the below code.

  handlers:
    - name: docker status
      service: name=docker state=started

Step 3: Create the Ansible playbook for Kubernetes node.

Create a file named node-playbook.yml in the directory kubernetes-setup.

Add the code below into node-playbook.yml.

Step 3.1: Start by adding the code from Steps 2.1 through 2.3.

Step 3.2: Join the nodes to the Kubernetes cluster using the below code.

  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh
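Like kubeadm init, the join is not idempotent. A guard of ours (kubelet.conf is written once the node has joined) makes re-provisioning safe:

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh
    args:
      creates: /etc/kubernetes/kubelet.conf   # skip if this node already joined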

Step 3.3: Add the code from Step 2.8 to finish this playbook.

Step 4: Upon completing the Vagrantfile and playbooks, run the commands below.

$ cd /path/to/Vagrantfile
$ vagrant up
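
vagrant up creates all three machines and runs the matching playbook on each one as part of provisioning. If you change a playbook later, it can be re-applied without recreating the VMs:

$ vagrant provision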

Upon completion of all the above steps, the Kubernetes cluster should be up and running. We can log in to the master or worker nodes using Vagrant as follows:

$ ## Accessing master
$ vagrant ssh k8s-master
vagrant@k8s-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   18m     v1.13.3
node-1       Ready    <none>   12m     v1.13.3
node-2       Ready    <none>   6m22s   v1.13.3

$ ## Accessing nodes
$ vagrant ssh node-1
$ vagrant ssh node-2
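
Since the whole environment is disposable, tearing it down and starting over is a single command:

$ vagrant destroy -f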



Friday, April 15, 2016

How to Deploy a Secure, Auditable, and Reproducible Kubernetes Cluster on AWS

Today's guest post is by Colin Hom, an infrastructure engineer at CoreOS. CoreOS is dedicated to promoting Google's Infrastructure for Everyone Else (#GIFEE), so that containers everywhere can run securely on CoreOS Linux, Tectonic, and Quay.

Join us at CoreOS Fest Berlin, a conference on open source distributed systems, where you can learn more about CoreOS and Kubernetes.

At CoreOS, we run Kubernetes at scale in production. Today we are excited to share a tool that makes large-scale Kubernetes production deployments much easier: kube-aws, a tool for deploying auditable and reproducible Kubernetes clusters on AWS, which CoreOS itself uses in production.

Today you may be assembling Kubernetes components together by hand. With this tool, Kubernetes can be packaged and delivered in a streamlined way, saving time, reducing cross-dependencies, and making production deployments quicker.

Cluster configuration is generated with a simple template system, since a declarative set of configuration templates can be version controlled, audited, and redeployed. And because the entire creation process relies only on AWS CloudFormation and cloud-init, no additional configuration management tools are needed. It works out of the box!

To skip the pitch and get straight to the project, check out the latest kube-aws release, which supports Kubernetes 1.2.x. To deploy a cluster, see the documentation (https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html).

Why kube-aws? Secure, auditable, and reproducible

Kube-aws is designed around three goals.

Secure: TLS assets are encrypted via the AWS Key Management Service (KMS) before being embedded in the CloudFormation JSON. By managing IAM policies for the KMS key separately, access to the CloudFormation stack can be separated from access to the TLS secrets.

Auditable: kube-aws is built around the concept of cluster assets. These configuration and credential assets are a complete description of the cluster. Since KMS is used to encrypt the TLS assets, the unencrypted CloudFormation stack JSON can be checked into version control without concern.

Reproducible: the --export option packages the parameterized cluster definition into a single JSON file corresponding to one CloudFormation stack. This file can be version controlled and, if needed, submitted directly to the CloudFormation API through your existing deployment tooling.

Getting started with kube-aws

On top of this foundation, kube-aws implements features that make deploying Kubernetes clusters on AWS easier and more flexible. Here are some examples.
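To tie the pieces together, the flow implied above is: edit cluster.yaml, validate the assets, then create or export the CloudFormation stack. A sketch, assuming the --export flag hangs off the up subcommand as in the CoreOS docs of the era (check the documentation linked above for your version):

$ kube-aws validate      # check cloud-init and CloudFormation definitions
$ kube-aws up            # create the CloudFormation stack on AWS
$ kube-aws up --export   # or just export the stack JSON for external tooling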

Route53 integration: kube-aws can manage your cluster's DNS records as part of the provisioning process.

cluster.yaml

externalDNSName: my-cluster.kubernetes.coreos.com
createRecordSet: true
hostedZone: kubernetes.coreos.com
recordSetTTL: 300

Existing VPC support: deploy the cluster into an existing VPC.

cluster.yaml

vpcId: vpc-xxxxx
routeTableId: rtb-xxxxx

Validation: kube-aws supports validating the cloud-init and CloudFormation definitions, as well as the external resources the cluster stack will integrate with. For example, here is a cloud-config with a misspelled parameter:

userdata/cloud-config-worker

#cloud-config
coreos:
  flannel:
    interrface: $private_ipv4
    etcd_endpoints: {{ .ETCDEndpoints }}

$ kube-aws validate
> Validating UserData...
Error: cloud-config validation errors:
UserDataWorker: line 4: warning: unrecognized key "interrface"

Wondering how to get started? Check out the kube-aws documentation.

Future work

As always, the goal of kube-aws is to make production deployments simpler. Although we already use kube-aws for production deployments on AWS, the project is pre-1.0, and there are many areas in which it still needs thought and expansion.

Fault tolerance: CoreOS believes Kubernetes on AWS is a robust platform for fault-tolerant, self-healing deployments. In the coming weeks, kube-aws will face a new test: Chaos Monkey testing, control plane and all!

Zero-downtime updates: updating CoreOS nodes and Kubernetes components without downtime and without worrying about the instance replacement strategy.

There is a GitHub issue tracking this work. We look forward to your participation: file an issue or contribute directly.

To learn more about Kubernetes, come see us at CoreOS Fest Berlin, May 9-10, 2016.

– Colin Hom, infrastructure engineer, CoreOS
