Kubernetes Blog

Friday, March 15, 2019

Kubernetes Setup Using Ansible and Vagrant

Author: Naresh L J (Infosys)

Objective

This blog post describes the steps required to set up a multi-node Kubernetes cluster for development purposes. This setup provides a production-like cluster that can be set up on your local machine.

Why do we require multi node cluster setup?

Multi-node Kubernetes clusters offer a production-like environment, which has various advantages. Even though Minikube provides an excellent platform for getting started, it doesn't provide the opportunity to work with multi-node clusters, which can help solve problems or bugs that are related to application design and architecture. For instance, Ops can reproduce an issue in a multi-node cluster environment, and Testers can deploy multiple versions of an application for executing test cases and verifying changes. These benefits enable teams to resolve issues faster, which makes them more agile.

Why use Vagrant and Ansible?

Vagrant is a tool that allows us to create a virtual environment easily, and it eliminates the pitfalls that cause the works-on-my-machine phenomenon. It can be used with multiple providers such as Oracle VirtualBox, VMware, Docker, and so on. It allows us to create a disposable environment by making use of configuration files.

Ansible is an infrastructure automation engine that automates software configuration management. It is agentless and allows us to use SSH keys for connecting to remote machines. Ansible playbooks are written in YAML and offer inventory management in simple text files.
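For illustration only, an Ansible inventory is just a small text file listing the machines to manage; a minimal sketch might look like the one below (the host names and IPs mirror the ones used later in this guide, but you will not need to write this file yourself, since Vagrant generates the inventory automatically when it runs the provisioner).

[masters]
k8s-master ansible_host=192.168.50.10 ansible_user=vagrant

[workers]
node-1 ansible_host=192.168.50.11 ansible_user=vagrant
node-2 ansible_host=192.168.50.12 ansible_user=vagrant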

Prerequisites

  • Vagrant should be installed on your machine. Installation binaries can be found here.
  • Oracle VirtualBox can be used as the Vagrant provider, or you can make use of similar providers as described in Vagrant's official documentation.
  • Ansible should be installed on your machine. Refer to the Ansible installation guide for platform-specific installation.
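A quick way to confirm that all three prerequisites are installed and on your PATH is to check their versions (the exact version numbers will vary):

$ vagrant --version
$ VBoxManage --version
$ ansible --version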

Setup overview

We will be setting up a Kubernetes cluster that will consist of one master and two worker nodes. All the nodes will run the Ubuntu Xenial 64-bit OS, and Ansible playbooks will be used for provisioning.

Step 1: Creating a Vagrantfile

Use the text editor of your choice and create a file named Vagrantfile, inserting the code below. The value of N denotes the number of worker nodes present in the cluster; it can be modified accordingly. In the below example, we are setting the value of N to 2.

IMAGE_NAME = "bento/ubuntu-16.04"
N = 2

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    config.vm.provider "virtualbox" do |v|
        v.memory = 1024
        v.cpus = 2
    end
      
    config.vm.define "k8s-master" do |master|
        master.vm.box = IMAGE_NAME
        master.vm.network "private_network", ip: "192.168.50.10"
        master.vm.hostname = "k8s-master"
        master.vm.provision "ansible" do |ansible|
            ansible.playbook = "kubernetes-setup/master-playbook.yml"
        end
    end

    (1..N).each do |i|
        config.vm.define "node-#{i}" do |node|
            node.vm.box = IMAGE_NAME
            node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
            node.vm.hostname = "node-#{i}"
            node.vm.provision "ansible" do |ansible|
                ansible.playbook = "kubernetes-setup/node-playbook.yml"
            end
        end
    end
end
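Before provisioning anything, you can optionally ask Vagrant to check the Vagrantfile for syntax errors; this does not start any VMs:

$ cd /path/to/Vagrantfile
$ vagrant validate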

Step 2: Create an Ansible playbook for Kubernetes master.

Create a directory named kubernetes-setup in the same directory as the Vagrantfile. Create two files named master-playbook.yml and node-playbook.yml in the directory kubernetes-setup.

In the file master-playbook.yml, add the code below.

Step 2.1: Install Docker and its dependent components.

We will be installing the following packages, and then adding a user named “vagrant” to the “docker” group:

  • docker-ce
  • docker-ce-cli
  • containerd.io

---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install Docker and its dependencies
    apt: 
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce 
      - docker-ce-cli 
      - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker

Step 2.2: Kubelet will not start if the system has swap enabled, so we are disabling swap using the below code.

  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none

  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0
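If you want to confirm later (after provisioning in Step 4) that swap really is off inside a VM, you can check from the guest; swapon prints nothing and the Swap row of free shows zero when no swap is active:

vagrant@k8s-master:~$ sudo swapon --summary
vagrant@k8s-master:~$ free -m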

Step 2.3: Installing kubelet, kubeadm and kubectl using the below code.

  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present

  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list

  - name: Install Kubernetes binaries
    apt: 
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
        - kubelet 
        - kubeadm 
        - kubectl

Step 2.4: Initialize the Kubernetes cluster with kubeadm using the below code (applicable only on master node).

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10"  --node-name k8s-master --pod-network-cidr=192.168.0.0/16

Step 2.5: Setup the kube config file for the vagrant user to access the Kubernetes cluster using the below code.

  - name: Setup kubeconfig for vagrant user
    command: "{{ item }}"
    with_items:
     - mkdir -p /home/vagrant/.kube
     - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
     - chown vagrant:vagrant /home/vagrant/.kube/config

Step 2.6: Setup the container networking provider and the network policy engine using the below code.

  - name: Install calico pod network
    become: false
    command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
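Once the cluster has been provisioned (Step 4), you can optionally verify from the master node that the Calico and other control-plane pods reach the Running state:

vagrant@k8s-master:~$ kubectl get pods -n kube-system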

Step 2.7: Generate the kube join command for joining nodes to the Kubernetes cluster, and store the command in a file named join-command.

  - name: Generate join command
    command: kubeadm token create --print-join-command
    register: join_command

  - name: Copy join command to local file
    local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
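For reference, the generated join-command file will contain a single kubeadm join command; with the settings used in this guide it will look roughly like the line below, where the token and hash are placeholders generated by kubeadm:

kubeadm join 192.168.50.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>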

Step 2.8: Setup a handler for checking the Docker daemon using the below code.

  handlers:
    - name: docker status
      service: name=docker state=started

Step 3: Create the Ansible playbook for Kubernetes node.

Create a file named node-playbook.yml in the directory kubernetes-setup.

Add the code below into node-playbook.yml

Step 3.1: Start adding the code from Steps 2.1 through 2.3.

Step 3.2: Join the nodes to the Kubernetes cluster using the below code.

  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777

  - name: Join the node to cluster
    command: sh /tmp/join-command.sh

Step 3.3: Add the code from Step 2.8 to finish this playbook.

Step 4: Upon completing the Vagrantfile and playbooks, run the commands below.

$ cd /path/to/Vagrantfile
$ vagrant up

Upon completion of all the above steps, the Kubernetes cluster should be up and running. We can log in to the master or worker nodes using Vagrant as follows:

$ ## Accessing master
$ vagrant ssh k8s-master
vagrant@k8s-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   18m     v1.13.3
node-1       Ready    <none>   12m     v1.13.3
node-2       Ready    <none>   6m22s   v1.13.3

$ ## Accessing nodes
$ vagrant ssh node-1
$ vagrant ssh node-2
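As an optional smoke test, you can start a small deployment from the master and check that its pods get scheduled across the worker nodes; the nginx image and the deployment name here are purely illustrative:

vagrant@k8s-master:~$ kubectl run nginx --image=nginx --replicas=2
vagrant@k8s-master:~$ kubectl get pods -o wide
vagrant@k8s-master:~$ kubectl delete deployment nginx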



Monday, April 18, 2016

SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3

Editor's note: This week our theme is Kubernetes Special Interest Groups; today's post is from the Network SIG and discusses the Network Policy API coming in release 1.3 - policies for security, isolation and multi-tenancy.

The Kubernetes Network SIG has been meeting regularly since the second half of last year to discuss how to bring network policy into Kubernetes, and we are now starting to see the results of that work.

An issue many users frequently run into is that Kubernetes' open-access network model does not suit applications that need more precise control over which pods or services can be reached. Today that might be a multi-tier application that only allows access from the adjacent tier; but as the trend of building cloud-native applications by composing microservices grows, the ability to control how traffic flows between services becomes ever more important.

In most (public or private) IaaS environments, this kind of network control is usually achieved by combining VMs with "security groups", where communication between members of a group is defined by a network policy or Access Control List (ACL) and enforced by network packet filters.

The Network SIG started by identifying specific use cases that require basic network isolation for improved security. Getting the API right for these simple, common use cases is especially important because they will form the foundation for the more sophisticated network policies needed for multi-tenancy within Kubernetes.

Based on these use cases, several different approaches were considered and a minimal policy specification was defined. The basic idea is that, if isolation is enabled on a per-namespace basis, specific pods are then selected according to the types of traffic that are allowed to reach them.

The quickest way to support this experimental API is to add a ThirdPartyResource extension to the API server, which is already possible in Kubernetes 1.2.

If you are not familiar with how this works, the Kubernetes API can be extended by defining ThirdPartyResources, which create new API endpoints at specified URLs.

third-party-res-def.yaml

kind: ThirdPartyResource
apiVersion: extensions/v1beta1
metadata:
  name: network-policy.net.alpha.kubernetes.io
description: "Network policy specification"
versions:
  - name: v1alpha1

$ kubectl create -f third-party-res-def.yaml

This command creates an API endpoint (one per namespace):

/net.alpha.kubernetes.io/v1alpha1/namespace/default/networkpolicys/

Third-party network controllers can then watch these endpoints and react as needed when resources are created, modified or deleted. Note: in the upcoming Kubernetes 1.3 release, the Network Policy API will be available in beta form, which removes the need to create a ThirdPartyResource API endpoint as shown above.

Network isolation is off by default, so all pods can communicate freely with one another. However, it is important to know that once network isolation is enabled, communication between all pods in all namespaces is blocked; in other words, enabling isolation changes the behavior of your pods.

Network isolation is turned on or off with the network-isolation annotation in net.alpha.kubernetes.io, defined on the namespace:

net.alpha.kubernetes.io/network-isolation: [on | off]

Once network isolation is enabled, explicit network policies are required to allow communication between pods.
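As an illustrative sketch (assuming a namespace named tenant-a, and that your network policy controller honors this alpha annotation), isolation could be switched on with kubectl annotate:

$ kubectl annotate namespace tenant-a net.alpha.kubernetes.io/network-isolation=on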

A policy specification can be applied to a namespace to define the details of the policy (as shown below):

POST /apis/net.alpha.kubernetes.io/v1alpha1/namespaces/tenant-a/networkpolicys/
{
  "kind": "NetworkPolicy",
  "metadata": {
    "name": "pol1"
  },
  "spec": {
    "allowIncoming": {
      "from": [
        {
          "pods": {
            "segment": "frontend"
          }
        }
      ],
      "toPorts": [
        {
          "port": 80,
          "protocol": "TCP"
        }
      ]
    },
    "podSelector": {
      "segment": "backend"
    }
  }
}

In this example, the tenant-a namespace would have the pol1 policy applied. Specifically, pods with the segment label backend would allow pods with the segment label frontend to access them on port 80.
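As a concrete illustration, assuming the JSON above is saved in a file named pol1.json and that the API is reachable through kubectl proxy on a local port, the policy could be posted like this (the file name and port are placeholders):

$ kubectl proxy --port=8080 &
$ curl -X POST -H "Content-Type: application/json" --data @pol1.json \
    http://localhost:8080/apis/net.alpha.kubernetes.io/v1alpha1/namespaces/tenant-a/networkpolicys/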

Today, Romana, OpenShift, OpenContrail and Calico all support network policies applied to namespaces and pods, and Cisco and VMware are working on implementations as well. Both Romana and Calico demonstrated these capabilities with Kubernetes 1.2 at the recent KubeCon. You can watch their presentations here: Romana (slides), Calico (slides).

How does it work?

Each solution has its own specific implementation. Although today they all rely on on-host enforcement mechanisms, future implementations could achieve the same goal by applying policy at the hypervisor, or even directly in the network itself.

External policy control software (the specifics vary by implementation) watches the API endpoints for newly created pods and newly applied policies. When an event occurs that requires policy configuration, the watcher recognizes the change and the controller responds by configuring the interface and applying the policy. The diagram below shows an API watcher and policy controller applying network policy locally through a host agent. The network interfaces of these pods are configured by a CNI plugin on the host (not shown in the diagram).

[Figure controller.jpg: API watcher and policy controller applying network policy locally via a host agent]

If network isolation or security concerns have been holding you back from developing applications with Kubernetes, these new network policies go a long way toward providing the control you need. You don't have to wait for Kubernetes 1.3; you can use this experimental API today via a ThirdPartyResource.

If you are interested in Kubernetes and networking, there are a few ways to participate and get involved:

  • The Networking Special Interest Group meets every two weeks at 3pm (Pacific Time) via the SIG-Networking hangout.

–Chris Marino, Co-Founder, Pani Networks
