Installing a multinode Kubernetes cluster using Kubespray

Introduction

So you’ve started on your Kubernetes journey, probably with something like Minikube (https://kubernetes.io/docs/setup/minikube/) on your laptop. Moving your containers to production, however, requires a multinode Kubernetes cluster, and setting up such a cluster requires some form of automation for both deployment and maintenance. For that you can use Kubespray, an open source project based on Ansible that lets you deploy and maintain your Kubernetes cluster.

While this might sound a bit new and scary to some, it’s actually not that hard to use if you have some basic Linux knowledge. This post assumes that you have some vanilla Ubuntu Linux servers running (I personally run these servers on VMware, which is absolutely fine for Kubernetes).

Watch the live demo video if you’d rather watch than read 😉

About my environment

In this example I will deploy a six-node Kubernetes cluster and use a seventh Ubuntu server as the administrative node, from which I run Ansible to deploy Kubernetes. All Ubuntu servers have at minimum:

CPU: 1 core
Memory: 1536 MB RAM (I used 2048 MB)
HDD: 16 GB
OS: Ubuntu 18.04 LTS

In my example the administrative Ubuntu server has IP address 10.1.1.100, and the Kubernetes nodes have IP addresses 10.1.1.101 through 10.1.1.106.

Enable SSH passwordless login

The first step is to generate an SSH key pair to enable passwordless SSH access from the administrative host to all Kubernetes nodes.
SSH into the administrative node (10.1.1.100) as a user who has sudo permissions and generate the SSH keys as follows:

ssh-keygen -t rsa
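
If you prefer a non-interactive run, you can pass the key size, an empty passphrase and the output path on the command line (a convenience sketch; adjust the path and passphrase policy to your own needs):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa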

Now we need to copy the SSH public key from the administrative host to the other nodes, so that we can log in without a password. Repeat this step for each Kubernetes node, replacing the IP address (10.1.1.101) with the address of each node in turn.

ssh-copy-id 10.1.1.101
...
ssh-copy-id 10.1.1.106
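
Since the nodes are numbered consecutively, a small shell loop saves some typing (a sketch assuming the 10.1.1.101 through 10.1.1.106 range used in this post):

# Copy the public key to every node in one go
for ip in 10.1.1.{101..106}; do
  ssh-copy-id "$ip"
done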

NOTE: For the steps above I used a normal user, not the root user. The reason for this is that by default the root user is not allowed to log in over SSH with a username and password on Ubuntu, for security reasons. If you’d like to use root, you would need to edit the file /etc/ssh/sshd_config, change the line “#PermitRootLogin prohibit-password” to “PermitRootLogin yes” and restart the sshd service (sudo service ssh restart). However, I’d advise using a separate user that has sudo permissions.

Allow your user to sudo without password

The next step is to SSH into each node and add the following line to the /etc/sudoers file, either at the end of the file or at least after the line %sudo   ALL=(ALL:ALL) ALL, since otherwise that line will override the config for our user.
Please note that in this example I use the user id “pureuser”; replace this username with whatever username you are using to log in.

pureuser ALL=(ALL) NOPASSWD:ALL

To be able to edit the /etc/sudoers file, use sudo as shown below. Since the sudoers file is read-only, you will have to force the write (use :wq! in vi).

sudo vi /etc/sudoers
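
Alternatively, you can leave /etc/sudoers itself untouched and place the rule in a drop-in file under /etc/sudoers.d (a sketch assuming the “pureuser” account; visudo -c validates the syntax afterwards):

# Create the drop-in rule, restrict its permissions, then check the syntax
echo 'pureuser ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/pureuser
sudo chmod 440 /etc/sudoers.d/pureuser
sudo visudo -c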

Install python3 and pip

Next we need to install Python 3 and the latest version of pip. Since the default pip package on Ubuntu 18.04 LTS is based on Python 2.7, we install python3-pip and then upgrade it, which makes the pip command default to the Python 3 version.

sudo apt install python3-pip
sudo pip3 install --upgrade pip

If all is well, pip --version should now return a recent pip version for Python 3, as shown below:

pureuser@k8s-admin:~$ pip --version
pip 19.1 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)

Download Kubespray

Now we are ready to go ahead with Kubespray, which we’ll use to deploy Kubernetes. The first step is to download Kubespray from GitHub.

git clone https://github.com/kubernetes-sigs/kubespray.git
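
Cloning like this gives you the latest development state of Kubespray. If you would rather deploy a specific, tested release, you can list the available tags and check one out (the tag name below is a placeholder; pick one from the list):

# List release tags, then check out the one you want
git -C kubespray tag --list
git -C kubespray checkout <release-tag>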

Then we install the requirements for Kubespray, which is easily done with pip using the requirements.txt file supplied by Kubespray.

cd kubespray
sudo pip install -r requirements.txt
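
Among other dependencies this installs Ansible itself, so a quick check confirms it is on the path and which version you got:

ansible --version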

Build our Kubernetes cluster config files

The next step is to build our own Kubernetes configuration that we want to deploy. We will start by copying the sample configuration to our own cluster definition.

cp -rfp inventory/sample inventory/mycluster

The next step is to modify this config for the hosts that we wish to deploy to. First we declare an IPS variable holding the IP addresses of all the Kubernetes nodes that we wish to deploy to:

declare -a IPS=(10.1.1.101 10.1.1.102 10.1.1.103 10.1.1.104 10.1.1.105 10.1.1.106)

Next we generate a hosts.yml YAML file using the inventory.py script supplied by Kubespray, with the IPS variable as input. This will generate a default config for your Kubernetes cluster.

CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
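
To sanity-check the generated inventory, you can ask Ansible to print the group tree (assuming the hosts.yml path used above):

ansible-inventory -i inventory/mycluster/hosts.yml --graph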

Check your config files before deploying

Now you can review your configuration, before kicking off the deployment. Check the following three files for any modifications you’d like to make:

inventory/mycluster/hosts.yml
This file contains the configuration for your Kubernetes nodes and the roles that will be assigned to each node. Particularly review the following sections:
– kube-master: contains the master nodes (generally two)
– etcd: contains the nodes on which the etcd cluster is to be installed (an odd number, typically three)
– kube-node: contains the nodes which are used as worker nodes in the cluster.

inventory/mycluster/group_vars/all/all.yml

Generic (more infrastructure-related) settings for the Kubernetes cluster, like the details for an external load balancer, the upstream DNS servers, proxy servers to be used, etc.

inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

More specific details about the Kubernetes cluster, like the version of Kubernetes to deploy (kube_version), the network overlay plugin to use (kube_network_plugin, e.g. calico), the internal IP ranges to use for services and pods (kube_service_addresses, kube_pods_subnet) and the cluster_name.
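
Before kicking off the deployment, it can save time to verify that Ansible can reach every node and become root, using Ansible’s ping module as a quick end-to-end check:

ansible all -i inventory/mycluster/hosts.yml -m ping --become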

Deploy Kubernetes using Kubespray

Now that we are all set, we can go ahead and deploy the Kubernetes cluster using Kubespray by running the command below. This kicks off the Kubespray Ansible playbook for deploying a cluster (cluster.yml). The process can take quite some time (e.g. 30 minutes), but it is fully automated, so it’s a great moment to get some coffee and be amazed by the beauty of Kubespray and Ansible.

ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

Administer your new cluster

Now that your new Kubernetes cluster is deployed, you will want to start using it. To do that you need to install kubectl (for pronunciation, please see https://www.youtube.com/watch?v=2wgAIvXpJqU).

Log in to the administrative node and download the latest version of kubectl:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

Make it executable and move it to your bin directory:

chmod +x kubectl
sudo mv kubectl /usr/local/bin/
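
A quick check confirms the binary runs before we point it at the cluster:

kubectl version --client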

Next we need to supply kubectl with our cluster config. The easiest way to do so is to copy the admin.conf file from one of the master nodes (10.1.1.101 in this example). Make sure the local ~/.kube directory exists first:

mkdir -p ~/.kube
ssh 10.1.1.101 sudo cp /etc/kubernetes/admin.conf /home/pureuser/config
ssh 10.1.1.101 sudo chmod +r ~/config
scp 10.1.1.101:~/config .kube/
ssh 10.1.1.101 sudo rm ~/config

With this, kubectl is ready to be used and you are ready to explore the Kubernetes magic!

kubectl version
kubectl get nodes -o wide
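
If the config file is in place, the commands above should report the client and server versions and list all six nodes. From there you can look around, for example at the system pods that Kubespray deployed:

kubectl get pods --all-namespaces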

Conclusion

And that concludes the deployment; it wasn’t all that hard. In this blog we’ve seen how to deploy Kubernetes using Kubespray, which mainly involves setting up some Linux machines, copying our SSH keys for passwordless authentication, a bit of configuration and then running Kubespray.

6 thoughts on “Installing a multinode Kubernetes cluster using Kubespray”

  1. ssh 10.1.1.101 sudo cp /etc/kubernetes/admin.conf /home/pureuser/config
    ssh 10.1.1.101 sudo chmod +r ~/config
    scp 10.1.1.101:~/config .kube/
    ssh 10.1.1.101 sudo rm ~/config

    hello!
    Good job. But one moment, this command does not work: scp 10.1.1.101:~/config .kube/

    this my log:
    sys@kubemaster:~/kubespray$ chmod +x kubectl
    sys@kubemaster:~/kubespray$ sudo mv kubectl /usr/local/bin/
    sys@kubemaster:~/kubespray$ sudo cp /etc/kubernetes/admin.conf /home/sys/config
    sys@kubemaster:~/kubespray$ sudo chmod +r ~/config
    sys@kubemaster:~/kubespray$ ~/config .kube/
    -bash: /home/quersys/config: Permission denied
    rsys@kubemaster:~/kubespray$ sudo ~/config .kube/
    sudo: /home/sys/config: command not found

    Maybe you made a mistake in the code?

    1. If you run the full command, so “scp 10.1.1.101:~/config .kube/”, that should work, since it will copy (scp) the remote file “~/config” to your local system in the “.kube/” directory. Based on the output above, you seem to have left out the “scp 10.1.1.101:” part.
      To copy the file locally, you’d need to use “cp ~/config .kube/”

  2. After this command: ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
    I receive this error: invalid options for include_role apply
    What should I do?

    1. I believe this could be an issue with your local Ansible version. Which version are you using? The other possibility is an issue in the Kubespray version you are using; you might try a newer or older version.

  3. After kubectl get pods I get this error:
    The connection to the server localhost:8080 was refused – did you specify the right host or port?
    Why, please?

    1. Most likely you don’t have the right kube config file in place. Make sure that you copy the “/etc/kubernetes/admin.conf” from one of your master nodes to the file “~/.kube/config” on the system you’re working on.
