Kubespray (part 3/3): Deploy Kubernetes

About this lab

vTeam Specialization Program

Recently I was nominated to join the Pure Storage vTeam Specialization Program for New Stack. The idea behind the program is to provide a community within Pure where Puritans can learn, develop skills, and grow into subject matter experts.

The program consists of training and lab exercises that are focused on developing experience in the New Stack space (Kubernetes, Ansible, OpenStack and more). And since I think there are more people out there who want to learn more about New Stack, I thought I’d blog my progress through the lab exercises.

Lab instructions

The purpose of Lab 1 is to build a vanilla Kubernetes cluster using Kubespray.

Name: Install vanilla k8s using Kubespray
Description: Build a 3 node k8s cluster using Kubespray installed on a separate control node
Objective: Have a 3 node Kubernetes cluster running with no added bells or whistles
Task #1: Ensure all 4 nodes are fully patched with the latest software packages (blog part 1)
Task #2: SSH communication between hosts is established (blog part 2)
Task #3: Install prerequisite software for Kubespray on the control node (this blog)
Task #4: Clone the Kubespray repo on your control node (this blog)
Task #5: Create the Ansible inventory for the 3 node Kubespray deployment (this blog)
Task #6: Customize Kubespray to enable the Helm install and the volume snapshot and volume cloning feature gates in the 3 node k8s cluster (this blog)
Task #7: Deploy Kubespray (this blog)
Task #8: Check the k8s and Helm versions on the deployed cluster (this blog)
Success Criteria: 3 node Kubernetes cluster running at 1.17.3 with Helm 3.1 installed and all necessary feature gates open (this blog)
Lab 1 goals and tasks

I’ve split this lab into a three-part blog series: the first blog (read here) is about installing Ubuntu Server, the second one (read here) is about deploying and preparing the VMs that will be used to install Kubernetes, and the third and final one (this blog) focuses on the Kubernetes deployment using Kubespray.

Prepare for Kubespray

Install Python

Kubespray uses Ansible, which is built on Python. So before starting with Kubespray, we need to make sure that Python and pip, the package installer for Python, are installed on the control node.

The following command will install Python 3 and pip.

sudo apt install python3-pip

And now we want to make sure we use the latest version of pip that is available:

sudo pip3 install --upgrade pip

To make sure we have everything set up correctly, execute pip -V, which will show us our pip version and our Python version:

dnix@adminhost:~$ pip -V
pip 20.1.1 from /usr/local/lib/python3.8/dist-packages/pip (python 3.8)

Clone the Kubespray repository

The next step is to clone the Kubespray repository, which can be found here. This is a great time to check this repo out and familiarize yourself with Kubespray by reading the documentation.

git clone https://github.com/kubernetes-sigs/kubespray.git

This downloads the latest version of the Kubespray code, which is generally what you want.

dnix@adminhost:~$ git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...
remote: Enumerating objects: 43915, done.
remote: Total 43915 (delta 0), reused 0 (delta 0), pack-reused 43915
Receiving objects: 100% (43915/43915), 12.82 MiB | 12.74 MiB/s, done.
Resolving deltas: 100% (24419/24419), done.

Kubespray needs some additional local software, which is conveniently specified in the requirements.txt file. Install the requirements using the following:

cd kubespray
sudo pip install -r requirements.txt

This downloads all the software required for Kubespray to run, including, for example, Ansible.
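
To quickly verify that the requirements were installed (my own sanity check, not a lab step), ask Ansible for its version:

ansible --version

This prints the installed Ansible version and the Python interpreter it uses; if the command isn't found, the pip install didn't complete.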

Configure our deployment

The next step is to build our own Kubernetes configuration that we want to deploy. We will start by cloning the sample configuration to our own cluster definition.

cp -rfp inventory/sample inventory/mycluster

We can now modify this configuration for our deployment. First, we declare the IPS variable, adding the IP address of each Kubernetes node that we wish to deploy to:

declare -a IPS=(192.168.10.121 192.168.10.122 192.168.10.123)

Now we will use the inventory.py script supplied by Kubespray to generate a hosts.yml file containing a default node configuration for our Kubernetes cluster.

CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

To see what we’ve just generated, check out the newly created hosts.yml:

nano inventory/mycluster/hosts.yml

You’ll see that all our Kubernetes nodes are added under hosts and in the children section you can see which services are deployed to which hosts.
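
For reference, the generated file should look roughly like the sketch below (the group names may differ slightly between Kubespray versions; the node names and IPs match our example):

all:
  hosts:
    node1:
      ansible_host: 192.168.10.121
      ip: 192.168.10.121
      access_ip: 192.168.10.121
    node2:
      ansible_host: 192.168.10.122
      ip: 192.168.10.122
      access_ip: 192.168.10.122
    node3:
      ansible_host: 192.168.10.123
      ip: 192.168.10.123
      access_ip: 192.168.10.123
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}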

The next step is to make any changes you require to the inventory before running Kubespray. I will not go into all the details here; I’ll only change the Kubernetes version to deploy (since we need a 1.17.3 Kubernetes cluster) and enable the feature gates required for the Pure Storage Pure Service Orchestrator.

Set the Kubernetes version to deploy

The Kubernetes version to deploy is specified in the k8s-cluster.yml file. Open the file:

nano inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

And change the version to 1.17.3 in the following line:

## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.17.3

Enable feature gates

Now we need to enable the specific feature gates required for the Pure Service Orchestrator to use all of its functionality (including cloning and snapshots). The requirements are documented in the Pure Service Orchestrator repository, which includes a short instruction for Kubespray as well.

Edit the following file:

nano roles/kubespray-defaults/defaults/main.yaml

Look for the line that starts with kube_feature_gates: and change it as shown below, adding the Pure Service Orchestrator lines above it and replacing the [] with the template below:

# Pure Service Orchestrator feature gates
volume_clones: True
volume_snapshots: True

feature_gate_snap_clone:
  - "VolumeSnapshotDataSource={{ volume_snapshots | string }}"
  - "VolumePVCDataSource={{ volume_clones | string }}"

## List of key=value pairs that describe feature gates for
## the k8s cluster.
kube_feature_gates: |-
  {{ feature_gate_snap_clone }}

Save the file. By the way, to enable feature gates on a running cluster, check out this blog.

Deploy the Kubernetes cluster

Now that we are all set, we can go ahead and install the Kubernetes cluster. To do the actual installation, we run the following command, which kicks off the Kubespray Ansible playbook for deploying a cluster (cluster.yml). The process can take quite some time (e.g. 30 minutes), but it is fully automated, so it's a great moment to get some coffee and be amazed by the beauty of Kubespray and Ansible.

ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
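
Tip: before kicking off the full deployment, you can check that Ansible can reach all three nodes using its ping module; this is my own sanity check, not a Kubespray requirement:

ansible -i inventory/mycluster/hosts.yml all -m ping --become

If all nodes come back with SUCCESS, the inventory and the SSH setup from part 2 are working.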

Troubleshooting for Ubuntu 20.04

So... I would love to say that the ansible-playbook run above completed successfully without any issues, but unfortunately that was not the case today. The reason is that in part 1 of this series I decided to go with the new Ubuntu 20.04 release, which turns out to not be 100% compatible (yet?). So I'd suggest you use Ubuntu 18.04 instead, and you'll be totally fine.

UPDATE June 16th 2020: Ubuntu 20.04 support has now been added to Kubespray (https://github.com/kubernetes-sigs/kubespray/pull/6157) so you shouldn’t run into the issues mentioned below anymore.

If you like a bit of hacking, continue reading... This is what I did, in a temporary and totally unsupported way, to resolve the issues. The first issue was with installing the package python-minimal, which has been replaced by python2-minimal, resulting in this error:

Package python-minimal is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source. However the following packages replace it: python2-minimal. E: Package 'python-minimal' has no installation candidate

So I edited the following file:

nano roles/bootstrap-os/tasks/bootstrap-debian.yml

And changed python-minimal to python2-minimal as shown below:

- name: Install python 
  raw:
    apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y python2-minimal
  become: true
  environment: {}
  when:
    - need_bootstrap.rc != 0

The second issue was around the installation of docker:

failed: [node1] (item={'name': 'docker-ce-cli=5:18.09.9~3-0~ubuntu-focal', 'force': True})

It seems this version of Docker is not available for Ubuntu 20.04. I didn't want to spend too much time on it, so I changed the following file:

nano roles/container-engine/docker/vars/ubuntu-amd64.yml

The problem again was with the versions supported on Ubuntu 20.04, so I changed the version mappings as follows:

# https://download.docker.com/linux/ubuntu/
...
'18.06': docker-ce=18.06.2~ce~3-0~ubuntu
'18.09': docker-ce=5:19.03.9~3-0~ubuntu-{{ ansible_distribution_release|lower }}
'19.03': docker-ce=5:19.03.7~3-0~ubuntu-{{ ansible_distribution_release|lower }}
...
docker_cli_versioned_pkg:
  'latest': docker-ce-cli
  '18.09': docker-ce-cli=5:19.03.9~3-0~ubuntu-{{ ansible_distribution_release|lower }}
  '19.03': docker-ce-cli=5:19.03.9~3-0~ubuntu-{{ ansible_distribution_release|lower }}
...

These were the only two visible issues, and when I ran the install again (just launching the same Ansible playbook again), it completed successfully. However, this is definitely not supported or advised; the better option is to move to Ubuntu 18.04 or wait until Kubespray provides support for Ubuntu 20.04.

Update May 27th, 2020: it turns out the issues above are known (https://github.com/kubernetes-sigs/kubespray/issues/5835) and this workaround only became possible today, since the Docker packages for Ubuntu 20.04 were only released today. I expect an update for Kubespray to support Ubuntu 20.04 soon.

The issue mentioned above also pointed me at a wrong configuration of the kubelet DNS, which caused coredns to fail, since it was pointing to itself for upstream DNS resolution. This was easily fixed with the following commands:

cd ~/kubespray
cp roles/kubernetes/node/vars/ubuntu-18.yml roles/kubernetes/node/vars/ubuntu-20.yml

And running the Ansible playbook once more.

Finishing off the install

Once the installation has completed, our Kubernetes cluster is up and running. The only thing we want now is to be able to access and manage the Kubernetes cluster from our control node. For this we need two things:

  • We need to install kubectl
  • We need to copy the cluster config to the control node

First, let's get kubectl. I'll use the following commands to download the latest stable version, make it executable, and move it to /usr/local/bin/ so that it's in our path:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

chmod +x kubectl
sudo mv kubectl /usr/local/bin/
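
To quickly confirm kubectl is installed and on the path (only the client version will work at this point, since we have no kubeconfig yet):

kubectl version --client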

Next I’ll copy the Kubernetes config file from the first cluster node (/etc/kubernetes/admin.conf) to the control node (~/.kube/config). To do this, I use a couple of commands to work around some access permissions:

ssh 192.168.10.121 sudo cp /etc/kubernetes/admin.conf /home/pureuser/config
ssh 192.168.10.121 sudo chmod +r ~/config
scp 192.168.10.121:~/config .kube/
ssh 192.168.10.121 sudo rm ~/config
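
Note that the scp command assumes the ~/.kube directory already exists on the control node; if it doesn't, create it first:

mkdir -p ~/.kube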

Now we have the config file on our control node (see ~/.kube/config) and we have kubectl installed.

With this we are done and ready to explore the Kubernetes magic!

kubectl version

This returns the following (the latest stable version of the kubectl client locally, and the requested 1.17.3 version of Kubernetes on the cluster):

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

To get the status of our nodes, I'll issue kubectl get nodes -o wide, which returns the following:

dnix@adminhost:~$ kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
node1   Ready    master   24m   v1.17.3   192.168.10.121   <none>        Ubuntu 20.04 LTS   5.4.0-31-generic   docker://19.3.9
node2   Ready    master   23m   v1.17.3   192.168.10.122   <none>        Ubuntu 20.04 LTS   5.4.0-31-generic   docker://19.3.9
node3   Ready    <none>   22m   v1.17.3   192.168.10.123   <none>        Ubuntu 20.04 LTS   5.4.0-31-generic   docker://19.3.9

And finally, to check that the feature gates we specified are actually enabled, log in to node1 or node2 (the master nodes) and run the following command:

ps aux | grep apiserver | grep feature-gates

This should return quite a long output, including:

--feature-gates=VolumeSnapshotDataSource=True,VolumePVCDataSource=True
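
Alternatively, since Kubespray (via kubeadm) runs the API server as a static pod, you can check its manifest directly on a master node; this assumes the default kubeadm manifest location:

sudo grep feature-gates /etc/kubernetes/manifests/kube-apiserver.yaml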

Let’s finish off with Helm

One of the success criteria of the lab was to make sure Helm was installed. For this, I'll install Helm 3 on the control node (using the steps described on the Helm website):

sudo snap install helm --classic

And to show the version:

dnix@adminhost:~/kubespray$ helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
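
As an optional smoke test of my own (not part of the lab), you can list the releases in the cluster; on a fresh cluster this returns an empty list, which confirms Helm can reach the Kubernetes API via your kubeconfig:

helm list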

Conclusion

That’s all folks! We’ve created our Ubuntu template in part 1, deployed and configured four VMs in part 2, and installed Kubernetes using Kubespray in this post. That completes the lab for now. On to the next lab, where we’ll install the Pure Storage Pure Service Orchestrator to start provisioning storage!

Hope this was useful. Let me know any feedback you have, and see you in one of my next blogs.
