Kubespray (part 2/3): Prepare Ubuntu VMs

About this lab

vTeam Specialization Program

Recently I was nominated to join the Pure Storage vTeam Specialization Program for New Stack. The idea behind the program is to provide a community within Pure where Puritans can learn, develop skills and grow into subject matter experts.

The program consists of training and lab exercises focused on building experience in the New Stack space (Kubernetes, Ansible, OpenStack and more). And since I think there are more people out there who want to learn about New Stack, I thought I'd blog my progress through the lab exercises.

Lab instructions

The purpose of Lab 1 is to build a vanilla Kubernetes cluster using Kubespray.

Name: Install vanilla k8s using Kubespray
Description: Build a 3 node k8s cluster using Kubespray installed on a separate control node
Objective: Have a 3 node Kubernetes cluster running with no added bells or whistles
Task #1: Ensure all 4 nodes are fully patched with the latest software packages (blog post 1)
Task #2: SSH communication between hosts is established (this blog)
Task #3: Install prerequisite software for Kubespray on the control node (blog post 3)
Task #4: Clone the Kubespray repo on your control node (blog post 3)
Task #5: Create an Ansible inventory for the 3 node Kubespray deployment (blog post 3)
Task #6: Customize Kubespray to enable the Helm install and the volume snapshot and volume cloning feature gates in the 3 node k8s cluster (blog post 3)
Task #7: Deploy Kubespray (blog post 3)
Task #8: Check the k8s and Helm versions on the deployed cluster (blog post 3)
Success criteria: 3 node Kubernetes cluster running version 1.17.3 with Helm 3.1 installed and all necessary feature gates open (blog post 3)
Lab 1 goals and tasks

I'll be splitting this lab into a three-part blog series: the first post (read here) covered installing Ubuntu Server, the second one (this post) is about deploying and preparing the VMs that will be used to install Kubernetes, and the third and final one will focus on the Kubernetes deployment using Kubespray.

Cloning my Ubuntu VMs

Since the previous post I have converted my Ubuntu VM to a VMware template, which allows me to easily create multiple VMs for the next part of the lab: deploying Kubernetes. But first, let's clone the template into my VMs. I cloned the template in VMware four times, using the VM names listed below. VMware offers the option to customize the OS or hardware during the clone, but I chose not to, as I'll be configuring the VMs myself (if you'd rather clone from the command line, see the govc sketch below the table).

adminhost: hostname adminhost, IP address 192.168.10.120/24, gateway 192.168.10.254, DNS 8.8.8.8 and 8.8.4.4. This will be our "control node" as mentioned in the lab instructions.

node1: hostname node1, IP address 192.168.10.121/24, gateway 192.168.10.254, DNS 8.8.8.8 and 8.8.4.4. This will become a Kubernetes node.

node2: hostname node2, IP address 192.168.10.122/24, gateway 192.168.10.254, DNS 8.8.8.8 and 8.8.4.4. This will become a Kubernetes node.

node3: hostname node3, IP address 192.168.10.123/24, gateway 192.168.10.254, DNS 8.8.8.8 and 8.8.4.4. This will become a Kubernetes node.

Customizing my new VMs

The next step is to customize the VMs with the network details listed above. As I used DHCP for the template, each new VM is assigned a DHCP address, which VMware conveniently shows, as you can see in the screenshot.

It's interesting to note that I did not explicitly install VMware Tools in my VM; the Ubuntu installation automatically took care of that. Pretty handy I think!
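On Ubuntu Server this is handled by the open-vm-tools package, which the installer pulls in by itself. If you want to double check that it is present and running on your clones, something like this should do (assuming open-vm-tools is indeed what got installed):

systemctl status open-vm-tools --no-pager   # service installed by the Ubuntu installer
vmware-toolbox-cmd -v                       # prints the tools version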

Now we will use SSH to log in to the server. If you're using a Mac, use the following command (replace dnix with your username and 192.168.10.153 with your VM's current IP address):

ssh dnix@192.168.10.153

If you're using PuTTY, enter the IP address in PuTTY and log in with your own username and password.

Setting the IP address

To see the current IP address and network details we first need to install a software package called net-tools.

sudo apt install net-tools

This command will ask you to enter your password and should then install the required software without any further questions. Once done, we can list our current network information as follows:

ifconfig -a

This should output something similar to the following (note the network adapter name, ens160, and its current IP address):

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.153  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::250:56ff:fe80:36a3  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:80:36:a3  txqueuelen 1000  (Ethernet)
        RX packets 2890  bytes 447457 (447.4 KB)
        RX errors 0  dropped 1081  overruns 0  frame 0
        TX packets 451  bytes 45150 (45.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 986  bytes 70722 (70.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 986  bytes 70722 (70.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
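As a side note: the same information is also available without installing net-tools, using the ip command that ships with Ubuntu:

ip -br addr show   # brief overview: one line per interface with its addresses
ip route show      # shows the default gateway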

We will use this information to change the network settings for our VM. The network settings live in the /etc/netplan directory; however, the settings file can have different names, depending on how the system was deployed. Use the following command to identify the correct filename:

ls -l /etc/netplan/
total 4
-rw-r--r-- 1 root root 117 May 25 10:58 00-installer-config.yaml

Now let’s edit this file to set the correct IP address details.

sudo nano /etc/netplan/00-installer-config.yaml

Change the contents of the file as shown below to set the IP address (obviously you'll need to use your own IP details):

# This is the network config written by 'subiquity'
network:
  ethernets:
    ens160:
      dhcp4: no
      addresses: [192.168.10.120/24]
      gateway4:  192.168.10.254
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  version: 2

Now use CTRL-O to save the file and CTRL-X to exit the nano editor. Next we'll apply the configuration using the following command:

sudo netplan apply

Please note that after applying the new network config, the SSH connection will be lost, since we just changed the IP address of the VM. Reconnect to the VM using the new fixed IP address that we entered. If anything went wrong and you're unable to connect to the VM, use the VMware console to log in to the VM and try to find out what went wrong (e.g. use ifconfig -a to show the current IP).
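A tip if you're worried about locking yourself out: instead of netplan apply you can run sudo netplan try from the VMware console; it applies the config and automatically rolls it back unless you confirm within the timeout (120 seconds by default). Afterwards, the ip commands mentioned earlier give a quick sanity check of the new settings:

sudo netplan try          # applies the config, reverts unless you confirm
ip -br addr show ens160   # verify the new static address
ip route show             # verify the default gateway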

Setting the hostname

Next we’ll change our hostname, for which we’ll need to change two files on the VM. But before we do that, let’s just quickly check the current hostname:

dnix@ubuntu:~$ hostname
ubuntu

To change the hostname, we need to edit the /etc/hostname file. There is only one line in this file, specifying the hostname of the server. Change it to the new hostname.

sudo nano /etc/hostname

Now use CTRL-O to save the file and CTRL-X to exit the nano editor.

Next we’ll update our hosts file, so that localhost correctly resolves to our new hostname.

sudo nano /etc/hosts

Find any occurrences of the old hostname (in our example ubuntu) and change them to the new hostname (adminhost):

127.0.0.1 localhost
127.0.1.1 adminhost

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Now use CTRL-O to save the file and CTRL-X to exit the nano editor.
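As a side note, on Ubuntu you can also set the hostname in one step with hostnamectl instead of editing /etc/hostname by hand; the /etc/hosts change above is still needed either way:

sudo hostnamectl set-hostname adminhost   # writes the new name to /etc/hostname
hostnamectl                               # verify the static hostname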

Finally, to make sure the new hostname is picked up everywhere, reboot the VM.

sudo reboot

Repeat these steps for all VMs

Repeat the steps above for all your VMs, so that they all have the correct (fixed) IP addresses and hostnames.

SSH passwordless login

For Kubespray (which is built on Ansible) to work correctly, the control node (called adminhost in my example) needs to be able to log in to the Kubernetes nodes using public/private key authentication, so that no password is required to connect to those hosts.

Log in to the adminhost using SSH and start by creating our public and private keys.

ssh-keygen -t rsa

Accept all the default settings to create the key pair.

Generating public/private rsa key pair.
Enter file in which to save the key (/home/dnix/.ssh/id_rsa):
Created directory '/home/dnix/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/dnix/.ssh/id_rsa
Your public key has been saved in /home/dnix/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:F5pk8ls75c9wgBw+/yFmBy1eWc2zhLMpq69Xetk7xM8 dnix@adminhost
The key's randomart image is:
+---[RSA 3072]----+
| |
| …|
| . o o o o+|
| = = + .=oo|
| S B.=o=. |
| + Bo* o |
| . o.@ B..|
| .* @ +E|
| o+.. +.o|
+----[SHA256]-----+
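If you ever want to script this step, the same key can be generated without the prompts; this is just the non-interactive equivalent of what we did above, with an empty passphrase and the default path:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # -N "" sets an empty passphrase, -f the output file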

Now we have a public and private key stored in our home directory under the hidden folder ~/.ssh:

dnix@adminhost:~$ ls -l ~/.ssh/
total 8
-rw------- 1 dnix dnix 2602 May 27 07:51 id_rsa
-rw-r--r-- 1 dnix dnix 569 May 27 07:51 id_rsa.pub

We should keep the private key (id_rsa) safe, as it is what's used for authentication to the other nodes. If someone has access to your private key, they can log in to all nodes that you have access to. So be careful.

Next we will copy our public key to the other servers using the ssh-copy-id command. This command basically copies our public key (id_rsa.pub) and adds it to the ~/.ssh/authorized_keys file on the destination node:

ssh-copy-id 192.168.10.121
ssh-copy-id 192.168.10.122
ssh-copy-id 192.168.10.123

Execute the commands above and enter your password one last time while copying the keys. Once all nodes are done, make sure key authentication works by connecting to a node with SSH; this should no longer ask for a password:

ssh 192.168.10.121
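Since the three ssh-copy-id commands only differ in the IP address, you can also wrap them, and the verification, in a small loop:

# Copy the public key to each node; you'll type your password once per node
for ip in 192.168.10.121 192.168.10.122 192.168.10.123; do
  ssh-copy-id "$ip"
done

# Quick verification: run a trivial command on each node, no password prompt expected
for ip in 192.168.10.121 192.168.10.122 192.168.10.123; do
  ssh "$ip" hostname
done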

Passwordless sudo

The next thing we'll do is make sure we no longer need to enter a password when we run commands using sudo. We will use the visudo command for this, which is the correct way of changing the sudoers file.

sudo visudo

Once in the editors view, find the line that starts with %sudo and change the line as shown below:

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL) NOPASSWD:ALL

This allows all users that are members of the sudo group to use sudo for all commands without a password. You could also choose to add just your own user to the file, like this:

dnix   ALL=(ALL) NOPASSWD:ALL

However, keep in mind that later lines in the sudoers file can override earlier ones, so if you add your user directly after root AND your user is also a member of the sudo group, the %sudo entry will apply, since it is specified later in the file.

To test whether your changes were successful, log out of the SSH session and log back in. Then check that you can run the following without entering a password:

sudo visudo
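Once you've made the same sudoers change on all nodes, you can check them in one go from the adminhost. sudo's -n (non-interactive) flag makes it fail instead of prompting, so this loop clearly shows where passwordless sudo is not yet working:

# -n makes sudo fail rather than prompt, so this is a safe check for each node
for ip in 192.168.10.121 192.168.10.122 192.168.10.123; do
  ssh "$ip" 'sudo -n true && echo "passwordless sudo OK on $(hostname)"'
done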

Conclusion

With that we have completed the roll-out of the control node and the VMs we'll be using for our Kubernetes deployment.

In the next blog I will continue with the installation of Kubernetes using Kubespray.

You can now continue with part 3 of this lab!
