Kubernetes cluster deployment and OSM attachment

1)    Preparing the Cluster Deployment Environment

The first step is to prepare a number of VMs equal to the number of nodes the cluster should comprise, including the controller/master node. These VMs act as the supporting hardware for each cluster node and provide the needed separation between Kubernetes cluster nodes.

SSH into each of the previously prepared Ubuntu 20.04 VMs (the soon-to-be Kubernetes cluster nodes) and, on each, do the following (the same steps are sketched as shell commands after the list):

  1. "sudo passwd root" to set the root user password
  2. sudo su
  3. sudo nano /etc/ssh/sshd_config
     • Add/uncomment and set "PermitRootLogin yes"
  4. sudo service sshd restart
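As a minimal sketch, the same root-enablement steps as a single shell sequence per node (the sed expression assumes the stock Ubuntu 20.04 sshd_config, which ships a commented "#PermitRootLogin" line):

  sudo passwd root                      # set a root password
  sudo su                               # switch to the root user
  # uncomment/modify the PermitRootLogin directive to allow root SSH logins
  sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
  service sshd restart                  # restart the SSH daemon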

2)    Preparation of Deployment Machine and Deployment Setup Files

SSH keys also have to be copied to all the nodes from the deployment machine (presented below), as per the following tutorial: https://www.webhi.com/how-to/how-to-use-a-private-key-for-ssh-authentication/
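A minimal sketch of the key distribution, run on the deployment machine; the three node IPs are placeholders for your own VM addresses:

  ssh-keygen -t rsa -b 4096             # generate a key pair (accept the defaults)
  # copy the public key to every node, enabling password-less root SSH
  for NODE in 10.0.0.1 10.0.0.2 10.0.0.3; do
      ssh-copy-id root@${NODE}
  done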

Kubespray has to be cloned, and Ansible installed, on the machine used to deploy the cluster (via the Kubespray README – https://github.com/kubernetes-sigs/kubespray.git). Note: I used Ansible [core 2.12.3] and the then-latest version (v2.21.0) of Kubespray.
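The corresponding setup on the deployment machine, following the Kubespray README (checking out the v2.21.0 tag pins the version used here; requirements.txt pulls in a matching Ansible):

  git clone https://github.com/kubernetes-sigs/kubespray.git
  cd kubespray
  git checkout v2.21.0
  sudo pip3 install -r requirements.txt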

Then the inventory/mycluster/hosts.yaml file needs to be generated (automatically, via the provided inventory builder script) and then verified, so that it looks as shown below for a 3-node cluster deployment.

Then, as the README says, check all relevant variables. I only changed the Kubernetes version to 1.24.0, but this tutorial should now also work with the latest versions of Kubernetes; at the time of writing, the default version is 1.26.1.
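A sketch of the inventory generation, per the Kubespray README (the three IPs are placeholders):

  cp -rfp inventory/sample inventory/mycluster
  declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)
  CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

The generated inventory/mycluster/hosts.yaml should then resemble the following 3-node layout (hostnames and IPs are illustrative; the inventory builder places the first two hosts in the control plane and all three in the etcd and worker groups):

  all:
    hosts:
      node1:
        ansible_host: 10.0.0.1
        ip: 10.0.0.1
        access_ip: 10.0.0.1
      node2:
        ansible_host: 10.0.0.2
        ip: 10.0.0.2
        access_ip: 10.0.0.2
      node3:
        ansible_host: 10.0.0.3
        ip: 10.0.0.3
        access_ip: 10.0.0.3
    children:
      kube_control_plane:
        hosts:
          node1:
          node2:
      kube_node:
        hosts:
          node1:
          node2:
          node3:
      etcd:
        hosts:
          node1:
          node2:
          node3:
      k8s_cluster:
        children:
          kube_control_plane:
          kube_node:
      calico_rr:
        hosts: {}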

Once the SSH keys are copied to all the nodes (specifically to enable password-less SSH access), the playbook can be applied/executed. NOTE: it is safer to do all of this as the root user on the machine that Kubespray is applied from, as well!
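A sketch of the playbook invocation, run as root from the Kubespray root directory, per the README:

  cd kubespray
  # deploy the cluster described by the generated inventory
  ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml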

3)    Post-deployment Setup and OSM Attachment Preparation

Once the cluster is up, the OpenEBS storage-class service has to be started on the controller/master node (tutorial at https://platform9.com/learn/v1.0/tutorials/openebs).
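One common way to install OpenEBS and make its local hostpath storage class the default is sketched below, using the operator manifest from the OpenEBS docs (the linked tutorial may instead use Helm; adapt as needed):

  kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
  kubectl get storageclass                  # openebs-hostpath should appear
  kubectl patch storageclass openebs-hostpath \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'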

Then the Kubernetes admin.conf file can be copied from /etc/kubernetes/admin.conf to the OSM machine. NOTE: the server IP inside the file has to be changed to that of the controller/master node.
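A hedged sketch of the copy, with MASTER_IP and OSM_HOST as placeholders; the file is saved under the k8-5gasp.conf name used in the attachment command below:

  MASTER_IP=10.0.0.1                        # controller/master node IP (placeholder)
  scp root@${MASTER_IP}:/etc/kubernetes/admin.conf k8-5gasp.conf
  # point the kubeconfig at the controller's reachable IP
  sed -i "s|server: https://.*:6443|server: https://${MASTER_IP}:6443|" k8-5gasp.conf
  scp k8-5gasp.conf ubuntu@${OSM_HOST}:~/   # OSM_HOST is a placeholder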

Lastly, the cluster can be attached to OSM via the k8scluster-add command, shown in the next section.

4)    OSM K8s Cluster Attachment

osm --user 5gasp_univbris --password 5GASPUNiVBRI$ --project 5GASP_UNIVBRIS k8scluster-add --creds k8-5gasp.conf --version '1.24.0' --vim hpn-site --description "Proper K8s cluster deployment" --k8s-nets '{net1: 5GASP-Management}' cluster
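Once the command returns, the attachment state can be checked from the OSM machine; the cluster should eventually reach the ENABLED state:

  osm k8scluster-list
  osm k8scluster-show cluster               # 'cluster' is the name used above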

5)    OSM Community Slack and Engagement

This was a well-known issue on the OSM Slack, to which I was initially referred by a former colleague, Navdeep Uniyal; at the time, the community had not found a solution for integrating clusters running the later versions of Kubernetes deployed with the latest version of Ansible.

IMPORTANT NOTE: The above versions should be noted and compared against the current versions' requirements, as Ansible and Kubespray have specific dependency and version requirements. If the above does not work, please consult the following for debugging purposes:

  • If the cluster cannot be deployed fully, please check the error logs that Ansible produces and address the errors exactly; they should be fairly straightforward
  • If (at a later point) OSM does not attach the cluster properly, please check the logs of OSM's Resource Orchestrator pod (the Kubernetes pod within the OSM machine whose name contains "RO", in the "osm" namespace) and look for any errors related to the cluster attachment command issued via the LCM, as sketched after this list
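For example (a sketch assuming the standard OSM community installation, where the Resource Orchestrator runs as the "ro" deployment in the "osm" namespace; the exact pod name suffix differs per deployment):

  kubectl -n osm get pods | grep -i ro           # locate the RO pod
  kubectl -n osm logs deployment/ro --tail=200   # inspect recent RO log lines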