Introduction

In today’s world, the Raspberry Pi is not just a tool for hobbyists, but a robust, cost-effective, and scalable solution for a variety of tech projects. But have you ever wondered if there’s a way to harness multiple Raspberry Pis to operate in unison, multiplying their combined computational prowess? Welcome to the realm of Raspberry Pi clusters!

In this guide, we’ll embark on an exciting journey, setting up a Raspberry Pi cluster comprising three nodes: one master and two workers. Our destination? Deploying K3s, a lightweight Kubernetes distribution tailored for small-scale clusters and IoT devices, with the power of Ansible—a renowned automation tool that simplifies complex configurations and deployments.

Whether you’re a seasoned developer, an IT professional, or just someone keen on scaling the boundaries of what’s possible with these tiny computing machines, this tutorial promises a rewarding experience. By the end of it, you’ll have a fully functional Raspberry Pi cluster running K3s, and a plethora of possibilities to explore.

So, plug in those Raspberry Pis, and let’s dive deep into the world of micro-clusters!

A Raspberry Pi cluster with RGB lighting


What you’ll need

Item(s) | Cost (£)
3 x Raspberry Pi 4 Model B – 8GB RAM | 224
1 x GeeekPi Raspberry Pi Cluster Case | 30
1 x 5-Pack Cat6 Cables | 8
1 x Lexar 32GB Micro SD Card 3 Pack | 14
3 x Integral Micro SD USB3.0 Memory | 18
3 x GeeekPi Raspberry Pi 4 PoE HAT | 60
1 x TP-Link 5-Port PoE Switch | 27
Total | 381

Steps

Assemble it

Assembly is pretty straightforward if you follow the instructions that came with the various pieces of hardware. However, there are a few modifications you’ll have to make. The fans on the PoE HATs clash with the acrylic shelves, so you’ll need to remove the fans from the bottom two HATs. You can then use the terminals left exposed on those HATs to power the case fan and its RGB lights.

Create bootable USB images

Follow a headless install via the Ubuntu Raspberry Pi guide. Remember to access the advanced options and set a hostname per Pi, e.g. my-pi-1, my-pi-2. Also enable SSH and set a username and password.

Look up IP addresses

Log in to your router, find your Pis in the device list, and ideally set up DHCP reservations so each one keeps a fixed IP address.

SSH

Verify your connectivity to each of your Pis via SSH using the format ssh yourusername@pi-ip, for example: ssh jack@192.168.0.50

Devcontainer

If you’re working on a Windows machine, there are a few options for installing Ansible; the general guidance tends to be to use WSL. I wanted something repeatable, so I created a devcontainer that I run via VS Code. This dev container installs all of the required dependencies, plus some VS Code extensions for working with Ansible files.

The following need to be created in the root of your project:

Folder and File structure

.devcontainer
 ├─ devcontainer.json
 └─ dev.dockerfile

devcontainer.json

{
    "dockerFile": "dev.dockerfile",
    "postAttachCommand": "chmod 755 /workspaces/PiKubernetes/ansible",
    "customizations": {
        "vscode": {
            "extensions": [
                "redhat.ansible",
                "ms-azuretools.vscode-docker"
            ]
        }
    }
}
dev.dockerfile

FROM ubuntu:22.04

# Set the timezone
ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Install packages and set the UK locale
RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
    apt-get install -y python3-pip ansible vim sshpass locales && \
    locale-gen en_GB.UTF-8 && \
    pip3 install ansible-lint

# Set the locale environment variables
ENV LANG=en_GB.UTF-8
ENV LANGUAGE=en_GB:en
ENV LC_ALL=en_GB.UTF-8

Ansible

Create your Ansible inventory; this is where you’ll put the names and IP addresses of your Pis, as well as a variable for the output path of your node_token (this will be used later).

ansible.cfg

Disable host_key_checking so that SSH doesn’t prompt us to confirm host keys, which would block Ansible.

[defaults]
host_key_checking = false
inventory = hosts.ini

hosts.ini

[raspberry_pis]
pi-1 ansible_host=192.168.0.1 ansible_user=myuser
pi-2 ansible_host=192.168.0.2 ansible_user=myuser
pi-3 ansible_host=192.168.0.3 ansible_user=myuser

[raspberry_pi_masters]
pi-1

[raspberry_pi_workers]
pi-2
pi-3

[all:vars]
node_token_dest="/tmp/ansible/node-token.txt"
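With the inventory in place, it’s worth checking that Ansible can actually reach every node before running any playbooks. A quick way to do this (assuming password-based SSH, as set up earlier) is Ansible’s built-in ping module:

ansible raspberry_pis -m ping --ask-pass

Each host should respond with "pong"; if one doesn’t, re-check its IP address and credentials in hosts.ini.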

bootstrap-pi.yml

The following playbook updates the apt cache and upgrades all packages. We then install linux-modules-extra-raspi, which fixes an issue that prevents k3s from starting. Finally, we disable the UFW firewall; alternatively, you can keep it enabled and allow traffic on the relevant ports, but the k3s documentation recommends disabling it. Since I won’t be routing public traffic through my cluster, I chose to disable it.

---
- name: Bootstrap the Raspberry Pi ready for k3s
  hosts: raspberry_pis
  become: true
  tasks:
    - name: Update repositories cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Upgrade all packages to the latest version
      ansible.builtin.apt:
        upgrade: true

    - name: Install Packages
      ansible.builtin.apt:
        pkg:
          # We have to install this to fix this issue - https://github.com/k3s-io/k3s/issues/5423, https://github.com/k3s-io/k3s/issues/4234
          - linux-modules-extra-raspi

    - name: Disable UFW
      community.general.ufw:
        state: disabled
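One caveat: if the upgrade step pulled in a new kernel, the extra modules are installed for that new kernel and won’t match the one currently running, so k3s may still fail to start until the Pi restarts. If you hit this, an optional reboot task could be appended to the playbook; this is a sketch of my own rather than part of the run above:

    # Optional: restart each Pi so the upgraded kernel and its extra modules are loaded
    - name: Reboot to load the new kernel and modules
      ansible.builtin.reboot:
        reboot_timeout: 300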

setup-master.yml

For the master node, we download the k3s install script and execute it. We then extract the node_token, which worker nodes need in order to join the master. Additionally, we output the kubeconfig, which is useful if you want to connect to your cluster externally.

If you want to use Traefik for your ingress, you can remove the --disable traefik argument and the Install Nginx Ingress Bare Metal task. I’ve chosen Nginx as I prefer it… mainly because I have more experience with it!

# K3s provides an installation script that is a convenient way to install it as a service on systemd or openrc based systems.
# This script is available at https://get.k3s.io
# After running this installation:
# - The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed
# - Additional utilities will be installed, including kubectl, crictl, ctr, k3s-killall.sh, and k3s-uninstall.sh
# - A kubeconfig file will be written to /etc/rancher/k3s/k3s.yaml and the kubectl installed by K3s will automatically use it
# curl -sfL https://get.k3s.io | sh -
---
- name: Configure Master Nodes
  hosts: raspberry_pi_masters
  become: true
  tasks:
    - name: Download k3s installer script
      ansible.builtin.get_url:
        url: https://get.k3s.io
        dest: "/tmp/k3s_install.sh"
        mode: '0755'

    - name: Execute k3s installer script
      ansible.builtin.command: "/tmp/k3s_install.sh --disable traefik"
      changed_when: true

    - name: Read node-token from master
      ansible.builtin.slurp:
        path: "/var/lib/rancher/k3s/server/node-token"
      register: node_token

    - name: Print node_token
      ansible.builtin.debug:
        var: node_token.content | b64decode

    - name: Ensure directory exists
      ansible.builtin.file:
        path: "{{ node_token_dest | dirname }}"
        state: directory
        mode: '0755'

    - name: Save node_token to file
      ansible.builtin.copy:
        content: "{{ node_token.content | b64decode }}"
        dest: "{{ node_token_dest }}"
        # The token is a secret, so restrict it to the owner
        mode: '0600'

    - name: Copy file from remote host to Ansible controller
      ansible.builtin.fetch:
        src: "{{ node_token_dest }}"
        dest: "{{ node_token_dest }}"
        flat: true
        validate_checksum: true

    - name: Read kubeconfig
      ansible.builtin.slurp:
        path: "/etc/rancher/k3s/k3s.yaml"
      register: kubeconfig

    - name: Print kubeconfig
      ansible.builtin.debug:
        var: kubeconfig.content | b64decode

    - name: Install Nginx Ingress Bare Metal
      ansible.builtin.command: "kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/baremetal/deploy.yaml"
      changed_when: true
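If you do want to manage the cluster from your own machine, note that the printed kubeconfig points at 127.0.0.1. A rough sketch of how you might use it (assuming you’ve saved it locally as k3s.yaml and your master is 192.168.0.1, as in the inventory above):

# Point the kubeconfig at the master node instead of localhost
sed -i 's/127.0.0.1/192.168.0.1/' k3s.yaml
# Use it for this shell session and test the connection
export KUBECONFIG=$PWD/k3s.yaml
kubectl get nodes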

setup-workers.yml

Setting up the worker nodes is similar to the master node above, except that we provide two environment variables: K3S_TOKEN, which holds the node_token from above, and K3S_URL, which holds the URL of the master node.

You’ll need to replace K3S_URL: https://192.168.0.1:6443 with the IP address of your master node.

# K3s provides an installation script that is a convenient way to install it as a service on systemd or openrc based systems.
# This script is available at https://get.k3s.io
# After running this installation:
# - The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed
# - Additional utilities will be installed, including kubectl, crictl, ctr, k3s-killall.sh, and k3s-uninstall.sh
# - A kubeconfig file will be written to /etc/rancher/k3s/k3s.yaml and the kubectl installed by K3s will automatically use it
# curl -sfL https://get.k3s.io | sh -
---
- name: Configure Worker Nodes
  hosts: raspberry_pi_workers
  become: true
  tasks:
    - name: Download k3s installer script
      ansible.builtin.get_url:
        url: https://get.k3s.io
        dest: "/tmp/k3s_install.sh"
        mode: '0755'

    - name: Read node-token from master
      ansible.builtin.set_fact:
        k3s_token: "{{ lookup('file', node_token_dest) }}"

    - name: Print the contents of K3S_TOKEN
      ansible.builtin.debug:
        msg: "{{ k3s_token }}"

    - name: Execute k3s installer script
      ansible.builtin.command: "/tmp/k3s_install.sh"
      environment:
        K3S_TOKEN: "{{ k3s_token }}"
        K3S_URL: https://192.168.0.1:6443
      changed_when: true
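The workers run k3s as an agent rather than a server, so kubectl isn’t available on those nodes. If a worker doesn’t join the cluster, a useful first check (my suggestion, not a step from the playbooks) is the agent service and its logs on that Pi:

sudo systemctl status k3s-agent
sudo journalctl -u k3s-agent -f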

Finally, we run our playbooks as follows:

ansible-playbook bootstrap-pi.yml --ask-pass --ask-become-pass
ansible-playbook setup-master.yml --ask-pass --ask-become-pass
ansible-playbook setup-workers.yml --ask-pass --ask-become-pass

Once all of the above has run, you can SSH into the master node and run kubectl get nodes to see the status of the nodes (you may need sudo, as the k3s kubeconfig is root-owned by default); it should show all three nodes.
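The output should look something like this (names, ages, and versions will differ on your cluster):

NAME   STATUS   ROLES                  AGE   VERSION
pi-1   Ready    control-plane,master   10m   v1.27.4+k3s1
pi-2   Ready    <none>                 5m    v1.27.4+k3s1
pi-3   Ready    <none>                 5m    v1.27.4+k3s1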

Congrats! You now have a working Pi cluster and can start deploying your own containerised applications to it.
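As a quick smoke test (my suggestion, not part of the playbooks above), you could spin up a throwaway nginx deployment and watch the pods get scheduled across the nodes:

kubectl create deployment hello --image=nginx --replicas=3
kubectl get pods -o wide

# Clean up when you’re done
kubectl delete deployment hello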
