
Homelab (1): Using Kubernetes (K8s) or K3s to Build a Home Lab


Motivation

After about five years of building my own home server and deploying various private services with containers, I had settled on a workflow that manages everything with Docker Compose. That approach is elegant and useful, but it is limited to a single machine. Recently I found myself needing to manage multiple computers, and I finally felt the necessity of moving from a single server to a server cluster, that is, a Homelab.

Specifically, the computer I previously used as a server has a relatively large amount of memory and disk space but no graphics card, which was sufficient for the services I had deployed so far. Now I also have a computer with a graphics card and want to run some large language models on it. I could of course deploy them on that machine in isolation, but that would be disconnected from my existing setup. Instead, I want a solution that manages both computers at once and makes it easy to expand when I add more machines in the future.

Of course, before transitioning to Kubernetes, it is best to have a good understanding of container concepts. I have previously written a series of articles on containers (mainly Docker) that can serve as a reference.

Homelab

What is a Homelab

As the name suggests, a Homelab is a laboratory set up at home for learning, experimentation, and development. A Homelab typically includes one or more servers, networking equipment, storage devices, and so on, which can be used to run various services and applications.

The scale of a Homelab can vary greatly. If you have sufficient financial resources, you can purchase multiple high-performance servers to build a large home lab. If your budget is limited, you can also use a regular computer or even a Raspberry Pi to set up a small home lab.

Why Do You Need a Homelab

Server clusters and Homelabs may sound remote to individual users, but that is not the case. I would even argue that many people who tinker with self-hosted services will gradually drift toward a Homelab. Even if you do not have multiple computers or servers, you can still try using Kubernetes (K8s) or K3s to manage your containerized applications; later, when you acquire more machines, you can easily expand your existing Kubernetes cluster onto them.

In summary, a Homelab offers high flexibility and scalability to meet various needs of individual users. Even if you only have simple requirements and basic hardware, you can learn a lot of new technologies and knowledge by building a Homelab.

Kubernetes (K8s) and K3s

Kubernetes (K8s) is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications, and it is one of the most popular foundations for building a Homelab.

Of course, Kubernetes itself is quite complex, involving many concepts and components such as Pods, Services, Deployments, and Ingress. For beginners, it can seem overly complicated and somewhat difficult to understand. K3s is a lightweight Kubernetes distribution designed for resource-constrained environments: it strips out optional components, ships as a single binary, and bundles sensible defaults (containerd as the container runtime, Traefik as the Ingress Controller, and a local-path storage provisioner), which makes it easier to install and manage. K3s is well suited to home labs and small clusters.

In this series of articles, we will start with a very simple application and use K3s to build a Kubernetes cluster from scratch, gradually expanding to more complex applications and multi-node setups.

K3s Basics

Kubernetes (K8s) is a complex system with many components. K3s simplifies this by retaining only the most essential components. Here, we will introduce only the core and fundamental concepts to provide a basic understanding of how K3s operates, with other components and concepts introduced as needed later.

The following diagram shows a basic framework example of K3s running on a single-node server:

K3s Basic Scheme

When a user makes a request to a service, the process is as follows:

  1. When an external user’s request reaches the K3s cluster, it is first handled by the Ingress Controller (usually Traefik).
  2. The Ingress Controller forwards the request to the appropriate Service based on the routing rules defined in the Ingress resource.
  3. The Service forwards the request to the corresponding Pod, where the actual application container is running.

Additionally, administrators can manage the components and resources within the K3s cluster using the command-line tool kubectl.
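For example, here are a few representative kubectl commands, all of which appear in action later in this article (the manifest filename here is just a placeholder):

kubectl get nodes             # list the nodes in the cluster
kubectl get pods -A           # list Pods across all namespaces
kubectl apply -f app.yaml     # create or update resources from a manifest file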

Next, let’s briefly introduce these concepts.

Pod

A Pod is the smallest deployable unit in Kubernetes and can contain one or more containers. Containers in the same Pod share network and storage resources. A Pod is typically used to run a single application or service.
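As a minimal illustration (this standalone Pod is not used elsewhere in this article; in practice you almost always create Pods indirectly through a Deployment), a bare Pod manifest looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: whoami-pod          # illustrative name
spec:
  containers:
  - name: whoami
    image: traefik/whoami   # the same image we deploy later in this article
    ports:
    - containerPort: 80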

Service

A Service is a Kubernetes abstraction that defines an access policy for a set of Pods. A Service can be reached through a stable IP address and port, regardless of how the actual IP addresses of the underlying Pods change. Services come in several types, including ClusterIP, NodePort, and LoadBalancer:

  • ClusterIP: The default type; the Service can only be accessed from within the cluster.
  • NodePort: The Service is exposed on a static port on every node, so it can be reached from outside the cluster.
  • LoadBalancer: The Service is exposed through an external load balancer, typically provided by a cloud platform (K3s ships a simple built-in implementation called ServiceLB).
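As an illustration, a NodePort variant of the whoami Service we deploy later might look like the following sketch (the nodePort value is an arbitrary example from the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: whoami-nodeport     # illustrative name
spec:
  type: NodePort
  selector:
    app: whoami             # matches Pods labeled app: whoami
  ports:
  - port: 80                # the Service's port inside the cluster
    targetPort: 80          # the container port on the Pod
    nodePort: 30080         # exposed on every node; example value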

Ingress and Ingress Controller

Ingress is a resource in Kubernetes used to manage routing rules for external access to services within the cluster. Ingress can route requests to different Services based on domain names or paths. Ingress is typically used in conjunction with an Ingress Controller, which is responsible for implementing the routing rules defined in the Ingress resource.

Deployment

Deployment is a controller in Kubernetes used to manage the deployment and updating of Pods. A Deployment can define the number of replicas of a Pod, update strategies, and more. With a Deployment, we can easily scale the number of Pods up or down, and perform rolling updates without affecting the availability of the service.
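For example, once the whoami Deployment from later in this article exists, it can be scaled with standard kubectl subcommands (the replica count here is arbitrary):

kubectl scale deployment whoami --replicas=3   # run three identical Pods
kubectl rollout status deployment whoami       # watch the rollout complete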

K3s Installation and Configuration

Next, let’s take a single-node cluster as an example to illustrate how to use K3s. We will then discuss how to expand the cluster when more nodes are added.

Here, we choose to deploy the simplest whoami service.

1. Install K3s

Installing K3s on a Linux server is very simple, just run the following command:

curl -sfL https://get.k3s.io | sudo sh -
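As an aside, the install script can be customized through environment variables; for example, the documented INSTALL_K3S_VERSION variable pins a specific release (the version shown here is just the one that appears later in this article):

curl -sfL https://get.k3s.io | sudo INSTALL_K3S_VERSION="v1.32.5+k3s1" sh -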

After it finishes running, K3s should be installed successfully and you should see output similar to the following:

[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink '/etc/systemd/system/multi-user.target.wants/k3s.service' → '/etc/systemd/system/k3s.service'.
[INFO]  systemd: Starting k3s

This indicates that the command has done the following:

  1. Downloaded and installed K3s.
  2. Created symlinks for kubectl, crictl, and ctr, which are tools used to manage the K3s cluster. Among them:
    • kubectl is the command-line tool for Kubernetes, used to manage Kubernetes clusters.
    • crictl is the command-line tool for the Container Runtime Interface (CRI), used to manage containers.
    • ctr is the command-line tool for the container runtime, used to interact directly with the container runtime.
  3. Created the k3s-killall.sh and k3s-uninstall.sh scripts for stopping and uninstalling K3s.
    • Run sudo /usr/local/bin/k3s-killall.sh to stop the K3s service and all of its containers.
    • Run sudo /usr/local/bin/k3s-uninstall.sh to uninstall K3s completely.
  4. Created an environment file /etc/systemd/system/k3s.service.env for configuring the K3s service.
  5. Created the K3s systemd service file /etc/systemd/system/k3s.service and enabled the service. After that, the K3s service will start automatically every time the computer boots.
  6. Started the K3s service.
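Because K3s runs as a systemd service, you can also check on it or control it with the usual systemd tooling:

systemctl status k3s          # check whether K3s is running
sudo systemctl restart k3s    # restart the service if needed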

2. Verify K3s Installation

After the installation is complete, K3s creates a configuration file named k3s.yaml in the /etc/rancher/k3s directory. This file contains the configuration information for the K3s cluster, including the API server address, authentication information, and more.

As mentioned above, K3s also installs kubectl as the command-line tool for managing the cluster. When kubectl runs, it needs the API server address and authentication information to know which cluster it is talking to; by default, it reads them from /etc/rancher/k3s/k3s.yaml. If you are not running kubectl as root, you may hit permission errors reading that file. In that case, copy /etc/rancher/k3s/k3s.yaml into the ~/.kube directory and change its ownership so that your user can read the cluster information:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
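Two alternatives, if you prefer not to copy the file: K3s bundles kubectl under its own binary, and kubectl also honors the KUBECONFIG environment variable:

sudo k3s kubectl get nodes         # use the kubectl bundled with K3s, as root
export KUBECONFIG=~/.kube/config   # or tell kubectl explicitly which config to read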

Now you can use kubectl to manage the K3s cluster. You can run the following command to verify that K3s is installed successfully:

kubectl get nodes

If everything is working correctly, you should see output similar to the following:

NAME                  STATUS   ROLES                  AGE   VERSION
fedora.attlocal.net   Ready    control-plane,master   15s   v1.32.5+k3s1

You can check the status of all Pods in the cluster (the -o wide flag adds the IP and node columns) by running:

kubectl get pods -A -o wide

If everything is working correctly, you should see output similar to the following:

NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE   IP       NODE                  NOMINATED NODE   READINESS GATES
kube-system   coredns-697968c856-scmft                  0/1     ContainerCreating   0          15s   <none>   fedora.attlocal.net   <none>           <none>
kube-system   helm-install-traefik-crd-7dkch            0/1     ContainerCreating   0          15s   <none>   fedora.attlocal.net   <none>           <none>
kube-system   helm-install-traefik-qkl97                0/1     ContainerCreating   0          15s   <none>   fedora.attlocal.net   <none>           <none>
kube-system   local-path-provisioner-774c6665dc-jrbrj   0/1     ContainerCreating   0          15s   <none>   fedora.attlocal.net   <none>           <none>
kube-system   metrics-server-6f4c6675d5-v97zv           0/1     ContainerCreating   0          15s   <none>   fedora.attlocal.net   <none>           <none>

This output shows that K3s has started several Pods in the kube-system namespace: the cluster DNS service (coredns), two one-shot Helm jobs that install the Traefik Ingress Controller and its CRDs (helm-install-traefik-qkl97 and helm-install-traefik-crd-7dkch), the default K3s storage provisioner (local-path-provisioner), and the Kubernetes metrics server (metrics-server). The ContainerCreating status just means the images are still being pulled; after a minute or so the Pods should be Running (and the Helm install jobs will show Completed).
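Rather than re-running the command until everything is ready, you can watch the Pods change state live (press Ctrl-C to stop):

kubectl get pods -A --watch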

3. Deploy a Simple Application (whoami)

Now we can deploy a simple whoami service to test the K3s cluster. whoami is a very simple HTTP service that returns the request’s IP address, request headers, and other information, making it ideal for testing the K3s cluster.

Create the whoami Application Manifest

Like standard Kubernetes, K3s defines application deployments in YAML files called manifests. Create a file named whoami.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
spec:
  ingressClassName: traefik
  rules:
  - host: whoami.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami
            port:
              number: 80

Although this is a single YAML file, it actually contains three separate resources. For simplicity, we have placed all three in one file; in later articles, we will look at how to better organize and manage applications in K3s by splitting them across files.

These three parts are separated by --- and define:

  1. Deployment: This defines a Deployment named whoami with 1 replica, using the official traefik/whoami image and listening on port 80.
  2. Service: This defines a Service named whoami of type ClusterIP, meaning it can only be accessed from within the cluster. It listens on port 80 and forwards requests to port 80 of the whoami container in the Pod.
  3. Ingress: This defines an Ingress named whoami-ingress. It uses Traefik as the Ingress Controller and routes requests whose Host is whoami.example.com to the Service named whoami.

Apply the whoami Service

Now we can use kubectl to apply this manifest file. Run the following command:

kubectl apply -f whoami.yaml

If everything is working correctly, you should see output similar to the following:

deployment.apps/whoami created
service/whoami created
ingress.networking.k8s.io/whoami-ingress created
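Conveniently, the same manifest can later be used to tear all three resources down again:

kubectl delete -f whoami.yaml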

Verify the whoami Service

First, let’s confirm that the whoami service has been successfully deployed. Run the following command:

kubectl get deployments

If everything is working correctly, you should see output similar to the following:

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
whoami        1/1     1            1           2m

Then, let’s check the status of the Pods in the cluster. Run the following command:

kubectl get pods

If everything is working correctly, you should see output similar to the following:

NAME                        READY   STATUS    RESTARTS   AGE
whoami-5b6c7f8d9f-2j4k5      1/1     Running   0          2m
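If a Pod is not Running as expected, kubectl's standard inspection subcommands are the first place to look (substitute your own Pod name from the output above):

kubectl describe pod whoami-5b6c7f8d9f-2j4k5   # detailed state and recent events
kubectl logs deployment/whoami                 # container logs, addressed via the Deployment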

Next, let’s confirm that the Service for whoami has been created successfully. Run the following command:

kubectl get svc

If everything is working correctly, you should see output similar to the following:

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
whoami       ClusterIP   10.43.42.148   <none>        80/TCP    20s
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   3m15s

Access the whoami Service via Ingress

Now we can access the whoami service through the Ingress. First, let's test from inside the local network. On the machine where K3s itself is running, you can use the following command:

curl "Host: whoami.example.com" http://localhost

If you are accessing it from another computer on the same local area network, you can use the following command (replace 192.168.1.233 with your K3s server’s IP address):

curl "Host: whoami.example.com" http://192.168.1.233

We set the Host header explicitly because the Ingress rule matches on it; without that header, Traefik would have no way of knowing which Service the request is meant for.
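If you would rather test in a browser than with curl, a common trick is to map the test domain to the server's address in the client machine's hosts file (using the example LAN IP from above):

# /etc/hosts on the client (C:\Windows\System32\drivers\etc\hosts on Windows)
192.168.1.233   whoami.example.com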

If everything is working correctly, you should see output similar to the following:

Hostname: whoami-64f6cf779d-zxsm4
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.9
IP: fe80::b43d:c0ff:fe52:80bd
RemoteAddr: 10.42.0.8:39990
GET / HTTP/1.1
Host: whoami.example.com
User-Agent: curl/8.12.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.42.0.1
X-Forwarded-Host: whoami.example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-c98fdf6fb-5q6m6
X-Real-Ip: 10.42.0.1

If you see output like the above, the whoami service is running successfully. Note, however, that X-Forwarded-For and X-Real-Ip are both 10.42.0.1, which is an internal IP address of the K3s cluster, not the address of the machine you sent the request from. This is because, with the default configuration, the request is source-NATed as it passes through the cluster's networking layer before the Ingress Controller (Traefik) forwards it to whoami, so the original client address is lost.

TODO: How to fix this problem?
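One commonly suggested approach, assuming you are using the Traefik and ServiceLB components bundled with K3s, is to set externalTrafficPolicy: Local on the Traefik Service so that incoming traffic is no longer source-NATed. K3s lets you customize its packaged Helm charts by dropping a HelmChartConfig manifest into its manifests directory; a sketch (untested in this article):

# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    service:
      spec:
        externalTrafficPolicy: Local   # preserve the client's source IP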

If you want to access the whoami service from outside your local network, you need to resolve the domain name whoami.example.com to the public IP address of your K3s server. Then you can reach the whoami service through a browser or any other HTTP client.

If everything is working correctly, you should see something like the following in your browser:

Hostname: whoami-64f6cf779d-ktwm9
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.18
IP: fe80::1453:3cff:fe59:5835
RemoteAddr: 10.42.0.8:50132
GET / HTTP/1.1
Host: whoami.example.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
Accept-Encoding: gzip, br
Accept-Language: en
Cache-Control: max-age=0
Priority: u=0, i
Sec-Ch-Ua: "Chromium";v="136", "Google Chrome";v="136", "Not.A/Brand";v="99"
Sec-Ch-Ua-Mobile: ?0
Sec-Ch-Ua-Platform: "macOS"
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 10.42.0.11
X-Forwarded-Host: whoami.example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-c98fdf6fb-5q6m6
X-Real-Ip: 10.42.0.11

Conclusion

In this article, we introduced the basics of K3s and demonstrated how to deploy a simple whoami service on a single-node K3s cluster.

In the upcoming articles, we will continue to expand the K3s cluster, covering how to deploy more complex applications, manage multi-node clusters, and utilize other features of K3s to meet various needs.

Acknowledgments

I referenced the video From Zero to Hero: K3s, Traefik & Cloudflare Your Home Lab Powerhouse by YouTuber LinuxCloudHacks while learning how to use K3s.
