Etcd is a key-value data store. It is the default database for storing cluster state in Kubernetes: you can find there information such as connected nodes or deployed resources. Etcd can be deployed in high-availability mode. When one of the etcd nodes goes down, the remaining ones elect a new leader. For this to work, you need an odd number of nodes. Etcd replicates every write to a majority of nodes before committing it, in order to maintain consistency.
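The odd-number requirement comes from quorum arithmetic: a write needs floor(n/2) + 1 acknowledgements, so adding a node to make the count even buys you no extra fault tolerance. A quick sketch:

```shell
# quorum = floor(n/2) + 1; failures tolerated = n - quorum
for n in 1 2 3 4 5; do
  echo "nodes=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( n - (n / 2 + 1) ))"
done
# nodes=3 tolerates 1 failure; nodes=4 still tolerates only 1
```

Note that a 4-node cluster tolerates exactly as many failures as a 3-node one, which is why 3 or 5 nodes are the usual choices.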

Most k8s distributions ship with etcd preinstalled by default. However, k3s uses SQLite. For development purposes, SQLite is good enough, but if you want to mimic a production environment, I would switch to etcd. In k3s this can be done by editing /etc/systemd/system/k3s.service and adding --cluster-init to ExecStart:

ExecStart=/usr/local/bin/k3s \
    server --disable traefik \
    --cluster-init
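If you prefer not to touch the unit file, the same settings can also go into k3s's configuration file, which the server reads on start. A sketch, assuming the default k3s config path:

```yaml
# /etc/rancher/k3s/config.yaml -- equivalent of the flags above
cluster-init: true
disable:
  - traefik
```

This has the advantage of surviving package upgrades that rewrite the systemd unit.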

Then reload systemd and restart k3s:

sudo systemctl daemon-reload 
sudo systemctl restart k3s.service   

After the restart, there will be two new open ports:

sudo netstat -tulpn | grep -E '2379|2380'                                                                                                  
tcp        0 192.168.1.43:2380       0.0.0.0:*        LISTEN      2266838/k3s server
tcp        0 192.168.1.43:2379       0.0.0.0:*        LISTEN      2266838/k3s server
tcp        0 127.0.0.1:2379          0.0.0.0:*        LISTEN      2266838/k3s server
tcp        0 127.0.0.1:2380          0.0.0.0:*        LISTEN      2266838/k3s server

Port 2379 is for client communication (the k8s API server, or etcdctl); port 2380 is for internal peer-to-peer communication between etcd members.
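To double-check that the client port actually serves a healthy etcd, you can ask the member directly (a sketch; certificate paths are the k3s defaults, as used below):

```shell
# ask etcd whether this member is healthy over the client port
sudo ETCDCTL_API=3 etcdctl \
  --endpoints='https://127.0.0.1:2379' \
  --cacert='/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt' \
  --cert='/var/lib/rancher/k3s/server/tls/etcd/server-client.crt' \
  --key='/var/lib/rancher/k3s/server/tls/etcd/server-client.key' \
  endpoint health
```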

Let's connect to the DB and investigate its content:

sudo ETCDCTL_ENDPOINTS='https://127.0.0.1:2379' ETCDCTL_CACERT='/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt' ETCDCTL_CERT='/var/lib/rancher/k3s/server/tls/etcd/server-client.crt' ETCDCTL_KEY='/var/lib/rancher/k3s/server/tls/etcd/server-client.key' ETCDCTL_API=3 etcdctl get / --prefix --keys-only

The command is quite long because we need to supply the CA certificate, client certificate, and client key. It lists all the keys in the DB. Client authentication is required because etcd contains sensitive data, such as cluster secrets.
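The keys follow a predictable layout: for namespaced objects, Kubernetes stores them under /registry/&lt;resource&gt;/&lt;namespace&gt;/&lt;name&gt;, so the key alone tells you what it refers to. A small sketch with a made-up key:

```shell
# split a /registry/... key into its parts (the key is a hypothetical example)
key="/registry/deployments/kube-system/coredns"
IFS=/ read -r _ _ resource namespace name <<< "$key"
echo "resource=$resource namespace=$namespace name=$name"
# -> resource=deployments namespace=kube-system name=coredns
```

Cluster-scoped objects simply omit the namespace segment.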

Because etcd reflects the current state of the cluster, it is the best option for creating backups. Theoretically, you could run something like:

kubectl get all -A > backup.yaml

but then you would also need to add CRDs, config maps, secrets, roles, role bindings, cluster roles, cluster role bindings, namespaces, storage classes, and ingresses. And even after all that, there is a chance the backup will not be consistent.
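If you do go the kubectl route, you can at least avoid hand-maintaining that resource list by asking the API server for every listable type. A sketch, assuming kubectl is already configured against your cluster:

```shell
# enumerate every namespaced, listable resource type and dump each to its own file
for r in $(kubectl api-resources --verbs=list --namespaced -o name); do
  kubectl get "$r" -A -o yaml > "backup-$r.yaml"
done
```

This still does not solve the consistency problem, since each resource type is fetched at a slightly different moment.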

A much better option is one of the following.

The first is the k3s built-in mechanism: sudo k3s etcd-snapshot save --name {{name}}. You can list the snapshots either with sudo k3s etcd-snapshot list:

Name                        Location                                                                    Size     Created
a-fedora-1.home-1731606336  file:///var/lib/rancher/k3s/server/db/snapshots/a-fedora-1.home-1731606336  57872416 2024-11-14T18:45:36+01:00
19-fedora-1.home-1731607482 file:///var/lib/rancher/k3s/server/db/snapshots/19-fedora-1.home-1731607482 60571680 2024-11-14T19:04:42+01:00
19-fedora-1.home-1731607495 file:///var/lib/rancher/k3s/server/db/snapshots/19-fedora-1.home-1731607495 60571680 2024-11-14T19:04:55+01:00

or with kubectl get etcdsnapshotfile:

kubectl get etcdsnapshotfile -o custom-columns="NAME:.metadata.name,SIZE:.status.size"                                                                                                                                                                                                         
NAME                                       SIZE
local-19-fedora-1.home-1731607482-a134be   60571680
local-19-fedora-1.home-1731607495-4ec46a   60571680
local-19-fedora-1.home-1731607502-b86a8f   60571680
local-19-fedora-1.home-1731607509-1ce0fa   60571680
local-19-fedora-1.home-1731607518-a5c0f5   60571680
local-a-fedora-1.home-1731606336-3a1503    57872416
local-a-fedora-1.home-1731620822-e70700    60571680

The snapshot schedule can be configured by editing /etc/rancher/k3s/config.yaml; it can look like this:

# Enable etcd snapshot backup
etcd-snapshot-schedule-cron: "0 */6 * * *"
etcd-snapshot-retention: 100
etcd-snapshot-compress: true
etcd-snapshot-dir: "/var/lib/rancher/k3s/server/db/snapshots"
etcd-snapshot-name: "k3s_etcd_snapshot"

The second method, independent of the k3s toolset, is:

sudo ETCDCTL_ENDPOINTS='https://127.0.0.1:2379' ETCDCTL_CACERT='/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt' ETCDCTL_CERT='/var/lib/rancher/k3s/server/tls/etcd/server-client.crt' ETCDCTL_KEY='/var/lib/rancher/k3s/server/tls/etcd/server-client.key' ETCDCTL_API=3 etcdctl snapshot save {{path}}
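Whichever way you take the snapshot, it is worth verifying the file before relying on it; etcdctl can inspect a snapshot offline (a sketch, with the path left as a placeholder):

```shell
# print hash, revision, total keys, and size of a snapshot file
sudo ETCDCTL_API=3 etcdctl snapshot status {{path}} -w table
```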

It is much more generic, and closer to how backups are done on vanilla k8s clusters.

tl;dr: consider creating backups of both etcd and your persistent volumes.
