A docker registry is an integral companion to every k8s cluster: it stores the application images that the cluster pulls and runs. In general, we have two options for storing docker images:

  • public registries, like Docker Hub or the GitLab Container Registry
  • a self-hosted docker registry

As always, self-hosting gives a set of advantages:

  • enhanced security and privacy
  • reduced dependency on external services – e.g. Docker Hub imposes limits on anonymous and free accounts
  • compliance requirements
  • control over features and updates
  • higher download/upload speeds and lower latency
  • additional options – for a docker registry that would be multi-architecture support or automated vulnerability scanning

Let’s take a look at possible self-hosted options:

  • docker registry provided by Docker – a simple, lightweight solution with basic features only
  • Harbor – built on top of the docker registry; adds user management, image replication, a vulnerability scanner and more. But with a bigger feature list comes a bigger footprint
  • Quay – Red Hat's registry with a feature list similar to Harbor's; adds integration with OpenShift
  • Portus – not maintained for a long time
  • Artifactory – a universal artifact manager
  • Nexus – another multi-purpose repository manager
  • GitLab Container Registry – part of the GitLab package

As I am interested in hosting docker images only, the best choice would be the plain docker registry or Harbor/Quay. The first option seems best for a simple lab environment, but for educational purposes I will choose Harbor. If memory/CPU ever becomes an issue, I can migrate to the plain docker registry.

Because Harbor is available as a Helm chart, start by adding the Helm repository:

helm repo add harbor https://helm.goharbor.io
helm fetch harbor/harbor --untar

Now, prepare the values. The full list can be found here: https://github.com/goharbor/harbor-helm/blob/main/values.yaml

The values I used:

expose:
  ingress:
    hosts:
      core: registry.piasecki.it
externalURL: https://registry.piasecki.it
harborAdminPassword: "redacted"
nginx:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
portal:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
core:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
jobservice:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
registry:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
trivy:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
database:
  internal:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/arch
                  operator: In
                  values:
                    - amd64
redis:
  internal:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/arch
                  operator: In
                  values:
                    - amd64
exporter:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64

It is a little bit repetitive, but it is what it is. I had to add the affinity rules because Harbor does not support the ARM architecture, and my cluster is a mix of amd64 and ARM nodes.
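
If the repetition bothers you, YAML anchors are an option – Helm's YAML parser expands them when the values file is loaded, and the chart simply ignores the extra top-level key that holds the anchor. A sketch (untested against this exact chart):

x-amd64-affinity: &amd64affinity
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64

nginx:
  affinity: *amd64affinity
portal:
  affinity: *amd64affinity
# ...and so on for the remaining components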

In k8s, affinity is a set of rules the scheduler takes into account when placing workloads. Basically, there are two options: flexible and strict. The flexible one is preferredDuringSchedulingIgnoredDuringExecution – the scheduler tries to find a node that meets the rule, but if no such node is available, the pod is still scheduled somewhere else. That is not the case with requiredDuringSchedulingIgnoredDuringExecution – if no node matches, the workload is not scheduled at all. Both of them have the IgnoredDuringExecution suffix. It means that if node labels change after the pods are scheduled, the already-running pods are left in place.
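
To make the difference concrete, here is an illustrative snippet showing both variants side by side (the values above use only the strict one):

affinity:
  nodeAffinity:
    # strict: the pod stays Pending until an amd64 node is available
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["amd64"]
    # flexible: the scheduler prefers amd64 nodes but falls back if none fit
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["amd64"]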

Install Harbor with:

helm upgrade --install harbor harbor/harbor --namespace harbor --create-namespace -f values.yaml --debug 

After a few moments, the pods will show up (k get pods --namespace harbor):

NAME                                READY   STATUS    RESTARTS      AGE
harbor-core-77db798559-95t49        1/1     Running   0             109s
harbor-database-0                   1/1     Running   0             109s
harbor-jobservice-5757787b4-xb68f   1/1     Running   0             109s
harbor-portal-6ccd874b6-bmjkg       1/1     Running   0             109s
harbor-redis-0                      1/1     Running   0             109s
harbor-registry-9847665dc-t58m7     2/2     Running   0             109s
harbor-trivy-0                      1/1     Running   0             109s

Now, add the DNS entry for the hostname used in the config. I will edit the Pi-hole custom dnsmasq ConfigMap:

k edit cm pihole-custom-dnsmasq --namespace pihole
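
The entry I add looks roughly like this (dnsmasq address syntax; the IPs are the addresses my ingress is exposed on, as seen in the dig output below):

address=/registry.piasecki.it/192.168.1.43
address=/registry.piasecki.it/192.168.1.23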

and restart the deployment:

k rollout restart deployment/pihole 

Check that it works:

dig +short registry.piasecki.it                                                                                                            
192.168.1.43
192.168.1.23

The chart has already created an Ingress for us, so we can open the page in the browser right away.

If everything works, simply execute docker login:

docker login https://registry.piasecki.it 

and provide the username and password.

Let's check if it works:

docker pull ubuntu
docker tag ubuntu registry.piasecki.it/library/ubuntu:latest

docker push registry.piasecki.it/library/ubuntu:latest                                                                                     

The push refers to repository [registry.piasecki.it/library/ubuntu]
687d50f2f6a6: Pushed
latest: digest: sha256:b0c08a4b639b5fca9aa4943ecec614fe241a0cebd1a7b460093ccaeae70df698 size: 529

There are two ways to authorize k3s to pull images from the docker registry:

  • node-level: put the registry credentials in /etc/rancher/k3s/registries.yaml on every node
  • Kubernetes-native: create a secret from the docker credentials and reference it via imagePullSecrets

I will cover the second option, as it is a little bit more complicated.
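
For reference, the first option boils down to a file like this on each node (a sketch based on the k3s docs; the credentials are placeholders, and k3s must be restarted afterwards):

# /etc/rancher/k3s/registries.yaml
configs:
  "registry.piasecki.it":
    auth:
      username: admin
      password: redacted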

Docker credentials are stored in ~/.docker/config.json. Let’s create a secret from that file:

kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=/home/{{user}}/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson \
    --namespace=kube-system

secret/regcred created

You can inspect it:

k get secret -n kube-system regcred -o yaml                                                                                                
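
or decode the embedded docker config directly (the jsonpath escaping follows the pattern from the Kubernetes docs):

k get secret -n kube-system regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d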

The easiest way to use it across the whole cluster would be to add it to imagePullSecrets of the default service account:

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'

More granular options would be adding imagePullSecrets to custom service accounts, or even to single workloads. Keep in mind that imagePullSecrets can only reference secrets in the same namespace as the pod, so the secret has to exist in every namespace that pulls from the registry.
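
A minimal sketch of the per-workload variant (assuming regcred has been created in the pod's namespace):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: app
      image: registry.piasecki.it/library/ubuntu:latest
      command: ["sleep", "3600"]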

Now let's check if everything works:

kubectl run harbor-test-pod --image=registry.piasecki.it/library/ubuntu:latest --restart=Never --command -- sleep 3600 

If the pod reaches the Running state, the image was pulled from our registry and everything is wired up correctly. That sums up the Harbor installation on k3s. In upcoming posts I will cover more advanced Harbor features such as LDAP integration, vulnerability scanning or image replication.
