Why self-host a git repository?

If you are running a homelab, your infrastructure-as-code has to live somewhere. Git is an excellent option for version control: if something goes wrong, you can easily restore a previous version. Storing k3s manifests, Helm values, and ArgoCD configs on GitHub or GitLab means your entire cluster definition sits on someone else's server, and the goal of self-hosting is to get rid of that dependency. Committing API keys and secrets to a git repository is usually bad practice, but since this repository is private and not shared with anyone, I can afford the comfort of committing everything, which saves me a lot of hassle. Last but not least, automatic build and deployment is another useful feature.

Self-hosted git options

There are several self-hosted git solutions, ranging from bare SSH to full platforms.

Bare git over SSH

The simplest option is a bare repository on any Linux box with an SSH server:

ssh myserver 'git init --bare /srv/git/myrepo.git'
git remote add origin ssh://myserver/srv/git/myrepo.git
git push origin main

No dependencies, no updates, no overhead. If all you need is a remote to push and pull from, this is it. The downside is obvious: no web interface and no CI/CD. But for a single developer who just needs a backup target, it works.

Comparison

                            Bare SSH   Gogs         Gitea           Forgejo         GitLab CE
RAM                         0          ~60 MB       ~130 MB         ~130 MB         4+ GB
Web UI                      No         Yes          Yes             Yes             Yes
CI/CD                       No         No           Yes (Actions)   Yes (Actions)   Yes (GitLab CI)
Container registry          No         No           Yes (OCI)       Yes (OCI)       Yes
GitHub Actions compatible   No         No           Yes             Yes             No
Governance                  N/A        Single dev   Corporate       Community       Corporate
Helm chart                  N/A        Community    Official        Official        Official
Development activity        N/A        Slow         Active          Very active     Very active

Summary

Gogs - the lightest option with a web UI, but it has no CI/CD and development has slowed down significantly.

Gitea - a fork of Gogs that added many features, including Actions, a GitHub Actions-compatible CI/CD system. Same workflow syntax, same reusable actions ecosystem. In 2022 the project was transferred to a for-profit company (Gitea Ltd), which raised community concerns about its direction.

Forgejo - forked from Gitea in response to that corporate takeover. Governed by Codeberg e.V., a non-profit. Inherits the same Actions compatibility from Gitea, but the community-first governance gives more confidence in its long-term direction.

GitLab CE - the most feature-rich option by far. Issue boards, merge request pipelines, monitoring, pages, and much more. But 4 GB RAM minimum, realistically 8 GB for comfortable use. Hard to justify for a single-developer homelab.

Deploying Forgejo on k3s

Forgejo has an official Helm chart. Deploy it via ArgoCD:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: forgejo
  namespace: argocd
spec:
  project: k3s 
  sources:
    - repoURL: 'oci://code.forgejo.org/forgejo-helm/forgejo'
      chart: forgejo
      targetRevision: '15.0.3'
      helm:
        valueFiles:
          - $values/forgejo/values.yaml
    - repoURL: 'ssh://git@example.com:222/forgejo-admin/k3s.git'
      targetRevision: HEAD
      ref: values
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: forgejo
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
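
Assuming ArgoCD is already installed in the cluster, applying this Application manifest once is enough; the automated sync policy takes over from there (the filename is arbitrary):

kubectl apply -f forgejo-application.yaml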

Key configuration decisions in values.yaml (a trimmed sketch follows the list):

  • External PostgreSQL — reuse an existing database instead of the bundled one
  • Longhorn PVC — 20Gi for repo storage, survives node failures
  • Traefik ingress with cert-manager for TLS
  • SSH via LoadBalancer on port 222 — so git clone ssh://git@example.com:222/... works directly
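
A trimmed values.yaml sketch covering those decisions. The key names follow the chart's documented values at the time of writing, and every hostname, credential, and issuer below is a placeholder, so verify against the chart's own values.yaml before copying:

postgresql:
  enabled: false   # use the external database instead of the bundled one
postgresql-ha:
  enabled: false
gitea:             # the chart keeps the upstream "gitea" key for app config
  config:
    database:
      DB_TYPE: postgres
      HOST: postgres.db.svc.cluster.local:5432   # placeholder external instance
      NAME: forgejo
      USER: forgejo
      PASSWD: changeme                            # better: reference a Secret
    server:
      SSH_PORT: 222   # port advertised in clone URLs
persistence:
  enabled: true
  size: 20Gi
  storageClass: longhorn
ingress:
  enabled: true
  className: traefik
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # placeholder issuer name
  hosts:
    - host: forgejo.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: forgejo-tls
      hosts:
        - forgejo.example.com
service:
  ssh:
    type: LoadBalancer
    port: 222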

Runner

Forgejo itself does not execute CI/CD jobs. It only stores the workflow definitions and displays the results. The actual execution is done by a separate component called a runner.

This is the same model GitHub uses - it hosts your repo and workflow files, but the jobs run on GitHub-hosted runners (or your own self-hosted ones). The difference is that with Forgejo, there are no hosted runners provided for you. You have to bring your own.

The runner is a daemon that:

  1. Polls the Forgejo instance for pending workflow jobs
  2. Spins up a container for each job (using Docker or Podman)
  3. Executes the workflow steps inside that container
  4. Streams logs back to Forgejo and reports the final result

Setting up a Forgejo Runner

Architecture

The runner pod has three components:

  1. Init container — registers the runner with Forgejo on first boot (skipped if already registered)
  2. Runner container — the forgejo-runner daemon that polls for workflow jobs
  3. DinD sidecar — docker:dind, providing a Docker daemon for container-based actions

Registration

Before deploying, create a runner token in Forgejo under Site Administration → Runners → Create new runner, then store it as a Secret:

kubectl create secret generic forgejo-runner-secret \
  --from-literal=token=<TOKEN> \
  -n forgejo

StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: forgejo-runner
  namespace: forgejo
spec:
  replicas: 1
  serviceName: forgejo-runner
  selector:
    matchLabels:
      app: forgejo-runner
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: forgejo-runner
    spec:
      securityContext:
        fsGroup: 1000
      initContainers:
        # Native sidecar (k8s 1.28+) — starts before other init containers,
        # keeps running for the lifetime of the pod.
        - name: dind
          image: docker:dind
          restartPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: DOCKER_TLS_CERTDIR
              value: ""
          startupProbe:
            exec:
              command: [docker, info]
            periodSeconds: 2
            failureThreshold: 30
 
        - name: register
          image: data.forgejo.org/forgejo/runner:12
          workingDir: /data
          command:
            - /bin/sh
            - -c
            - |
              if [ ! -f /data/.runner ]; then
                forgejo-runner register \
                  --no-interactive \
                  --instance http://forgejo-http.forgejo.svc.cluster.local:3000 \
                  --token "$(RUNNER_TOKEN)" \
                  --name alpine-3.20 \
                  --labels alpine-latest:docker://alpine:3.20
              fi
          env:
            - name: RUNNER_TOKEN
              valueFrom:
                secretKeyRef:
                  name: forgejo-runner-secret
                  key: token
            - name: DOCKER_HOST
              value: "tcp://localhost:2375"
          volumeMounts:
            - name: runner-data
              mountPath: /data
 
      containers:
        - name: runner
          image: data.forgejo.org/forgejo/runner:12
          workingDir: /data
          command:
            - forgejo-runner
            - daemon
            - --config
            - /config/config.yml
          env:
            - name: DOCKER_HOST
              value: "tcp://localhost:2375"
          volumeMounts:
            - name: runner-data
              mountPath: /data
            - name: runner-config
              mountPath: /config
      volumes:
        - name: runner-config
          configMap:
            name: forgejo-runner-config   # see the ConfigMap sketch below
  volumeClaimTemplates:
    - metadata:
        name: runner-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: longhorn   # assumption: same storage class as the Forgejo PVC
        resources:
          requests:
            storage: 1Gi             # assumption: registration state needs little space

Hints

  • Native sidecar for DinD — the runner needs a Docker daemon running alongside it, but ordinary containers in a pod start without waiting for one another. Without ordering, the runner starts before Docker is ready and crashes. Before Kubernetes 1.28, the usual workaround was a sleep loop in the runner container waiting for Docker to come up. Since 1.28, init containers support restartPolicy: Always — this makes them start first (like a normal init container), but instead of exiting, they keep running for the lifetime of the pod (like a sidecar). Combined with a startupProbe, Kubernetes waits until Docker reports healthy before starting the next init container (register) and then the main runner container. The startup order becomes: DinD ready → register → runner.
  • fsGroup — the runner image runs as uid 1000. Without fsGroup: 1000 on the pod, the PVC is owned by root and the runner cannot write its .runner registration file.
  • Stale runner registrations — if the init container crashes and restarts repeatedly, each attempt registers a new runner in Forgejo. The .runner file check prevents this, but only after the first successful registration.

Default container image

The runner label alpine-latest:docker://alpine:3.20 means workflows with runs-on: alpine-latest execute inside an Alpine container. This is intentionally minimal — if your workflows do docker build, the actual build happens inside the DinD sidecar using whatever base image your Dockerfile specifies. The runner’s default image just needs a shell.
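
The label only sets the default image. Since the runner follows GitHub Actions workflow syntax, a job that needs a richer environment should be able to request one through the standard container: key rather than a new label; a hypothetical example:

jobs:
  build:
    runs-on: alpine-latest
    container: node:22-alpine   # overrides the label's default image for this job
    steps:
      - run: node --version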

Result

After deployment, the runner appears in Forgejo under Site Administration → Runners and starts polling for jobs. Any repository with a .forgejo/workflows/ directory will have its workflows picked up automatically.

Testing the runner

Workflow files go in .forgejo/workflows/ inside your repository. Any YAML file in that directory is picked up as a workflow. For example, .forgejo/workflows/hello.yaml:

name: Hello World
on:
  push:
    branches: [main]
  workflow_dispatch:
jobs:
  hello:
    runs-on: alpine-latest
    steps:
      - name: Hello
        run: echo "Hello from Forgejo Runner!"

The workflow_dispatch trigger allows running the workflow manually - either from the Actions tab in the web UI, or via the API.

Triggering a workflow from the CLI

curl -u "user:password" \
  -X POST \
  "https://forgejo.example.com/api/v1/repos/{owner}/{repo}/actions/workflows/hello.yaml/dispatches" \
  -H "Content-Type: application/json" \
  -d '{"ref":"main"}'

Returns HTTP 204 on success. The ref field specifies which branch to run the workflow from.

Listing completed runs

curl -s -u "user:password" \
  "https://forgejo.example.com/api/v1/repos/{owner}/{repo}/actions/tasks" \
  | jq -r '.workflow_runs[0]'

Output:

{
  "id": 7,
  "name": "hello",
  "head_branch": "main",
  "head_sha": "6b3f1f030b130ccb9bde4654bcb85de5a1bc4a6b",
  "run_number": 7,
  "event": "workflow_dispatch",
  "display_title": "Hello World",
  "status": "success",
  "workflow_id": "hello.yaml",
  "url": "https://example.com/forgejo-admin/k3s/actions/runs/7",
  "created_at": "2026-03-19T14:14:54Z",
  "updated_at": "2026-03-19T14:14:55Z",
  "run_started_at": "2026-03-19T14:14:54Z"
}
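
For scripting, the same tasks endpoint can be polled until the latest run reaches a terminal state. This sketch assumes in-progress runs report waiting or running in the status field:

while :; do
  status=$(curl -s -u "user:password" \
    "https://forgejo.example.com/api/v1/repos/{owner}/{repo}/actions/tasks" \
    | jq -r '.workflow_runs[0].status')
  case "$status" in
    waiting|running) sleep 5 ;;   # still in progress, poll again
    *) break ;;
  esac
done
echo "final status: $status"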