This is a continuation of the previous article, wordpress-quartz. I will describe how to run Quartz locally, how to run it on k3s, and how to design a CI/CD pipeline. The pipeline works as follows:
- to add a new article, I create a new commit. What happens next:
- Forgejo Action reads the currently deployed version
- builds a Docker image with a new tag
- pushes the Docker image
- adds a commit to the repository, updating the deployment manifest
- ArgoCD picks up that change, deploys the newer version
- k3s performs a rolling update without downtime
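For orientation, this is roughly the repository layout assumed throughout the article (directory names taken from the paths used later in this post):
quartz/                               # Quartz sources and content
blog/                                 # Dockerfile, nginx.conf and k8s manifests
argocd/apps/blog-app.yaml             # ArgoCD Application pointing at blog/
.forgejo/workflows/blog-deploy.yaml   # CI/CD workflow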
Running locally
First, without jumping into automation, let's run Quartz locally. Assuming you have Node 22+, all you need to do is clone the repo, then from the quartz/ directory:
git clone https://github.com/jackyzha0/quartz.git
cd quartz
npm ci
npm run serve
This builds the site and starts a dev server with live reload at http://localhost:8080. Content changes trigger an automatic rebuild.
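If you only want to inspect the static output that will later end up in the Docker image, you can run a one-off build instead; Quartz writes the generated site to public/, the same directory the Dockerfile below copies:
npm run build
ls public/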
Running in a Kubernetes environment
The first step to run an application in k8s is writing a Dockerfile. But before that, we need to choose one of two approaches:
Dynamic: Node + mounted content
Run Node.js with npm run serve and mount the content directory as a volume. A file watcher rebuilds on change.
Pros:
- instant updates when editing files
- no deploy step for content changes
- good for rapid iteration
Cons:
- Node runtime (~200MB+ memory)
- larger attack surface
Static: nginx
Pre-build the site with npm run build, copy the output into an nginx image, and serve static files. No Node.js.
Pros:
- small image (~40MB)
- low memory
- fast serving
- minimal attack surface
Cons:
- rebuild and redeploy required for every content change
- CI/CD or manual process needed
I will choose generating static content: it is more secure and uses less RAM. To mitigate the rebuild-and-redeploy issue, I will implement a Forgejo Action.
Dockerfile
The upstream Quartz Dockerfile has two drawbacks. First, it keeps Node.js in the final image even though serving static HTML only needs a web server - that means a ~200MB image instead of ~40MB, more RAM, and a larger attack surface. Second, it runs npm run build at container startup, so every pod restart (rollout, node drain, crash) triggers a full rebuild - slow startup and wasted CPU.
A 2-stage build fixes both: build once at image build time, then discard Node.js and ship only the static output in a minimal nginx image. No runtime rebuild, no Node in production.
FROM node:22.22.1-slim AS builder
WORKDIR /app
COPY quartz/package.json quartz/package-lock.json* ./
RUN npm ci
COPY quartz/ .
RUN npm run build
FROM nginx:1.28.2-alpine
COPY blog/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/public /usr/share/nginx/html
EXPOSE 80
Stage 1 installs dependencies and generates static files into public/. The COPY package.json before COPY quartz/ ensures Docker layer caching - npm ci is skipped when only content changes.
Stage 2 copies the static output into an nginx:alpine image. The final image is ~40MB instead of ~200MB, starts instantly, and serves with production-grade gzip and caching.
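Before adding any automation, it is worth building and running the image locally to confirm nginx serves the site - a quick sketch, where the tag quartz-blog:dev and the local port are arbitrary:
docker build -f blog/Dockerfile -t quartz-blog:dev .
docker run --rm -p 8080:80 quartz-blog:dev
The site should then be reachable at http://localhost:8080.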
nginx config
The nginx config used in the Dockerfile contains a few directives. Let me explain them.
try_files
For a request to /cicd/quartz-blog-pipeline/, nginx tries in order:
- (1) a file at that exact path,
- (2) /cicd/quartz-blog-pipeline/index.html,
- (3) /cicd/quartz-blog-pipeline.html.
Step 2 matches, so the page loads. Without this, clean URLs would 404.
Gzip
gzip compresses text before sending it. A 100KB HTML file might become 20KB over the wire. The gzip_types line lists which formats to compress: HTML, CSS, JS, JSON, XML, SVG. Images (PNG, JPG) are already compressed, so we skip them.
Caching
The first location block matches URLs ending in .js, .css, .png, etc. For those, we tell the browser: “cache this for 30 days, don’t revalidate.” That works because Quartz puts hashes in filenames (e.g. styles.abc123.css). When you publish a new version, the filename changes, so the browser fetches the new file. HTML pages are not in this block — they get normal short cache, so readers see updates when you deploy.
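With the container from the Dockerfile section running locally, both behaviours are easy to check with curl - a sketch, where the hashed asset path is a placeholder you would copy from the page source:
# gzip: the response should include Content-Encoding: gzip
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://localhost:8080/
# caching: hashed assets should answer with Cache-Control: public, immutable
curl -s -D - -o /dev/null http://localhost:8080/styles.abc123.css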
server {
    listen 80;
    root /usr/share/nginx/html;
    gzip on;
    gzip_types text/plain text/css application/json application/javascript
               text/xml application/xml image/svg+xml;
    location ~* \.(js|css|png|jpg|svg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
    location / {
        try_files $uri $uri/index.html $uri.html =404;
    }
    error_page 404 /404.html;
}
Kubernetes deployment
The blog/ directory contains standard k8s manifests:
- Deployment - single replica, RollingUpdate strategy, readiness/liveness probes on /
- Service - ClusterIP on port 80
- Ingress - Traefik with cert-manager TLS for blog.example.com
- Secret - Harbor registry pull credentials (imagePullSecrets)
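Before ArgoCD enters the picture, the manifests can also be applied by hand to check that everything comes up - a minimal sketch, assuming the namespace does not exist yet and the registry pull secret is created first (see the secret note further down):
kubectl create namespace blog
kubectl apply -f blog/        # only picks up the .yaml manifests, ignores Dockerfile and nginx.conf
kubectl -n blog get pods -w   # wait for the blog pod to become Ready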
blog/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  namespace: blog
  labels:
    app.kubernetes.io/name: blog
spec:
  revisionHistoryLimit: 3
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: blog
  template:
    metadata:
      labels:
        app.kubernetes.io/name: blog
    spec:
      imagePullSecrets:
        - name: registry-auth
      containers:
        - name: blog
          image: registry.example.com/blog/quartz:0.5
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
            requests:
              memory: "32Mi"
              cpu: "10m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          livenessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 5
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 3
            periodSeconds: 10
blog/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: blog
  namespace: blog
  labels:
    app.kubernetes.io/name: blog
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: blog
blog/ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  namespace: blog
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.middlewares: cert-manager-redirect-https@kubernetescrd
spec:
  ingressClassName: traefik
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port:
                  number: 80
  tls:
    - hosts:
        - blog.example.com
      secretName: blog-tls
blog/secret.yaml — a kubernetes.io/dockerconfigjson secret for Harbor pull credentials. Create it with kubectl create secret docker-registry registry-auth ... or store the base64-encoded .dockerconfigjson in the manifest.
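A sketch of creating that pull secret imperatively - the username and password values are placeholders for your Harbor account:
kubectl -n blog create secret docker-registry registry-auth \
  --docker-server=registry.example.com \
  --docker-username=<harbor-user> \
  --docker-password=<harbor-password>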
ArgoCD manages everything via argocd/apps/blog-app.yaml pointing to the blog/ directory. I will cover ArgoCD later on.
Zero-downtime deployments
With RollingUpdate and a single replica, the default maxUnavailable: 25% and maxSurge: 25% effectively become maxUnavailable: 0 and maxSurge: 1 (unavailable rounds down, surge rounds up). This means Kubernetes creates the new pod first, waits for the readiness probe to pass, then kills the old one. Since nginx starts in under a second, there’s no downtime during image updates.
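A rollout is easy to observe from a second terminal with standard kubectl commands:
kubectl -n blog rollout status deployment/blog   # blocks until the new pod is Ready
kubectl -n blog get pods -w                      # the surge pod appears before the old one terminates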
CI/CD with Forgejo Actions
A workflow in .forgejo/workflows/blog-deploy.yaml automates the build-deploy cycle:
- Trigger on push to main that touches quartz/**, or manually via workflow_dispatch
- Read the current image version from blog/deployment.yaml
- Bump the minor version (0.1 → 0.2 → 0.3 …)
- docker build and docker push to Harbor at registry.example.com
- Update deployment.yaml with the new version
- Commit and push with [skip ci] to avoid infinite loops
The trigger path filter (quartz/**) is the primary loop prevention - the bump commit only changes blog/deployment.yaml, which doesn’t match the filter; the [skip ci] marker in the commit message is a second safety net.
.forgejo/workflows/blog-deploy.yaml:
name: Build and Deploy Blog
on:
  push:
    branches: [main]
    paths:
      - 'quartz/**'
  workflow_dispatch:
jobs:
  build-and-deploy:
    if: "!contains(github.event.head_commit.message, '[skip ci]')"
    runs-on: alpine-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.PUSH_TOKEN }}
          fetch-depth: 0
      - name: Install dependencies
        run: apk add --no-cache docker-cli git
      - name: Read current version
        id: version
        run: |
          CURRENT=$(sed -n 's/.*quartz:\([0-9]*\.[0-9]*\).*/\1/p' blog/deployment.yaml)
          MAJOR=$(echo "$CURRENT" | cut -d. -f1)
          MINOR=$(echo "$CURRENT" | cut -d. -f2)
          NEW_MINOR=$((MINOR + 1))
          NEW_VERSION="${MAJOR}.${NEW_MINOR}"
          echo "current=$CURRENT" >> "$GITHUB_OUTPUT"
          echo "new=$NEW_VERSION" >> "$GITHUB_OUTPUT"
      - name: Build image
        run: docker build -f blog/Dockerfile -t registry.example.com/blog/quartz:${{ steps.version.outputs.new }} .
      - name: Push image
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker push registry.example.com/blog/quartz:${{ steps.version.outputs.new }}
      - name: Update deployment manifest
        run: sed -i "s|quartz:${{ steps.version.outputs.current }}|quartz:${{ steps.version.outputs.new }}|" blog/deployment.yaml
      - name: Commit and push
        run: |
          git clone --depth=1 https://your-username:${{ secrets.PUSH_TOKEN }}@forgejo.example.com/your-username/your-repo.git /tmp/repo
          cp blog/deployment.yaml /tmp/repo/blog/deployment.yaml
          cd /tmp/repo
          git config user.name "forgejo-runner"
          git config user.email "runner@example.com"
          git add blog/deployment.yaml
          git commit -m "bump blog image to ${{ steps.version.outputs.new }} [skip ci]"
          git push
Runner setup
The Forgejo runner runs as a StatefulSet in the forgejo namespace with a DinD (Docker-in-Docker) sidecar for container builds. The runner label alpine-latest maps to node:22-alpine, which provides both Node.js (needed by actions/checkout) and apk for installing docker-cli.
The runner config mounts the DinD socket into workflow containers:
container:
  options: "-v /var/run/docker.sock:/var/run/docker.sock"
  valid_volumes:
    - "/var/run/docker.sock"
  docker_host: "tcp://localhost:2375"
Triggering manually
curl -s -X POST \
-H "Authorization: token $(cat .forgejo-token)" \
-H "Content-Type: application/json" \
-d '{"ref": "main"}' \
https://forgejo.example.com/api/v1/repos/your-username/your-repo/actions/workflows/blog-deploy.yaml/dispatches
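Once the workflow has run and ArgoCD has synced the bumped manifest, a quick way to confirm which image tag actually landed in the cluster:
kubectl -n blog get deployment blog -o jsonpath='{.spec.template.spec.containers[0].image}'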