building home lab part 8

In the last part I set up the monitoring system; in this part I am going to set up the logging system.


Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.


Agent - An agent or client, for example Promtail, which is distributed with Loki, or the Grafana Agent. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.

Loki - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations, for more information see deployment modes.

Grafana - For querying and displaying log data. You can also query logs from the command line using LogCLI, or by using the Loki API directly.
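As a taste of the command-line side, a quick check could look something like this (the LOKI_ADDR value and the label selector are placeholders for illustration, not anything from my setup):

```
# point LogCLI at the Loki endpoint and pull recent logs for a label selector
export LOKI_ADDR=http://loki-gateway.loki.svc:3100
logcli query --since=30m --limit=20 '{namespace="memcached"}'

# or hit the HTTP API directly
curl -sG "$LOKI_ADDR/loki/api/v1/query_range" \
    --data-urlencode 'query={namespace="memcached"}' \
    --data-urlencode 'limit=5'
```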

deployment modes

There are three: monolithic, simple scalable, and microservices. An illustration and description of each is available in the docs and easy to understand.

Simple scalable deployment mode: "It strikes a balance between deploying in monolithic mode or deploying each component as a separate microservice."

I am going with simple scalable mode.
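In the helm chart, simple scalable mode comes down to giving the read, write, and backend targets nonzero replicas while keeping the single binary at zero. A minimal sketch (the replica counts here are my assumption, not chart defaults):

```yaml
write:
  replicas: 3
read:
  replicas: 3
backend:
  replicas: 3
singleBinary:
  replicas: 0
```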

installing loki

Here is the list of items you can see in the helm values file. I am omitting the enterprise section because that's not an option for me building a homelab.

  • loki
    • storage
      • bucketNames
        • chunks, ruler, admin
      • type: s3
      • s3 access details
    • memcached
      • chunk_cache (enabled: false)
      • results_cache (enabled: false)
  • monitoring
    • dashboard
    • rules (prometheus rules)
    • service monitor
    • self monitoring
    • loki canary
  • write
  • table manager (enabled: false, deprecated as per v2.9 doc)
  • read
  • backend
  • single binary (replicas 0)
  • ingress (enabled: false)
  • memberlist service
  • gateway (enabled: true) # changed this to false
  • network policy (enabled: false)
  • minio (enabled: false)
  • sidecar
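Of those, the loki.storage section is the one I actually have to fill in. A sketch of what it might look like against a MinIO tenant (the endpoint, key, and secret are placeholders):

```yaml
loki:
  storage:
    bucketNames:
      chunks: chunks
      ruler: ruler
      admin: admin
    type: s3
    s3:
      endpoint: https://minio.example.internal
      accessKeyId: CHANGE_ME_KEY
      secretAccessKey: CHANGE_ME_SECRET
      s3ForcePathStyle: true
      insecure: false
```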

preparing s3 bucket

I will just follow the default bucket names found in the values file and create each bucket on my MinIO tenant.

  • admin
  • chunks
  • ruler

I create a new group named loki-group and a user named loki, set a readwrite policy on the group, and generate an access key and secret for the user.
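With the mc client, that setup can be sketched roughly like this (the myminio alias and the secret value are placeholders; older mc releases use `mc admin policy set` instead of `attach`):

```
# create the three buckets on the tenant
mc mb myminio/chunks myminio/ruler myminio/admin

# user, group, and a readwrite policy attached to the group
mc admin user add myminio loki 'CHANGE_ME_SECRET'
mc admin group add myminio loki-group loki
mc admin policy attach myminio readwrite --group loki-group
```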

The Loki helm chart does not seem to have an option to read s3 credentials from a secret, unlike the gitlab-runner helm chart.

preparing memcached

Two memcached clusters are recommended. They are named chunk_cache and results_cache in the values file.

Here are the recommended memcached settings:

  • chunk_cache: --memory-limit=4096 --max-item-size=2m --conn-limit=1024
  • results_cache: --memory-limit=1024 --max-item-size=5m --conn-limit=1024

The concurrent connection limit is set to 1024 by default, as per the official wiki.

# cd {homelab repo}/infrastructure/CLUSTERNAME/controllers/default-values

# confirm the version, 6.14.0 as of 20240308 (memcached v1.6.24)
helm show chart oci://

# get the values file
helm show values oci:// > memcached-values.yaml
cp memcached-values.yaml ../.

Here is my values file. This one is for chunk_cache; I have another copy named results-memcached-values.yaml for results_cache, with the args modified to -m 1024 and -I 5m.

Additional note: I had to change the architecture setting from standalone to high-availability to have more than one replica.

## @param architecture Memcached architecture. Allowed values: standalone or high-availability
architecture: high-availability
diff --git a/infrastructure/homelab/controllers/memcached-values.yaml b/infrastructure/homelab/controllers/chunk-memcached-values.yaml
index c61da0e..0a15118 100644
--- a/infrastructure/homelab/controllers/memcached-values.yaml
+++ b/infrastructure/homelab/controllers/chunk-memcached-values.yaml
@@ -17,7 +17,7 @@ global:
   ##   - myRegistryKeySecretName
   imagePullSecrets: []
-  storageClass: ""
+  storageClass: "directpv-min-io"
   ## Compatibility adaptations for Kubernetes platforms
@@ -127,7 +127,11 @@ command: []
 ##   - -I <maxItemSize>
 ##   - -vv
-args: []
+args:
+  - /
+  - -m 4096
+  - -I 2m
+  - --conn-limit=1024
 ## @param extraEnvVars Array with extra environment variables to add to Memcached nodes
 ## e.g:
 ## extraEnvVars:
@@ -145,7 +149,7 @@ extraEnvVarsSecret: ""

 ## @param replicaCount Number of Memcached nodes
-replicaCount: 1
+replicaCount: 3
 ## @param containerPorts.memcached Memcached container port
@@ -550,7 +554,7 @@ serviceAccount:
   ## @param persistence.enabled Enable Memcached data persistence using PVC. If false, use emptyDir
-  enabled: false
+  enabled: true
   ## @param persistence.storageClass PVC Storage Class for Memcached data volume
   ## If defined, storageClassName: <storageClass>
   ## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -558,7 +562,7 @@ persistence:
   ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
   ##   GKE, AWS & OpenStack)
-  storageClass: ""
+  storageClass: "directpv-min-io"
   ## @param persistence.accessModes PVC Access modes
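Since the two copies differ only in the -m and -I args, the results_cache copy can be derived from the chunk_cache one with a quick sed (filenames as used above):

```shell
# derive results-memcached-values.yaml from the chunk_cache copy:
# memory limit 4096 MB -> 1024 MB, max item size 2m -> 5m
sed -e 's/-m 4096/-m 1024/' \
    -e 's/-I 2m/-I 5m/' \
    chunk-memcached-values.yaml > results-memcached-values.yaml
```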

Here is my script to generate the flux HelmRepository and HelmRelease manifests.


# add flux helmrepo to the manifest
flux create source helm bitnami \
    --url=oci:// \
    --interval=1h0m0s \
    --export >memcached.yaml

# add flux helm release to the manifest including the customized values.yaml file
flux create helmrelease chunk-memcached \
    --interval=10m \
    --target-namespace=memcached \
    --source=HelmRepository/bitnami \
    --chart=memcached \
    --chart-version=6.14.0 \
    --values=chunk-memcached-values.yaml \
    --export >>memcached.yaml

# add flux helm release to the manifest including the customized values.yaml file
flux create helmrelease results-memcached \
    --interval=10m \
    --target-namespace=memcached \
    --source=HelmRepository/bitnami \
    --chart=memcached \
    --chart-version=6.14.0 \
    --values=results-memcached-values.yaml \
    --export >>memcached.yaml
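
After committing and pushing the manifest, flux should reconcile both releases; a sanity check might look like this (namespace as set by --target-namespace above):

```
# confirm both helmreleases reconciled
flux get helmreleases

# confirm three pods per cluster came up
kubectl get pods -n memcached
```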