Services Catalog¶
Inventory of all services deployed in the fzymgc-house cluster.
Quick Reference¶
| Service | URL | Namespace | Category |
|---|---|---|---|
| Vault | vault.fzymgc.house | vault | Platform |
| Authentik | auth.fzymgc.house | authentik | Platform |
| Grafana | grafana.fzymgc.house | grafana | Platform |
| ArgoCD | argocd.fzymgc.house | argocd | Platform |
| Temporal | temporal.fzymgc.house | temporal | Application |
| Mealie | mealie.fzymgc.house | mealie | Application |
| Longhorn | longhorn.fzymgc.house | longhorn-system | Infrastructure |
| Tailscale | Internal | tailscale | Infrastructure |
| Traefik | Internal | traefik | Infrastructure |
| Router Hosts Operator | Internal | router-hosts-operator | Infrastructure |
| NATS | Internal | nats | Infrastructure |
| Mosquitto | mqtt.fzymgc.house:8883 | mosquitto | Infrastructure |
| VictoriaMetrics | Internal | prometheus | Observability |
| Loki | Internal | loki | Observability |
| Grafana Alloy | alloy-ingest.fzymgc.house | alloy | Observability |
| Uptime Kuma | status.fzymgc.house | uptime-kuma | Observability |
| Merlin (OpenClaw) | Internal | merlin | Application |
| Merlin Workshop | Internal | merlin | Application |
| Dolt SQL Server | doltdb.fzymgc.house | dolt | Application |
| CNPG | postgres.fzymgc.house | postgres | Infrastructure |
| Gateway API | Internal | kube-system | Infrastructure |
| Grafana Operator | Internal | grafana-operator | Infrastructure |
| Grafana MCP | grafana-mcp.fzymgc.house | grafana-mcp | Application |
| K8s OIDC RBAC | Internal | N/A | Platform |
| Kubernetes Replicator | Internal | kube-system | Infrastructure |
| Reloader | Internal | kube-system | Infrastructure |
| System Upgrade Controller | Internal | system-upgrade | Infrastructure |
| Velero | Internal | velero | Infrastructure |
Platform Services¶
Vault¶
| Property | Value |
|---|---|
| URL | vault.fzymgc.house |
| Alt URLs | vault-0.fzymgc.house, vault-1.fzymgc.house, vault-2.fzymgc.house |
| Namespace | vault |
| Ingress Type | TCP Passthrough (TLS termination at Vault) |
| Auth Method | OIDC (Authentik) |
| Vault Path | secret/fzymgc-house/cluster/vault/* |
| Status | Operational |
Authentik¶
| Property | Value |
|---|---|
| URL | auth.fzymgc.house |
| Namespace | authentik |
| Ingress Type | Traefik IngressRoute |
| Auth Method | Native (IdP) |
| Vault Path | secret/fzymgc-house/cluster/authentik |
| Status | Operational |
Grafana¶
| Property | Value |
|---|---|
| URL | grafana.fzymgc.house |
| Namespace | grafana |
| Ingress Type | Helm Managed |
| Auth Method | OIDC (Authentik) |
| Vault Path | secret/fzymgc-house/cluster/grafana |
| Status | Operational |
ArgoCD¶
| Property | Value |
|---|---|
| URL | argocd.fzymgc.house |
| Namespace | argocd |
| Ingress Type | Helm Managed |
| Auth Method | OIDC (Authentik) |
| Vault Path | secret/fzymgc-house/cluster/argocd |
| Status | Operational |
K8s OIDC RBAC¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | Cluster-scoped |
| Purpose | Kubernetes RBAC bindings for Authentik OIDC groups |
| Auth Method | OIDC (Authentik) |
| Status | Operational |
RBAC Bindings:
| Authentik Group | ClusterRole | Description |
|---|---|---|
| `k8s-admins` | `cluster-admin` | Full cluster access |
| `k8s-developers` | `edit` | Edit resources in namespaces |
| `k8s-viewers` | `view` | Read-only access |
How it works:
- Authentik issues OIDC tokens with group claims (`k8s-admins`, etc.)
- ClusterRoleBindings map those groups to Kubernetes RBAC roles
- k3s validates tokens using `--kube-apiserver-arg=oidc-*` flags
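The group-to-role mapping can be sketched as a ClusterRoleBinding. The binding name here is hypothetical, and the group subject may need an `oidc:` prefix depending on how the `oidc-groups-prefix` apiserver flag is set:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-k8s-admins        # hypothetical name
subjects:
  - kind: Group
    name: k8s-admins           # group claim from the Authentik-issued OIDC token
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```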
Application Services¶
Temporal¶
| Property | Value |
|---|---|
| URL | temporal.fzymgc.house |
| Namespace | temporal |
| Ingress Type | Traefik IngressRoute |
| Auth Method | Forward-Auth (Authentik) |
| Vault Path | secret/fzymgc-house/cluster/temporal/* |
| Database | CNPG main cluster: temporal (default store), temporal_visibility (visibility store) |
| Workers Repo | fzymgc-house/temporal-workers |
| Status | Active |
Components:
- `temporal-server` - Core Temporal services (frontend, history, matching, worker)
- `temporal-web` - Web UI for workflow visibility
- `temporal-admintools` - CLI tools for namespace management
- `temporal-worker-controller` - Manages worker deployments via CRDs
Mealie¶
| Property | Value |
|---|---|
| URL | mealie.fzymgc.house |
| Alt URL | mealie.k8s.fzymgc.house |
| Namespace | mealie |
| Ingress Type | Traefik IngressRoute |
| Auth Method | Forward-Auth (Authentik) |
| Vault Path | secret/fzymgc-house/cluster/mealie |
| Status | Operational |
Grafana MCP¶
| Property | Value |
|---|---|
| URL | grafana-mcp.fzymgc.house |
| Namespace | grafana-mcp |
| Ingress Type | Traefik IngressRoute |
| Auth Method | OIDC (Authentik) |
| Vault Path | secret/fzymgc-house/cluster/grafana |
| Chart | grafana/mcp-grafana v0.2.2 |
| Status | Operational |
Purpose:
MCP (Model Context Protocol) server for Claude Code integration. Provides AI assistants with structured access to Grafana data sources:
- Query metrics from Prometheus/VictoriaMetrics
- Query logs from Loki
- List and access dashboards
- Execute Grafana API operations
Usage:
Claude Code connects via the fzymgc-house:grafana skill which uses this MCP server to:
- Investigate infrastructure issues via Grafana data
- Check application metrics and logs
- Create or update dashboards
Merlin (OpenClaw)¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | merlin |
| Controller | StatefulSet (1 replica) |
| Auth Method | Gateway token (Vault) |
| Vault Path | secret/fzymgc-house/cluster/merlin |
| Storage | 5Gi config, 20Gi workspace, 5Gi homebrew |
| Status | Operational |
Purpose: AI assistant gateway (powered by OpenClaw) running as a StatefulSet. Provides conversational AI capabilities via Discord and other channels.
Tools (via Homebrew): kubectl, gh, vault, terraform, Claude Code CLI
Merlin Workshop¶
| Property | Value |
|---|---|
| URL | Internal only (exec access) |
| Namespace | merlin |
| Controller | Deployment (1 replica) |
| Auth Method | Shared merlin-secrets |
| Storage | 20Gi workspace, 10Gi homebrew |
| Status | Operational |
Purpose: Persistent Ubuntu 24.04 arm64 development environment for code reviews, PR creation, builds, and development tasks. Designed for kubectl exec access — no ingress.
Access:
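A minimal access sketch, assuming the workload is a Deployment named `merlin-workshop`:

```shell
# Exec into the workshop container (deployment name is an assumption)
kubectl -n merlin exec -it deploy/merlin-workshop -- /bin/bash
```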
Tools: Homebrew (install anything), Claude Code CLI
Dolt SQL Server¶
| Property | Value |
|---|---|
| URL | doltdb.fzymgc.house |
| Namespace | dolt |
| Port | 3306 (MySQL wire protocol) |
| Ingress Type | MetalLB LoadBalancer |
| Auth Method | MySQL native (password) |
| Vault Path | secret/fzymgc-house/cluster/dolt |
| Status | Operational |
Purpose:
Version-controlled SQL database (MySQL-compatible) for Beads, Gastown, and AI agents. Dolt provides Git-like versioning for database content, enabling branch/merge workflows on structured data.
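Because Dolt speaks the MySQL wire protocol, any standard MySQL client can connect; the username is a placeholder, with credentials stored at the Vault path above:

```shell
# Connect with a MySQL client; <user> is a placeholder, password comes from Vault
mysql -h doltdb.fzymgc.house -P 3306 -u <user> -p
```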
Infrastructure Services¶
Traefik¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | traefik |
| Ingress Type | N/A (is the ingress controller) |
| Auth Method | None |
| Ports | 80 (HTTP), 443 (HTTPS) |
| Status | Operational |
Longhorn¶
| Property | Value |
|---|---|
| URL | longhorn.fzymgc.house |
| Namespace | longhorn-system |
| Ingress Type | Traefik IngressRoute |
| Auth Method | Forward-Auth (Authentik) |
| Status | Operational |
MetalLB¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | metallb |
| Ingress Type | N/A (provides LoadBalancer IPs) |
| Auth Method | None |
| IP Pools | 192.168.20.145-149, 192.168.20.155-159 |
| Status | Operational |
cert-manager¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | cert-manager |
| Ingress Type | N/A |
| Auth Method | None |
| Issuers | Let's Encrypt (production), Self-signed (internal) |
| Status | Operational |
External Secrets Operator¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | external-secrets |
| Ingress Type | N/A |
| Auth Method | Vault Kubernetes Auth |
| ClusterSecretStore | vault |
| Status | Operational |
Cloudflared¶
| Property | Value |
|---|---|
| URL | N/A (outbound tunnel) |
| Namespace | cloudflared |
| Purpose | External ingress via Cloudflare Tunnel |
| Status | Operational |
Tailscale¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | tailscale |
| Ingress Type | N/A (outbound mesh network) |
| Auth Method | OAuth (Tailscale API) |
| Purpose | Subnet router and exit node for tailnet access |
| Vault Path | secret/fzymgc-house/cluster/tailscale/oauth |
| Status | Operational |
Router Hosts Operator¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | router-hosts-operator |
| Chart Source | ghcr.io/fzymgc-house/charts/router-hosts-operator |
| Chart Version | 0.8.14 |
| Image | ghcr.io/fzymgc-house/router-hosts-operator:0.8.14 |
| Auth Method | mTLS (Vault PKI) |
| Purpose | DNS host entry management via HostEntry CRD |
| Vault PKI Role | router-hosts-client |
| Server Version | 0.9.7 (Ansible: Firewalla router image tag) |
| Status | Operational |
NATS¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | nats |
| Ingress Type | N/A (internal messaging) |
| Auth Method | NKey (Ed25519 signatures) |
| Vault Path | secret/fzymgc-house/cluster/nats |
| Storage | 10Gi per node (longhorn-encrypted) |
| Cluster Size | 3 replicas |
| Status | Active |
Features:
- JetStream persistence with Raft consensus
- 3-node cluster for high availability
- TLS for client and cluster connections
- Account-based multi-tenancy (SYS, SERVICES, IOT)
- Prometheus metrics on port 7777
Accounts:
| Account | Purpose | Consumers |
|---|---|---|
| `SYS` | System monitoring and admin | nats-box, monitoring |
| `SERVICES` | Cluster service communication | Temporal, future services |
| `IOT` | IoT device messaging | Home Assistant (future) |
Ports:
| Port | Protocol | Purpose |
|---|---|---|
| 4222 | NATS | Client connections |
| 1883 | MQTT | MQTT listener |
| 6222 | NATS | Cluster routes |
| 7777 | HTTP | Prometheus metrics |
| 8222 | HTTP | Monitoring endpoint |
See NATS Operations for key management and administration.
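A client connection sketch using the `nats` CLI with an NKey seed file; the seed path and subject are assumptions:

```shell
# Subscribe over TLS, authenticating with an Ed25519 NKey seed
nats sub 'events.>' \
  --server tls://nats.nats.svc.cluster.local:4222 \
  --nkey /etc/nats/seed.nk
```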
Mosquitto¶
| Property | Value |
|---|---|
| URL | mqtt.fzymgc.house:8883 |
| Namespace | mosquitto |
| Ingress Type | LoadBalancer (MetalLB) |
| Auth Method | Username/password (mosquitto_passwd) |
| Password File | /mosquitto/config/passwd (from mosquitto-auth) |
| Purpose | MQTT broker for IoT and Home Assistant |
| TLS | External on port 8883 |
| Bridge TLS | Uses fzymgc-ica1-ca full chain bundle |
| Status | Operational |
CNPG (CloudNativePG)¶
| Property | Value |
|---|---|
| URL | postgres.fzymgc.house |
| Namespace | postgres |
| Ingress Type | Traefik IngressRoute |
| Auth Method | PostgreSQL native (TLS + password) |
| Cluster Name | main |
| Instances | 3 replicas |
| Storage | 10Gi per instance (postgres-storage) |
| PostgreSQL Version | 18.1 |
| Status | Operational |
Purpose:
CloudNativePG is a Kubernetes operator for PostgreSQL. It manages the main PostgreSQL cluster that provides databases for multiple applications.
Databases:
| Database | Owner | Application |
|---|---|---|
| `authentik` | `authentik` | Authentik IdP |
| `grafana` | `grafana` | Grafana |
| `mealie` | `mealie` | Mealie |
| `temporal` | `temporal` | Temporal (default store) |
| `temporal_visibility` | `temporal` | Temporal (visibility store) |
Features:
- Automatic failover with Raft consensus
- Continuous backup to Longhorn snapshots
- TLS encryption for all connections
- WAL archiving via Barman Cloud plugin
- 15-day retention policy for backups
Connection:
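A minimal connection sketch, assuming CNPG's generated `main-rw` read-write Service and the `mealie` application database:

```shell
# Connect to the primary via the CNPG read-write Service (in-cluster DNS)
psql "host=main-rw.postgres.svc.cluster.local port=5432 dbname=mealie user=mealie sslmode=require"
```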
Gateway API¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | kube-system |
| Version | v1.4.1 (experimental CRDs) |
| Purpose | Kubernetes Gateway API resources |
| Status | Operational |
Purpose:
Gateway API provides Kubernetes-native traffic routing resources. Currently deployed for experimental features and future traffic management capabilities.
Installed CRDs:
- GatewayClass, Gateway, HTTPRoute
- TCPRoute, UDPRoute, TLSRoute
- ReferenceGrant, BackendTLSPolicy
Grafana Operator¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | grafana-operator |
| Purpose | Kubernetes operator for Grafana resources |
| Status | Operational |
Purpose:
Manages Grafana resources declaratively via Kubernetes CRDs:
- `GrafanaFolder` - Dashboard organization
- `GrafanaDashboard` - Dashboard definitions
- `GrafanaAlertRuleGroup` - Alert rules
- `GrafanaDatasource` - Data source configuration
Usage:
Applications define their Grafana resources in their app-configs directories, and the operator syncs them to the Grafana instance.
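A minimal `GrafanaDashboard` sketch illustrating the CRD-driven flow; the `dashboards: grafana` selector label and dashboard JSON are assumptions, and the selector must match labels on the Grafana instance CR:

```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: example-dashboard      # hypothetical name
  namespace: grafana
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana      # assumption: label on the target Grafana CR
  json: |
    {
      "title": "Example Dashboard",
      "panels": []
    }
```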
Kubernetes Replicator¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | kube-system |
| Source | mittwald/kubernetes-replicator |
| Purpose | Replicate Secrets and ConfigMaps across namespaces |
| Status | Operational |
Purpose:
Automatically replicates Secrets and ConfigMaps to multiple namespaces based on annotations.
Usage:
Add annotation to source Secret/ConfigMap:
Or use regex patterns:
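Both annotation styles can be sketched as below, using mittwald's documented `replicate-to` annotation key; the target namespaces are examples:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: shared-tls             # hypothetical source Secret
  annotations:
    # Explicit namespace list (example targets)
    replicator.v1.mittwald.de/replicate-to: "mealie,temporal"
    # Or a regex pattern matching namespace names:
    # replicator.v1.mittwald.de/replicate-to: "app-.*"
```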
Reloader¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | kube-system |
| Replicas | 2 |
| Purpose | Auto-reload Deployments on ConfigMap/Secret changes |
| Status | Operational |
Purpose:
Watches for changes in ConfigMaps and Secrets, then triggers rolling restarts of associated Deployments/StatefulSets/DaemonSets.
Usage:
Add annotation to Deployment:
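A minimal sketch using Reloader's auto-reload annotation, placed in the Deployment's `metadata.annotations`:

```yaml
metadata:
  annotations:
    reloader.stakater.com/auto: "true"   # restart when any referenced ConfigMap/Secret changes
```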
Or specify exact resources:
```yaml
configmap.reloader.stakater.com/reload: "my-configmap"
secret.reloader.stakater.com/reload: "my-secret"
```
System Upgrade Controller¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | system-upgrade |
| Version | v0.18.0 |
| Purpose | Automated k3s cluster upgrades |
| Status | Operational |
Purpose:
Manages rolling upgrades of k3s server and agent nodes using Plan CRDs.
Upgrade Plans:
| Plan | Target | Channel |
|---|---|---|
| `k3s-server` | Control plane nodes | stable |
| `k3s-agent` | Worker nodes | stable |
Process:
- Plans check k3s release channel for new versions
- Server nodes upgrade first (one at a time)
- Agent nodes upgrade after servers complete
- Nodes are cordoned/drained during upgrade
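The server plan can be sketched as a Plan CRD; the channel URL and node selector follow upstream system-upgrade-controller examples, so exact values in this cluster may differ:

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1               # upgrade one control-plane node at a time
  channel: https://update.k3s.io/v1-release/channels/stable
  cordon: true                 # cordon/drain nodes during upgrade
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
```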
Velero¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | velero |
| Purpose | Kubernetes backup and disaster recovery |
| Vault Path | secret/fzymgc-house/cluster/velero |
| Status | Operational |
Purpose:
Backs up Kubernetes resources and persistent volumes for disaster recovery.
Backup Strategy:
Uses an exclude-only approach: all namespaces are backed up by default except infrastructure and stateless ones.
Schedules:
| Schedule | Frequency | TTL | Description |
|---|---|---|---|
| `daily-backup` | Daily 2 AM | 30 days | Core resources |
| `weekly-full-backup` | Sunday 3 AM | 90 days | Extended resources including NetworkPolicies |
Excluded Namespaces:
- Kubernetes core: `kube-system`, `kube-node-lease`, `kube-public`, `default`
- Networking: `calico-*`, `traefik`, `metallb`
- Operators: `cert-manager`, `external-secrets`, `cnpg-system`, `grafana-operator`
- Ephemeral: `arc-systems`, `arc-runners`, `system-upgrade`
- Telemetry: `alloy`, `loki`, `prometheus`, `cloudflared`
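The daily schedule might be expressed as a Velero Schedule CR; cron and TTL follow the table above, and the namespace list is abbreviated:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"        # daily at 2 AM
  template:
    ttl: 720h                  # 30-day retention
    excludedNamespaces:
      - kube-system
      - traefik
      - cert-manager
      # ...remaining exclusions per the list above
```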
Observability Services¶
VictoriaMetrics¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | prometheus |
| Ingress Type | None |
| Auth Method | None |
| Purpose | Metrics storage (Prometheus-compatible) |
| Status | Operational |
Loki¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | loki |
| Ingress Type | None |
| Auth Method | None |
| Purpose | Log aggregation |
| Status | Operational |
Grafana Alloy¶
Alloy is deployed as two separate workloads for failure isolation and proper scaling:
Cluster Monitoring (DaemonSet)¶
| Property | Value |
|---|---|
| Application | monitoring-alloy |
| URL | Internal only |
| Namespace | alloy |
| Controller | DaemonSet (runs on all nodes) |
| Auth Method | None (internal cluster access) |
| Purpose | Collect pod logs and Kubernetes events |
| Status | Operational |
Collects:
- Pod logs via the Kubernetes API (`loki.source.kubernetes`)
- Kubernetes events (`loki.source.kubernetes_events`)
OTLP Ingestion (Deployment)¶
| Property | Value |
|---|---|
| Application | monitoring-alloy-ingest |
| External URL | alloy-ingest.fzymgc.house (OTLP/HTTP) |
| Namespace | alloy |
| Controller | Deployment (2 replicas) |
| Ingress Type | Traefik IngressRoute |
| Auth Method | Bearer Token (Vault) |
| Vault Path | secret/fzymgc-house/cluster/alloy |
| Purpose | Receive telemetry from external collectors |
| Status | Operational |
Features:
- OTLP/gRPC (port 4317) and OTLP/HTTP (port 4318) receivers
- Bearer token authentication via `otelcol.auth.bearer`
- Exports logs to Loki, metrics to Prometheus remote write
Why Separated:
- Failure isolation: Auth issues don't affect cluster monitoring
- Independent scaling: 2 replicas vs every node
- Cleaner configuration: Each deployment has focused purpose
- Easier debugging: Separate logs and metrics per function
External Collector Support:
External Alloy instances (Firewalla, other hosts) send telemetry to alloy-ingest.fzymgc.house:443 using OTLP/HTTP with bearer token authentication. See ansible/roles/alloy/ for Firewalla deployment.
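The sender side can be sketched in Alloy configuration syntax; the endpoint comes from above, while the environment variable name is an assumption:

```alloy
// Client-side bearer auth for OTLP/HTTP export to the cluster's ingest endpoint
otelcol.auth.bearer "ingest" {
  token = sys.env("ALLOY_INGEST_TOKEN")  // assumption: token injected via env var
}

otelcol.exporter.otlphttp "fzymgc" {
  client {
    endpoint = "https://alloy-ingest.fzymgc.house:443"
    auth     = otelcol.auth.bearer.ingest.handler
  }
}
```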
Uptime Kuma¶
| Property | Value |
|---|---|
| URL | status.fzymgc.house |
| Namespace | uptime-kuma |
| Ingress Type | Traefik IngressRoute |
| Auth Method | ForwardAuth (Authentik SSO) |
| Storage | 1Gi Longhorn PVC (SQLite) |
| Status | Operational |
Purpose: External service uptime monitoring with status page capabilities.
Initial Monitors:
- `https://auth.fzymgc.house` - Authentik
- `https://vault.fzymgc.house` - Vault UI
- `https://grafana.fzymgc.house` - Grafana
Planned Enhancement: Terraform-managed monitors via breml/uptimekuma provider (see selfhosted-cluster-ucb).
GitOps Services¶
HCP Terraform Operator¶
| Property | Value |
|---|---|
| URL | Internal only |
| Namespace | hcp-terraform |
| Purpose | Terraform Cloud workspace management |
| Status | Operational |
Actions Runner Controller¶
| Property | Value |
|---|---|
| URL | Internal only |
| Controller Namespace | arc-systems |
| Runners Namespace | arc-runners |
| Purpose | GitHub Actions self-hosted runners |
| Status | Operational |
External Services¶
| Service | Purpose | Management |
|---|---|---|
| Cloudflare | DNS, Tunnels, WAF | Terraform (tf/cloudflare) |
| HCP Terraform | Infrastructure automation | Web UI |
| GitHub | Source control, Actions | Web UI |
| Let's Encrypt | TLS certificates | cert-manager |
Auth Method Reference¶
| Method | Description | Configuration |
|---|---|---|
| OIDC | Direct OpenID Connect authentication | Authentik provider integration |
| Forward-Auth | Traefik middleware proxies auth to Authentik | forwardAuth middleware |
| Certificate | mTLS client certificate | Vault PKI integration |
| None | No authentication required | Internal services only |
Ingress Type Reference¶
| Type | Description | TLS Handling |
|---|---|---|
| Traefik IngressRoute | Native Traefik CRD | Traefik terminates TLS |
| TCP Passthrough | Raw TCP proxy | Backend terminates TLS |
| Helm Managed | Ingress defined in Helm values | Varies by chart |
| Cloudflare Tunnel | External via cloudflared | Cloudflare terminates |
| kube-vip VIP | Direct LoadBalancer IP | Service handles TLS |
Adding a New Service¶
- Create Kubernetes manifests in `argocd/app-configs/<service>/`
- Configure ingress (IngressRoute or Ingress resource)
- Set up authentication:
- OIDC: Create Authentik application and provider
- Forward-Auth: Add middleware reference
- Add secrets to Vault if needed
- Create ExternalSecret for Kubernetes secret sync
- Update this catalog
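The ingress and forward-auth steps above can be sketched for a hypothetical service; the hostname, port, and middleware name are assumptions, and the middleware must reference the Authentik outpost:

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: my-service             # hypothetical service
  namespace: my-service
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`my-service.fzymgc.house`)
      kind: Rule
      middlewares:
        - name: authentik-forward-auth   # assumption: forward-auth middleware name
          namespace: authentik
      services:
        - name: my-service
          port: 8080
```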