K8s OIDC Authentication Implementation Plan¶
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Enable kubectl authentication via Authentik OIDC with group-based RBAC for k8s-admins, k8s-developers, and k8s-viewers.
**Architecture:** Users authenticate to Authentik via browser (kubelogin) and receive a JWT with a groups claim; the k3s API server validates the JWT and maps groups to ClusterRoleBindings.
**Tech Stack:** Terraform (Authentik provider), Ansible (k3s config), Kubernetes (RBAC), kubelogin (OIDC credential plugin).
**Design Document:** docs/plans/2025-12-28-k8s-oidc-authentication-design.md
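To make the flow concrete, here is a sketch of the decoded ID-token payload the API server validates. Claim values are hypothetical, not captured from a real token:

```bash
# Hypothetical decoded ID-token payload (sketch only):
#
#   {
#     "iss": "https://auth.fzymgc.house/application/o/kubernetes/",
#     "aud": "kubernetes",
#     "email": "user@fzymgc.house",
#     "groups": ["k8s-admins"]
#   }
#
# With the prefixes configured in Task 2, k3s maps this token to user
# "oidc:user@fzymgc.house" in group "oidc:k8s-admins" before RBAC evaluation.
```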
Task 1: Create Authentik OIDC Provider (Terraform)¶
Files:
- Create: tf/authentik/kubernetes-oidc.tf
- Reference: tf/authentik/data-sources.tf, tf/authentik/grafana.tf (pattern)
Step 1: Create kubernetes-oidc.tf
```hcl
# tf/authentik/kubernetes-oidc.tf
# Kubernetes OIDC provider for kubectl authentication.
# Users authenticate via kubelogin and receive a JWT with a groups claim.

# Custom scope to include the groups claim in the JWT
resource "authentik_property_mapping_provider_scope" "kubernetes_groups" {
  name       = "Kubernetes Groups"
  scope_name = "groups"
  expression = "return list(request.user.ak_groups.values_list('name', flat=True))"
}

# OAuth2 provider for the Kubernetes API server
resource "authentik_provider_oauth2" "kubernetes" {
  name      = "Provider for Kubernetes"
  client_id = "kubernetes"

  # Public client - no secret required for native CLI apps (kubelogin)
  client_type = "public"

  # Explicit consent - the user confirms once per session
  authorization_flow = data.authentik_flow.default_provider_authorization_explicit_consent.id
  invalidation_flow  = data.authentik_flow.default_provider_invalidation_flow.id

  # Token lifetimes
  access_token_validity  = "minutes=15"
  refresh_token_validity = "hours=8"

  # Allowed redirect URIs for kubelogin
  allowed_redirect_uris = [
    { matching_mode = "strict", url = "http://localhost:8000" },
    { matching_mode = "strict", url = "http://localhost:18000" },
    { matching_mode = "strict", url = "urn:ietf:wg:oauth:2.0:oob" },
  ]

  # Include groups in the token
  property_mappings = [
    data.authentik_property_mapping_provider_scope.openid.id,
    data.authentik_property_mapping_provider_scope.email.id,
    data.authentik_property_mapping_provider_scope.profile.id,
    authentik_property_mapping_provider_scope.kubernetes_groups.id,
  ]

  signing_key = data.authentik_certificate_key_pair.tls.id
}

# Kubernetes application (CLI app, no launch URL)
resource "authentik_application" "kubernetes" {
  name              = "Kubernetes"
  slug              = "kubernetes"
  protocol_provider = authentik_provider_oauth2.kubernetes.id
  meta_launch_url   = "blank://blank"
  meta_description  = "kubectl authentication via kubelogin"
}
```
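Once applied, the provider's OIDC discovery document gives a quick confirmation that the issuer and groups scope are live. A minimal check, assuming curl and jq are available:

```bash
# The issuer must match k3s_oidc_issuer_url exactly, trailing slash included
curl -s https://auth.fzymgc.house/application/o/kubernetes/.well-known/openid-configuration \
  | jq '{issuer, scopes_supported}'
```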
Step 2: Format and validate
Run: terraform -chdir=tf/authentik fmt
Run: terraform -chdir=tf/authentik validate
Expected: No errors
Step 3: Plan changes
Run: terraform -chdir=tf/authentik plan
Expected: Plan shows 3 resources to add (scope mapping, provider, application)
Step 4: Commit
```bash
git add tf/authentik/kubernetes-oidc.tf
git commit -m "feat(authentik): add Kubernetes OIDC provider for kubectl auth

Implements the Authentik side of OIDC authentication:
- Public OAuth2 client for kubelogin CLI
- Custom groups scope mapping
- 15-min access tokens, 8-hr refresh tokens

Part of k8s-oidc-authentication implementation."
```
Task 2: Add OIDC Variables to Ansible (k3s-common)¶
Files:
- Modify: ansible/roles/k3s-common/defaults/main.yml
Step 1: Add OIDC variables to defaults/main.yml
Add after line 38 (after k3s_retry_delay):
```yaml
# OIDC Authentication Configuration
# Enables kubectl authentication via Authentik OIDC
k3s_oidc_enabled: true
k3s_oidc_issuer_url: "https://auth.fzymgc.house/application/o/kubernetes/"
k3s_oidc_client_id: "kubernetes"
k3s_oidc_username_claim: "email"
k3s_oidc_groups_claim: "groups"
k3s_oidc_username_prefix: "oidc:"
k3s_oidc_groups_prefix: "oidc:"
# Audience validation - prevents token reuse from other Authentik applications
k3s_oidc_required_claims: "aud=kubernetes"
```
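Beyond yamllint, a quick parse check confirms the new keys carry the intended values. A sketch, assuming Python with PyYAML (a dependency Ansible already installs):

```bash
# Parse the defaults file and print the two values most likely to be mistyped
python3 -c "
import yaml
d = yaml.safe_load(open('ansible/roles/k3s-common/defaults/main.yml'))
print(d['k3s_oidc_issuer_url'], d['k3s_oidc_required_claims'])
"
```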
Step 2: Validate YAML syntax
Run: yamllint ansible/roles/k3s-common/defaults/main.yml
Expected: No errors (or only line-length warnings if any)
Step 3: Commit
```bash
git add ansible/roles/k3s-common/defaults/main.yml
git commit -m "feat(ansible): add OIDC configuration variables for k3s

Adds variables for OIDC authentication:
- Issuer URL pointing to Authentik kubernetes application
- Claims configuration (email, groups)
- Prefix configuration for user/group isolation

Part of k8s-oidc-authentication implementation."
```
Task 3: Update k3s Config Template for OIDC¶
Files:
- Modify: ansible/roles/k3s-common/templates/k3s-config.yaml.j2
Step 1: Add OIDC args to kube-apiserver-arg section
After line 22 (after the feature-gates for loop, before kubelet-arg:), add:
```jinja
{% if k3s_oidc_enabled | default(false) %}
# OIDC Authentication via Authentik
- oidc-issuer-url={{ k3s_oidc_issuer_url }}
- oidc-client-id={{ k3s_oidc_client_id }}
- oidc-username-claim={{ k3s_oidc_username_claim }}
- oidc-groups-claim={{ k3s_oidc_groups_claim }}
- oidc-username-prefix={{ k3s_oidc_username_prefix }}
- oidc-groups-prefix={{ k3s_oidc_groups_prefix }}
- oidc-required-claim={{ k3s_oidc_required_claims }}
{% endif %}
```
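With the Task 2 defaults, the rendered kube-apiserver-arg entries should look like the sketch below (illustrative, not captured output). After deployment the same thing can be confirmed directly on a server node:

```bash
# Expected shape of the rendered entries in /etc/rancher/k3s/config.yaml:
#   - oidc-issuer-url=https://auth.fzymgc.house/application/o/kubernetes/
#   - oidc-client-id=kubernetes
#   - oidc-username-claim=email
#   - oidc-groups-claim=groups
#   - oidc-username-prefix=oidc:
#   - oidc-groups-prefix=oidc:
#   - oidc-required-claim=aud=kubernetes
ssh tpi-alpha-1 'sudo grep oidc /etc/rancher/k3s/config.yaml'
```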
Step 2: Verify template renders correctly
Run: ansible -i ansible/inventory/hosts.yml tpi-alpha-1 -m template -a "src=ansible/roles/k3s-common/templates/k3s-config.yaml.j2 dest=/dev/stdout" --check -e "k3s_role=server k3s_oidc_enabled=true" 2>/dev/null | grep -A10 "kube-apiserver-arg"
Expected: Output shows OIDC args in kube-apiserver-arg section
Step 3: Commit
```bash
git add ansible/roles/k3s-common/templates/k3s-config.yaml.j2
git commit -m "feat(ansible): add OIDC args to k3s server config template

Adds conditional OIDC arguments to kube-apiserver when enabled:
- Issuer URL, client ID, claims configuration
- Username and groups prefix for namespace isolation
- Required claims for audience validation

Part of k8s-oidc-authentication implementation."
```
Task 4: Create RBAC ClusterRoleBindings (ArgoCD)¶
Files:
- Create: argocd/app-configs/k8s-oidc-rbac/kustomization.yaml
- Create: argocd/app-configs/k8s-oidc-rbac/clusterrolebinding-admins.yaml
- Create: argocd/app-configs/k8s-oidc-rbac/clusterrolebinding-developers.yaml
- Create: argocd/app-configs/k8s-oidc-rbac/clusterrolebinding-viewers.yaml
Step 1: Create directory
mkdir -p argocd/app-configs/k8s-oidc-rbac
Step 2: Create kustomization.yaml
```yaml
# argocd/app-configs/k8s-oidc-rbac/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Note: no namespace specified - ClusterRoleBindings are cluster-scoped resources
resources:
  - clusterrolebinding-admins.yaml
  - clusterrolebinding-developers.yaml
  - clusterrolebinding-viewers.yaml
commonLabels:
  app.kubernetes.io/name: k8s-oidc-rbac
  app.kubernetes.io/component: authentication
  app.kubernetes.io/part-of: authentik
```
Step 3: Create clusterrolebinding-admins.yaml
```yaml
# argocd/app-configs/k8s-oidc-rbac/clusterrolebinding-admins.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-k8s-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: oidc:k8s-admins  # Matches k3s_oidc_groups_prefix + Authentik group name
```
Step 4: Create clusterrolebinding-developers.yaml
```yaml
# argocd/app-configs/k8s-oidc-rbac/clusterrolebinding-developers.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-k8s-developers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: oidc:k8s-developers
```
Step 5: Create clusterrolebinding-viewers.yaml
```yaml
# argocd/app-configs/k8s-oidc-rbac/clusterrolebinding-viewers.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-k8s-viewers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: oidc:k8s-viewers
```
Step 6: Validate kustomization
Run: kubectl kustomize argocd/app-configs/k8s-oidc-rbac
Expected: Three ClusterRoleBinding resources with correct labels
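Once the bindings are applied (they sync in Phase 3, post-merge), impersonation can sanity-check each tier without a real OIDC login, from any context allowed to impersonate (e.g. the admin kubeconfig). The user name below is hypothetical:

```bash
# Viewers: read-only
kubectl --context fzymgc-house auth can-i get pods \
  --as=oidc:someone@fzymgc.house --as-group=oidc:k8s-viewers      # expect: yes
kubectl --context fzymgc-house auth can-i create deployments \
  --as=oidc:someone@fzymgc.house --as-group=oidc:k8s-viewers      # expect: no
# Developers: workloads yes, RBAC no
kubectl --context fzymgc-house auth can-i create clusterrolebindings \
  --as=oidc:someone@fzymgc.house --as-group=oidc:k8s-developers   # expect: no
```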
Step 7: Commit
```bash
git add argocd/app-configs/k8s-oidc-rbac/
git commit -m "feat(argocd): add OIDC RBAC ClusterRoleBindings

Creates ClusterRoleBindings for OIDC-authenticated groups:
- oidc:k8s-admins → cluster-admin
- oidc:k8s-developers → edit
- oidc:k8s-viewers → view

Groups are prefixed with 'oidc:' to match k3s OIDC configuration.

Part of k8s-oidc-authentication implementation."
```
Task 5: Create ArgoCD Application Definition¶
Files:
- Create: argocd/cluster-app/k8s-oidc-rbac.yaml
- Reference: Existing application definitions in argocd/cluster-app/
Step 1: Check existing application pattern
Look at an existing application in argocd/cluster-app/ for the pattern.
Step 2: Create application definition
```yaml
# argocd/cluster-app/k8s-oidc-rbac.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: k8s-oidc-rbac
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/fzymgc-house/selfhosted-cluster.git
    targetRevision: HEAD
    path: argocd/app-configs/k8s-oidc-rbac
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false  # ClusterRoleBindings are cluster-scoped
```
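After merge, ArgoCD should pick the Application up automatically; its sync state can be checked from the CLI (assumes kubectl access to the argocd namespace):

```bash
kubectl --context fzymgc-house -n argocd get application k8s-oidc-rbac
# SYNC STATUS should show Synced and HEALTH STATUS Healthy once reconciled
```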
Step 3: Validate YAML syntax
Run: yamllint argocd/cluster-app/k8s-oidc-rbac.yaml
Expected: No errors
Step 4: Commit
```bash
git add argocd/cluster-app/k8s-oidc-rbac.yaml
git commit -m "feat(argocd): add k8s-oidc-rbac application definition

ArgoCD Application for managing OIDC RBAC ClusterRoleBindings.
Auto-syncs from argocd/app-configs/k8s-oidc-rbac/.

Part of k8s-oidc-authentication implementation."
```
Task 6: Create User Documentation¶
Files:
- Create: docs/kubernetes-access.md
- Archive: None (no existing file to archive for this topic)
Step 1: Create kubernetes-access.md
````markdown
# Kubernetes Access Guide

## Overview

This cluster uses OIDC authentication via Authentik for human users. Service accounts and the static admin kubeconfig remain available for automation and break-glass access.

## Prerequisites

| Tool | Installation |
|------|--------------|
| kubectl | `brew install kubectl` |
| kubelogin | `brew install kubelogin` |

## OIDC Authentication (Recommended)

### First-Time Setup

1. **Create a kubeconfig file** at `~/.kube/configs/fzymgc-house-oidc.yml`:

   ```yaml
   apiVersion: v1
   kind: Config
   clusters:
     - name: fzymgc-house
       cluster:
         server: https://192.168.20.140:6443
         certificate-authority-data: <base64-encoded-ca>
   contexts:
     - name: fzymgc-house-oidc
       context:
         cluster: fzymgc-house
         user: oidc
   users:
     - name: oidc
       user:
         exec:
           apiVersion: client.authentication.k8s.io/v1beta1
           command: kubectl
           args:
             - oidc-login
             - get-token
             - --oidc-issuer-url=https://auth.fzymgc.house/application/o/kubernetes/
             - --oidc-client-id=kubernetes
           interactiveMode: IfAvailable
   current-context: fzymgc-house-oidc
   ```

   Note: Get the CA certificate from `/etc/rancher/k3s/k3s.yaml` on any control plane node.

2. **Set KUBECONFIG:**

   ```bash
   export KUBECONFIG=~/.kube/configs/fzymgc-house-oidc.yml
   ```

3. **Run a first command** (the browser opens):

   ```bash
   kubectl get nodes  # Browser opens → log in to Authentik → return to terminal
   ```

### Access Levels

| Authentik Group | Kubernetes Role | Permissions |
|-----------------|-----------------|-------------|
| k8s-admins | cluster-admin | Full cluster access |
| k8s-developers | edit | Create/modify workloads (no RBAC changes) |
| k8s-viewers | view | Read-only access |

### Token Lifecycle

| Token | Lifetime | Behavior |
|-------|----------|----------|
| Access | 15 min | Auto-refreshes silently |
| Refresh | 8 hours | Browser re-auth when expired |

Clear the cache to force re-authentication:

```bash
rm -rf ~/.kube/cache/oidc-login
```
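To see when the cached token expires, kubelogin's `get-token` prints an ExecCredential JSON whose `status.expirationTimestamp` can be read with `jq` (assumes `jq` is installed):

```bash
kubectl oidc-login get-token \
  --oidc-issuer-url=https://auth.fzymgc.house/application/o/kubernetes/ \
  --oidc-client-id=kubernetes | jq -r .status.expirationTimestamp
```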
## Break-Glass Access

When Authentik is unavailable, use the static admin kubeconfig:

```bash
export KUBECONFIG=~/.kube/configs/fzymgc-house-admin.yml
kubectl --context fzymgc-house get nodes
```

**Warning:** The admin kubeconfig has full cluster-admin privileges. Use it only for emergencies.

## Troubleshooting

### Browser doesn't open

```bash
# Try the manual browser flow
kubectl oidc-login get-token \
  --oidc-issuer-url=https://auth.fzymgc.house/application/o/kubernetes/ \
  --oidc-client-id=kubernetes
```

### Token expired during a long operation

Use the admin kubeconfig for long-running streaming operations such as `kubectl logs -f`.

### Group membership not updated

Clear the token cache and re-authenticate:

```bash
rm -rf ~/.kube/cache/oidc-login
kubectl get nodes  # Re-authenticates
```
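After re-authenticating, confirm the groups the API server actually sees with `kubectl auth whoami` (available in recent kubectl releases); the output should list the `oidc:`-prefixed groups:

```bash
kubectl auth whoami  # Groups should include oidc:k8s-admins / -developers / -viewers
```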
````

Step 2: Commit

```bash
git add docs/kubernetes-access.md
git commit -m "docs: add OIDC-based Kubernetes access guide

Documents:
- kubelogin setup and configuration
- Access levels and group mappings
- Token lifecycle and refresh behavior
- Break-glass procedures
- Troubleshooting steps

Part of k8s-oidc-authentication implementation."
```
Task 7: Update Notion Documentation¶
Files:
- Notion Services Catalog: Add Kubernetes OIDC entry
- Notion Tech References: Add kubelogin entry
Step 1: Add Kubernetes OIDC to Services Catalog
Using Notion MCP, add entry to Services Catalog database (ID: 50a1adf14f1d4d3fbd78ccc2ca36facc):
- Name: Kubernetes OIDC
- Category: Platform
- Hostname: N/A (API endpoint)
- Alt Hostnames: N/A
- Ingress Type: N/A
- Auth Method: OIDC
- Vault Path: N/A (no secrets stored)
- Namespace: kube-system
- Status: Operational (after deployment)
Step 2: Add kubelogin to Tech References
Using Notion MCP, add entry to Tech References database (ID: f7548c57375542b395694ae433ff07a4):
- Technology: kubelogin
- Category: Security
- Docs URL: https://github.com/int128/kubelogin
- Version: v1.28.0
Step 3: Commit
No git commit needed for this task; Notion content lives outside the repository.
Task 8: Push Feature Branch and Create PR¶
Step 1: Push branch
git push -u origin feat/k8s-oidc-authentication
Step 2: Create PR
```bash
gh pr create \
  --title "feat: implement K8s OIDC authentication via Authentik" \
  --body "## Summary

Implements OIDC authentication for kubectl using Authentik as the identity provider.

### Changes

**Terraform (Phase 1)**
- New \`tf/authentik/kubernetes-oidc.tf\` with OAuth2 provider and application

**Ansible (Phase 2)**
- OIDC variables in \`k3s-common/defaults/main.yml\`
- OIDC args in \`k3s-config.yaml.j2\` template

**ArgoCD (Phase 3)**
- New \`argocd/app-configs/k8s-oidc-rbac/\` with ClusterRoleBindings
- ArgoCD Application definition

**Documentation**
- New \`docs/kubernetes-access.md\` user guide
- Notion updates (Services Catalog, Tech References)

### Deployment Order

1. **Terraform**: Apply Authentik changes (creates OIDC provider)
2. **Ansible**: Apply k3s changes (rolling restart of control plane)
3. **ArgoCD**: Auto-syncs RBAC manifests after merge

### Testing

After deployment:

\`\`\`bash
# Install kubelogin
brew install kubelogin
# Configure kubeconfig (see docs/kubernetes-access.md)
kubectl --context fzymgc-house-oidc get nodes
\`\`\`

### Design Document

See \`docs/plans/2025-12-28-k8s-oidc-authentication-design.md\`

---

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"
```
Deployment Instructions (Post-Merge)¶
Phase 1: Apply Terraform¶
```bash
# Plan
terraform -chdir=tf/authentik plan

# Apply
terraform -chdir=tf/authentik apply

# Verify in the Authentik admin UI:
# Applications → Providers → "Provider for Kubernetes"
```
Phase 2: Apply Ansible (Rolling Restart)¶
⚠️ Requires rolling restart of k3s servers
```bash
# Dry run first
ansible-playbook -i ansible/inventory/hosts.yml ansible/k3s-playbook.yml \
  --tags k3s-server --check --diff --limit tpi-alpha-1

# Apply to the first control plane node
ansible-playbook -i ansible/inventory/hosts.yml ansible/k3s-playbook.yml \
  --tags k3s-server --limit tpi-alpha-1

# Wait for the node to be Ready
kubectl --context fzymgc-house get nodes -w

# Verify the OIDC args landed
ssh tpi-alpha-1 'sudo grep oidc /etc/rancher/k3s/config.yaml'

# Repeat for tpi-alpha-2, tpi-alpha-3
```
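Optionally, a raw bearer-token request separates authentication from authorization before Phase 3: a 401 means the API server rejected the token, while a 403 naming the `oidc:`-prefixed user means OIDC validation works and only RBAC is missing. A sketch, assuming kubelogin and jq are installed:

```bash
TOKEN=$(kubectl oidc-login get-token \
  --oidc-issuer-url=https://auth.fzymgc.house/application/o/kubernetes/ \
  --oidc-client-id=kubernetes | jq -r .status.token)
# -k skips CA verification; acceptable for a one-off smoke test
curl -sk -H "Authorization: Bearer $TOKEN" https://192.168.20.140:6443/api/v1/namespaces | head -n 20
```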
Phase 3: ArgoCD Sync¶
ArgoCD auto-syncs after merge. Verify:
```bash
kubectl --context fzymgc-house get clusterrolebindings | grep oidc
# Expected: oidc-k8s-admins, oidc-k8s-developers, oidc-k8s-viewers
```
Phase 4: Test Authentication¶
```bash
# Install kubelogin
brew install kubelogin

# Create kubeconfig (see docs/kubernetes-access.md)

# Test (browser opens)
kubectl --context fzymgc-house-oidc get nodes

# Verify identity
kubectl --context fzymgc-house-oidc auth whoami
```
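A successful login should report the prefixed identity. Illustrative `auth whoami` output for a k8s-admins member (username hypothetical, formatting approximate), plus a quick rights check:

```bash
# Expected `auth whoami` shape:
#   ATTRIBUTE   VALUE
#   Username    oidc:user@fzymgc.house
#   Groups      [oidc:k8s-admins system:authenticated]

# cluster-admin members should see "yes" here
kubectl --context fzymgc-house-oidc auth can-i '*' '*'
```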