Kubernetes OIDC Authentication via Authentik¶
Context: This design replaces the original Vault PKI client certificate approach documented in
archive/2025-12-26-k8s-vault-pki-access-implementation.md. That approach was blocked by a k3s limitation where --kube-apiserver-arg client-ca-file= is silently ignored (see k3s#9367).
Goal¶
Enable users to authenticate to the k3s cluster using Authentik OIDC, with group-based RBAC (k8s-admins, k8s-developers, k8s-viewers).
Architecture¶
┌────────────┐    OIDC Token   ┌────────────┐   Verify Token   ┌────────────┐
│  kubectl   │ ──────────────► │    k3s     │ ───────────────► │ Authentik  │
│            │                 │ API Server │                  │  (OIDC)    │
└────────────┘                 └────────────┘                  └────────────┘
       │                              │
       │ kubelogin                    │ RBAC
       ▼                              ▼
┌────────────┐                 ┌────────────┐
│  Browser   │                 │ ClusterRole│
│ Auth Flow  │                 │  Bindings  │
└────────────┘                 └────────────┘
Flow:
1. User runs kubectl with OIDC-configured kubeconfig
2. kubelogin (OIDC plugin) opens browser for Authentik login
3. User authenticates (with MFA if configured), Authentik issues JWT with groups claim
4. kubectl sends JWT to k3s API server
5. k3s validates the JWT signature using Authentik's published signing keys (fetched from the issuer's JWKS endpoint) and enforces the audience claim
6. User's groups (prefixed with oidc:) map to RBAC via ClusterRoleBindings
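To see exactly what the API server receives, you can decode the ID token payload. A minimal sketch, assuming jq is installed and using the issuer/client values defined later in this document (the awk step re-pads the unpadded base64url segment):
# Obtain a token via kubelogin and extract the raw JWT
TOKEN=$(kubectl oidc-login get-token \
  --oidc-issuer-url=https://auth.fzymgc.house/application/o/kubernetes/ \
  --oidc-client-id=kubernetes \
  --oidc-extra-scope=email --oidc-extra-scope=groups | jq -r '.status.token')
# Decode the payload segment and show the claims k3s evaluates
printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+' \
  | awk '{ while (length($0) % 4) $0 = $0 "="; print }' \
  | base64 -d | jq '{aud, email, groups}'
Expect aud to be "kubernetes" and groups to list the user's Authentik group names (without the oidc: prefix, which k3s adds server-side).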
Components¶
1. Authentik OIDC Provider (Terraform)¶
Create tf/authentik/kubernetes-oidc.tf (separate from existing kubernetes-groups.tf):
# OAuth2 Provider for Kubernetes API Server
resource "authentik_provider_oauth2" "kubernetes" {
  name      = "Provider for Kubernetes"
  client_id = "kubernetes"

  # Public client - no secret required for native CLI apps
  # Valid values: "public" (CLI tools) or "confidential" (web apps with secrets)
  client_type = "public"

  # Use explicit consent - user confirms once per session
  # Note: Implicit consent flow would require creating a new flow in Authentik
  authorization_flow = data.authentik_flow.default_provider_authorization_explicit_consent.id
  invalidation_flow  = data.authentik_flow.default_provider_invalidation_flow.id

  # Token lifetimes
  # Format: "minutes=N", "hours=N", or "days=N"
  access_token_validity  = "minutes=15"
  refresh_token_validity = "hours=8" # Requires verification against provider schema

  # Allowed redirect URIs for kubelogin
  allowed_redirect_uris = [
    { matching_mode = "strict", url = "http://localhost:8000" },
    { matching_mode = "strict", url = "http://localhost:18000" },
    { matching_mode = "strict", url = "urn:ietf:wg:oauth:2.0:oob" }
  ]

  # Include groups in token
  property_mappings = [
    data.authentik_property_mapping_provider_scope.openid.id,
    data.authentik_property_mapping_provider_scope.email.id,
    data.authentik_property_mapping_provider_scope.profile.id,
    authentik_property_mapping_provider_scope.kubernetes_groups.id,
  ]

  signing_key = data.authentik_certificate_key_pair.tls.id
}

# Custom scope to include groups claim
resource "authentik_property_mapping_provider_scope" "kubernetes_groups" {
  name       = "Kubernetes Groups"
  scope_name = "groups"
  expression = "return list(request.user.ak_groups.values_list('name', flat=True))"
}

resource "authentik_application" "kubernetes" {
  name              = "Kubernetes"
  slug              = "kubernetes"
  protocol_provider = authentik_provider_oauth2.kubernetes.id
  meta_launch_url   = "blank://blank" # CLI app, no launch URL
}
File organization: Keep OIDC provider separate from groups for clarity:
- kubernetes-groups.tf - Group definitions (existing)
- kubernetes-oidc.tf - OIDC provider and application (new)
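After terraform apply, Authentik should publish the OIDC discovery document under the application slug; a quick hedged check:
# Issuer must match k3s_oidc_issuer_url exactly (including the trailing slash)
curl -s https://auth.fzymgc.house/application/o/kubernetes/.well-known/openid-configuration \
  | jq '{issuer, jwks_uri}'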
2. k3s OIDC Configuration (Ansible)¶
Variables (k3s-common/defaults/main.yml)¶
Add OIDC-specific variables following the k3s_oidc_* naming convention:
# OIDC Authentication Configuration
k3s_oidc_enabled: true
k3s_oidc_issuer_url: "https://auth.fzymgc.house/application/o/kubernetes/"
k3s_oidc_client_id: "kubernetes"
k3s_oidc_username_claim: "email"
k3s_oidc_groups_claim: "groups"
k3s_oidc_username_prefix: "oidc:"
k3s_oidc_groups_prefix: "oidc:"
# Audience validation - prevents token reuse from other Authentik applications
# Format: KEY=VALUE (per k3s docs: --oidc-required-claim=KEY=VALUE)
k3s_oidc_required_claims: "aud=kubernetes"
Template (k3s-config.yaml.j2)¶
Add to the existing kube-apiserver-arg section (append OIDC args after feature gates):
kube-apiserver-arg:
{% for gate in k3s_feature_gates %}
- feature-gates={{ gate }}
{% endfor %}
{% if k3s_oidc_enabled | default(false) %}
# OIDC Authentication via Authentik
- oidc-issuer-url={{ k3s_oidc_issuer_url }}
- oidc-client-id={{ k3s_oidc_client_id }}
- oidc-username-claim={{ k3s_oidc_username_claim }}
- oidc-groups-claim={{ k3s_oidc_groups_claim }}
- oidc-username-prefix={{ k3s_oidc_username_prefix }}
- oidc-groups-prefix={{ k3s_oidc_groups_prefix }}
- oidc-required-claim={{ k3s_oidc_required_claims }}
{% endif %}
Deployment Requirements¶
⚠️ Important: Changing kube-apiserver-arg requires a rolling restart of k3s servers:
- Run playbook with --check --diff first
- Apply changes one control plane node at a time
- Wait for node to become Ready before proceeding to next
- Verify OIDC endpoint is accessible after restart
# Dry run first
ansible-playbook -i inventory/hosts.yml k3s-playbook.yml \
--tags k3s-server --check --diff --limit tpi-alpha-1
# Apply to first control plane node
ansible-playbook -i inventory/hosts.yml k3s-playbook.yml \
--tags k3s-server --limit tpi-alpha-1
# Wait and verify, then continue with tpi-alpha-2, tpi-alpha-3
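Between node applies, a hedged verification step (k3s logs its full kube-apiserver argument list at startup, so the flags should appear in the journal):
# Wait for the node to report Ready before moving to the next control plane node
kubectl wait --for=condition=Ready node/tpi-alpha-1 --timeout=300s
# Confirm the restarted server picked up the OIDC flags
ssh tpi-alpha-1 'sudo journalctl -u k3s -n 500 | grep -o "oidc-issuer-url=[^ ]*"'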
3. RBAC ClusterRoleBindings (Kubernetes/ArgoCD)¶
Application Structure¶
Create argocd/app-configs/k8s-oidc-rbac/ following ArgoCD conventions:
argocd/app-configs/k8s-oidc-rbac/
├── kustomization.yaml
├── clusterrolebinding-admins.yaml
├── clusterrolebinding-developers.yaml
└── clusterrolebinding-viewers.yaml
kustomization.yaml¶
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Note: No namespace specified - ClusterRoleBindings are cluster-scoped resources
resources:
  - clusterrolebinding-admins.yaml
  - clusterrolebinding-developers.yaml
  - clusterrolebinding-viewers.yaml
commonLabels:
  app.kubernetes.io/name: k8s-oidc-rbac
  app.kubernetes.io/component: authentication
  app.kubernetes.io/part-of: authentik
clusterrolebinding-admins.yaml¶
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-k8s-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: oidc:k8s-admins # Matches k3s_oidc_groups_prefix + Authentik group
clusterrolebinding-developers.yaml¶
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-k8s-developers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: oidc:k8s-developers
clusterrolebinding-viewers.yaml¶
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-k8s-viewers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: oidc:k8s-viewers
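The bindings can be sanity-checked before any OIDC login by impersonating the prefixed groups from an admin context (usernames below are hypothetical):
# Viewers: reads succeed, writes are denied
kubectl auth can-i get pods --all-namespaces --as=oidc:alice --as-group=oidc:k8s-viewers   # yes
kubectl auth can-i create deployments --as=oidc:alice --as-group=oidc:k8s-viewers          # no
# Developers: workload writes succeed, cluster-scoped RBAC writes are denied
kubectl auth can-i create deployments --as=oidc:bob --as-group=oidc:k8s-developers         # yes
kubectl auth can-i create clusterroles --as=oidc:bob --as-group=oidc:k8s-developers        # no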
4. Client Setup (kubelogin)¶
Installation¶
| Platform | Command |
|---|---|
| macOS (Homebrew) | brew install int128/kubelogin/kubelogin |
| Linux (Homebrew) | brew install int128/kubelogin/kubelogin |
| Krew (any platform) | kubectl krew install oidc-login |
| Linux (binary) | Download from kubelogin releases |
| Windows | choco install kubelogin or download binary |
| Go install | go install github.com/int128/kubelogin@latest |
Recommended version: v1.28.0 or later. Note: the plain kubelogin formula in Homebrew core is Azure's unrelated tool of the same name; use the int128 tap (or Krew) shown above.
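Quick sanity check that kubectl can discover the plugin (kubectl resolves plugins named kubectl-<name> on PATH, so kubectl-oidc_login must be visible there):
# Should print kubelogin usage; "unknown command" means the plugin is not on PATH
kubectl oidc-login --help | head -n 3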
Kubeconfig Configuration¶
Create or update ~/.kube/configs/fzymgc-house-oidc.yml:
apiVersion: v1
kind: Config
clusters:
  - name: fzymgc-house
    cluster:
      server: https://192.168.20.140:6443
      certificate-authority-data: <base64-encoded-ca>
contexts:
  - name: fzymgc-house-oidc
    context:
      cluster: fzymgc-house
      user: oidc
users:
  - name: oidc
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://auth.fzymgc.house/application/o/kubernetes/
          - --oidc-client-id=kubernetes
          # Request the scopes that carry the claims k3s reads; Authentik only
          # includes a scope mapping's claims when the client requests that scope
          - --oidc-extra-scope=email
          - --oidc-extra-scope=groups
        interactiveMode: IfAvailable
current-context: fzymgc-house-oidc
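As a cross-check of the stanza above, kubelogin ships a setup wizard that performs a test login against the same issuer and prints a ready-made kubeconfig snippet:
kubectl oidc-login setup \
  --oidc-issuer-url=https://auth.fzymgc.house/application/o/kubernetes/ \
  --oidc-client-id=kubernetes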
First-Time Login¶
# Set kubeconfig
export KUBECONFIG=~/.kube/configs/fzymgc-house-oidc.yml
# First command opens browser for authentication
kubectl get nodes
# Browser opens → Login to Authentik → Return to terminal
# Token is cached for subsequent commands
Token Lifetime and Refresh¶
| Token Type | Lifetime | Behavior |
|---|---|---|
| Access Token | 15 minutes | Used for API authentication |
| Refresh Token | 8 hours | Used to obtain new access tokens |
| ID Token | 15 minutes | Contains user claims |
Behavior during long-running operations:
- kubectl logs -f and similar streaming commands may fail if access token expires
- kubelogin automatically refreshes tokens when refresh token is valid
- If refresh token expires, user must re-authenticate via browser
Recommendation: For long-running operations, use the static admin kubeconfig (see Break-Glass Access below).
Break-Glass and Static Authentication¶
Break-Glass Access¶
When Authentik is unavailable, use the static admin kubeconfig:
| Item | Location |
|---|---|
| Admin kubeconfig | /etc/rancher/k3s/k3s.yaml on control plane nodes |
| Local copy | ~/.kube/configs/fzymgc-house-admin.yml |
| Context name | fzymgc-house |
# Break-glass access when Authentik is down
export KUBECONFIG=~/.kube/configs/fzymgc-house-admin.yml
kubectl --context fzymgc-house get nodes
Security notes:
- Admin kubeconfig has full cluster-admin privileges
- Store securely, do not share
- Rotate periodically (k3s regenerates it on restart if removed)
- Audit usage via API server logs
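One hedged way to stage the local copy, assuming the tpi-alpha-1 node and 192.168.20.140 VIP from this design (k3s writes server: https://127.0.0.1:6443, so the address must be rewritten):
# Copy the admin kubeconfig and point it at the cluster VIP instead of localhost
ssh tpi-alpha-1 'sudo cat /etc/rancher/k3s/k3s.yaml' \
  | sed 's|https://127.0.0.1:6443|https://192.168.20.140:6443|' \
  > ~/.kube/configs/fzymgc-house-admin.yml
chmod 600 ~/.kube/configs/fzymgc-house-admin.yml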
Static Authentication Methods¶
| Method | Use Case | Credentials Location |
|---|---|---|
| k3s admin kubeconfig | Break-glass, automation | /etc/rancher/k3s/k3s.yaml |
| Service account tokens | CI/CD, automation | Kubernetes secrets |
| OIDC (this design) | Human users | Authentik |
Recovery Procedures¶
Authentik unavailable:
1. Use admin kubeconfig for immediate access
2. Check Authentik pod status: kubectl get pods -n authentik
3. Review Authentik logs for issues
4. Restore from backup if necessary
Token refresh failing:
1. Clear kubelogin cache: rm -rf ~/.kube/cache/oidc-login
2. Re-authenticate: run any kubectl command to trigger the browser flow (kubectl oidc-login setup is also useful for debugging)
3. If Authentik is down, use break-glass access
Security Considerations¶
Token Security¶
| Control | Implementation |
|---|---|
| Audience validation | oidc-required-claim=aud=kubernetes prevents token reuse |
| Code flow + PKCE | kubelogin uses the OAuth2 authorization code flow; PKCE protects the code exchange against interception |
| Short-lived tokens | 15-minute access tokens limit exposure |
| MFA | Configurable in Authentik authentication flow |
Authentik Configuration¶
Required settings in Authentik:
- Restrict redirect URIs to localhost and OOB only
- Enable MFA for k8s-admins group (recommended)
- Set appropriate token lifetimes
- Monitor authentication logs
Group Prefix Explanation¶
The oidc: prefix on groups serves two purposes:
1. Namespace isolation: Distinguishes OIDC groups from local groups
2. Audit clarity: Makes it clear which ClusterRoleBindings are OIDC-based
Example: User in Authentik group k8s-admins appears as oidc:k8s-admins in Kubernetes.
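kubectl auth whoami (available in recent kubectl; the SelfSubjectReview API behind it landed around Kubernetes v1.27/v1.28) makes both prefixes visible. The output below is illustrative, with a hypothetical user:
kubectl --context fzymgc-house-oidc auth whoami
# ATTRIBUTE   VALUE
# Username    oidc:alice@fzymgc.house
# Groups      [oidc:k8s-admins system:authenticated]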
Existing Infrastructure¶
| Component | Status | Notes |
|---|---|---|
| Authentik Groups | ✅ Exists | k8s-admins, k8s-developers, k8s-viewers in kubernetes-groups.tf |
| Vault Group Aliases | ✅ Exists | Retained for mTLS and other Vault PKI use cases |
| k3s-common role | ✅ Exists | Template needs OIDC args added |
| RBAC Bindings | ❌ Needed | New manifests in argocd/app-configs/k8s-oidc-rbac/ |
| Authentik OIDC Provider | ❌ Needed | New in tf/authentik/kubernetes-oidc.tf |
Implementation Tasks¶
Phase 1: Terraform (Authentik OIDC Provider)¶
- Create tf/authentik/kubernetes-oidc.tf with:
  - OAuth2 provider (public client)
  - Groups scope mapping
  - Application registration
- Apply via HCP Terraform
- Verify provider appears in Authentik admin
Phase 2: Ansible (k3s OIDC Configuration)¶
- Add variables to k3s-common/defaults/main.yml
- Update k3s-config.yaml.j2 template
- Dry-run first: ansible-playbook ... --check --diff
- Apply with rolling restart (one node at a time)
- Verify OIDC args in k3s config:
# Check k3s server configuration
ssh tpi-alpha-1 'sudo cat /etc/rancher/k3s/config.yaml | grep oidc'
# Or check k3s server logs
ssh tpi-alpha-1 'sudo journalctl -u k3s | grep -i oidc | tail -20'
Phase 3: ArgoCD (RBAC Manifests)¶
- Create argocd/app-configs/k8s-oidc-rbac/ directory structure
- Add ClusterRoleBindings for each group
- Commit and push - ArgoCD syncs automatically
- Verify bindings: kubectl get clusterrolebindings | grep oidc
Phase 4: Documentation and Testing¶
- Archive old documentation:
  - Move docs/kubernetes-access.md to docs/plans/archive/kubernetes-access-vault-pki.md
- Create new docs/kubernetes-access.md for OIDC
- Update Notion:
  - Services Catalog: Add Kubernetes OIDC entry
  - Tech References: Add kubelogin
- Run test scenarios (see below)
Test Scenarios¶
RBAC Enforcement Tests¶
# As k8s-viewer: List pods (should succeed)
kubectl --context fzymgc-house-oidc get pods -A
# As k8s-viewer: Create deployment (should fail - 403 Forbidden)
kubectl --context fzymgc-house-oidc create deployment nginx --image=nginx
# As k8s-developer: Create deployment (should succeed)
# (Requires user in k8s-developers group)
kubectl --context fzymgc-house-oidc create deployment test-nginx --image=nginx
kubectl --context fzymgc-house-oidc delete deployment test-nginx
# As k8s-developer: Create ClusterRole (should fail - 403 Forbidden)
kubectl --context fzymgc-house-oidc create clusterrole test --verb=get --resource=pods
Token Handling Tests¶
# 1. Initial authentication
kubectl --context fzymgc-house-oidc get nodes
# Browser should open for authentication
# 2. Token reuse (within 15 min)
kubectl --context fzymgc-house-oidc get nodes
# Should work without browser
# 3. Token refresh (after 15 min, within 8 hours)
# Wait 15+ minutes, then:
kubectl --context fzymgc-house-oidc get nodes
# Should work via refresh token (no browser)
# 4. Full re-authentication (after 8 hours)
# Wait 8+ hours or clear cache:
rm -rf ~/.kube/cache/oidc-login
kubectl --context fzymgc-house-oidc get nodes
# Browser should open again
Group Membership Change Test¶
# 1. Check current groups
kubectl --context fzymgc-house-oidc auth whoami
# 2. Add user to new group in Authentik (e.g., promote from viewer to developer)
# 3. Clear token cache and re-authenticate
rm -rf ~/.kube/cache/oidc-login
kubectl --context fzymgc-house-oidc get nodes
# 4. Verify new group membership
kubectl --context fzymgc-house-oidc auth whoami
# Should show new group
Comparison: OIDC vs Vault PKI¶
| Aspect | OIDC (This Design) | Vault PKI (Original) |
|---|---|---|
| k3s Support | ✅ Fully supported | ❌ Blocked by k3s limitation |
| Auth Flow | Browser-based | Certificate-based |
| Credential Lifetime | 15 min access / 8 hr refresh | 15 min certs |
| Offline Access | ❌ Requires Authentik | ❌ Requires Vault |
| MFA Support | ✅ Via Authentik | ❌ Not applicable |
| User Experience | Browser popup | CLI-only |
| Audit Trail | Authentik logs | Vault audit logs |
| Token Reuse Prevention | ✅ Audience claim | N/A (cert per request) |
Risks and Mitigations¶
| Risk | Mitigation |
|---|---|
| Authentik unavailable | Break-glass via k3s admin kubeconfig (see above) |
| Token expiry during long operations | Use admin kubeconfig for streaming ops |
| Group sync delays | Groups are in JWT, instant effect on re-auth |
| Token theft | Short-lived tokens, audience validation, MFA |
| Misconfigured RBAC | Test all roles before production use |
Documentation Handling¶
Files to Archive¶
After implementation, move to docs/plans/archive/:
- docs/kubernetes-access.md → docs/plans/archive/kubernetes-access-vault-pki.md
- docs/plans/2025-12-26-k8s-vault-pki-access-implementation.md → already in archive
Files to Create/Update¶
- docs/kubernetes-access.md - New OIDC-focused user guide
- Notion Services Catalog - Add Kubernetes OIDC entry
- Notion Tech References - Add kubelogin entry
Decision Required¶
This design requires pivoting from the Vault PKI approach. The Vault PKI infrastructure (roles, policies, OIDC group aliases) can remain for other use cases (mTLS, application certs) but won't be used for kubectl authentication.
Recommendation: Proceed with OIDC implementation as it's the k3s-recommended approach and leverages existing Authentik infrastructure.