I’ve set up 50+ GitHub Actions CI/CD pipelines deploying to Kubernetes. Most teams spend weeks debugging permission issues, image pull errors, and failed deployments. Here’s what actually works in production.

Visual Overview:

graph LR
    subgraph "CI/CD Pipeline"
        Code[Code Commit] --> Build[Build]
        Build --> Test[Test]
        Test --> Security[Security Scan]
        Security --> Deploy[Deploy]
        Deploy --> Monitor[Monitor]
    end

    style Code fill:#667eea,color:#fff
    style Security fill:#f44336,color:#fff
    style Deploy fill:#4caf50,color:#fff

Why This Matters

According to DORA’s State of DevOps research, high-performing teams deploy up to 46x more frequently with several times lower change failure rates. Yet I’ve seen teams abandon Kubernetes deployments after hitting the notorious “ImagePullBackOff” errors and RBAC nightmares that come with wiring GitHub Actions to a cluster.

What you’ll learn:

  • Complete GitHub Actions workflow for K8s deployment
  • Docker image building and registry management
  • Kubernetes RBAC and service account setup
  • Multi-environment deployment strategies
  • Common errors and their fixes
  • Production-ready security practices

The Real Problem: GitHub Actions + Kubernetes Is Complex

Here’s what most tutorials skip:

Issue 1: Service Account Permissions

Error you’ll see:

Error from server (Forbidden): deployments.apps is forbidden: User "system:serviceaccount:default:github-actions" cannot create resource "deployments" in API group "apps"

Why it happens:

  • Default service account has no permissions (80% of cases)
  • Kubeconfig uses wrong context
  • RBAC role doesn’t include required verbs
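
You can confirm which case you’re hitting by impersonating the service account with kubectl auth can-i (adjust the namespace and account name to match your setup):

# Check the specific permission the error complains about
kubectl auth can-i create deployments.apps \
  --as=system:serviceaccount:production:github-actions \
  -n production

# List everything the account is allowed to do in the namespace
kubectl auth can-i --list \
  --as=system:serviceaccount:production:github-actions \
  -n production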

The correct setup:

# k8s/rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: github-actions
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: github-actions-deployer
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods", "services", "configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: github-actions-deployer-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: github-actions
  namespace: production
roleRef:
  kind: Role
  name: github-actions-deployer
  apiGroup: rbac.authorization.k8s.io
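---
# NOTE: Kubernetes 1.24+ no longer auto-creates token Secrets for
# service accounts, so request a long-lived token explicitly; the
# kubeconfig steps below read the token from this Secret
apiVersion: v1
kind: Secret
metadata:
  name: github-actions-token
  namespace: production
  annotations:
    kubernetes.io/service-account.name: github-actions
type: kubernetes.io/service-account-token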

Generate kubeconfig for GitHub Actions:

# 1. Create service account
kubectl apply -f k8s/rbac.yaml

# 2. Get service account token (from the github-actions-token Secret
#    defined in rbac.yaml; see the note there about Kubernetes 1.24+)
SA_TOKEN=$(kubectl get secret github-actions-token -n production -o jsonpath='{.data.token}' | base64 -d)

# 3. Get cluster CA certificate
CA_CERT=$(kubectl get secret github-actions-token -n production -o jsonpath='{.data.ca\.crt}')

# 4. Get API server URL
API_SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

# 5. Create kubeconfig
cat > kubeconfig-github-actions.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${CA_CERT}
    server: ${API_SERVER}
  name: production-cluster
contexts:
- context:
    cluster: production-cluster
    namespace: production
    user: github-actions
  name: github-actions-context
current-context: github-actions-context
users:
- name: github-actions
  user:
    token: ${SA_TOKEN}
EOF

# 6. Base64 encode for the GitHub Secret (pbcopy is macOS-only; on
# Linux, pipe to `xclip -selection clipboard` or redirect to a file)
base64 < kubeconfig-github-actions.yaml | pbcopy

Add to GitHub Secrets:

  • Go to repository → Settings → Secrets → Actions
  • New secret: KUBE_CONFIG = (paste base64 kubeconfig)
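
If you prefer the terminal, the GitHub CLI can set the secret directly (assumes gh is installed and authenticated against the repository):

# Encode and upload in one step (plain `base64` without -w 0 on macOS)
base64 -w 0 kubeconfig-github-actions.yaml | gh secret set KUBE_CONFIG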

Issue 2: ImagePullBackOff from Private Registry

Error:

Failed to pull image "ghcr.io/yourorg/app:latest": rpc error: code = Unknown desc = failed to pull and unpack image

Root cause: Kubernetes can’t authenticate with your container registry.

Fix: Create Image Pull Secret

# For GitHub Container Registry (ghcr.io)
kubectl create secret docker-registry ghcr-secret \
  --docker-server=ghcr.io \
  --docker-username=$GITHUB_USERNAME \
  --docker-password=$GITHUB_TOKEN \
  --namespace=production

# For Docker Hub
kubectl create secret docker-registry dockerhub-secret \
  --docker-server=docker.io \
  --docker-username=$DOCKER_USERNAME \
  --docker-password=$DOCKER_PASSWORD \
  --namespace=production
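
If you’d rather not list the secret in every manifest, you can attach it to the namespace’s default service account instead:

# Pods that use the default service account and don't specify
# imagePullSecrets will fall back to this
kubectl patch serviceaccount default -n production \
  -p '{"imagePullSecrets": [{"name": "ghcr-secret"}]}'

# Sanity-check the stored registry credentials
kubectl get secret ghcr-secret -n production \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d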

Use in deployment:

# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
      - name: ghcr-secret  # Reference the secret
      containers:
      - name: myapp
        image: ghcr.io/yourorg/myapp:IMAGE_TAG  # placeholder replaced during deploy
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: "production"
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
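
If pods still land in ImagePullBackOff after this, the pod events contain the registry’s actual error (a 401 vs. a 404 tells you whether it’s credentials or a bad tag):

kubectl get pods -n production -l app=myapp
kubectl describe pod -n production -l app=myapp | grep -A 10 Events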

Complete Production GitHub Actions Workflow

Here’s the battle-tested workflow I use for enterprise deployments:

# .github/workflows/deploy.yml
name: Build and Deploy to Kubernetes

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint

      - name: Run unit tests
        run: npm test

      - name: Run integration tests
        run: npm run test:integration

  build:
    needs: test
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    outputs:
      image_tag: ${{ steps.meta.outputs.tags }}  # newline-separated list of every pushed tag

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          # the full-SHA raw tag is what the deploy jobs' sed step substitutes,
          # so the image they reference is guaranteed to exist in the registry
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha,prefix={{branch}}-
            type=raw,value=${{ github.sha }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          build-args: |
            NODE_ENV=production
            BUILD_DATE=${{ github.event.head_commit.timestamp }}
            VCS_REF=${{ github.sha }}

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/develop'
    environment:
      name: staging
      url: https://staging.example.com

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.28.0'

      - name: Configure kubectl
        run: |
          mkdir -p $HOME/.kube
          echo "${{ secrets.KUBE_CONFIG_STAGING }}" | base64 -d > $HOME/.kube/config
          chmod 600 $HOME/.kube/config

      - name: Verify cluster access
        run: |
          kubectl cluster-info
          kubectl get nodes

      - name: Deploy to staging
        run: |
          # Replace image tag in deployment
          sed -i "s|IMAGE_TAG|${{ github.sha }}|g" k8s/staging/deployment.yaml

          # Apply Kubernetes manifests
          kubectl apply -f k8s/staging/

          # Wait for rollout
          kubectl rollout status deployment/myapp -n staging --timeout=5m

      - name: Run smoke tests
        run: |
          kubectl run smoke-test --image=curlimages/curl:latest --rm -i --restart=Never -- \
            curl -f http://myapp.staging.svc.cluster.local:8080/health || exit 1

  deploy-production:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment:
      name: production
      url: https://example.com

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'v1.28.0'

      - name: Configure kubectl
        run: |
          mkdir -p $HOME/.kube
          echo "${{ secrets.KUBE_CONFIG_PROD }}" | base64 -d > $HOME/.kube/config
          chmod 600 $HOME/.kube/config

      - name: Blue-Green Deployment
        run: |
          # Mark the current production deployment as blue. Note: for the
          # service selector switch below to actually move traffic, the
          # pod template in k8s/production/deployment.yaml must carry a
          # `version` label, not just the Deployment metadata.
          kubectl label deployment myapp version=blue -n production --overwrite

          # Deploy new version as green (scope the sed to the name field
          # so image references and app labels aren't rewritten too)
          sed -i "s|IMAGE_TAG|${{ github.sha }}|g" k8s/production/deployment.yaml
          sed -i "s|name: myapp|name: myapp-green|g" k8s/production/deployment.yaml

          kubectl apply -f k8s/production/deployment.yaml

          # Wait for green deployment
          kubectl rollout status deployment/myapp-green -n production --timeout=10m

      - name: Smoke test green deployment
        run: |
          # Test green deployment
          kubectl run smoke-test-green --image=curlimages/curl:latest --rm -i --restart=Never -- \
            curl -f http://myapp-green.production.svc.cluster.local:8080/health

      - name: Switch traffic to green
        run: |
          # Update service to point to green
          kubectl patch service myapp -n production -p '{"spec":{"selector":{"version":"green"}}}'

          echo "Traffic switched to green deployment"

      - name: Monitor for 5 minutes
        run: |
          sleep 300

          # Check error rates
          ERROR_COUNT=$(kubectl logs -l version=green -n production --tail=1000 | grep ERROR | wc -l)
          if [ $ERROR_COUNT -gt 10 ]; then
            echo "High error rate detected, rolling back"
            kubectl patch service myapp -n production -p '{"spec":{"selector":{"version":"blue"}}}'
            exit 1
          fi

      - name: Clean up blue deployment
        run: |
          # Remove the old blue deployment. Resource names are immutable
          # in Kubernetes (you can't rename via patch), so the green
          # deployment keeps its name and is relabeled as the new blue
          # baseline for the next release.
          kubectl delete deployment myapp -n production
          kubectl label deployment myapp-green version=blue -n production --overwrite

      - name: Notify Slack
        if: always()
        uses: slackapi/slack-github-action@v1
        env:
          # v1 of the Slack action reads the incoming-webhook URL from this env var
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
        with:
          payload: |
            {
              "text": "Deployment to production ${{ job.status }}",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "*Deployment Status:* ${{ job.status }}\n*Image:* ghcr.io/${{ github.repository }}:${{ github.sha }}\n*Deployed by:* ${{ github.actor }}"
                  }
                }
              ]
            }
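
A side note on the sed-based tag substitution above: it works, but it mutates files in the checkout. If your container name is stable, kubectl set image does the same job without touching the manifests (a sketch assuming the container is named myapp):

# Equivalent, sed-free image update for a deploy step
kubectl set image deployment/myapp \
  myapp=ghcr.io/yourorg/myapp:${{ github.sha }} \
  -n staging
kubectl rollout status deployment/myapp -n staging --timeout=5m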

Common GitHub Actions + Kubernetes Errors

Error: “error: You must be logged in to the server (Unauthorized)”

Cause: kubeconfig token expired or invalid. Fix:

# Recreate the token secret (Kubernetes 1.24+ never auto-generates
# these, so deleting and re-applying the manifests issues a new token)
kubectl delete secret github-actions-token -n production
kubectl apply -f k8s/rbac.yaml

# Update GitHub Secret with new kubeconfig
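
Before updating the secret, sanity-check the regenerated kubeconfig locally:

# Confirm the token authenticates and the RBAC role still applies
KUBECONFIG=kubeconfig-github-actions.yaml kubectl get deployments -n production

# On kubectl/cluster 1.26+, this shows exactly who the API server sees
KUBECONFIG=kubeconfig-github-actions.yaml kubectl auth whoami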

Error: “The connection to the server was refused”

Cause: API server URL incorrect in kubeconfig. Fix:

# Verify API server URL
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Update kubeconfig with correct URL

Error: “error: unable to recognize "deployment.yaml": no matches for kind "Deployment" in version "apps/v1beta1"”

Cause: using a deprecated API version. Fix:

# ❌ Old (deprecated)
apiVersion: apps/v1beta1

# ✅ New (correct)
apiVersion: apps/v1
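
You can check which API versions the cluster actually serves before editing manifests:

# Lists served group/versions; apps/v1beta1 was removed in Kubernetes 1.16
kubectl api-versions | grep '^apps'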

Real-World Case Study: E-Commerce Platform CI/CD

I implemented this for an e-commerce platform with 200+ deployments per day:

Requirements

  • Zero-downtime deployments
  • Automated rollbacks on error
  • Multi-region deployment (US, EU, APAC)
  • < 10 minute deploy time
  • Compliance audit trail

Solution Architecture

Multi-Environment Strategy:

feature-branch → dev cluster (EKS dev)
develop branch → staging cluster (EKS staging)
main branch → production clusters (3x EKS prod)
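
One way to fan the three production clusters out is a job matrix, so the deploy steps stay identical per region (a sketch for this setup; the secret names are placeholders):

  deploy-production:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - region: us
            kubeconfig_secret: KUBE_CONFIG_PROD_US
          - region: eu
            kubeconfig_secret: KUBE_CONFIG_PROD_EU
          - region: apac
            kubeconfig_secret: KUBE_CONFIG_PROD_APAC
    steps:
      - name: Configure kubectl (${{ matrix.region }})
        run: |
          mkdir -p $HOME/.kube
          echo "${{ secrets[matrix.kubeconfig_secret] }}" | base64 -d > $HOME/.kube/config
          chmod 600 $HOME/.kube/config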

Deployment Strategy: Blue-Green with automated canary analysis

# Canary deployment config
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  progressDeadlineSeconds: 600
  service:
    port: 8080
  analysis:
    interval: 1m
    threshold: 10
    maxWeight: 50
    stepWeight: 10
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500
      interval: 1m
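
With stepWeight: 10 and maxWeight: 50, Flagger shifts another 10% of traffic to the canary at each interval until half the traffic is on it, checking success rate and latency every minute; once the checks fail threshold (10) times, it rolls all traffic back automatically.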

Results

  • Deployment frequency: 5/day → 200+/day (40x increase)
  • Deploy time: 45 minutes → 8 minutes (82% reduction)
  • Failed deployments: 15% → <1% (automatic rollbacks)
  • MTTR: 2 hours → 10 minutes
  • Zero production incidents from bad deployments in 12 months

Security Best Practices

✅ DO

1. Use least privilege RBAC

# Limit permissions to specific namespaces only
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "update", "patch"]  # No delete!
  resourceNames: ["myapp"]  # Specific deployment only

2. Scan images for vulnerabilities

- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master  # pin to a tagged release in production
  with:
    image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
    format: 'sarif'
    output: 'trivy-results.sarif'

- name: Upload scan results to GitHub Security
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: 'trivy-results.sarif'
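
To make the scan gate the pipeline instead of only reporting, a second Trivy step can fail the job on serious findings (input names as per the action’s README):

- name: Fail on critical/high vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
    exit-code: '1'
    severity: 'CRITICAL,HIGH'
    ignore-unfixed: true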

3. Sign container images

- name: Install cosign
  uses: sigstore/cosign-installer@v3

- name: Sign image
  run: |
    cosign sign --key cosign.key ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
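
The signature can then be checked before anything promotes the image (assuming the matching public key is checked in as cosign.pub):

- name: Verify image signature
  run: |
    cosign verify --key cosign.pub \
      ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}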

❌ DON’T

1. Don’t commit kubeconfig to repo

# ❌ BAD
- name: Deploy
  run: kubectl apply -f deployment.yaml --kubeconfig=./kubeconfig.yaml

# ✅ GOOD
- name: Deploy
  run: |
    echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > $HOME/.kube/config
    kubectl apply -f deployment.yaml

2. Don’t use cluster-admin

# ❌ BAD - way too permissive
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: github-actions-admin
roleRef:
  kind: ClusterRole
  name: cluster-admin  # NEVER DO THIS

# ✅ GOOD - namespace-scoped Role

🎯 Key Takeaways

  • Grant the CI service account least-privilege, namespace-scoped RBAC; most deploy failures trace back to permissions
  • Give Kubernetes explicit registry credentials via image pull secrets, since ImagePullBackOff almost always means missing auth
  • Ship with a blue-green or canary strategy plus automated rollback so a bad image never takes down production

Wrapping Up

GitHub Actions + Kubernetes CI/CD is powerful but has sharp edges. The key is getting RBAC right, managing image pull secrets properly, and implementing proper deployment strategies (blue-green, canary).

Next steps:

  1. Set up service account with minimal RBAC permissions
  2. Configure image pull secrets for your registry
  3. Implement blue-green deployment strategy
  4. Add automated rollback on error (see the sketch after this list)
  5. Set up monitoring and alerting
  6. Test disaster recovery scenarios

👉 Related: Orchestrating Kubernetes and IAM with Terraform

👉 Related: Understanding Kubernetes Networking: A Comprehensive Guide