Problem
Users get HTTP 502 from a Kubernetes app even though DNS and the public load balancer are reachable.
Use this workflow when users see a 502 through ingress or a load balancer and the root cause could be service routing, empty endpoints, unhealthy pods, or a recent deploy.
The app workflow is read-first: Clanker Cloud gathers cluster evidence, explains the likely fault line, and keeps remediation as a reviewed next step.
Answer first: a 502 usually means the edge path is alive but the backend path is broken. Check ingress, service endpoints, pod readiness, and recent rollout events together.
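If you want to spot-check the same signals by hand, a minimal read-only pass with plain kubectl covers the same ground. This is a sketch, not the app workflow: the context, namespace, service, and deployment names match the worked example below, and every command here only reads state.
# Read-only triage: the edge is alive, so walk the backend path.
kubectl --context prod-eks -n checkout get ingress
kubectl --context prod-eks -n checkout get endpoints checkout-api        # no addresses means no ready backends
kubectl --context prod-eks -n checkout get pods -o wide                  # look for NotReady or CrashLoopBackOff
kubectl --context prod-eks -n checkout get events --sort-by='.lastTimestamp'
kubectl --context prod-eks -n checkout rollout history deployment/checkout-api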
Copy the app query below, then adjust context names, profiles, namespaces, and provider scopes for your environment.
Read-only investigation. The app reads cluster state through the local runtime and the open-source CLI engine underneath it. No kubectl apply, delete, restart, or rollout command runs unless you create and approve a separate maker/action plan.
Clanker Cloud app:
1. Open Kubernetes or Overview.
2. Select context prod-eks and namespace checkout.
3. Ask:
Why is checkout returning 502 through ingress? Check ingress rules, service endpoints, pod readiness, recent events, and the last rollout in namespace checkout.
CLI equivalent:
clanker k8s health --context prod-eks -o json
# Same investigation prompt in the Clanker Cloud app:
# Why is checkout returning 502 through ingress? Check ingress rules, service endpoints, pod readiness, recent events, and the last rollout in namespace checkout.
Prerequisites: Clanker Cloud app connected to the affected cluster, kubeconfig trusted locally, namespace checkout, ingress hostname, service name checkout-api, deployment name checkout-api, and the approximate time the 502 started.
Finding: ingress checkout.example.com routes /checkout to service checkout-api:8080, but the service has zero ready endpoints. Pods from rollout checkout-api-7f9c are NotReady because readiness probes fail on /healthz after DB_URL was changed in the last deploy. Suggested next step: roll back the deployment or restore the secret value, then re-check endpoints before touching ingress.
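To confirm a finding like this by hand before opening any plan, two more read-only checks pin down the failing probe and the changed value. The pod name suffix is a placeholder, and the env inspection assumes DB_URL is set on the first container of the pod template:
kubectl --context prod-eks -n checkout describe pod checkout-api-7f9c-xxxxx   # Events list the failing /healthz readiness probe
kubectl --context prod-eks -n checkout get deployment checkout-api \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="DB_URL")]}'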
Open the reviewed plan only after the cause is clear: restore the missing secret, roll back the rollout, or fix readiness configuration and re-check endpoints.
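As a sketch of what that approved maker/action plan might contain for the rollback path, assuming the previous ReplicaSet still carried a working DB_URL. Nothing here runs during the read-only investigation:
# Runs only inside an approved maker/action plan, never during investigation.
kubectl --context prod-eks -n checkout rollout undo deployment/checkout-api
kubectl --context prod-eks -n checkout rollout status deployment/checkout-api
kubectl --context prod-eks -n checkout get endpoints checkout-api   # confirm ready addresses have returned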
Related examples:
Cost: AWS spend is up sharply this week and the team needs to know which resources, services, and changes explain it.
Cloudflare: The team needs to know which Cloudflare routes reach EKS workloads and whether any public paths skip expected authentication or WAF controls.
MCP: Claude Code can inspect the repo, but it cannot see the running cluster, failing pods, ingress state, or cloud context without a controlled tool surface.
Can I use the Clanker CLI instead of the app? Yes. The examples lead with the Clanker Cloud app because that is the product workflow. The public Clanker CLI powers the local runtime and remains the equivalent path for terminals, automation, and MCP clients.
Browse the proof-oriented examples for Kubernetes, cost, Cloudflare, MCP, and review-before-apply workflows.