Kubernetes 502 debugging
Users get HTTP 502 from a Kubernetes app even though DNS and the public load balancer are reachable.
A buyer or operator has users seeing 502s and wants to know whether Clanker Cloud can find the broken backend path without guessing.
This page is short on purpose: problem, app query, required context, sample output, and safety boundary before the CTA.
Use Clanker Cloud when diagnosing a 502 requires ingress, service endpoint, pod readiness, and rollout context in one read-first pass.
Problem, query, context, output, and safety boundary
Clanker Cloud app:
1. Open Kubernetes or Overview.
2. Select context prod-eks and namespace checkout.
3. Ask:
Why is checkout returning 502 through ingress? Check ingress rules, service endpoints, pod readiness, recent events, and the last rollout in namespace checkout.
Open-source CLI equivalent:
clanker k8s health --context prod-eks -o json
# Same investigation prompt in the Clanker Cloud app:
# Why is checkout returning 502 through ingress? Check ingress rules, service endpoints,
# pod readiness, recent events, and the last rollout in namespace checkout.
Required context
Clanker Cloud app connected to the affected cluster, kubeconfig trusted locally, namespace checkout, ingress hostname, service name checkout-api, deployment name checkout-api, and the approximate time the 502 started.
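Before asking, you can sanity-check that the context and namespace in the prompt actually resolve. A minimal pre-flight with plain kubectl, assuming the names from this example (prod-eks, checkout):

kubectl config get-contexts prod-eks               # confirm the prod-eks context exists in the trusted kubeconfig
kubectl --context prod-eks get namespace checkout  # confirm the checkout namespace is reachable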
Example output
Finding: ingress checkout.example.com routes /checkout to service checkout-api:8080, but the service has zero ready endpoints. Pods from rollout checkout-api-7f9c are NotReady because readiness probes fail on /healthz after DB_URL was changed in the last deploy. Suggested next step: roll back the deployment or restore the secret value, then re-check endpoints before touching ingress.
Safety boundary
Read-only investigation. The app reads cluster state through the local runtime and the open-source CLI engine underneath it. No kubectl apply, delete, restart, or rollout command runs unless you create and approve a separate maker/action plan.
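To verify the same chain by hand, the read-only pass maps to a handful of plain kubectl reads. This is a sketch using the example's names (prod-eks, checkout, checkout-api); the app=checkout-api label is an assumption about your pod selector:

kubectl --context prod-eks -n checkout get ingress                        # hostname and backend service:port
kubectl --context prod-eks -n checkout get endpoints checkout-api        # ENDPOINTS shows <none> when no pods are ready
kubectl --context prod-eks -n checkout get pods -l app=checkout-api      # label assumed; expect 0/1 READY pods here
kubectl --context prod-eks -n checkout describe deployment checkout-api  # readiness probe path and env/secret refs
kubectl --context prod-eks -n checkout get events --sort-by=.lastTimestamp  # probe failures near the deploy time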
Open the reviewed plan only after the cause is clear: restore the missing secret, roll back the rollout, or fix readiness configuration and re-check endpoints.
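For reference, if rollback is the approved fix, the underlying commands are short; in Clanker Cloud they run only inside an approved maker/action plan, never during investigation. A sketch with the same assumed names:

kubectl --context prod-eks -n checkout rollout undo deployment/checkout-api    # revert to the previous ReplicaSet
kubectl --context prod-eks -n checkout rollout status deployment/checkout-api  # wait for the rollback to settle
kubectl --context prod-eks -n checkout get endpoints checkout-api              # expect ready addresses again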
Buyer query
Users are seeing 502s. Can Clanker Cloud find the broken backend path without guessing?
Read-only first
Investigation is read-only through the local runtime and the open-source CLI engine. Nothing mutates cluster state unless you create and approve a separate maker/action plan.
Proof artifact
The workflow above includes the app query, the local context required, example output, the safety boundary, and the next step.
Next step
After the cause is clear, open the reviewed plan: restore the secret, roll back the rollout, or fix the readiness configuration, then re-check endpoints.
Read the supporting pages
Full example workflow
Open the longer proof page with the same app-first workflow pattern.
Local credentials
Read what stays local, what can go to model providers, and how read-only/maker/apply differ.
Example library
Browse all proof workflows for cost, security, Kubernetes, MCP, and review-before-apply.
Common questions
Does this use case require write access first?
No. The investigation is read-only: the app reads cluster state through the local runtime and the open-source CLI engine underneath it. No kubectl apply, delete, restart, or rollout command runs unless you create and approve a separate maker/action plan.
Why is this separate from the example page?
The use-case page answers the high-intent buyer query directly. The example page is the deeper proof artifact with the same concrete workflow format.
Run the read-first workflow
Download the desktop app, connect your existing local context, and start with read-only inspection before reviewed plans.
