
Scan Infrastructure Security Vulnerabilities with Clanker Cloud (Kubernetes 2026)

One-pass severity-graded security scanning for Kubernetes and multi-cloud infrastructure using Clanker Cloud Deep Research. Live today.

There is a specific kind of dread that arrives the morning after a fast ship. You merged everything, the deploy went green, the demo looked clean — and now, twelve hours later, you are wondering whether any of it is actually secure. That feeling is not unique to solo founders or small teams. It hits anyone who moves fast. In 2026, with AI-assisted development compressing the time from idea to deployed service, the security audit almost always gets skipped.

This article covers how to scan infrastructure for security vulnerabilities using Clanker Cloud — what runs today via Deep Research, what the Cybersecurity Agents roadmap feature adds, and how the local-first architecture keeps your credentials and findings on your own machine.


The Vibe Coding Security Gap

Vibe coding to production is now a real workflow, not a joke. AI agents scaffold services, write Terraform, push Helm values, and wire up IAM roles faster than any human could review the output. The speed is genuine. The security blind spots are also genuine.

The gap is not that developers are careless. The gap is tooling. Running trivy image my-service:latest catches known CVEs in a container image, but it tells you nothing about whether that container's service is exposed to 0.0.0.0/0 through a misconfigured security group. It does not tell you which S3 buckets in your account have public read access, which Kubernetes pods are running without resource limits, or which IAM roles have AdministratorAccess attached to them. Container image scanning and infrastructure security scanning are two different problems.
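The distinction is concrete. The check below is a minimal Python sketch of the kind of misconfiguration an image scanner never sees: a security group rule open to the world. The data shape mirrors the JSON returned by `aws ec2 describe-security-groups`; the sample group itself is hypothetical.

```python
# Sketch: flag security group ingress rules open to 0.0.0.0/0 -- the class
# of finding that container image scanning cannot surface. The rule shape
# mirrors `aws ec2 describe-security-groups` output; sample data is made up.

def world_open_rules(security_group: dict) -> list[dict]:
    """Return ingress rules that allow traffic from any IPv4 address."""
    exposed = []
    for rule in security_group.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                exposed.append(rule)
    return exposed

sg = {
    "GroupId": "sg-0abc123",  # hypothetical group
    "IpPermissions": [
        {"FromPort": 443, "ToPort": 443, "IpProtocol": "tcp",
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},  # intentional: public HTTPS
        {"FromPort": 5432, "ToPort": 5432, "IpProtocol": "tcp",
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},  # Postgres open to the world
    ],
}

for rule in world_open_rules(sg):
    print(f"port {rule['FromPort']} open to 0.0.0.0/0")
```

Port 443 open to the internet is often deliberate; port 5432 almost never is. Telling those apart requires knowing what the service is, which is exactly the live context a CVE scan lacks.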

Infrastructure security scanning requires live context: what is actually deployed, what is actually accessible, what is actually connected to what. That is what Clanker Cloud Deep Research does.


What a Real Infrastructure Security Scan Looks Like

A meaningful infrastructure security scan is not a static analysis of config files. It is a live query against your actual running estate. The questions that matter are:

  • Which database endpoints are publicly accessible right now?
  • Which Kubernetes services are exposed as NodePort or LoadBalancer without an authentication layer in front of them?
  • Which S3 buckets have public read ACLs or bucket policies that permit unauthenticated access?
  • Which pods are running as privileged containers?
  • Are there any secrets stored in environment variables rather than a secrets manager?

These questions require reading from multiple providers simultaneously — AWS, GCP, Azure, Kubernetes clusters, Cloudflare — and correlating the findings. Without a unified tool, answering any one of them means writing a custom kubectl command or a multi-step CLI pipeline, and answering all of them on a regular cadence is simply not realistic for a team without a dedicated security function.


Deep Research as a Security Scanner: One Pass, Severity-Graded

Clanker Cloud's Deep Research feature fans out across every connected provider, runs parallel analysis with multiple AI models, and returns a single severity-ranked report: your entire estate, scanned in one pass.

The findings are graded with standard severity levels:

  • CRITICAL — Public database endpoint exposed
  • HIGH — Single-AZ cache, no failover
  • MEDIUM — API gateway has no rate limiting
  • MEDIUM — Uncompressed S3 backups growing fast

You get a structured list of real problems, ranked by impact, with no need to write a single query. For teams with multi-cloud deployments — AWS and GCP, or AWS and Kubernetes on EKS — this is the difference between a scan that actually runs and one that lives in the backlog.
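A severity-ranked report of this shape is easy to reason about programmatically. The sketch below models the ranking with the findings listed above; the field names and severity ordering are illustrative, not Clanker Cloud's actual export schema.

```python
# Sketch: grading and ranking findings by severity, mirroring the shape of a
# one-pass report. Field names here are illustrative assumptions, not the
# actual Clanker Cloud export schema.
SEVERITY_RANK = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

findings = [
    {"severity": "MEDIUM", "summary": "API gateway has no rate limiting"},
    {"severity": "CRITICAL", "summary": "Public database endpoint exposed"},
    {"severity": "HIGH", "summary": "Single-AZ cache, no failover"},
    {"severity": "MEDIUM", "summary": "Uncompressed S3 backups growing fast"},
]

# Sort so the highest-impact findings surface first.
ranked = sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])
for f in ranked:
    print(f"{f['severity']:8} {f['summary']}")
```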

The INSPECT step in the four-step workflow is the mechanism behind this: Clanker Cloud scans resources, traces dependencies, and inspects topology without bouncing between consoles. Security visibility is built into the same workflow you already use for incident response and cost analysis.


Plain-English Security Queries

Beyond the one-pass scan, you can query your infrastructure directly for specific security conditions. These run in plain English through the ASK interface. No manual jq pipelines required.

Exposure and access queries:

show me all publicly accessible endpoints in my AWS account
which S3 buckets have public read access enabled
are any RDS instances accessible from 0.0.0.0/0
which services have no authentication layer

Kubernetes-specific queries:

show me all K8s services exposed as NodePort or LoadBalancer
which pods are running without resource limits
show me all IAM roles with AdministratorAccess
are there any secrets hardcoded in environment variables

Each of these queries replaces a chain of CLI commands. The equivalent manual approach for just the Kubernetes-specific checks looks like this:

# Privileged containers
kubectl get pods --all-namespaces -o json | \
  jq '.items[] | select(any(.spec.containers[]; .securityContext.privileged == true)) | .metadata.name'

# Anonymous cluster role bindings
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(any(.subjects[]?; .name == "system:anonymous"))'

# Exposed NodePort services
kubectl get services --all-namespaces -o json | \
  jq '.items[] | select(.spec.type == "NodePort") | {name: .metadata.name, namespace: .metadata.namespace, port: .spec.ports[].nodePort}'

# Network policies (check for namespaces with none)
kubectl get networkpolicies --all-namespaces

Running all four of those, interpreting the output, and connecting the findings to the rest of your cloud estate is a multi-hour exercise. In Clanker Cloud, it is a single question.


Kubernetes Security Posture in Practice

Kubernetes security posture covers several distinct problem categories, and each one requires different data to assess.

RBAC gaps are common in fast-moving clusters. Roles created during development often accumulate more permissions than production requires — including bindings that grant cluster-admin to service accounts that only need read access. The system:anonymous binding is a category of its own: if such a binding exists, any unauthenticated request to the API server inherits whatever permissions the bound role grants.

Privileged containers are another common finding. A container running with securityContext.privileged: true has root-equivalent access to the host node. It is sometimes necessary for node-level operations, but in data services and application pods it is almost always a misconfiguration.
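The privileged-container check itself is simple once you have the pod spec. This sketch applies it to the JSON shape returned by `kubectl get pods -o json`; the sample pod is hypothetical, and note that a missing securityContext must be treated as not privileged rather than an error.

```python
# Sketch: detecting privileged containers in a pod spec, using the JSON
# shape from `kubectl get pods -o json`. The sample pod is hypothetical.

def privileged_containers(pod: dict) -> list[str]:
    """Names of containers running with securityContext.privileged: true."""
    return [
        c["name"]
        for c in pod["spec"].get("containers", [])
        # securityContext is optional; treat its absence as not privileged
        if (c.get("securityContext") or {}).get("privileged") is True
    ]

pod = {
    "metadata": {"name": "cache-0", "namespace": "prod"},
    "spec": {"containers": [
        {"name": "redis", "securityContext": {"privileged": True}},
        {"name": "metrics-sidecar"},  # no securityContext at all
    ]},
}

print(privileged_containers(pod))  # -> ['redis']
```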

Exposed services require cross-cutting analysis. A NodePort service is accessible on every node's IP address at the assigned port. A LoadBalancer service provisions a public cloud load balancer. Neither is inherently wrong, but both need to be intentional and documented. An exposed service in a dev namespace that was promoted to production without a security review is a common incident source.

Missing NetworkPolicies mean that by default, all pods in a namespace can communicate with all other pods across the cluster. In a multi-tenant or multi-service deployment, that is a lateral movement risk.
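The standard remediation is a default-deny baseline per namespace, after which traffic is opened explicitly per service. This is a stock Kubernetes NetworkPolicy resource; the namespace name is a placeholder.

```yaml
# Baseline default-deny: selects every pod in the namespace and allows no
# ingress, so pod-to-pod traffic must be opened explicitly per service.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod        # placeholder namespace
spec:
  podSelector: {}        # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
```

A namespace with no NetworkPolicy at all is the finding to look for; a namespace with only a default-deny and no allow rules is a different (availability) problem.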

Clanker Cloud surfaces all of these through plain-English queries, tied to the live state of your clusters. For teams managing multiple Kubernetes clusters — EKS, GKE, AKS, or self-managed — across multiple cloud accounts, this is a significant operational advantage. The ai-devops-for-teams workflow covers how to structure this across larger organizations.


Cybersecurity Agents: Continuous Autonomous Scanning (Roadmap)

Deep Research handles the on-demand scan. The roadmap adds something different: autonomous continuous scanning that runs without you triggering it.

The Cybersecurity Agents feature is named explicitly on the Clanker Cloud site with the following copy: "Sleep at night after vibe coding all day. Autonomous security agents that continuously scan for misconfigurations."

The distinction matters. A one-time scan catches the state of your infrastructure at the moment you run it. A new deployment, a Terraform apply, a manually edited security group rule — any of these can introduce a misconfiguration after your last scan. Continuous autonomous scanning means the agent is watching the state of your estate between your changes, not just after you remember to check.

For teams that ship frequently — multiple deploys per day is normal for AI-assisted workflows — the gap between deploys is exactly when new misconfigurations are introduced. The Cybersecurity Agents feature closes that gap without requiring human-initiated scans.

This is on the Clanker Cloud roadmap. Deep Research security scanning is live today.


Local-First Architecture and Security Scanning

Clanker Cloud is a local-first desktop app — macOS, Windows, and Linux. Your credentials never leave your machine. This is directly relevant to security scanning.

Most cloud security scanners operate as SaaS services. To use them, you grant their platform access to your cloud accounts — which means your AWS access keys, GCP service account credentials, or Kubernetes kubeconfigs are stored and processed on their infrastructure. That is a non-trivial attack surface for the very credentials you are using to audit your security posture.

With Clanker Cloud, the scan runs locally. Your credentials stay on your machine. The AI analysis happens through your own model keys via BYOK, with the API call going directly from your machine to the model provider. The findings are stored locally. There is no third-party data plane that has seen your infrastructure topology.

For teams in regulated industries or with strict data residency requirements, this architecture is often the deciding factor. The for-ai-agents integration details how this local-first model extends to autonomous agent workflows.


BYOK for Security Analysis

Clanker Cloud supports bring-your-own-key (BYOK) for all model interactions. For security scanning, model selection has real implications for the depth of analysis.

For complex threat modeling and deep security analysis, Claude Opus 4.6 (claude-opus-4-6) or GPT-5.4 Thinking are appropriate choices. These models handle multi-step reasoning across large infrastructure contexts — correlating an exposed RDS instance with the IAM roles that have access to it, or reasoning about blast radius from a compromised service account.

For sensitive environments where data cannot leave the machine at all, Gemma 4 via Ollama (gemma4:31b or gemma4:26b) runs fully locally. No data leaves the machine, not even to a model API. This is relevant for air-gapped environments or when the infrastructure you are scanning contains data that is itself restricted. Hermes via Ollama (hermes3:70b) is another local option with strong reasoning capabilities for infrastructure analysis.

For agent-driven security workflows, Codex integrates with the Clanker Cloud MCP interface, allowing autonomous agents to query infrastructure state and surface security findings as part of a larger automated pipeline.

For standard continuous scanning, Claude Sonnet 4.6 or GPT-5.4 Pro balance cost against capability for recurring scans where you are not doing novel threat modeling on each run.

See the full documentation for BYOK configuration. Model keys are configured once and used across all Clanker Cloud features, including security scanning.


Exporting Findings for Compliance

Deep Research findings export as JSON or Markdown. Both formats are directly usable for compliance evidence.

JSON export is appropriate for programmatic processing — feeding findings into a ticketing system, a compliance platform, or a custom dashboard. The severity-graded structure maps cleanly to ticket priority levels.
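That mapping is mechanical once the export is in hand. The sketch below turns severity-graded findings into ticket records; the field names and the P1-P4 priority scheme are illustrative assumptions, not a fixed Clanker Cloud schema.

```python
# Sketch: mapping severity-graded findings from a JSON export onto ticket
# priorities. Field names and the P1-P4 scheme are illustrative, not the
# actual Clanker Cloud export format.
import json

PRIORITY = {"CRITICAL": "P1", "HIGH": "P2", "MEDIUM": "P3", "LOW": "P4"}

export = json.loads("""
[
  {"severity": "CRITICAL", "summary": "Public database endpoint exposed"},
  {"severity": "MEDIUM",   "summary": "API gateway has no rate limiting"}
]
""")

tickets = [
    {"priority": PRIORITY[f["severity"]], "title": f["summary"]}
    for f in export
]
print(tickets)
```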

Markdown export is appropriate for direct inclusion in compliance documentation: SOC 2 evidence packages, ISO 27001 risk assessment appendices, internal security review documents. A severity-ranked finding with a description and timestamp is the format auditors expect for demonstrating that periodic security assessments were conducted.

The combination of a structured one-pass scan and a local-first architecture means the audit trail stays in your control. You are not pulling a report from a third-party vendor's portal — you are running the scan, owning the output, and including it in your compliance package on your terms.

To start scanning your infrastructure, connect your providers via the account page and run Deep Research from the desktop app. The demo walkthrough shows the full workflow.


FAQ

Does Clanker Cloud Deep Research replace a dedicated cloud security posture management (CSPM) tool?

It covers the same category of findings — misconfigurations, exposure, missing controls — through a different interface. Deep Research runs on-demand via plain-English queries and returns severity-ranked findings across multi-cloud and Kubernetes environments. Whether it replaces a dedicated CSPM depends on your compliance requirements and the depth of continuous monitoring you need. The Cybersecurity Agents roadmap feature adds the continuous scanning dimension.

Which cloud providers and Kubernetes distributions does security scanning support?

Clanker Cloud connects to AWS, GCP, Azure, Kubernetes (EKS, GKE, AKS), Cloudflare, Hetzner, DigitalOcean, and GitHub. Security scanning through Deep Research covers any connected provider.

Can I scan for Kubernetes RBAC misconfigurations specifically?

Yes. You can query for specific RBAC conditions — IAM roles with AdministratorAccess, cluster role bindings to system:anonymous, service accounts with excessive permissions — in plain English. The INSPECT step traces dependencies and surfaces the effective permission graph without requiring manual kubectl queries.

Are my credentials and scan results sent to Clanker Cloud's servers?

No. Clanker Cloud is a local-first desktop app. Credentials are stored locally and never leave your machine. Scan results are local. If you use BYOK with a cloud model API (Claude, GPT-5.4), the API call goes directly from your machine to the model provider. For fully air-gapped scanning, Gemma 4 or Hermes via Ollama processes everything locally.

Can I use security findings as evidence for SOC 2 or ISO 27001 audits?

Yes. Export findings as Markdown or JSON. The severity-graded format — CRITICAL, HIGH, MEDIUM — with a finding description and timestamp provides the structured audit evidence that SOC 2 and ISO 27001 frameworks require for demonstrating periodic security assessments.