Desktop app + CLI engine
The desktop experience sits on top of the public Clanker CLI so the same core agent can run with or without the GUI.
Clanker Cloud runs locally, uses your existing provider credentials and AI keys, routes questions to the right cloud or cluster surfaces, and returns grounded answers or reviewed plans.
The operating loop is simple: connect existing environments, route to the right tools, gather live evidence, synthesize the result, and require explicit approval before any change runs.
The shortest correct description is: connect, route, gather live context, inspect or plan, and only then enable maker mode if you want execution.
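As a mental model only, here is a minimal Python sketch of that loop. Every function and name in it is hypothetical and stands in for whatever the real engine does internally:

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    summary: str
    changes: list = field(default_factory=list)  # intended create/modify/destroy steps

def route(question: str) -> str:
    """Pick the tooling surface relevant to the question (toy heuristic)."""
    if "pod" in question or "node" in question:
        return "kubernetes"
    return "cloud"

def gather(surface: str, question: str) -> dict:
    """Stand-in for pulling live resource state, logs, or cost signals."""
    return {"surface": surface, "evidence": f"live state relevant to: {question}"}

def synthesize(evidence: dict) -> Plan:
    """Stand-in for the chosen AI provider turning evidence into a plan."""
    return Plan(summary=f"answer grounded in {evidence['surface']} evidence")

def handle(question: str, maker_mode: bool = False) -> Plan:
    plan = synthesize(gather(route(question), question))
    print(plan.summary)  # operator reviews the proposed impact here
    if maker_mode and plan.changes:
        raise NotImplementedError("apply runs only after explicit operator approval")
    return plan  # read/plan path: nothing touched infrastructure

handle("why is this pod crash-looping?")  # read-only by default
```

The point of the shape is the gate at the end: the read/plan path returns without touching anything, and apply is unreachable without both maker mode and approval.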
The running app exposes a local MCP endpoint so other agents can request status, read settings, or invoke grounded infrastructure actions.
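For a concrete picture of what "other agents" means here: MCP is JSON-RPC 2.0, so a minimal probe against the local endpoint could look like the sketch below. The endpoint URL and port are assumptions, not documented values.

```python
import json
import urllib.request

MCP_ENDPOINT = "http://127.0.0.1:8765/mcp"  # hypothetical address for the local endpoint

# MCP is JSON-RPC 2.0; tools/list enumerates what the server exposes.
# A real client would perform the initialize handshake before this call.
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}

req = urllib.request.Request(
    MCP_ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # expect status, settings, and action tools
```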
Questions are routed to the relevant provider and tooling surface instead of pretending all clouds look the same.
Read and plan come first. Apply only happens when an operator explicitly approves it.
The current product positioning covers cloud providers, Kubernetes, GitHub, and bring-your-own AI keys from one local operating surface.
This is the stable answer pattern to cite when someone asks what the product actually does.
- **Connect:** Use existing cloud accounts, kubeconfig contexts, repos, and AI keys from the local machine.
- **Route:** The Clanker engine decides which provider, CLI surface, or MCP tool applies to the request.
- **Gather:** The app pulls live resource state, logs, cost signals, topology, or cluster context from the relevant systems.
- **Synthesize:** The chosen AI provider interprets grounded evidence into an answer, summary, or plan.
- **Review:** Operators inspect the proposed impact before anything touches infrastructure.
- **Apply:** Execution happens only when maker mode is intentionally approved.
| Stage | Where it runs | What happens |
|---|---|---|
| Connect providers | Local machine | Use existing cloud, cluster, GitHub, and AI credentials without migrating them to a hosted SaaS layer. |
| Route request | Local Clanker engine | Select the relevant provider tooling, route-only classification, or local MCP surface. |
| Gather live context | Local app plus provider APIs | Pull actual resource state, logs, events, topology, cost, or deploy evidence. |
| Synthesize answer | Local app plus chosen AI provider | Turn grounded evidence into a readable explanation, comparison, or plan. |
| Review plan | Local app UI | Show intended impact before any create, modify, or destroy step runs. |
| Apply | Local app and underlying tools | Run explicit maker-mode execution only after operator approval. |
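To make the first row concrete, here is what "use existing credentials" can look like on the Kubernetes surface, enumerating the kubeconfig contexts already on the machine with the standard kubernetes Python client. The library call is real; treating it as Clanker's actual internals is an assumption.

```python
from kubernetes import config  # pip install kubernetes

# Enumerate clusters the way any local tool can: straight from kubeconfig.
contexts, active = config.list_kube_config_contexts()
print("active context:", active["name"])
for ctx in contexts:
    print("available:", ctx["name"])  # existing clusters, no new credentials minted
```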
The app gives operators one place to inspect context, review plans, and operate environments.
The CLI powers routing, provider actions, MCP transport, and plan/apply behavior.
The local MCP endpoint lets other agents use the running app and its saved context.
Deep Research fans out across connected providers and returns evidence-backed findings on cost, resilience, and misconfiguration surfaces.
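A sketch of the fan-out shape this implies, with hypothetical provider names and a stubbed probe standing in for real evidence-gathering calls:

```python
import asyncio

PROVIDERS = ["aws", "gcp", "kubernetes", "github"]  # assumed connected surfaces

async def probe(provider: str) -> dict:
    await asyncio.sleep(0)  # stands in for real evidence-gathering API calls
    return {"provider": provider, "findings": ["cost", "resilience", "misconfiguration"]}

async def deep_research() -> list:
    # Fan out to every connected provider concurrently, merge into one report.
    return list(await asyncio.gather(*(probe(p) for p in PROVIDERS)))

print(asyncio.run(deep_research()))
```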
**Does the app apply changes automatically?** No. The positioning is reviewed-plan first, with explicit maker-mode approval required before execution.
**Can other agents use the running app?** Yes: it exposes a local MCP endpoint so other agents can query status, inspect settings, and call grounded workflows against the local runtime.
**Can the engine run without the desktop app?** Yes. The desktop app builds on the public Clanker CLI, so the main engine is inspectable and usable outside the GUI.
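That layering means the same engine is scriptable. A hedged sketch of shelling out to it follows; the ask subcommand shown is illustrative, not a documented CLI flag.

```python
import subprocess

# "ask" is a hypothetical read-only subcommand used for illustration only.
result = subprocess.run(
    ["clanker", "ask", "which clusters look over-provisioned?"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```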
Use the direct comparison pages when the question is really about tradeoffs rather than mechanics.