Credentials stay local
Use existing cloud accounts, kubeconfig contexts, and repo access from the machine running the app instead of handing them to a hosted SaaS vendor.
Clanker Cloud is designed so cloud credentials, cluster contexts, and operator control stay on the machine running the app instead of moving into a hosted copilot layer.
The security model is practical: use existing provider access locally, bring your own AI keys, gather live evidence first, review plans second, and approve execution deliberately. In short, privilege stays local, evidence is gathered before action, and changes require explicit approval.
AI provider traffic uses your own keys so teams keep direct provider relationships, pricing control, and model choice.
Other agents connect to a local MCP surface exposed by the running app rather than a remote control plane.
Read and plan flows come before execution, and maker mode requires explicit operator approval.
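The read → plan → approve sequence above can be sketched as a minimal approval gate. This is an illustrative sketch only; the function names, the evidence shape, and the sample command are hypothetical, not the product's actual API:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    summary: str
    commands: list[str]

def gather_evidence() -> dict:
    # Hypothetical stand-in: query live provider APIs using local credentials.
    return {"cluster": "staging", "replicas": 2}

def propose_plan(evidence: dict) -> Plan:
    # Read/plan phase: no mutations, only a proposed change set.
    return Plan(
        summary=f"Scale {evidence['cluster']} from {evidence['replicas']} to 3",
        commands=["kubectl scale deploy/web --replicas=3"],
    )

def execute(plan: Plan, approved: bool) -> str:
    # Maker mode: nothing runs without explicit operator approval.
    if not approved:
        return "blocked: awaiting operator approval"
    return f"executed {len(plan.commands)} command(s)"

plan = propose_plan(gather_evidence())
print(execute(plan, approved=False))  # blocked: awaiting operator approval
print(execute(plan, approved=True))   # executed 1 command(s)
```

The point of the shape is that the mutation path is unreachable without the explicit `approved` flag; read and plan phases never touch it.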
The current product positioning covers cloud providers, Kubernetes, GitHub, and bring-your-own AI keys from one local operating surface.
| Surface | Where it lives | What it means |
|---|---|---|
| Cloud credentials and kubeconfig | Local machine | Provider access stays with the operator instead of a hosted copilot vendor. |
| AI provider keys | Local machine | Teams keep direct model billing and provider choice. |
| Live infrastructure evidence | Provider APIs queried from the local app | Answers and plans are grounded in real environment state. |
| MCP endpoint | Localhost runtime | Other agents connect to the app over a local transport boundary. |
| Execution approval | Operator in the app | Changes require deliberate maker-mode approval. |
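One way to picture the "localhost runtime" row above: the MCP surface binds only to the loopback interface, so nothing off-machine can reach it. A minimal sketch under that assumption (the binding behavior is illustrative, not the product's actual configuration):

```python
import socket

def open_local_endpoint(port: int = 0) -> socket.socket:
    """Bind only to 127.0.0.1 so the surface is unreachable off-machine."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))  # loopback only, never 0.0.0.0
    srv.listen(1)
    return srv

srv = open_local_endpoint()
host, port = srv.getsockname()
print(f"MCP-style surface listening on {host}:{port}")  # host is always 127.0.0.1
srv.close()
```

Binding to `127.0.0.1` rather than `0.0.0.0` is what makes the endpoint a local transport boundary: the operating system refuses connections from any other host.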
Useful when production access already exists and the priority is faster grounded operations without moving that access to another vendor.
Helpful when procurement or security review would become harder with a hosted service sitting in the middle of privileged workflows.
The local MCP surface keeps agent integrations close to the operator runtime and existing credentials.
Does the app upload cloud credentials or cluster contexts to a hosted service?
No. The product positioning is that cloud credentials and cluster contexts stay on the local machine running the app.
Does the workflow depend on a hosted backend?
No. The core workflow runs locally and uses the operator's existing provider access and AI keys.
How do other agents connect?
Through the local MCP endpoint exposed by the running app or CLI, so integrations talk to a local transport boundary instead of a remote vendor control plane.
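In practice, an agent integration speaks JSON-RPC (the protocol MCP is built on) to that localhost endpoint. A hedged sketch of the shape of such an exchange over loopback; the tool name is invented for illustration, and a real integration would use an MCP client library rather than raw sockets:

```python
import json
import socket
import threading

def serve_once(srv: socket.socket) -> None:
    # Stand-in for the app's local MCP surface: answer one JSON-RPC call.
    conn, _ = srv.accept()
    req = json.loads(conn.recv(4096).decode())
    resp = {"jsonrpc": "2.0", "id": req["id"],
            "result": {"tools": ["describe_cluster"]}}  # hypothetical tool
    conn.sendall(json.dumps(resp).encode())
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # loopback only: the transport boundary
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                        "method": "tools/list"}).encode())
reply = json.loads(cli.recv(4096).decode())
print(reply["result"]["tools"])
cli.close()
srv.close()
```

The request never leaves the machine: both ends of the conversation live on `127.0.0.1`, which is the whole trust argument in miniature.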
Read the workflow explainer or the agent-specific page for the operational details behind this trust model.