TeamCopilot is designed so your team can adopt AI workflows without giving up control of credentials, approvals, audit trails, or execution boundaries. Raw secrets are never injected into the LLM chat history.
Self-hosted
All credentials and data stay on your servers, not inside a third-party hosted control plane.
Access control
Only people who have been granted access to specific skills and workflows can use them.
Approved changes
Any updates to skills or workflows must be approved by engineers on your team before the agent can use them.
Auditable history
Chat sessions are stored on your servers, and users cannot delete them through the UI, so the full record stays auditable.
No raw secrets in model context
TeamCopilot does not inject raw secrets into the model context. The agent sees only secret names and placeholders; trusted runtime layers resolve the real values at execution time, and the UI keeps secret values masked.
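The placeholder scheme can be sketched as follows. This is a hypothetical illustration, not TeamCopilot's actual API: the placeholder syntax, the `resolve_secrets` helper, and the secret names are all invented for the example. The point is that the model-visible string never contains the real value.

```python
import re

# Hypothetical sketch: the agent only ever sees placeholder strings such as
# "{{secret:DB_PASSWORD}}"; a trusted runtime layer substitutes real values
# immediately before execution. Syntax and names here are illustrative.
PLACEHOLDER = re.compile(r"\{\{secret:([A-Z0-9_]+)\}\}")

def resolve_secrets(command: str, store: dict[str, str]) -> str:
    """Replace secret placeholders with real values at execution time."""
    def lookup(match: re.Match) -> str:
        name = match.group(1)
        if name not in store:
            raise KeyError(f"unknown secret: {name}")
        return store[name]
    return PLACEHOLDER.sub(lookup, command)

# The LLM-visible command contains only the placeholder:
visible = "psql postgres://app:{{secret:DB_PASSWORD}}@db/prod"
resolved = resolve_secrets(visible, {"DB_PASSWORD": "s3cr3t"})
```

Because substitution happens in the runtime layer, the real value never appears in chat history, logs of model context, or the UI.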
Human-in-the-loop
The AI never auto-runs the workflows you define. It first asks for permission in the UI and proceeds only if the user approves and has permission to run that workflow.
Custom workflows
Define custom workflows for fixed tasks, use the AI agent to help code them, and, once approved, let others use them in their chat sessions. Because workflows are plain code, they give you deterministic, repeatable behavior.
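A workflow-as-code definition might look like the sketch below. The `Workflow` class, its step decorator, and the example steps are assumptions invented for illustration; what the sketch shows is why code-defined workflows are deterministic: the steps run in a fixed order with no free-form model output in the control path.

```python
# Hypothetical sketch of a workflow defined as code. Because each step is an
# ordinary function and the order is fixed, every run follows the same path.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Workflow:
    name: str
    steps: list[Callable[[dict], dict]] = field(default_factory=list)

    def step(self, fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        """Register a step; steps run in registration order."""
        self.steps.append(fn)
        return fn

    def run(self, ctx: dict) -> dict:
        for fn in self.steps:  # fixed order: deterministic by construction
            ctx = fn(ctx)
        return ctx

rotate = Workflow("rotate-api-key")

@rotate.step
def generate_key(ctx: dict) -> dict:
    ctx["new_key"] = "key-" + ctx["request_id"]
    return ctx

@rotate.step
def record_audit(ctx: dict) -> dict:
    ctx["audit"] = f"rotated by {ctx['user']}"
    return ctx

result = rotate.run({"request_id": "42", "user": "alice"})
```

Once a workflow like this is reviewed and approved, other users can invoke it from chat without the agent improvising the steps.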
Extensible guardrails
You can define custom hooks that deny specific agent actions before they happen. For example, if you never want the AI to use SSH, add a hook that detects ssh in bash commands and rejects them. You can also define your own AI instructions, which are injected into each chat session from USER_INSTRUCTIONS.md in your workspace.
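The SSH example above could be sketched like this. The hook signature is an assumption made for illustration, not TeamCopilot's actual hook API; the idea is simply a predicate that inspects a proposed command before execution and vetoes it.

```python
import shlex

# Hypothetical sketch of a deny hook: runs before each bash command the agent
# proposes and rejects any that invoke ssh. The signature is illustrative.
def deny_ssh(command: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed bash command."""
    tokens = shlex.split(command)
    # Catch both bare "ssh" and explicit paths like "/usr/bin/ssh".
    if any(tok == "ssh" or tok.endswith("/ssh") for tok in tokens):
        return False, "ssh is not permitted"
    return True, "ok"

deny_ssh("ssh prod-host 'uptime'")  # → (False, "ssh is not permitted")
deny_ssh("ls -la /var/log")         # → (True, "ok")
```

A real hook would likely also need to consider shell pipelines and subshells; the sketch only shows the deny-before-execution shape of the mechanism.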