Agent Boundaries for Teams
Set clear read/write and tool limits for agentic coding across IDEs, CLIs, and shared tools.

Agentic coding is moving faster than most team rules. One developer can now ask a model to draft code, run tools, inspect files, and keep iterating without leaving the editor or terminal. That is useful. It also makes old assumptions brittle. A review process built for human-written diffs does not always catch tool misuse, hidden side effects, or overbroad access.
The problem is not the model. It is the boundary. Teams need to decide what an agent may read, what it may change, which tools it may call, and when a human must step in. That matters whether the agent lives in an IDE, a CLI, or a chat surface. The governance pattern should travel with the work, not with one vendor.
This is most relevant for engineering leads, staff engineers, and platform teams. If you are training developers to use agents, you need a shared baseline. If you are reviewing agent-written code, you need guardrails that are simple enough to apply under time pressure. If you are wiring in MCP or other tool connectors, you need explicit boundaries around data access and action scope.
The related training topic is agentic coding governance. The core question is the same across tools: what is allowed, what is observable, and what must be reviewed before merge.
Walkthrough
Start with a small policy, not a large policy. Write down three lists: allowed tools, allowed repositories or folders, and disallowed actions. Keep it short enough that a new hire can read it in one minute. If the list is too long, people will ignore it.
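The three lists above can be small enough to live as a single typed constant. This is an illustrative sketch, not a real team's policy; the tool names, paths, and action names are all assumptions.

```typescript
// A minimal agent policy: three short lists a new hire can read in a minute.
// All names here are examples, not a standard vocabulary.
interface AgentPolicy {
  allowedTools: string[];        // tools the agent may invoke
  allowedPaths: string[];        // repos or folders in scope
  disallowedActions: string[];   // hard stops, regardless of task
}

const policy: AgentPolicy = {
  allowedTools: ["read_file", "edit_file", "run_tests"],
  allowedPaths: ["src/", "tests/", "docs/"],
  disallowedActions: ["deploy", "rotate_secrets", "edit_ci_config"],
};
```

Keeping it as data rather than prose also makes it easy to enforce in a pre-commit hook or CI step later.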
Separate read access from write access. Many failures come from agents that can inspect too much or modify too broadly. A safe default is read-mostly access with narrow write paths. If the task needs broader access, require an explicit human approval step.
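A read-mostly default with narrow write paths can be expressed as two small checks. These are hypothetical helpers under assumed prefixes, not any specific tool's API.

```typescript
// Read-mostly default: writes only under narrow path prefixes,
// reads broadly allowed except for secrets. Prefixes are illustrative.
const WRITE_PREFIXES = ["src/", "tests/"];

function canWrite(path: string): boolean {
  return WRITE_PREFIXES.some((prefix) => path.startsWith(prefix));
}

function canRead(path: string): boolean {
  // Everything except secret material is readable by default.
  return !path.startsWith("secrets/");
}

canWrite("src/retry.ts");              // true
canWrite(".github/workflows/ci.yml");  // false — escalate to a human
```

Anything that fails `canWrite` is exactly the "explicit human approval step" the paragraph above calls for.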
Define tool boundaries by task type. A code agent does not need the same permissions as a research agent or a docs agent. MCP-style connectors are useful because they standardize access to external systems, but that also means the team must treat each connector as a governed integration, not a convenience plugin.
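One way to make per-task-type boundaries concrete is a simple allowlist map. The task types and tool names below are assumptions for illustration only.

```typescript
// Per-task-type tool allowlists: a code agent, a research agent, and a
// docs agent get different permissions. Names are illustrative.
const toolsByTaskType: Record<string, string[]> = {
  code: ["read_file", "edit_file", "run_tests"],
  research: ["read_file", "web_search"],
  docs: ["read_file", "edit_docs"],
};

function toolAllowed(taskType: string, tool: string): boolean {
  // Unknown task types get no tools — deny by default.
  return (toolsByTaskType[taskType] ?? []).includes(tool);
}
```

The deny-by-default fallback matters: a connector the map does not mention is a governed integration waiting for review, not an implicit grant.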
Make review rules visible in the repo. A short AGENTS.md or similar file can tell an agent what to do and tell reviewers what to expect. Keep it close to the code so it is hard to miss.
```markdown
# AGENTS.md

## Allowed
- Edit files under `src/` and `tests/`
- Run unit tests
- Read design docs in `docs/`

## Not allowed
- Change production secrets
- Open network connections unless the task explicitly requires it
- Modify CI configuration without review

## Review requirement
- Any change touching auth, billing, or data export needs human approval before merge
```
Use a compact task contract for agent work. The contract should say the goal, the files in scope, the verification step, and the stop condition. This reduces wandering. It also makes it easier to judge whether the agent completed the task or just produced plausible output.
```markdown
## Task
Refactor the retry helper to remove duplicate backoff logic.

## Scope
- `src/retry.ts`
- `tests/retry.test.ts`

## Verify
- Run the retry test file
- Confirm no behavior change for max attempts

## Stop
- Stop after tests pass and summarize any edge cases you found
```
Train reviewers to look for boundary violations, not just syntax. A clean diff can still be wrong if the agent reached into the wrong subsystem, used an unapproved tool, or changed behavior outside the task scope. Review checklists should include scope, permissions, tests, and rollback risk.
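A scope check like this can be automated as a reviewer aid. The sketch below assumes the changed-file list comes from something like `git diff --name-only` and the scope from the task contract; both inputs here are made up for illustration.

```typescript
// Flag changed files that fall outside the declared task scope.
// Scope entries may be exact paths or directory prefixes.
function outOfScope(changedFiles: string[], scope: string[]): string[] {
  return changedFiles.filter(
    (file) => !scope.some((entry) => file === entry || file.startsWith(entry)),
  );
}

const violations = outOfScope(
  ["src/retry.ts", ".github/workflows/ci.yml"],
  ["src/retry.ts", "tests/"],
);
// violations: [".github/workflows/ci.yml"] — a boundary question, not a syntax one
```

A non-empty result does not mean the change is wrong; it means the reviewer should ask why the agent left its declared scope.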
Keep one human in the loop for high-risk paths. Authentication, billing, data deletion, and production deployment deserve stricter controls than a UI refactor or test cleanup. The point is not to block agents. The point is to make escalation predictable.
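Predictable escalation can be as simple as a prefix check over high-risk paths. The prefixes below are assumptions; a real team would derive them from its own codebase layout.

```typescript
// Escalation gate: any change touching a high-risk area requires a
// human approver before merge. Prefixes are illustrative.
const HIGH_RISK_PREFIXES = ["src/auth/", "src/billing/", "deploy/"];

function needsHumanApproval(changedFiles: string[]): boolean {
  return changedFiles.some((file) =>
    HIGH_RISK_PREFIXES.some((prefix) => file.startsWith(prefix)),
  );
}
```

A UI refactor sails through; a change under `src/auth/` always pauses for a person. That is the predictability the paragraph above asks for.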
A useful way to think about this is the Document step: write the boundary once, in the repo, where both the agent and the reviewer can see it. That is usually cheaper than re-explaining the rule in chat every time.
Tradeoffs and limits
Tighter boundaries reduce surprise, but they also reduce speed. If you make every tool call require approval, teams will route around the system. If you make every repo writable, you lose the point of governance. The balance depends on task risk, team maturity, and how much damage a mistake can cause.
MCP and similar connector layers help with standardization, but they do not solve trust by themselves. A connector can expose a database, a ticketing system, or a deployment action. That is useful only if the team has already decided who may use it, for what purpose, and with what audit trail.
There is also a training cost. People need examples of good prompts, good task scopes, and good review habits. Without that, the policy becomes shelfware. The best teams keep the first version small, then revise it after a few real incidents.
Finally, not every agent workflow needs the same level of control. A local refactor assistant and a production change agent should not share the same permissions. Treat governance as a gradient, not a single switch.


