WebMCP Tool Design: How to Write Tools AI Agents Actually Use
How to design WebMCP tools that AI agents call correctly every time. Covers naming conventions, JSON Schema best practices, readOnlyHint, error handling, and the tool contract patterns that reduce hallucinations.
By Matheus Reis, Co-founder at Kn8 · Published April 7, 2026 · Updated April 21, 2026 · 10 min read
Tags: WebMCP · Developer Guide · AI Agents · JSON Schema · Best Practices
The single most important thing about WebMCP tool design: agents do not read code. They read names, descriptions, and schemas. A tool with a clear description and a precise input schema will be called correctly every time. A tool with a vague description or an open-ended schema will be misused, skipped, or hallucinated around. This guide covers every design decision that determines whether your tools work reliably with AI agents.
Why Tool Design Matters More Than Tool Implementation
The implementation of a WebMCP tool — the code inside execute() — is the easy part. Your existing frontend logic already does the work. What determines reliability is the contract you declare: the name, the description, and the input schema that agents use to decide whether to call your tool and how to call it correctly.
A poorly designed contract produces three failure modes:
- The agent skips your tool. If the description is too vague or ambiguous, the agent cannot determine that your tool matches the user's intent. It falls back to DOM scraping instead.
- The agent calls your tool with wrong parameters. If the schema is too permissive — type: "string" with no constraints — the agent generates values that fail your validation rules.
- The agent hallucinates parameters. If required parameters are not explicitly marked, the agent may call the tool without them and receive an unhelpful error.
Every one of these failures is a design problem, not an implementation problem. And every one is preventable.
The Five Elements of a WebMCP Tool Contract
Every tool you register through navigator.modelContext.registerTool() has five elements that define its contract:
navigator.modelContext.registerTool({
name: "...", // 1. The tool's identifier
description: "...", // 2. Natural language intent declaration
inputSchema: { ... }, // 3. JSON Schema for inputs
readOnlyHint: false, // 4. State mutation flag
execute: async (params) => { ... } // 5. Implementation (covered last)
});
The first four elements are what agents read before they ever call your tool. They deserve most of your design attention.
1. Naming: Action-Oriented, Specific, and Unambiguous
Tool names must communicate intent unambiguously to an LLM. The conventions that work:
Use camelCase verb + noun. The verb communicates the action class; the noun communicates the target.
✓ createInvoice
✓ searchCustomers
✓ updateProjectStatus
✓ listActiveWorkflows
✓ exportReportAsCsv
Never name tools after UI elements. UI names are implementation details, not intents.
✗ clickSubmitButton → createPurchaseOrder
✗ openDropdown → selectPricingPlan
✗ sidebarSearch → searchProducts
✗ dashboardWidget → getAccountSummary
Be specific enough to avoid disambiguation. If you have two tools that both "get" things, name them precisely:
✗ getUser (which user? by what identifier?)
✓ getUserById (clear)
✓ getUserByEmail (clear, different tool)
Keep names under 40 characters. Long names increase the cognitive load for the agent's tool-selection step and reduce matching reliability.
| Pattern | Example | Why |
|---|---|---|
| verb + resource | createProject | Clear action + target |
| verb + resource + qualifier | listActiveUsers | Scoped, avoids ambiguity |
| verb + resource + format | exportDataAsCsv | Explicit output format |
| check + condition | checkPaymentStatus | State query, clearly read-only |
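Conventions like these are easy to enforce mechanically. Here is a hypothetical lint helper — the verb list is illustrative, not exhaustive — that checks the camelCase verb + noun shape and the 40-character limit:

```javascript
// Hypothetical lint helper for tool names: camelCase, begins with a known
// action verb, followed by a capitalized noun, under 40 characters.
const ACTION_VERBS = ["create", "get", "list", "search", "update", "delete", "export", "check", "select"];

function isWellFormedToolName(name) {
  if (name.length >= 40) return false;                 // too long to match reliably
  if (!/^[a-z][a-zA-Z0-9]*$/.test(name)) return false; // must be camelCase
  return ACTION_VERBS.some(
    (verb) => name.startsWith(verb) && /[A-Z]/.test(name[verb.length] ?? "")
  );                                                   // verb + capitalized noun
}

console.log(isWellFormedToolName("createInvoice")); // true
console.log(isWellFormedToolName("sidebarSearch")); // false — named after a UI element
```

A check like this can run in a unit test over your registered tools, catching UI-element names before an agent ever sees them.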
2. Descriptions: Write for an LLM, Not a Human
The description is the single most important field in your tool contract. It is the text an agent reads to decide whether your tool matches the user's intent.
Four things every description must contain:
- What the tool does (verb phrase, specific)
- What context it operates in (which workspace, which session, which data)
- What it returns (the shape or type of the result)
- Any important preconditions or constraints
Compare these two descriptions for the same tool:
✗ "Searches for customers"
✓ "Search the customer database for accounts matching a name, email,
or account ID. Returns up to 20 matching records including status,
plan tier, and account owner. Only returns customers within the
currently active workspace."
The first description tells an agent almost nothing. The second gives it enough context to:
- Confirm this tool matches "find me the account for Acme Corp"
- Understand the output it will receive
- Know it is scoped to the current workspace (avoiding wrong-tenant errors)
Write the description in present tense, second-person imperative — the same voice you would use in internal API documentation. Avoid marketing language, passive voice, and vague qualifiers like "efficiently" or "easily."
State what the tool does NOT do when that prevents confusion:
description: "Create a new project in the current workspace. Does NOT
send invitations to team members — use invite_team_member for that."
This prevents the agent from calling createProject when the user says "set up a new project and add my team."
3. Input Schemas: Constrain Everything, Leave Nothing Open
Your input schema is the agent's instruction manual for calling your tool. Permissive schemas produce bad inputs. Constrained schemas produce correct ones.
The baseline schema structure:
inputSchema: {
type: "object",
properties: {
paramName: {
type: "string",
description: "What this parameter does and what values are valid",
// Add constraints below
}
},
required: ["paramName"],
additionalProperties: false // Reject unknown fields
}
Always set additionalProperties: false. This prevents agents from inventing parameters that do not exist in your API.
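To see what this constraint guards against, here is a minimal hand-rolled sketch of the check — a real JSON Schema validator such as Ajv performs this for you, along with type, enum, format, and pattern validation:

```javascript
// Minimal sketch of the additionalProperties: false check: reject any
// input key the schema does not declare.
function rejectUnknownFields(schema, input) {
  const allowed = new Set(Object.keys(schema.properties));
  const unknown = Object.keys(input).filter((key) => !allowed.has(key));
  if (unknown.length > 0) {
    throw new Error(`Unknown parameter(s): ${unknown.join(", ")}`);
  }
}

const schema = {
  type: "object",
  properties: { query: { type: "string" } },
  additionalProperties: false
};

rejectUnknownFields(schema, { query: "acme" });             // passes
// rejectUnknownFields(schema, { query: "acme", page: 2 }); // throws: Unknown parameter(s): page
```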
Use enums for any parameter with a fixed value set:
status: {
type: "string",
enum: ["active", "paused", "archived"],
description: "Filter by account status. Defaults to 'active' if not specified."
}
An agent that sees an enum never guesses. It selects from the provided options. This is the single highest-leverage constraint you can add.
Use format for standardized types:
email: { type: "string", format: "email" }
date: { type: "string", format: "date" } // YYYY-MM-DD
url: { type: "string", format: "uri" }
uuid: { type: "string", format: "uuid" }
Use pattern for domain-specific formats:
accountId: {
type: "string",
pattern: "^ACC-[0-9]{6}$",
description: "Account identifier in ACC-XXXXXX format (e.g., ACC-004821)"
}
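Pattern strings are easy to sanity-check in isolation before shipping them — the ACC format here is the example from the schema above:

```javascript
// Quick sanity check for the accountId pattern declared in the schema.
const accountIdPattern = /^ACC-[0-9]{6}$/;

console.log(accountIdPattern.test("ACC-004821")); // true  — matches the documented format
console.log(accountIdPattern.test("ACC-4821"));   // false — too few digits
console.log(accountIdPattern.test("acc-004821")); // false — wrong case
```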
Add minimum/maximum for numeric ranges:
limit: {
type: "integer",
minimum: 1,
maximum: 100,
default: 20,
description: "Number of results to return. Defaults to 20."
}
Include default values for optional parameters. When the agent does not provide an optional parameter, the default communicates what will happen — preventing the agent from assuming it must always supply every field.
Describe every parameter in its own description field. Do not rely on parameter names alone. An agent reading sortBy does not know what values are valid or what the sort affects without a description.
4. The readOnlyHint Flag: Get It Right
readOnlyHint: true tells the browser two things:
- This tool does not modify application state
- The browser's confirmation prompt can be skipped
Getting this wrong has real consequences:
// CORRECT: pure query, no side effects
navigator.modelContext.registerTool({
name: "getAccountSummary",
readOnlyHint: true, // ✓ Correct — safe to skip confirmation
execute: async ({ accountId }) => {
return await api.accounts.summary(accountId);
}
});
// INCORRECT: marks a write operation as read-only
navigator.modelContext.registerTool({
name: "deleteProject",
readOnlyHint: true, // ✗ Wrong — this modifies state permanently
execute: async ({ projectId }) => {
await api.projects.delete(projectId);
}
});
The deleteProject example above will execute without any user confirmation — because the browser took the readOnlyHint at face value. The user never gets to see what the agent is about to do.
Rule: If execute() makes a POST, PUT, PATCH, or DELETE request, or writes to any local state, readOnlyHint must be false or omitted (defaults to false).
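The corrected registration for the delete example looks like this — here api is a placeholder for your application's existing client, and the schema details are illustrative:

```javascript
// Corrected deleteProject: it mutates state, so readOnlyHint stays false
// and the browser shows its confirmation prompt before executing.
// `api` is a stand-in for your application's existing client.
const api = { projects: { delete: async (id) => ({ deleted: id }) } };

const deleteProjectTool = {
  name: "deleteProject",
  description: "Permanently delete a project from the current workspace. This cannot be undone.",
  inputSchema: {
    type: "object",
    properties: {
      projectId: { type: "string", description: "ID of the project to delete." }
    },
    required: ["projectId"],
    additionalProperties: false
  },
  readOnlyHint: false, // ✓ Correct — destructive, so the user is asked to confirm
  execute: async ({ projectId }) => {
    await api.projects.delete(projectId);
    return { content: [{ type: "text", text: `Project ${projectId} deleted.` }] };
  }
};

if (typeof navigator !== "undefined" && "modelContext" in navigator) {
  navigator.modelContext.registerTool(deleteProjectTool);
}
```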
5. Error Handling: Return Readable Messages, Not Thrown Exceptions
When your tool fails, the agent will relay the error message to the user. An unhandled exception produces a cryptic stack trace. A well-handled error produces a clear explanation of what went wrong and what the user can do.
The required pattern:
execute: async ({ email, role }) => {
try {
const result = await api.team.addMember(email, role);
return {
content: [{
type: "text",
text: `Member ${email} added with role "${role}". They will receive an invitation email.`
}]
};
} catch (error) {
// Return a human-readable message — the agent will surface this
if (error.code === "MEMBER_EXISTS") {
return {
content: [{
type: "text",
text: `${email} is already a member of this workspace. Use update_member_role to change their permissions.`
}]
};
}
return {
content: [{
type: "text",
text: `Could not add ${email}: ${error.message}. If the problem persists, visit the Team Settings page directly.`
}]
};
}
}
What good error messages include:
- What failed (specific, not generic)
- Why it failed (if the reason is known)
- What the user or agent can do next (alternative tool, fallback URL, or contact)
Never throw from execute(). Unhandled exceptions bubble up as browser errors rather than agent-readable responses.
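One way to make this pattern hard to forget is a small wrapper that converts any exception into an agent-readable response. safeExecute here is a hypothetical helper, not part of the WebMCP API:

```javascript
// Hypothetical helper: wraps an execute handler so unhandled exceptions
// become agent-readable text responses instead of browser errors.
function safeExecute(handler, fallbackHint) {
  return async (params) => {
    try {
      return await handler(params);
    } catch (error) {
      return {
        content: [{ type: "text", text: `${error.message}. ${fallbackHint}` }]
      };
    }
  };
}

// Usage: wrap the handler at registration time.
const execute = safeExecute(
  async ({ email }) => { throw new Error(`Could not add ${email}`); },
  "If the problem persists, visit Team Settings directly."
);
```

Wrapping every handler this way gives you one place to standardize fallback guidance instead of repeating try/catch in each tool.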
Complete Example: A Well-Designed Tool
Here is a production-ready tool design applying all five principles:
navigator.modelContext.registerTool({
name: "searchProjects",
description: "Search projects in the current workspace by name, status, or owner. " +
"Returns up to 50 matching projects with their ID, name, status, owner, " +
"last-updated date, and member count. Does not return archived projects unless " +
"status 'archived' is explicitly specified.",
inputSchema: {
type: "object",
properties: {
query: {
type: "string",
description: "Search text matched against project name and description. Leave empty to list all projects.",
maxLength: 200
},
status: {
type: "string",
enum: ["active", "paused", "completed", "archived"],
description: "Filter by project status. Defaults to 'active'.",
default: "active"
},
owner: {
type: "string",
format: "email",
description: "Filter to projects owned by this email address (optional)."
},
limit: {
type: "integer",
minimum: 1,
maximum: 50,
default: 20,
description: "Maximum number of results to return. Defaults to 20."
}
},
required: [],
additionalProperties: false
},
readOnlyHint: true, // ✓ Query only — no state mutation
execute: async ({ query = "", status = "active", owner, limit = 20 }) => {
try {
const results = await api.projects.search({ query, status, owner, limit });
if (results.length === 0) {
return {
content: [{
type: "text",
text: `No ${status} projects found${query ? ` matching "${query}"` : ""}. ` +
`Try a different status or broader search term.`
}]
};
}
const summary = results.map(p =>
`• ${p.name} (${p.status}) — owned by ${p.owner}, ${p.memberCount} members, ` +
`updated ${p.updatedAt}`
).join("\n");
return {
content: [{
type: "text",
text: `Found ${results.length} project(s):\n\n${summary}`
}]
};
} catch (error) {
return {
content: [{
type: "text",
text: `Search failed: ${error.message}. Try refreshing the page or visit Projects directly.`
}]
};
}
}
});
Tool Design Anti-Patterns to Avoid
Anti-pattern 1: Monolithic tools. Resist the urge to build one tool that does everything. manageProject with 15 optional parameters is harder for agents to use correctly than createProject, updateProject, and archiveProject as separate tools. Agents match tools to intent — smaller, focused tools match more reliably.
Anti-pattern 2: Undeclared required parameters. If your API requires a workspaceId, declare it in required. An agent that omits it will receive a 400 error and no useful guidance on how to fix it.
Anti-pattern 3: Free-text parameters where enums exist. planType: { type: "string" } invites agents to pass "premium" when your API expects "pro", or "free trial" when it expects "trial". Add the enum.
Anti-pattern 4: Shared tool registration across routes. Tools registered on one page should reflect the state available on that page. Do not register a getCustomerDetails tool on pages where no customer is in context — the tool will succeed but return empty or wrong data.
Anti-pattern 5: Missing feature detection guard. Always check for API availability before registering:
if ("modelContext" in navigator) {
navigator.modelContext.registerTool({ ... });
}
Without this guard, your code will throw on browsers that do not support WebMCP — which is currently all of them except Chrome 146+ with the flag enabled.
Frequently Asked Questions
How long should a WebMCP tool description be?
Aim for 2–4 sentences: what the tool does, what context it operates in, what it returns, and any important constraints. Descriptions under one sentence are too vague for reliable tool selection. Descriptions over 8 sentences add noise without adding precision. The goal is enough information for the agent to match intent to tool without ambiguity.
Should I register one general tool or many specific tools?
Specific tools outperform general ones for two reasons. First, agents match tools to intent — a tool named searchCustomersByEmail matches "find the customer account for jane@acme.com" more reliably than manageCustomers with a mode parameter. Second, specific tools have smaller input schemas, which means fewer opportunities for wrong parameters. Start specific; merge only if you find tools that are never called individually.
How do I handle tools that need parameters from the current page state?
Inject page state as default values in the execute handler rather than requiring the agent to provide them. If the user is viewing a specific project, read its ID from your application state inside execute, not from the agent's input. Only ask the agent for parameters it genuinely needs to supply.
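A sketch of this pattern, where the agent supplies only the comment text and the project ID comes from page state — getCurrentProjectId is a hypothetical accessor for your application's state (a router param or store selector, for example):

```javascript
// Sketch: the agent provides only `text`; the project ID is injected
// from page state inside execute, never requested from the agent.
const appState = { currentProjectId: "PRJ-042" }; // stand-in for your app's store
const getCurrentProjectId = () => appState.currentProjectId;

const addCommentTool = {
  name: "addProjectComment",
  description: "Add a comment to the project currently open on this page.",
  inputSchema: {
    type: "object",
    properties: {
      text: { type: "string", maxLength: 2000, description: "Comment body." }
    },
    required: ["text"],
    additionalProperties: false
  },
  readOnlyHint: false,
  execute: async ({ text }) => {
    const projectId = getCurrentProjectId(); // injected, not agent-supplied
    return {
      content: [{ type: "text", text: `Comment added to project ${projectId}: "${text}"` }]
    };
  }
};
```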
Can I register different tools on different pages of my application?
Yes, and you should. WebMCP tools are ephemeral to the current page. Register tools that make sense for the current context: a project page registers project-specific tools, a billing page registers billing tools. Avoid registering tools that refer to data not currently in scope — it leads to confusing or empty results.
What happens if my tool's schema changes after agents have been trained on it?
Tool schemas are discovered at runtime, not cached. When an agent visits your page, it reads the current schema. If you add a required parameter, agents will be prompted to supply it. If you rename a tool, agents that have cached the old name will fall back to DOM-based interaction until they re-discover the page. Treat schema changes with the same care as breaking API changes: add optional parameters first, communicate changes, and deprecate old tool names gradually if possible.
Is there a maximum number of tools I can register per page?
The WebMCP spec does not define a hard limit, but practical limits apply. The agent reads all registered tool names and descriptions to select the right one. More than 20–30 tools on a single page increases selection ambiguity and can reduce reliability. If you have many tools, register only those relevant to the current page context, not the full product surface.
Key Takeaways
- Tool names must be action-oriented camelCase verbs — never UI element names
- Descriptions must answer: what does it do, in what context, what does it return, and what are the constraints
- Input schemas should use enums, formats, patterns, and ranges to constrain every parameter
- additionalProperties: false prevents agents from inventing parameters
- readOnlyHint: true only on tools with zero state mutation — misusing it bypasses user confirmation
- Always wrap execute() in try/catch and return human-readable error messages
- Register page-specific tools, not product-wide tool catalogs
References and Sources
- W3C Web Machine Learning Community Group. WebMCP Draft Specification. February 2026. https://webmachinelearning.github.io/webmcp/
- Datacamp. WebMCP Tutorial: Building Agent-Ready Websites With Chrome's New Standard. March 2026. https://www.datacamp.com/tutorial/webmcp-tutorial
- JSON Schema. Draft-07 Specification. https://json-schema.org/draft-07/json-schema-release-notes
Further Reading
- What is WebMCP? — Kn8 Blog
- AI Agents in B2B SaaS — Kn8 Blog
- WebMCP Cheat Sheet — Webfuse
- W3C WebMCP Spec — W3C
- awesome-webmcp — GitHub
Request access to Kn8 to get WebMCP tools running in your application today, with schema validation and observability built in.