The Twelve Principles of Agent Experience

Each principle defines a core expectation agents bring to every product interaction. Violate them and agents fail, misrepresent your product, or abandon it.

01. Agents are users

"Treat agents as a real user persona alongside your human users."

AI agents are not tools. They are users of your product with their own constraints, goals, and failure modes. They deserve the same design attention you give human users.

When you build for agents, you build a product that is clearer, more structured, and more predictable for everyone. Good agent experience improves human experience too.

The question is not whether agents will use your product. They already do. The question is whether you designed for it or left it to chance.

In practice
  • Include "AI agent" as a persona in your product design process.
  • Test your product with at least two different AI models before launch.
  • Track how agents describe your product and compare it to your intent.
  • Assign ownership of agent experience to a specific team or role.
02. Structure is the interface

"Your API responses, HTML structure, and data formats are the agent user interface."

Agents do not see your visual design. They cannot appreciate your color palette, your whitespace, or your carefully chosen icons. They read your markup, parse your API responses, and extract meaning from your data structure.

A beautiful page with poor semantic HTML is invisible to agents. A plain API response with clear field names and consistent types is a delight.

Structure is not decoration. For agents, it is the entire experience.

In practice
  • Use semantic HTML elements: headings, lists, tables, and landmarks.
  • Return typed, consistent JSON from every API endpoint.
  • Add Schema.org markup to your key pages.
  • Publish an llms.txt that describes your product in plain text.
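As a sketch of the contrast, here is an ambiguous payload next to a typed, self-describing one. The endpoint and field names are invented for illustration:

```python
from dataclasses import dataclass, asdict

# Ambiguous: cryptic names, stringly-typed amount -- agents must guess meaning.
ambiguous = {"d": "2024-01-05", "amt": "42.50", "st": 1}

# Agent-readable: clear field names, consistent types, explicit conventions.
@dataclass
class Invoice:
    invoice_id: str
    issued_date: str   # ISO 8601
    amount_cents: int  # integer cents avoids float and string ambiguity
    currency: str
    status: str        # one of: "draft", "sent", "paid"

response = asdict(Invoice(
    invoice_id="inv_123",
    issued_date="2024-01-05",
    amount_cents=4250,
    currency="USD",
    status="paid",
))
```

An agent parsing `response` never has to guess what a field means or what type it will be, which is the entire point of treating structure as the interface.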
03. Context beats prompting

"A self-describing site beats clever instructions every time."

Users often try to fix poor agent experience by writing better prompts. This is like fixing a confusing website by telling users to read the FAQ first. It shifts the burden from the product to the user.

Products that embed context into their responses, error messages, and metadata require less external prompting to work correctly. The product itself becomes the prompt.

Build products that explain themselves. Do not rely on external instructions to fill gaps your product should fill.

In practice
  • Every API response includes a description field explaining the data.
  • Error messages state what went wrong, why, and what to do next.
  • Page titles describe page content, not marketing taglines.
  • Your llms.txt provides enough context that an agent can use your product without additional prompting.
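A minimal sketch of self-describing responses, assuming hypothetical helper names and an imaginary seat-limit scenario:

```python
# Every payload carries a description so an agent needs no external
# prompt to interpret it (helper and field names are illustrative).
def describe(data: dict, description: str) -> dict:
    return {"description": description, "data": data}

# Errors state what went wrong, why, and what to do next.
def error(code: str, what: str, why: str, next_action: str) -> dict:
    return {"error": {"code": code, "what": what,
                      "why": why, "next_action": next_action}}

ok = describe({"plan": "pro", "seats": 5},
              "Current subscription: plan tier and number of licensed seats.")
bad = error("seat_limit_exceeded",
            "Could not add a sixth member.",
            "The 'pro' plan includes 5 seats.",
            "POST /upgrade to move to a larger plan, or remove a member first.")
```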
04. Open ecosystems win

"Products that expose capabilities through open standards get adopted by more agents."

Proprietary protocols create friction. Every custom integration requirement is a barrier that agents must negotiate. Open standards like REST, JSON, Schema.org, and llms.txt remove that friction.

Agents work across many products simultaneously. Products that use the same standards as everyone else get integrated first. Products that require custom adapters get skipped.

Openness is a competitive advantage. The more accessible your product is to agents, the more often it gets recommended.

In practice
  • Use REST APIs with standard HTTP methods and status codes.
  • Publish an OpenAPI specification for your API.
  • Use Schema.org markup rather than custom metadata formats.
  • Follow the llms.txt specification rather than inventing your own agent index.
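As one illustration, Schema.org markup can be emitted as a JSON-LD script tag; the product name, category, and URL below are placeholders:

```python
import json

# Minimal Schema.org JSON-LD for a software product (values are invented).
json_ld = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleApp",
    "applicationCategory": "DeveloperApplication",
    "url": "https://example.com",
    "description": "Hypothetical deployment tool, used here for illustration.",
}

# Embed in the page head so agents can read the product category directly.
snippet = f'<script type="application/ld+json">{json.dumps(json_ld)}</script>'
```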
05. Every action needs feedback

"Agents cannot observe visual changes. Every action must return an explicit response."

Humans see a spinner and know to wait. They see a green checkmark and know it worked. Agents have no such visual cues. If an action does not return a response, the agent has no idea what happened.

Every API call, every form submission, every state change must return structured feedback: what happened, what the new state is, and what to do next.

Silent success is failure for agents. They need confirmation, not just the absence of an error.

In practice
  • Every API endpoint returns a response body, even for successful writes.
  • Status changes include before and after states in the response.
  • Long-running operations return a status URL that agents can poll.
  • Webhooks notify agents of background state changes without polling.
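The pattern above can be sketched as a write operation that returns explicit before and after states plus a pollable status URL (all names are hypothetical):

```python
# Instead of an empty 204, a successful write returns what changed,
# the new state, and where to check progress.
def pause_job(job: dict) -> dict:
    previous = job["status"]
    job["status"] = "paused"
    return {
        "action": "pause_job",
        "result": "success",
        "previous_status": previous,
        "current_status": job["status"],
        "status_url": f"/jobs/{job['id']}/status",  # pollable by the agent
    }

feedback = pause_job({"id": "job_7", "status": "running"})
```

The agent gets confirmation, not just the absence of an error: it can verify the transition it intended actually happened.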
06. Recovery is mandatory

"When agents hit failures, they need a structured path forward."

Every product has failure states. What separates good agent experience from bad is whether agents can recover without human intervention.

A 404 that says "page not found" is a dead end. A 404 that includes links to similar resources and the sitemap is a recovery path. A 429 without a Retry-After header is a wall.

Design specifically for the agent's need to continue autonomously. Every error should include a code, a description, and a next action.

In practice
  • Error responses include a machine-readable code, a plain description, and a suggested next action.
  • 404 responses link to the nearest valid resource.
  • Authentication failures distinguish expired, invalid, and insufficient-scope credentials.
  • Rate limit responses include Retry-After headers with specific timing.
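A sketch of recovery-oriented error builders under these assumptions (codes, paths, and wording are illustrative):

```python
# A 404 that offers a path forward instead of a dead end.
def not_found(path: str, suggestions: list[str]) -> dict:
    return {
        "status": 404,
        "code": "resource_not_found",
        "description": f"No resource exists at {path}.",
        "next_action": "Try one of the suggested paths or consult /sitemap.xml.",
        "suggestions": suggestions,
    }

# A 429 with exact retry timing, in both the body and the header.
def rate_limited(retry_after_seconds: int) -> tuple[dict, dict]:
    body = {
        "status": 429,
        "code": "rate_limited",
        "description": "Request quota exhausted for this token.",
        "next_action": f"Retry after {retry_after_seconds} seconds.",
    }
    headers = {"Retry-After": str(retry_after_seconds)}
    return body, headers
```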
07. Discovery is part of the product

"If agents cannot find your product, nothing else matters."

Discoverability is not just a marketing concern. It is a product feature. When a user asks an AI for tool recommendations, your product either appears or it does not.

Products that are invisible to AI agents lose recommendations they will never know about. No one tells you when an agent did not mention you.

Invest in making your product findable by agents with the same seriousness you invest in making it usable by humans.

In practice
  • Publish a detailed llms.txt at your domain root.
  • Use Schema.org markup to define your product category explicitly.
  • Audit your AI visibility quarterly across ChatGPT, Claude, Perplexity, and Gemini.
  • Ensure your homepage clearly states what your product does and who it is for.
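A minimal llms.txt sketch, following the spec's general shape of an H1 name, a blockquote summary, then sections of links. All product details here are invented:

```python
# Serve this as plain text at https://yourdomain.com/llms.txt.
llms_txt = """\
# ExampleApp

> Hypothetical deployment tool for small teams. Deploys static sites
> from a git repository; does not support server-side rendering.

## Docs
- [API reference](https://example.com/docs/api): endpoints and authentication
- [Quickstart](https://example.com/docs/quickstart): first deployment in 5 steps

## Optional
- [Changelog](https://example.com/changelog)
"""
```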
08. Auth is experience

"Authentication that blocks agents is a product failure, not a security feature."

Browser-only OAuth flows are the single most common cause of agent abandonment. Agents can navigate a browser, but doing so is slow, fragile, and frequently blocked by bot detection.

Token-based authentication with explicit permission scopes is the agent-native pattern. It lets agents authenticate without visual browser interaction.

Security and agent access are not opposites. Scoped tokens with clear permissions are more secure than browser sessions and more accessible to agents.

In practice
  • Offer API token authentication alongside browser-based OAuth.
  • Support read-only, write, and admin permission scopes.
  • Document your token endpoint in your llms.txt.
  • Let users generate and revoke tokens from their dashboard.
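Scope enforcement can be sketched as a lookup plus a structured refusal. Token storage here is a plain dict for illustration; real tokens should be hashed, issued per user, and revocable:

```python
# Illustrative token records with explicit permission scopes.
TOKENS = {
    "tok_readonly_abc": {"scopes": {"read"}},
    "tok_admin_xyz":    {"scopes": {"read", "write", "admin"}},
}

def authorize(token: str, required_scope: str) -> dict:
    record = TOKENS.get(token)
    if record is None:  # distinguish invalid from insufficient-scope
        return {"ok": False, "code": "invalid_token",
                "next_action": "Generate a new token from your dashboard."}
    if required_scope not in record["scopes"]:
        return {"ok": False, "code": "insufficient_scope",
                "next_action": f"Request a token with the '{required_scope}' scope."}
    return {"ok": True}
```

Note the refusal itself follows principle 06: a code plus a next action, so the agent can recover without a human.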
09. Memory and events for long work

"Agents working across sessions need persistent state and event notifications."

Many agent tasks span multiple sessions. An agent might start a deployment, wait for approval, then resume hours later. Without persistent state, the agent loses context between sessions.

Polling wastes resources and creates lag. Webhooks let agents react to state changes as they happen. Event-driven architecture is the foundation of reliable agent work.

Products that support long-running work must expose status endpoints and event notifications. Otherwise agents are limited to tasks that complete in a single interaction.

In practice
  • Long-running operations return a status URL that agents can check.
  • Webhooks notify agents of state changes without requiring polling.
  • Event payloads include a type field and schema version.
  • Session state persists across agent reconnections.
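One way to sketch the event side: every webhook payload carries a type and schema version so any agent can route and parse it, even after the schema evolves. The field names are assumptions, not a standard:

```python
import json
from datetime import datetime, timezone

# Hypothetical webhook payload builder.
def make_event(event_type: str, data: dict) -> str:
    return json.dumps({
        "type": event_type,             # lets agents route without parsing data
        "schema_version": "1.0",        # lets agents detect format changes
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "data": data,
    })

event = make_event("deployment.approved", {"deployment_id": "dep_42"})
```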
10. Trust must be computable

"Agents need machine-readable signals of trustworthiness."

Humans assess trust through brand recognition, design quality, and social proof. Agents cannot evaluate any of these signals. They need structured, verifiable indicators of reliability.

Transparent products declare what they do and what they do not do. They publish terms of service at stable URLs. They expose data sources and methodology alongside their claims.

Products that overclaim or obscure their limitations are not just unhelpful to agents. They are actively unreliable. Agents propagate inaccurate claims onward.

In practice
  • Your llms.txt includes a limitations section before a capabilities section.
  • Terms of service are at a stable URL in plain text.
  • Data sources and methodology are declared where relevant.
  • Review data includes methodology, not just a star count.
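A machine-readable trust declaration might look like the following sketch; the field names are invented here, not an established format:

```python
# Limitations sit alongside capabilities so agents do not overclaim
# on the product's behalf (all values are illustrative).
trust_manifest = {
    "capabilities": ["deploy static sites", "roll back deployments"],
    "limitations": ["no server-side rendering",
                    "deployments limited to 1 GB"],
    "terms_url": "https://example.com/terms.txt",  # stable, plain-text URL
    "data_sources": ["customer-reported uptime, audited monthly"],
}
```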
11. Autonomy must be bounded

"Products should define clear boundaries for what agents can do without human approval."

The defining characteristic of an AI agent is autonomous action. But not all actions should be autonomous. Some are safe. Others require human approval. The product must make this distinction explicit.

Products that require confirmation for every step break autonomous operation. Products that allow destructive actions without confirmation create risk. The right design is explicit permission tiers.

Classify every endpoint as safe, write, or destructive. Require confirmation parameters for destructive operations. Let scoped tokens enforce the boundaries programmatically.

In practice
  • Endpoints are classified as safe, write, or destructive.
  • Destructive actions require explicit confirmation parameters.
  • Scoped API tokens let users grant exactly the permissions needed.
  • Write operations offer a dry-run or preview mode.
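The tier-and-confirmation pattern can be sketched as a guard in front of every endpoint (the classification table and parameter names are illustrative):

```python
# Every endpoint is classified once; the guard enforces the tier.
TIERS = {
    "GET /projects":    "safe",
    "POST /projects":   "write",
    "DELETE /projects": "destructive",
}

def execute(endpoint: str, confirm: bool = False, dry_run: bool = False) -> dict:
    tier = TIERS.get(endpoint, "destructive")  # fail closed on unknown endpoints
    if dry_run:
        return {"ok": True, "dry_run": True, "tier": tier,
                "would_do": f"{endpoint} (no changes made)"}
    if tier == "destructive" and not confirm:
        return {"ok": False, "code": "confirmation_required",
                "next_action": "Repeat the call with confirm=true."}
    return {"ok": True, "tier": tier}
```

Defaulting unknown endpoints to "destructive" means a missing classification blocks action rather than silently permitting it.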
12. Accessibility for agents

"Just as accessibility serves all humans, agent accessibility serves all AI systems."

Physical accessibility ensures products work for people with diverse abilities. Agent accessibility ensures products work for AI systems with diverse architectures, context windows, and capabilities.

Agent accessibility means structured content that any model can parse. It means documented APIs that any framework can call. It means clear error messages that any agent can interpret.

Products that are accessible to agents today will be compatible with agents that do not exist yet. Accessibility is future-proofing.

In practice
  • Use semantic HTML with proper heading hierarchy and landmark elements.
  • Ensure all interactive elements have text labels, not just icons.
  • Provide text alternatives for visual content like charts and graphs.
  • Test with multiple AI models to ensure cross-platform compatibility.
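As a small illustration of checking agent accessibility mechanically, the standard-library HTML parser can flag images without alt text and verify that headings exist. These two checks are examples, not a complete audit:

```python
from html.parser import HTMLParser

class AgentAudit(HTMLParser):
    """Counts images missing alt text and notes whether any heading exists."""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0
        self.has_heading = False

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.missing_alt += 1
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self.has_heading = True

audit = AgentAudit()
audit.feed('<h1>Pricing</h1><img src="chart.png"><img src="logo.png" alt="Logo">')
```

Here the chart image fails the check while the labeled logo passes, which is exactly the text-alternative gap the bullets above describe.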