Agent Design Canvas

A comprehensive framework for designing AI agents

Think through all the key aspects of agent design, including triggers, knowledge, outputs, tools, risks, human collaboration, and success metrics.

About the Agent Design Canvas

The Agent Design Canvas is a practical tool created by the Abundly team to help you systematically design AI agents. Whether you're planning your first agent or refining an existing one, this canvas provides a structured approach to thinking through all the critical aspects.

Perfect for both individual brainstorming and group workshops, the canvas ensures:

  • You don't miss any important considerations when designing your AI agent
  • You have a structured way of discussing and aligning on the agent within a group
  • You can spot weaknesses during the design process instead of wasting time on a setup that would never work

See our agent design canvas below. For each section of the canvas, we have outlined the kinds of questions that should be answered.

Agent Design Canvas

Purpose

• What is the scope of the job that this agent is tasked to do?

• Who would benefit, and how?

Triggers

• What event starts the agent?

• Is it manual, scheduled, or event-driven?

Input

• What data or documents are provided?

• What format should inputs be in?

Action Plan (Human & AI)

Create numbered steps showing the workflow, clearly indicating which steps the AI agent performs and where humans are involved. Include review points, approvals, and handoffs between human and AI. Each step should specify who does what.

Example:

1. AI: Analyzes incoming document

2. AI: Extracts key information

3. Human: Reviews and validates findings

4. AI: Generates draft response

5. Human: Approves and sends
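
If it helps to make the plan concrete, the same workflow can also be captured as structured data so the owner of each step is explicit. The sketch below is a hypothetical Python illustration; the ActionStep structure is not part of the canvas itself.

```python
# Hypothetical sketch: an action plan as data, with an explicit actor per step.
from dataclasses import dataclass

@dataclass
class ActionStep:
    actor: str        # "AI" or "Human"
    description: str

action_plan = [
    ActionStep("AI", "Analyzes incoming document"),
    ActionStep("AI", "Extracts key information"),
    ActionStep("Human", "Reviews and validates findings"),
    ActionStep("AI", "Generates draft response"),
    ActionStep("Human", "Approves and sends"),
]

# Print the plan in the same numbered format as above.
for number, step in enumerate(action_plan, start=1):
    print(f"{number}. {step.actor}: {step.description}")
```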

Interfaces

• How will the human(s) and the AI interact (chat, shared document, approval flow, jointly edited document, other)?

• What are the functional requirements on the interface(s)?

Output & Success

• What are the deliverables?

• What format should they be (templates, examples)?

• How do we know whether the output became better or worse (evals)?

• What metrics define success?

Knowledge & State

• What external documents or databases should the agent connect to?

• What background context & domain knowledge is needed?

• What types of working data does the agent maintain and modify?

Capabilities

• What integrations are needed?

• What other tools are needed to perform actions well?

Example: Manuscript Screening Agent

This is an example of an agent design canvas for an agent that screens incoming manuscripts for a publisher. The agent helps sort and filter the incoming manuscripts to find the best ones for a human colleague to review further.

Agent Design Canvas

Manuscript Screener
Publishing Team

Purpose

Screen incoming manuscripts and identify the ones with the most commercial potential, which will then be reviewed further by the (human) team.

Triggers

Incoming email from the public manuscript submission email address

Input

Each submission consists of the email body itself, plus a manuscript and a cover letter as attachments

Action Plan (Human & AI)

1. Agent receives the submission email and reads the email body, the cover letter, and the first 5 pages of the manuscript

2. Agent reviews the content based on the quality of the writing and sets a traffic light score (yes, maybe, no) on the email

3. Agent summarizes the recommendation and the motivation behind it, and emails it to the human user

4. Human user reviews the agent's assessment and takes the best manuscripts for further review

Interfaces

Email: The human will receive reports from the agent via email. No further requirements.

Output & Success

The agent's review of the incoming manuscripts is aligned with how the human user would evaluate them. The agent saves the human user about 5 minutes per discarded manuscript. The target is a pass rate of around 40%.
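
As a rough sketch of how such alignment could be evaluated, the agent's traffic light scores can be compared against the human user's own judgement on a sample of submissions. All IDs and scores below are made up for illustration.

```python
# Illustrative eval: agreement with the human reviewer and overall pass rate.
# All data here is made up; a real eval would use a labeled sample of submissions.
agent_scores = {"ms-001": "yes", "ms-002": "no", "ms-003": "maybe", "ms-004": "no"}
human_scores = {"ms-001": "yes", "ms-002": "no", "ms-003": "no",    "ms-004": "no"}

agreement = sum(agent_scores[m] == human_scores[m] for m in agent_scores) / len(agent_scores)
pass_rate = sum(score != "no" for score in agent_scores.values()) / len(agent_scores)

print(f"Agreement with human user: {agreement:.0%}")  # 75% in this made-up sample
print(f"Pass rate: {pass_rate:.0%}")                  # compare against the ~40% target
```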

Knowledge & State

The agent needs examples of great, just-good-enough, and not-really-good-enough quality.

Capabilities

Receive email, send email, access/read documents

This example is a fairly straightforward agent. For more complex workflows, any of these sections could require a significant amount of time to figure out the right solution.
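
To make the example concrete, here is a minimal sketch of what the agent's side of the workflow (steps 1-3 of the action plan) could look like. The email handling and the quality review are stubbed out, and all names are illustrative; a real agent would plug in actual email and language model integrations.

```python
# Hypothetical sketch of the manuscript screener's workflow (steps 1-3).
TRAFFIC_LIGHT = ("yes", "maybe", "no")

def read_submission(email: dict) -> dict:
    """Step 1: collect the email body, the cover letter and the first 5 pages."""
    return {
        "body": email["body"],
        "cover_letter": email["cover_letter"],
        "sample": "\n".join(email["manuscript_pages"][:5]),
    }

def review_submission(submission: dict) -> dict:
    """Step 2: stub for the quality review. A real agent would call a language
    model here, using the quality examples from Knowledge & State, and return
    a traffic light score plus a short motivation."""
    return {"score": "maybe", "motivation": "Competent prose, uncertain commercial fit."}

def write_report(assessment: dict) -> str:
    """Step 3: summarize the recommendation for the email to the human user."""
    assert assessment["score"] in TRAFFIC_LIGHT
    return f"Recommendation: {assessment['score']}\nMotivation: {assessment['motivation']}"

# Example run with a made-up incoming submission.
incoming = {
    "body": "Please consider my novel for publication.",
    "cover_letter": "Debut author with previously published short stories.",
    "manuscript_pages": ["Page 1 ...", "Page 2 ...", "Page 3 ..."],
}
print(write_report(review_submission(read_submission(incoming))))
```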

One common mistake people make is trying to put everything they can think of into the agent's context. The problem is that this leads to a low signal-to-noise ratio, which can confuse the agent and often results in low-quality output. Try to limit the context to what the agent actually needs to know in order to fulfill its duty, and remove anything that is only good-to-know.
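
As a small, hypothetical illustration of that advice: when assembling the agent's context, include only the need-to-know material and leave the merely good-to-know material out.

```python
# Hypothetical sketch: build the agent's context from need-to-know material only.
available_material = {
    "quality_examples": "Samples of great / just-good-enough / not-good-enough writing",
    "scoring_instructions": "How to set the traffic light score",
    "company_history": "Founded decades ago ...",           # good-to-know only
    "full_backlist_catalogue": "Thousands of titles ...",   # good-to-know only
}

need_to_know = {"quality_examples", "scoring_instructions"}

agent_context = {key: text for key, text in available_material.items() if key in need_to_know}
```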

Get the canvas: You can use the interactive HTML version to fill in digitally, or download the PDF version to print and fill in manually during workshops.