The Trust Ladder: how to onboard AI agents like new colleagues

Hans Brattberg
Co-founder, Product & AI Strategy
3 min read

What data should we let them access? What if the agent gets it wrong? These are the questions that most often stop organisations from getting started with AI agents. What holds them back is rarely the technology — it's the uncertainty.

Like hiring someone new


An AI agent is like a star intern: knowledgeable, quick-thinking and eager to help, but with no practical experience of how your business actually works, and apt to miss the simple things.

No sensible manager gives an intern full authority on day one. You start with well-defined tasks. You let them show what they can do, then extend their responsibility one rung at a time.

The same logic works for AI agents. The question then isn't "Which agent should we build?" It's "What level of responsibility feels reasonable for us today?"

The Trust Ladder: four steps

Step 1: Public data, internal use

The agent reads and summarises — but touches nothing sensitive.

The agent works with public or synthetic data, and only your team uses it. It reads, summarises, suggests — but accesses nothing sensitive. The risk is minimal, and you learn how agents actually behave.

Example: A competitive intelligence agent that tracks what your competitors are doing and flags changes worth watching.
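The step-1 idea can be sketched in a few lines. Here's a minimal, illustrative version of the change-flagging part: the agent compares fingerprints of public page snapshots between two crawls and reports what changed. The URLs and the diffing approach are assumptions for illustration, not a description of any particular product.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Stable fingerprint of a page snapshot, so no raw content is stored."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def flag_changes(previous: dict[str, str], current: dict[str, str]) -> list[str]:
    """Compare snapshots of public competitor pages between two crawls.

    Returns the pages worth a human look: changed or newly appeared.
    The agent reads only public data and only reports; it acts on nothing.
    """
    flagged = []
    for url, snapshot in current.items():
        if url not in previous or fingerprint(previous[url]) != fingerprint(snapshot):
            flagged.append(url)
    return flagged
```

Everything stays read-and-report: the worst a bug can do is flag the wrong page, which is exactly the level of risk step 1 is meant to have.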

Step 2: Protected internal data, internal use

The agent can read internal documents — but humans act.

Now the intern can read internal documents — but only standardised or anonymised ones. The same GDPR rules and access controls that apply to any employee apply here too. Still internal use. The agent suggests; the human acts.

Example: An HR support agent that answers employees' questions about policies and procedures, drawing on your internal HR documents.
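The step-2 boundary is a gate in front of retrieval: the agent can only ever be handed documents someone has marked as standardised or anonymised, and its output is a draft for a human to send. A rough sketch, with the `anonymised` flag and the crude keyword matching as stand-ins for whatever curation and retrieval you actually use:

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    body: str
    anonymised: bool  # set by whoever curates the HR corpus, not by the agent

def retrievable(corpus: list[Document]) -> list[Document]:
    """Step-2 gate: the agent's retrieval only ever sees anonymised documents."""
    return [doc for doc in corpus if doc.anonymised]

def suggest_answer(question: str, corpus: list[Document]) -> str:
    """Draft an answer from permitted documents. A human reviews before sending."""
    allowed = retrievable(corpus)
    hits = [d for d in allowed if any(w in d.body.lower() for w in question.lower().split())]
    if not hits:
        return "DRAFT: no matching policy found; escalate to HR."
    titles = ", ".join(d.title for d in hits)
    return f"DRAFT (human review required): see {titles}."
```

The point is where the control sits: access is decided when the corpus is curated, and the "agent suggests; the human acts" rule is visible in the return value.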

Step 3: Real data, read-only

The agent analyses and flags — humans make the decisions.

The agent gets access to real business data, but can't change anything. It reads, analyses, flags — and a human decides what happens next. At this step, the agent can start to be used in customer-facing processes, as long as there's a human in the loop.

Example: A month-end close agent that reads accounting data and flags variances against budget — but the accountant decides what action to take.
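"Read-only" can be a technical guarantee rather than a promise. One minimal way to sketch it in Python: hand the agent an immutable view of the ledger, let it flag variances, and leave the decision to the accountant. The 10% threshold and the account names are illustrative assumptions.

```python
from types import MappingProxyType

def read_only(ledger: dict[str, float]) -> MappingProxyType:
    """Step-3 guarantee: the agent gets a view it cannot write through."""
    return MappingProxyType(ledger)

def flag_variances(actuals, budget: dict[str, float], threshold: float = 0.10):
    """Flag accounts where actual spend deviates from budget by more than
    `threshold` (10% by default). The agent only reports; the accountant
    decides what, if anything, to do about each flag."""
    flags = []
    for account, planned in budget.items():
        actual = actuals.get(account, 0.0)
        if planned and abs(actual - planned) / planned > threshold:
            flags.append((account, planned, actual))
    return flags
```

Any attempt by the agent (or a bug in it) to write through the view raises an error, which is what makes a human-in-the-loop customer-facing use defensible at this step.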

Step 4: Real data, the agent acts

The agent acts independently — within clear boundaries.

The agent gets write access within clear boundaries. This isn't about giving up control — it's about moving it, from each individual decision to the rules and guardrails you set around the agent.

Example: An invoice agent that creates and sends invoices automatically from delivery data, within agreed rules.
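"Moving control into the rules" has a very literal reading in code: the guardrails are data the organisation owns, and the agent's write path checks them before every action. A sketch under assumed rules (the amount cap and customer list are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    customer: str
    amount: float

# The "agreed rules" the organisation sets up front. Control lives here,
# not in reviewing each individual invoice. (Illustrative values.)
MAX_AUTO_AMOUNT = 5000.0
KNOWN_CUSTOMERS = {"Acme AB", "Globex Ltd"}

def dispatch(invoice: Invoice) -> str:
    """Send automatically when the invoice is inside the guardrails;
    otherwise hold it for a human."""
    if invoice.customer in KNOWN_CUSTOMERS and 0 < invoice.amount <= MAX_AUTO_AMOUNT:
        return "sent"          # the agent acts on its own
    return "held_for_review"   # outside the boundary: back to a human
```

Anything outside the boundary falls back to the step-3 behaviour, which is what makes step 4 an extension of trust rather than a surrender of it.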

Common objections

"We need an AI strategy first."

You don't need an AI strategy to get started. Your first agent project often becomes the start of the strategy, not the result of it. It's hard to build a strategy around something you've never tried.

"What if it makes a mistake?"

That's why you start at step 1. And that's why there's still a human in the loop all the way through step 3. At the lower steps, mistakes are reversible — the agent suggests, you decide.

"We don't have the budget for it."

An agent at step 1 costs less than you think. Often less than a couple of meetings about whether to do it.

The core principle

Trust is built step by step. It's true of new colleagues. It's true of AI agents too. You don't need to know where you're going to take the first step — and the first step is usually less disruptive than it sounds.

Curious which rung you're on today? We'd be glad to talk it through.
