Abundly Agent Platform Overview
Date: 2025-11-06
Executive Summary
An autonomous AI agent is a digital colleague that carries out knowledge work on behalf of you and your team. We are quickly moving towards a world with teams of humans and AI agents working side by side.

Agents need a place to live, and that's what the Abundly platform is - an enterprise-grade platform to build, configure, manage and collaborate with autonomous AI agents. The platform is designed to be easy to use for non-technical users, while providing advanced features, security, and governance for power users.
Abundly is more than just a tool-enabled AI chat. It is an operating system for AI agents - a place where agents and humans work together like colleagues.

What makes Abundly different:
- True autonomy: Agents can schedule recurring tasks, react to events (emails, webhooks, schedules), and operate 24/7 without constant prompting
- Multi-modal communication: Agents can natively use multiple communication channels such as SMS, email, Slack, voice calls, and more. The chat is just one of the many ways to interact with an agent.
- Conversational configuration: An agent can be configured by just talking to it, like onboarding an intern, and then it writes its own instructions based on the conversation. The instructions can be evolved continuously through further conversation. You don't need to write code or write exact instructions, or draw complex diagrams (unless you want to).
- Intelligent problem solving: The agent does not need an exact, detailed script. Instead you give it a high-level goal/mission, plus the context and tools it needs to do the job. The agent works towards the goal and can handle uncertainty and unexpected events, like a human would. Flexibility vs predictability is an important tradeoff, and you are in full control of it by deciding how detailed to make the instructions.
- Team-first approach: An agent is not just your own personal assistant, it is part of your team and can interact with other people on your team.
- Subagents and Multi-agent collaboration: Agents can dynamically manage their context by delegating lower-level tasks to subagents. Agents can also be connected to other agents and work together as a team to achieve complex tasks.
- Enterprise-grade security and governance: The platform is designed for enterprise use and provides comprehensive monitoring, access controls, and usage management.
Sample prompts:
Here are some examples of prompts that work directly in the Abundly platform, but are difficult or impossible to do in many other agent platforms:
- Talk to external systems: "I am traveling to Hamburg to do a talk at a conference. Dig through all email threads with the conference host shelly@example.com, and create an overview doc in Notion."
- Do things on a recurring basis: "Every morning, check my todos in HubSpot and if something seems urgent, ping me on Slack."
- React to messages and events: "Whenever a new customer lead is added to our Trello board, do research on the companies mentioned there, and attach it to the card."
- Handle large-scale tasks: "Go through these 300 lines of investment prospects in this spreadsheet, and analyze each one based on the following criteria: ..."
- Combine communication channels: "If an urgent security-related problem is mentioned in Slack, notify me via SMS."
- Manage data: "Create a database to track all support issues and keep it up-to-date as they are resolved."
- Create apps on the fly: "Create an interactive dashboard to show all ongoing support tickets, and historical trends."
See the platform in action:
This 9-minute video shows how we start from scratch and build an advanced AI agent that processes invoices and communicates with the team via email. It flags invoices via Slack, and for urgent issues it will make a voice call to the approver. It sends weekly summaries via PDF, maintains its own database of invoices, and also creates an interactive dashboard app to visualize the data.

1. What is an Abundly Agent?
An Abundly agent is a multilingual, autonomous digital colleague that carries out knowledge work on behalf of you and your team.
Autonomous AI agents are a new concept that takes some getting used to, since knowledge work in companies is traditionally carried out by either code or by humans.
Agents are a new type of worker, not quite code, not quite human, but somewhere in between.

An agent is:
- slower than code, but faster than a human.
- less predictable than code, but more predictable than a human, since it isn't affected by human things like lack of sleep, personal conflicts, or mood/motivation swings.
- more intelligent than code, but (for the most part) less intelligent than a human. Except in some tasks, where it is arguably more intelligent than any human.
- more expensive to run than code, but cheaper to run than a human (for some types of tasks)
A well-crafted agent with the right tools and context is like a super-colleague, able to carry out complex tasks that would be impossible to do with code, and prohibitively slow and expensive to do with humans.
Abundly agents are designed to augment humans rather than replace them. Most professionals spend a lot of time on low-level routine tasks that take time away from the real job. Those tasks could not be automated with code, because they involve fuzzy inputs and uncertainty, and require some thinking. An AI agent can handle those tasks, freeing humans to spend their time on higher-level work.
Benefits achieved, compared to doing a task 100% manually:
- Lower cost. Despite token usage costs, the total cost of using an agent is often lower than the cost of doing the same task manually.
- Higher speed. Agents process information faster, don't need sleep or breaks, and can do any number of tasks in parallel. You never need to wait to find time in the agent's calendar, they are always available.
- Better control and predictability. An agent acts based on written instructions, and will tend to do the same task the same way, with less variation than if the task was done manually. The more specific your instructions are, the more predictable the agent will be (at the cost of less creativity and flexibility).
- Higher quality. An agent is persistent and thorough. It will read its given context and documents carefully and take every word into account, every time, while we humans sometimes get tired or distracted and take shortcuts.
Agents are not infallible. Like humans, they can make mistakes, and they need some oversight. But with good instructions and context they tend to make fewer mistakes than humans.
An Abundly agent consists of four main components: LLM + Mission + Tools + Autonomy (illustrated in the sketch after the breakdown below)

- LLM (Large Language Model): The external "brain" that allows the agent to process information and make decisions.
- The Abundly platform integrates with all the major LLMs (Claude, GPT, Gemini, etc) and provides a unified interface for all of them.
- By default we select the best model for agentic behavior. But advanced users can choose which model to use based on cost/speed/capability tradeoffs.
- Mission: The agent's job description, or instructions, written in natural language.
- The instructions are used by the agent in all contexts, whether responding to a chat message, reacting to a trigger, or calling a tool.
- Instructions are versioned and can be updated over time, by the user or by the agent itself.
- The user chooses the level of detail to provide, depending on how predictable vs creative the agent should be.
- Additional context can be provided via documents or links.
- Tools: The capabilities the agent has access to.
- The user decides which tools the agent should have access to. The agent will ask for additional tools if it needs them.
- Capabilities include web search, web scraping, and deep research; code execution for complex logic; HTTP calls for interacting with any API; and phone calling and SMS messaging.
- Capabilities also include 40+ external integrations such as Slack, Gmail, Trello, Sharepoint, Notion, Github, etc.
- The user decides if a tool requires human approval (e.g. sending an email to an external domain), and can set constraints (such as an email address whitelist)
- Autonomy: The ability to act independently, beyond just responding to chat messages.
- Schedule recurring tasks ("every Monday at 9am...")
- React to triggers (incoming email, webhooks, calendar events, etc)
- Proactively contact humans or other agents to share information or ask for input
- Transparency: the user can see what the agent is doing in real-time and after the fact.
- Control: The user can choose how much autonomy the agent should have.
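To make these four components concrete, here is a minimal sketch of what an agent definition could look like, expressed as a TypeScript structure. The field names, model identifier, and capability names are illustrative assumptions, not the platform's actual schema:

```typescript
// Hypothetical shape of an agent definition, illustrating the four components.
// Field names, model ids, and capability names are assumptions for illustration only.
interface AgentDefinition {
  name: string;
  llm: {
    model: string;              // e.g. a Claude, GPT, or Gemini model id
    extendedThinking?: boolean; // thinking tokens: more reasoning, more cost/latency
  };
  mission: {
    instructions: string;       // natural-language job description, versioned
    contextDocuments: string[]; // extra documents or links the agent can read
  };
  tools: {
    capability: string;              // e.g. "slack", "email", "web-search"
    requiresApproval?: boolean;      // a human must approve before the tool runs
    constraints?: Record<string, string[]>; // e.g. an email domain whitelist
  }[];
  autonomy: {
    schedules: string[];        // e.g. "every Monday at 9am"
    triggers: string[];         // e.g. "incoming-email", "webhook"
  };
}

// Example: a small invoice-triage agent.
const invoiceTriage: AgentDefinition = {
  name: "Invoice Triage",
  llm: { model: "claude-latest", extendedThinking: false },
  mission: {
    instructions: "Review incoming invoices, flag anything urgent on Slack.",
    contextDocuments: ["invoice-checklist.md"],
  },
  tools: [
    { capability: "slack" },
    {
      capability: "email",
      requiresApproval: true,
      constraints: { allowedDomains: ["abundly.ai"] },
    },
  ],
  autonomy: {
    schedules: ["every Friday at 16:00"],
    triggers: ["incoming-email"],
  },
};
```

In practice you rarely write anything like this by hand; the agent assembles its own configuration through conversation, as described in the Conversational Configuration section below.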
2. Typical Use Cases for Agents
AI agents are ideal for tasks that are:
- Fairly well-defined: You do this task regularly and understand the typical process
- Manual and time-consuming: The task takes significant human time
- Not particularly difficult: The task is tedious rather than intellectually challenging
- Time-saving: An agent doing this would free up people for higher-value work
Here are some typical examples of what our clients are doing with agents:
- Research: Search for information and analyze data to answer business questions.
- Evaluation: Assess and classify large data volumes based on specific criteria.
- Review: Ensure that documents follow standards and regulations.
- Writing: Produce reports, proposals, and presentations based on data.
- Coordination: Manage complex dependencies between resources, schedules, and rules.
- Support: Answer questions from knowledge bases and learn over time.
- Processing: Transform and analyze large data volumes with a combination of AI and code.
- Orchestration: Delegate complex tasks to specialized subagents for complete solutions.
3. Key Platform Features
Multi-Modal Communication
Agents communicate using different channels, depending on what is most convenient for the users and the task at hand.
- Chat: The platform includes a feature-rich AI chat interface, with support for voice input, image and document upload. The agent can also create and render interactive applications, such as dashboards and forms, directly in the chat.
- SMS: Agents can send and receive text messages
- Voice calls: Agents can make and receive phone calls with real-time voice conversation. Example: "Call me if you need to escalate a support issue."
- Email: Agents can send and receive emails. No configuration needed, each agent has its own email address and email account.
- Slack, Teams, etc: Agents can send and receive messages in other collaboration systems such as Slack and Microsoft Teams.
The platform can also work with different types of media: text, images, and audio. For example, you could create an agent that transcribes a recorded meeting file received via email, and posts a summary on Slack.
Model Selection
By default, the Abundly platform uses the best available model for agentic behavior.
Advanced users can select a different model for the agent depending on the task at hand. The platform provides a unified interface for all models, so the user can switch model even in the middle of a conversation. Advanced users can also configure whether the model should use thinking tokens, which gives greater reasoning capability at the expense of higher cost and latency.
Two-way API integration
An Abundly agent can be part of an ecosystem of interacting systems, including other agent platforms.

- An agent can call other systems via predefined capabilities, or dynamically via direct http calls.
- An agent can be configured to expose an API endpoint, allowing external systems to call it using a given API key (as sketched below).
- MCP (Model Context Protocol) integration is in development, allowing for further extensibility.
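As a rough illustration of the second point, a minimal sketch of an external system calling an agent's exposed endpoint might look like this. The URL, header name, and payload shape are assumptions for illustration only; consult the platform documentation for the real API:

```typescript
// Hypothetical example of an external system calling an Abundly agent's
// exposed API endpoint. URL, header name, and payload shape are assumptions.
async function askAgent(message: string): Promise<string> {
  const response = await fetch("https://example.abundly.ai/api/agents/invoice-triage", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.ABUNDLY_AGENT_API_KEY}`,
    },
    body: JSON.stringify({ message }),
  });
  if (!response.ok) {
    throw new Error(`Agent call failed: ${response.status}`);
  }
  const data = (await response.json()) as { reply: string };
  return data.reply;
}

// Usage:
// askAgent("Summarize yesterday's invoices").then(console.log);
```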
Conversational Configuration
The traditional way to set up automation:
- Write detailed specifications
- Configure complex workflows, with diagrams and/or code.
- Set up integrations manually by clicking through complex UI configuration screens.
- Spend a lot of time testing and debugging
- End up with a workflow that works under very specific conditions, and breaks if anything changes.
The Abundly way:
- Create an agent with a name
- Chat with it like you're onboarding a new team member
- Depending on what you ask it to do, the agent asks you to enable capabilities or provide documents/guidelines.
- The agent writes its own instructions based on your conversation
- You approve, and the agent starts working
- Later on you can come back and tweak the instructions, either by editing them directly, or by chatting with the agent.
Why this works:
- You don't need to be a technical expert, you focus on what problem the agent should solve, and the agent helps you figure out how to solve it.
- The agent tells you what it needs, so you don't need to guess.
- Instructions are written in clear language you can review and tweak
- You can iterate and improve the agent's behavior through conversation, so you can start simple and improve the agent as you go.
This is much easier than writing perfect instructions upfront. The agent often writes better instructions than humans would.
Document management - editing, rendering, publishing, version control
Agents can read, create, and edit interactive content, such as:
- Documents: Reports, meeting notes, analysis
- Additional instructions and context: For example a checklist for how to execute certain tasks, or a template for use when sending a weekly report. This allows the agent to pull in the context it needs, when it needs it, which helps avoid bloated agent instructions.
- Diagrams: Process flowcharts, graphs, system diagrams (Mermaid)
- Data visualizations: Charts and dashboards
- Web applications: Interactive web apps created on-demand
- Spreadsheet-like interfaces: For viewing and editing structured data
Each agent has its own file repository for user-created or agent-created documents. The platform tracks changes from both users and agents, and provides a clear user interface for browsing and comparing versions, and reverting changes if needed.

An agent document can be published, allowing you to share it by link with people outside of your team, similar to how you can share a link to a Google Doc or Miro board.
Databases and Interactive Apps
Agents can create and manage structured data stores.
- Use cases: Track processed contracts, maintain product inventories, store research findings
- Query language: MongoDB-style filtering with $in, $gt, $regex, etc. (see the sketch after this list)
- Integration with apps: Agent-created apps can read/write data directly
- Publishing: Share data-backed dashboards with team members
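As an illustration of the query language mentioned above, a filter an agent (or an app it creates) might run against its data store could look like the following. The field names are hypothetical:

```typescript
// Illustrative MongoDB-style filter an agent might use against its data store.
// Field names are hypothetical.
const openUrgentTickets = {
  status: { $in: ["open", "in-progress"] },      // only unresolved tickets
  priority: { $gt: 3 },                          // priority above 3
  subject: { $regex: "invoice", $options: "i" }, // subject mentions "invoice"
};
```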
Here are some sample prompts that a Swedish TV production company used to build a scheduling agent for photographers:
- "Create a database that tracks which photographers are scheduled to be at which location on which date/time"
- "Create an interactive dashboard that shows the schedule, and allows me to filter by location, date, and photographer."
- "Whenever discussing schedule changes, please show the dashboard and highlight your suggested changes there".
This allowed the agent to work effectively with large amounts of data, and to interact with the user through the dashboard.

Executable scripts
The agent can create and run scripts, in order to automate complex tasks in an efficient way.
For example, if one step of the agent's workflow involves importing data from a spreadsheet and storing it in a database, the agent can write a script to do that, save it, and call that script when needed.
So instead of using the LLM to process the data, it uses the LLM to write the code to process the data. This has a massive impact on the agent's speed, cost, and reliability.
Scripts are also useful for things like advanced financial calculations, which are best done by code rather than LLM calls.
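A hedged sketch of what such a generated script could look like is shown below. The CSV layout and the saveToDatabase helper are assumptions for illustration; the agent's real scripts use the platform's own data-store capability:

```typescript
// Hypothetical script an agent might write and reuse for a recurring import step.
// The CSV layout and the saveToDatabase helper are assumptions for illustration.
import { readFileSync } from "node:fs";

interface InvoiceRow {
  invoiceId: string;
  supplier: string;
  amount: number;
}

// Parse a simple comma-separated export (header row: invoiceId,supplier,amount).
function parseInvoices(csvPath: string): InvoiceRow[] {
  const [header, ...lines] = readFileSync(csvPath, "utf8").trim().split("\n");
  void header; // the header layout is assumed fixed in this sketch
  return lines.map((line) => {
    const [invoiceId, supplier, amount] = line.split(",");
    return { invoiceId, supplier, amount: Number(amount) };
  });
}

// Placeholder standing in for the platform's data-store capability (not the real API).
async function saveToDatabase(rows: InvoiceRow[]): Promise<void> {
  console.log(`Would store ${rows.length} rows in the agent's database`);
}

async function main() {
  const rows = parseInvoices("invoices.csv");
  await saveToDatabase(rows);
}

main().catch(console.error);
```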
Multi-user chat and collaboration
- Agents are part of a team by default. A team consists of human users and other agents
- You can configure who is allowed to access which agent.
- Multiple users can participate in a chat thread with the same agent; the chat content is streamed in real time to everyone viewing the chat. This allows for real-time remote collaboration.
Agent to agent communication
An agent can easily be given access to other agents, allowing it to send messages and requests to the other agents.

This allows for complex workflows, such as:
- Delegation: An "Invoice Coordinator" agent delegates to specialized agents:
- "Invoice Analyzer" validates format and extracts data
- "Compliance Checker" reviews for regulatory requirements
- "Invoice Router" sends to appropriate approvers
- Expert consultation: A general agent asks specialist agents for help:
- Customer service agent asks legal agent about contract terms
- Sales agent asks technical agent about product specifications
- Workflow orchestration: A coordinator agent manages complex processes:
- Recruitment agent coordinates screening, interviewing, and offer management
- Project manager agent coordinates research, planning, and execution
Potential benefits of multi-agent teams:
- Specialization: Each agent becomes expert in its domain
- Modularity: Easier to maintain and update specialized agents
- Reusability: One expert agent can serve multiple coordinator agents
- Scalability: Add new agents without rebuilding existing ones
Diary and Activity Monitoring
Since agents can perform tasks autonomously, it is important to have a way to monitor what they are doing.
The platform provides both real-time monitoring and a history of what the agent has done.
Each agent maintains a diary, which is a high level record of what the agent has been doing, including its internal reasoning behind each action.

The platform also provides an activity log - a technical audit trail of all the actions the agent has taken, including the time, the action, and the result.

The activity log is a live-updated dashboard which shows:
- Which event triggered the agent (email received, scheduled task, webhook, etc.)
- How the agent interpreted the event
- What the agent planned to do
- The security agent's assessment of the plan
- Which actions were executed, including specific tool calls, messages sent, etc.
This transparency is crucial for trust and debugging — you can see exactly what the agent is thinking and doing.
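For illustration, a single activity-log entry can be thought of as a record along these lines. The field names are assumptions, not the platform's actual log schema:

```typescript
// Illustrative shape of one activity-log entry. Field names are assumptions,
// not the platform's actual log schema.
interface ActivityLogEntry {
  timestamp: string;              // when the event was handled
  trigger: "email" | "schedule" | "webhook" | "chat" | "phone";
  interpretation: string;         // how the agent understood the event
  plan: string[];                 // the steps the agent intended to take
  securityAssessment: {
    approved: boolean;
    reasoning: string;            // the security agent's independent judgment
  };
  executedActions: {
    tool: string;                 // e.g. "slack.postMessage"
    result: "success" | "failure";
  }[];
}
```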
Sub-agent delegation
An agent is able to delegate a task to a temporary sub-agent, giving it access to a specific subset of the agent's tools and documents and context. This is very useful when dealing with large amounts of data or context.
Example: Processing large documents
Suppose an agent needs to analyze a large document to check if it follows a list of compliance rules, and this is only one part of a larger workflow. Instead of doing it directly, it delegates the task to a sub-agent. This has two benefits:
- The sub-agent is focused on only that task, and gets the exact instructions and context it needs for it. So it is more likely to do a good job
- The parent agent receives a brief, concise compliance report from the sub-agent. This means the parent agent doesn't need to read the full document and hold all of its content in its context window.
Example: Running multiple tasks in parallel
Suppose an agent needs to do competitor analysis for 20 different companies. Each research task requires multiple tool calls and queries for that company. If this is done by only one agent, it will gradually fill up the context window with the results of the research, adding up costs, slowing down the overall process, and increasing the risk of the agent being overwhelmed by too much context.
Instead, the agent creates a sub-agent for each company. Each sub-agent is focused on one company and has access to the tools and documents it needs to do the research for that company.
This not only speeds up the processing and reduces costs, it allows both the parent agent and the sub-agents to maintain better focus, which gives better results.
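A hedged sketch of this fan-out pattern is shown below. The delegateTask function is a hypothetical stand-in for the platform's Delegate Task capability, not its real signature:

```typescript
// Hypothetical sketch of fan-out delegation to sub-agents.
interface CompetitorReport {
  company: string;
  summary: string;
}

// Stand-in for the platform's "Delegate Task" capability (hypothetical signature).
async function delegateTask(input: { instructions: string; tools: string[] }): Promise<string> {
  return `(sub-agent result for: ${input.instructions.slice(0, 40)}...)`;
}

async function analyzeCompetitors(companies: string[]): Promise<CompetitorReport[]> {
  // One focused sub-agent per company, running in parallel.
  const reports = await Promise.all(
    companies.map(async (company) => {
      const summary = await delegateTask({
        instructions: `Research ${company}: products, pricing, recent news. Return a one-page summary.`,
        tools: ["web-search", "web-scraping"],
      });
      return { company, summary };
    })
  );
  // The parent agent only sees the concise summaries, not the raw research.
  return reports;
}
```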
4. Capabilities and integrations
A "Capability" is a collection of related tools. For example the "Slack" capability includes tools for posting messages, archiving channels, getting channel info, etc. The platform provides a large number of built-in capabilities and integrations, and we are continuously adding more.

Productivity Tools: Trello, Slack, Gmail, Google Drive (single file & full access), Google Calendar, Notion, SharePoint, Outlook
Information Gathering: Web Search (Perplexity), Deep Research, Web scraping (FireCrawl), Tavily news search, Apify, Twitter/X Search
Communication and Outreach: Send/Receive Email, Send/Receive SMS, Make/Receive Phone Calls
System and Settings: Alarm scheduling, Update Instructions, Read/Edit Documents, Data Documents (structured databases), Call Other Agent, Code Execution, Memory (semantic storage/recall), Create Agent, Delegate Task (to subagents)
Content Generation: Text to Speech, Image Generation, PDF Generation
Development: GitHub (read commits, create pull requests)
CRM: HubSpot (contacts, companies, deals)
Recruitment: Ponty (candidate management)
API Integration: Svensk Handel Varningslistan API
HTTP Capability:
The HTTP capability is a special "Swiss Army knife" capability that allows the agent to do anything via HTTP - GET/POST/PUT/etc. Since HTTP is the universal interface of the internet, and almost all systems with an API expose it via HTTP, this is essentially a magic wand that lets the agent interact with any API or service, building the tools it needs on the fly. Some APIs require API keys or other authentication. These can be provided to the agent as credentials, and the agent will use them to authenticate with the API.
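As a minimal illustration of the kind of call the agent composes on the fly, consider a hypothetical geocoding API. The endpoint, parameters, and response shape below are made up for this sketch:

```typescript
// Illustrative example of the kind of HTTP call an agent can compose on the fly.
// The geocoding endpoint, parameters, and response shape are hypothetical.
async function geocode(address: string, apiKey: string) {
  const url = new URL("https://geocoding.example.com/v1/search");
  url.searchParams.set("q", address);

  const response = await fetch(url, {
    headers: { "X-Api-Key": apiKey }, // credential injected by the platform, not exposed to the LLM
  });
  if (!response.ok) {
    throw new Error(`Geocoding failed: ${response.status}`);
  }
  return (await response.json()) as { lat: number; lon: number };
}
```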
A real-world example: when evaluating potential new office locations for our company, we gave the agent an address list of employees and potential locations, and asked it to analyze and visualize the transit time for each person. We gave it the HTTP capability, as well as web scraping.
It did the following:
- Imported the data into a structured database of potential office locations and employee addresses
- Found and used a public API to convert street addresses to GPS coordinates
- Found and used a public API to get the journey time between two coordinates
- Found and used a public API to generate an interactive map view, with markers
- Browsed the web to find the address and images for each of the potential office locations
- Browsed the Abundly website to find profile images of each employee, to show in the map
- Created an interactive map view app, showing a map of where everyone lives, where the potential office locations are, with images and info.
- When given an office location, the agent calculates the journey time for each employee to get to the office, and assesses its suitability.
Result:

Upcoming: MCP (Model Context Protocol)
MCP is a protocol for allowing LLMs to interact with external services. It is a way for LLMs to "plug in" to other services and APIs, without having to write code to do so.
We are currently working on MCP integration, allowing an agent to use any service that supports MCP, and also allowing an agent itself to be exposed as an MCP service.
This enables:
- Dynamically adding new tools without platform updates
- Community-contributed integrations
- Custom enterprise integrations
5. Usage and Billing
The Abundly platform integrates with a large number of different service providers for things like LLM inference, web search, audio transcription, document conversion, etc. Almost all agent actions incur a cost. This is tracked automatically, converted to credits, and charged to your team.
For examples of how much different actions cost, see the Credit usage FAQ.
The platform provides extensive features for tracking and managing usage and billing.
Tracking credit usage
A Usage Reports view provides an overview of credit consumption by agent and/or period, credit usage graphs, average credit use per day, the top 5 credit-consuming agents, and more.

Setting usage limits
You can set daily usage limits to control cost and avoid one runaway agent consuming all your credit. You can set a default limit for all agents, or specific limits for each agent.
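For illustration, such a limit configuration can be thought of along these lines. The field names and numbers are assumptions, not the platform's actual settings format:

```typescript
// Illustrative daily credit limits (field names and values are assumptions).
const usageLimits = {
  defaultDailyLimit: 500,        // credits per agent per day unless overridden
  perAgentDailyLimit: {
    "Invoice Triage": 1000,      // heavier workload, higher limit
    "Research Assistant": 200,
  },
};
```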

Billing
The platform provides two payment models:
- Self-service payment via Stripe subscriptions and credit top-ups when needed
- Enterprise billing for larger teams and organizations
6. Security and Governance
Multi-Layer Security Architecture
Agents are a powerful tool, and as with all powerful tools, security is important. This is provided through multiple layers. Visualized as an onion, the user (the outer layer) configures agents, which live in the Abundly platform, which in turn uses an LLM (the inner layer) as the external "brain" for the agents.

- LLM (Large Language Model): The model itself provides the base layer of security. The models we use (such as Claude, from Anthropic) go through rigorous training to minimize the risk of mishap or misuse.
- Abundly agent platform: The platform adds technical security measures on top of the LLM. For example we have a security agent (detailed below) that will veto actions that violate policies or seem suspicious. And the platform ensures that the LLM is only able to trigger the tools that the agent should have access to, that guardrails configured by the user are enforced, and that transparency is provided to the user. The platform manages credentials and secrets for the agent in a secure way, and does not expose that to the LLM.
- Agent: The agent behavior is driven by its instructions, and it can only use capabilities that are enabled by the user. An agent with clear instructions and only the tools needed to do the job is very unlikely to attempt anything dangerous. And if it does, the platform and/or LLM will prevent it from doing so.
- User: The user decides which capabilities the agent should have access to, and can configure guardrails and access controls that are enforced by the platform. The user is encouraged to attend training sessions to learn how to use the platform safely.
This is comparable to the Internet. Most companies allow their employees to use the Internet. Basic technical guardrails are provided in the form of firewalls and antivirus software and similar. But user behavior is a crucial part of this - they need to understand which kind of information they should and should not share via for example email. Having an expensive lock on your door is pointless if people leave the door open.
Risk Management Approach
Risk management is always a trade-off between risk and utility, and we train our users how to design agents with this in mind.

- Job scope: How broad is the agent's job scope? Should the agent have a very specific job, such as assigning an urgency rating to a support ticket? Or should it have a broader job, like managing the customer support inbox?
- Principle of Earned Trust. Start with a narrow job scope, and gradually increase it as the agent proves itself.
- Tool & data access: What tools and data does the agent have access to? What does it need to do its job?
- Principle of Least Privilege. Give the agent only the tools and data it needs to do its job.
If the agent has a very narrow job and a very limited set of tools, it is inherently safe and predictable. But also more limited.
If the agent has a broader job and/or a wider set of tools, it is more powerful and versatile. But more risk management is needed:
- Better models. By default we use the most capable models available.
- Better prompts. You need to spend more time on the agent instructions and context.
- More testing. You need to spend more time testing the agent before release.
- More guardrails. You need to spend more time configuring guardrails and access controls.
- More monitoring. You need to spend more time watching what the agent is doing.
- More human approval. You need to configure manual approval points, for example for important agent decisions.
Our experience is that it is possible to create very advanced agents with a broad scope and broad set of tools, while maintaining good enough security and predictability. It just takes more time and effort.
However, we recommend starting with a narrow job scope and a limited set of tools, and gradually increasing it as the agent proves itself. Similar to what you would do with a newly hired intern.
Security agent
The platform includes a security agent that works in the background, overseeing incoming events and plans. The security agent will veto a plan that it deems unsafe. It operates independently of the agent's context to prevent prompt injection.
This can be inspected in the activity log. For example here is the security agent's assessment of an agent's plan for dealing with an incoming email:

Capability guardrails
Sensitive capabilities (such as email, sms, and other external communication) can be configured with guardrails. These guardrails are enforced by code in the platform, and don't rely on LLM reasoning.

In the example above, we have configured this agent to only be able to email *@abundly.ai. It can try to email other domains, but then a human must manually approve it (the platform will show a notification for user approval). Alternatively, the agent can be configured to not allow any email outside of the whitelist at all.
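A hedged sketch of the kind of code-level check behind such a guardrail is shown below. The configuration shape and function are illustrative, not the platform's implementation:

```typescript
// Illustrative code-enforced email guardrail. The configuration shape is an
// assumption, not the platform's actual implementation.
type GuardrailDecision = "allow" | "require-approval" | "block";

interface EmailGuardrail {
  allowedDomains: string[];                      // e.g. ["abundly.ai"]
  outsideWhitelist: "require-approval" | "block"; // what to do for other domains
}

function checkOutgoingEmail(to: string, guardrail: EmailGuardrail): GuardrailDecision {
  const domain = to.split("@")[1]?.toLowerCase() ?? "";
  if (guardrail.allowedDomains.includes(domain)) {
    return "allow"; // whitelisted domain, no human approval needed
  }
  return guardrail.outsideWhitelist; // escalate to a human, or block outright
}

// checkOutgoingEmail("someone@abundly.ai", { allowedDomains: ["abundly.ai"], outsideWhitelist: "require-approval" })
// => "allow"
```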
Planned feature: The team admin can also set guardrails and agent capability restrictions at the team level. For example, you could create a whitelist of allowed agent capabilities across your team, or a team-level whitelist of allowed email domains for agent communication.
Secure storage of credentials and secrets
Planned feature (partially implemented): Users can store credentials and secrets at the team level or personal level in the platform, and provide them to agents via named reference. This allows the agent to interact with external systems that require authentication, without the risk of the credentials being exposed to the LLM or to non-authorized users. The secrets are encrypted at rest and in transit.
Example: if a user gives an agent access to Google Drive, the user must specify which files the agent should have access to. Other users with access to the agent will indirectly have access to those files, but cannot access the personal access token or any other Google Drive files.
Role-based Access Controls
The platform uses role-based permissions to control which users can access/modify which agents (sketched below).
- Team-level settings: invite users and give them admin or edit rights to the team as a whole.
- Agent-level settings: give specific users access to administrate, edit, or use specific agents.
- Agent-to-Agent settings: decide which agents are allowed to communicate with each other.
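For illustration, the permission model can be thought of roughly as follows. The type names and fields are assumptions, not the platform's actual data model:

```typescript
// Illustrative shape of the permission model (names and fields are assumptions).
interface TeamMember {
  userId: string;
  teamRole: "admin" | "editor" | "member"; // team-level rights
}

interface AgentAccess {
  agentId: string;
  admins: string[];            // can configure capabilities and guardrails
  editors: string[];           // can edit instructions and documents
  users: string[];             // can chat with and trigger the agent
  allowedAgentPeers: string[]; // other agents this agent may communicate with
}
```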
Infrastructure Security
- Data encryption: AES-256 at rest, TLS 1.2+ in transit
- Infrastructure: Google Cloud Platform (Europe region)
- Database: MongoDB Atlas with encryption
- Authentication: SSO support, secure API key management
Audit and Compliance
Complete audit trail:
- Every agent action logged with timestamp, reasoning, and results (immutable)
- Activity log provides real-time and historical view
- Agent diary maintains high level record of agent activities and reasoning
Current compliance status:
- ✅ GDPR-compliant operations (EU data residency)
- ✅ Data encryption standards met
- ✅ Comprehensive access controls
- 📅 Planned: SOC 2 Type II certification
- 📅 Planned: ISO 27001 under evaluation
Available documentation:
7. System Architecture and Infrastructure
Key components
Here is an overview of the technical architecture of the platform:

Web Portal (Vercel, Next.js/React/TypeScript)
- This is the "face" of the platform, providing a UI for creating and managing agents and configurations.
- Real-time chat via WebSocket
- Activity monitoring and analytics
- Document management (creating, editing, versioning, publishing)
- User access configuration
Agent Service (Google Cloud Platform, Node.js/TypeScript)
- Core execution engine. This is where the agents actually run.
- Integration with LLMs and external tools
- Event processing and scheduling
- State management
MongoDB Atlas
- Agent configurations and state
- User and access control data
- Execution history and logs
- Document storage
Google Cloud Scheduler
- Scheduled and recurring task management
- Triggers the agent service when a task is due
Google Cloud Tasks
- Handles parallelization of agent tasks when processing high volumes of concurrent operations
Google Cloud Storage
- Stores binary files uploaded by the user, for example a PDF or audio file.
Tool integrations
- The agent service interacts with a large number of different external services (for example Slack, Google Drive, SharePoint, Notion, etc), depending on which capabilities are enabled for the agent.
LLM integrations
- The agent service integrates with a curated set of LLMs, such as Claude (from Anthropic), GPT (from OpenAI), and Gemini (from Google).
- LLMs are used as the external "brain" for the agent, and advanced users can choose which model to use based on cost/speed/capability tradeoffs.
Events and trigger execution
Agents are triggered by events, for example a chat message from the UI, an incoming email, a scheduled task, a webhook, or a phone call. The execution flow (sketched in code after this list) is:
- An event arrives, for example via a webhook or a socket event.
- The platform determines which agent should handle the event
- The target agent is woken up and asked to evaluate the event and make a plan
- The security agent is given a chance to review the plan and veto it if it is deemed unsafe.
- The agent executes the plan, using LLM and tool calling.
- Results are logged and the agent is put back to sleep.
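The sketch below illustrates this flow in simplified TypeScript. All names and signatures are illustrative stand-ins, not the platform's internal API:

```typescript
// Simplified, hypothetical sketch of the event-handling flow described above.
interface IncomingEvent {
  type: "email" | "webhook" | "schedule" | "chat";
  payload: unknown;
}

interface Plan {
  steps: string[];
}

// Stubs standing in for real platform components (not the actual internals).
const agentRuntime = {
  async resolveTargetAgent(event: IncomingEvent) {
    return { name: "Invoice Triage" };
  },
  async makePlan(event: IncomingEvent): Promise<Plan> {
    return { steps: ["read email", "flag urgent invoice on Slack"] };
  },
  async reviewPlan(plan: Plan): Promise<{ approved: boolean; reason: string }> {
    return { approved: true, reason: "within configured capabilities" };
  },
  async execute(plan: Plan): Promise<string[]> {
    return plan.steps.map((step) => `${step}: done`);
  },
  async log(entry: unknown): Promise<void> {
    console.log(JSON.stringify(entry));
  },
};

async function handleEvent(event: IncomingEvent): Promise<void> {
  const agent = await agentRuntime.resolveTargetAgent(event); // which agent owns this event?
  const plan = await agentRuntime.makePlan(event);            // the agent interprets the event and plans
  const verdict = await agentRuntime.reviewPlan(plan);        // independent security-agent review
  if (!verdict.approved) {
    await agentRuntime.log({ agent, event, plan, outcome: "vetoed", reason: verdict.reason });
    return;
  }
  const results = await agentRuntime.execute(plan);           // tool calls, messages, etc.
  await agentRuntime.log({ agent, event, plan, results });    // audit trail; agent goes back to sleep
}
```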
Deployment and Infrastructure
Agent Service:
- Cloud Provider: Google Cloud Platform
- Region: GCP, europe-north2, Stockholm, Sweden
Web Portal:
- Cloud Provider: Vercel
- CDN: Vercel Edge Network (AWS CloudFront) for web portal
- Serverless Functions Region: Vercel Serverless Functions, AWS. eu-north-1, Stockholm, Sweden
Database:
- Cloud Provider: MongoDB Atlas
- Region: AWS, eu-north-1, Stockholm, Sweden
- Encryption at rest: AES-256
Network Security:
- All communication encrypted in transit: TLS 1.2+ / HTTPS
Data Management:
- Retention: Customer data retained as follows (per Privacy Policy):
- Account Information: Retained while account is active; deleted/anonymized within 30 days of account deletion
- User Content: Retained as needed to provide Services; deleted within 30 days after content deletion, or account deletion
- Log Data: Retained for up to 90 days for security, troubleshooting, and analytics purposes
- Usage Data: Anonymized/aggregated data may be retained indefinitely for analytics
- Backups: Automated backups with point-in-time recovery; data may remain in backup systems up to 9 months beyond standard retention
- Data Residency: All customer data stored in EU data centers (Stockholm region)
Secrets Management:
- Credential Storage: User-provided secrets (API keys, credentials) are encrypted client-side using RSA-OAEP with SHA-256 before transmission (see the sketch after this list)
- Encryption: Encrypted secrets stored in database; private decryption key stored as GCP Secret Manager secret (not in database)
- Access Control: Encrypted storage with role-based access; secrets only decrypted when needed by authorized agents
- Key Rotation:
- Platform encryption keys rotated annually
- Customers can rotate their API keys and credentials according to their own security policies
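As a hedged, Node-flavored sketch of the client-side encryption step described above (key handling and payload format are simplified assumptions, not the platform's exact scheme):

```typescript
// Illustrative sketch of client-side secret encryption with RSA-OAEP and SHA-256,
// using Node's built-in crypto. Key handling and payload format are simplified
// assumptions, not the platform's exact scheme.
import { publicEncrypt, constants } from "node:crypto";

function encryptSecretForUpload(secret: string, platformPublicKeyPem: string): string {
  const ciphertext = publicEncrypt(
    {
      key: platformPublicKeyPem,
      padding: constants.RSA_PKCS1_OAEP_PADDING,
      oaepHash: "sha256",
    },
    Buffer.from(secret, "utf8")
  );
  // Only the ciphertext leaves the client; the private key stays in GCP Secret Manager.
  return ciphertext.toString("base64");
}
```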
Scalability:
- Event-driven architecture scales horizontally
- Google Cloud Tasks handles queueing and load management
- Database read replicas for query performance
- WebSocket connection management for real-time updates
Availability and Reliability:
- Cloud-native resilience: Vercel and Google Cloud Run automatically handle most failure scenarios through geographic distribution and automatic failover
- Automated backups: MongoDB Atlas provides automated daily backups with point-in-time recovery
- Backup retention: 9-month retention with cross-region replication for disaster recovery
- Health monitoring and alerting: Continuous monitoring of critical services
- 📅 Enterprise SLA documentation in development (Q4 2025)
Disaster Recovery:
We maintain a documented Disaster Recovery Plan focused on protecting and rapidly restoring critical components:
- Critical Components Protected:
- Database: MongoDB Atlas automated daily backups with 9-month retention and cross-region replication
- Source Code: Version-controlled in Git with full history, enabling complete environment reconstruction
- Secrets and Keys: Securely stored and versioned separately from source code with documented recovery procedures
- DRP Process:
- Annual review of disaster recovery plan with infrastructure change updates
- Annual testing of critical component restoration (database restore, deployment from source, secrets recovery)
- Documented restoration procedures for each critical component
- Clear ownership and contact information for technical team
- Recovery Objectives:
- RTO (Recovery Time Objective): 24 hours for critical services
- Cloud-native architecture provides inherent resilience for most failure scenarios
Security Monitoring:
We leverage security monitoring built into our cloud-native infrastructure. Our approach combines automated threat detection from SOC 2 Type II and ISO 27001 certified providers with active monitoring by our technical team:
- Infrastructure-level monitoring: Google Cloud Run, MongoDB Atlas, and Vercel provide automated threat detection and anomaly identification
- Alert configuration: Security and error alerts configured in Google Cloud and MongoDB Atlas
- Notification system: Email notifications to technical team for critical security events
- Active monitoring: Daily review of platform dashboards for anomalies and issues
- Incident response: Critical alerts addressed within 1 hour during business hours (Monday-Friday, 9:00-18:00 CET)
- Automated threat detection: Cloud providers monitor for unauthorized access attempts and usage pattern anomalies
- After-hours response: Critical infrastructure issues outside business hours addressed based on severity and business impact
Penetration Testing:
- 📅 Planned: Annual third-party penetration testing scheduled to begin in 2025
- Penetration testing will be conducted after major infrastructure changes or releases
- Results will be used to continuously improve security posture and address vulnerabilities