Self-Hosted AI Agent OS

AI that does the work.
Not just the talking.

Give one goal. Laudagi plans, executes, and returns real outputs — across tools, files, and 41 channels. On your infrastructure, under your control.

Self-hosted 41 channels 20+ agents Kill-switch always active
41 Channels built-in
20+ Specialized agents
111 Skills & tools
100% Self-hosted
What you can do

Real work across real systems

Give one goal. Get a finished result — artifacts, logs, and all — across tools, files, channels, and APIs.

01

Run autonomous missions

One goal decomposes into a DAG. Agents route steps, execute tools, and ask for approval only when you configure it.

Goal → DAG → result Multi-agent routing
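The goal-to-DAG decomposition above can be sketched as a dependency graph walked in topological order; the step names and the mapping shape here are illustrative, not Laudagi's actual API:

```python
from graphlib import TopologicalSorter

# Hypothetical mission decomposed into steps; each step lists the
# steps it depends on (names are illustrative, not Laudagi's schema).
mission = {
    "research_competitors": [],
    "extract_positioning": ["research_competitors"],
    "draft_brief": ["extract_positioning"],
    "save_artifacts": ["draft_brief"],
}

def execution_order(dag):
    """Return steps in dependency order; steps with no mutual
    dependencies could be routed to agents in parallel."""
    return list(TopologicalSorter(dag).static_order())

print(execution_order(mission))
# → ['research_competitors', 'extract_positioning', 'draft_brief', 'save_artifacts']
```

Because the sketch is a linear chain, the order is fully determined; a real mission DAG would fan out where steps are independent.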
02

Operate across 41 channels

WhatsApp, Telegram, Discord, Slack, Signal, email, and 35 more. One normalized layer — same logic everywhere.

41 channels Protocol-normalized
03

Get deliverables, not drafts

Runs return artifacts, structured files, and execution logs — saved to your workspace, inspectable and replayable.

Workspace artifacts Full audit trail
04

Control what autonomy means

Dial from full-approval to max-autonomous per agent. Kill-switch halts everything. Cost caps per run and per day.

Autonomy levels Kill-switch active
How it works

Mission to result — one system

01

Define mission

Describe the outcome. The system plans a DAG and assigns agent steps.

02

Agents execute

Tools run: bash, browser, APIs, web search. Parallel where possible.

03

Channels receive

Progress streams to WhatsApp, Slack, Discord, or wherever you monitor.

04

Memory persists

Governed vector memory augments future runs. BM25 + semantic hybrid.
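A BM25 + semantic hybrid typically fuses a lexical score with an embedding similarity. A minimal sketch of one common fusion (weighted linear blend over pre-normalized scores; the weight and tuple shape are assumptions, not Laudagi's retrieval code):

```python
def hybrid_score(bm25, semantic, alpha=0.5):
    """Blend a lexical (BM25) score with a semantic similarity score.
    Both inputs are assumed normalized to [0, 1]; alpha weights
    lexical relevance against semantic relevance."""
    return alpha * bm25 + (1 - alpha) * semantic

def rank(candidates, alpha=0.5):
    """candidates: (doc_id, bm25, semantic) tuples → ids, best first."""
    return [doc for doc, b, s in
            sorted(candidates, key=lambda c: -hybrid_score(c[1], c[2], alpha))]

# Hypothetical memory hits: keyword-heavy vs. semantically close matches.
hits = [("run-12", 0.9, 0.2), ("run-07", 0.4, 0.8), ("run-03", 0.1, 0.3)]
print(rank(hits))
# → ['run-07', 'run-12', 'run-03']
```

The blend lets a semantically close memory outrank an exact-keyword match, which is the point of running both retrievers.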

05

You stay in control

Approval gates, kill-switch, cost caps. Steer or halt at any point.

What makes it different

Not a chatbot.
An operating system.

Laudagi runs missions, uses real tools, coordinates agent teams, remembers across sessions, and returns outputs you keep.

Typical AI tools
Answer the question
Stop after one response
Leave the next steps to you
Lose continuity across tasks
One model, one interface
No visibility into what happened
Laudagi
Execute the mission end-to-end
Work across multi-step DAGs with memory
Return artifacts, files, and audit logs
Governed memory persists across runs
20+ agents, 41 channels, coordinated teams
Approval gates, cost caps, full audit log

The gap between prompts and real results is the OS.

System Architecture

One OS. Many layers.

A layered runtime — orchestration sits above execution, which runs inside persistent sessions, backed by governed memory and a full approval surface.

Orchestration
Missions

Goal decomposed into a DAG. Supervisor monitors, recovers stuck steps, streams progress.

Workflows

Pre-defined chains with visual canvas editor. Versioned, trigger-driven, reusable.

Loops

Recurring autonomous cycles. Staggered scheduling, budget allocation, self-seeding from KPIs.

Teams

20+ specialized agents with scoped roles, isolated knowledge, and coordinated routing.

↓ execution unit ↓
Execution
Session

Persistent thread preserving context and governed memory across multiple runs.

Run

The auditable unit. Plans, executes, checkpoints, and resolves one objective.

Tool loop

Bash, browser, file I/O, APIs, web search, 111 skills — invoked as the run needs them.

Result

Artifacts, logs, summaries, and deliverables returned to the operator and workspace.

Governed vector memory (BM25 + semantic) Approval gates at any layer Cost caps · kill-switch · audit log WebSocket JSON-RPC gateway
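Traffic through a WebSocket JSON-RPC gateway is framed as JSON-RPC 2.0 messages. A sketch of what a request envelope looks like; the method name and params are hypothetical, not documented Laudagi endpoints:

```python
import json

def jsonrpc_request(method, params, req_id):
    """Frame a JSON-RPC 2.0 request as it would travel over the
    gateway's WebSocket. `method` and `params` here are illustrative."""
    return json.dumps({
        "jsonrpc": "2.0",   # protocol version, fixed by the spec
        "id": req_id,        # correlates the eventual response
        "method": method,
        "params": params,
    })

msg = jsonrpc_request("mission.start", {"goal": "Research three competitors"}, 1)
print(msg)
```

The `id` field is what lets a client match asynchronous responses streaming back over the same socket to the request that caused them.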
Proof of Execution

One goal. Observable execution.
Real result.

Every step logged. Every output inspectable. Not a chat bubble — a finished deliverable.

The Goal
Research three competitors, extract positioning, generate a landing-page brief, and save outputs to workspace.
One request. Multi-step execution.
run trace · live
RUN planning → DAG
done
TOOL web_search ×3
done
TOOL file_read / compare
done
TOOL approval_gate
reviewed ✓
TOOL file_write · artifacts
done
RESULT brief + log + folder completed ✓
Result Set
Competitor positioning matrix
Landing page outline + copy angles
Execution log with every step
Workspace artifact folder

One request in. Observable execution out.

Reliability

Operator control at every layer

Autonomy levels from full-approval to max-autonomous. Kill-switch always active. Every action audit-logged.

Recovers automatically

Transient errors trigger exponential backoff and requeue. The run keeps moving without manual intervention.

error → backoff → requeue → continue
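The error → backoff → requeue → continue loop above can be sketched as a retry wrapper with exponentially growing delays; the class and function names are illustrative, not Laudagi's internals:

```python
import time

class TransientError(Exception):
    """Illustrative marker for retryable failures (timeouts, rate limits)."""

def run_with_backoff(step, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry a failing step with exponential backoff: 1s, 2s, 4s, ...
    Re-raises once attempts are exhausted; the policy is a sketch."""
    for attempt in range(max_attempts):
        try:
            return step()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # permanent failure: surface it for supervision
            sleep(base_delay * 2 ** attempt)
```

Injecting `sleep` keeps the wrapper testable; a real runtime would also cap the maximum delay and add jitter to avoid synchronized retries.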

Approval gates you configure

Global or per-agent policies. Approve from the dashboard with full action context. Allowlist for future auto-approval.

autonomy: off | low | medium | high | max
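One way to read the `off | low | medium | high | max` dial: each level auto-approves a wider set of action risk tiers, and anything outside that set hits an approval gate. The tier names and mapping below are assumptions for illustration, not Laudagi's policy schema:

```python
# Hypothetical mapping: which action risk tiers auto-approve per level.
AUTO_APPROVE = {
    "off":    set(),
    "low":    {"read"},
    "medium": {"read", "network"},
    "high":   {"read", "network", "write"},
    "max":    {"read", "network", "write", "shell"},
}

def needs_approval(level, action_risk):
    """True when the operator must approve the action before it runs."""
    return action_risk not in AUTO_APPROVE[level]

print(needs_approval("medium", "shell"))  # True: gated for review
print(needs_approval("max", "shell"))     # False: auto-approved
```

A per-agent policy then just means each agent carries its own level, so a research agent can run at `high` while a deploy agent stays at `low`.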

Kill-switch and cost caps

Kill-switch halts all agents instantly. Per-run and per-day caps enforced at the gateway. DLP scanning on every action.

kill-switch: active · cost-cap: enforced · audit: every action
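Per-run and per-day caps enforced at the gateway amount to a budget check before every chargeable action. A minimal sketch, with illustrative field names and limits:

```python
class BudgetExceeded(Exception):
    """Raised when a charge would push spend past either cap."""

class CostCap:
    """Sketch of gateway-side per-run and per-day spend caps,
    checked before each action is allowed to execute."""
    def __init__(self, per_run, per_day):
        self.per_run, self.per_day = per_run, per_day
        self.run_spent = 0.0
        self.day_spent = 0.0

    def charge(self, cost):
        if (self.run_spent + cost > self.per_run
                or self.day_spent + cost > self.per_day):
            raise BudgetExceeded("cap hit; halting run")
        self.run_spent += cost
        self.day_spent += cost

cap = CostCap(per_run=1.00, per_day=5.00)
cap.charge(0.40)
cap.charge(0.50)
# cap.charge(0.20) would raise BudgetExceeded (run cap is 1.00)
```

Rejecting before spending, rather than reconciling after, is what makes a cap a hard stop instead of an alert.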
Your system. Your control.

Fully self-hosted.
Zero lock-in.

Your data, your keys, your approval policies, your infrastructure. No cloud dependency required.

bash — deploy locally
# Clone and configure
$ git clone https://github.com/laudagi/laudagi.git
$ cd laudagi && cp .env.example .env
# Start the runtime
$ docker compose up -d
Your keys API keys stay in your .env. Never leave your server.
Your data Agent memory, runs, artifacts, and logs live on your infrastructure.
Your approvals Governance policies run locally — not in a cloud control plane.
No lock-in Open deployment, portable data, no mandatory SaaS tier.
Web UI CLI Chrome extension iOS Docker Compose
Ready to operate

The AI OS your
operation runs on.

Self-hosted, governed, multi-agent. Real outputs — not partial answers trapped in a chat bubble.