clawzero
Ultra-fast, stable AI agent CLI built in Rust. Inspired by OpenClaw.
Features
- Inline TUI — Claude Code-style inline terminal UI that grows in-place (no full-screen takeover)
- Streaming-first — Real-time responses via SSE streaming
- Multi-provider — Switch between Anthropic / OpenAI / OpenRouter / Ollama and more via config alone
- Extensible provider design — Two protocol implementations cover all major providers; adding a new provider requires zero code changes
- Agent loop — Autonomous task execution via Think → ToolCall → Observe cycle
- Built-in tools — bash execution, file read/write/edit, memory read/write (6 tools)
- Session persistence — JSONL-based conversation history with resume support
- Context window management — Automatic token estimation and message compaction when nearing context limits
- Memory system — Persistent MEMORY.md files (global + project-local) injected into system prompt
- Plugin tools — Define custom bash/HTTP tools via TOML config
- Cloud auth — Vertex AI (OAuth2 via gcloud) and AWS Bedrock (SigV4) authentication
- Gateway — Run as a Slack / Discord bot or Web UI with clawzero gateway
- Web UI — Browser-based chat interface via clawzero gateway webui
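The context-window management listed above combines a chars/4 token estimate with drop-oldest compaction. A minimal Rust sketch of that behavior, with illustrative names and thresholds:

```rust
// Sketch only: tokens are estimated with the documented chars/4 heuristic,
// and the oldest messages are dropped once the estimate exceeds the limit.
fn estimate_tokens(text: &str) -> usize {
    text.chars().count() / 4
}

fn compact(messages: &mut Vec<String>, context_limit: usize) {
    // DropOldest strategy: remove from the front until the estimate fits.
    while messages.len() > 1
        && messages.iter().map(|m| estimate_tokens(m)).sum::<usize>() > context_limit
    {
        messages.remove(0);
    }
}

fn main() {
    let mut history = vec!["a".repeat(400), "b".repeat(400), "c".repeat(400)];
    compact(&mut history, 150); // ~300 estimated tokens before compaction
    assert_eq!(history.len(), 1);
}
```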
Quick Start
cargo install --path .
export ANTHROPIC_API_KEY="sk-ant-..."
clawzero "Hello, world!"
See Installation and Quick Start for details.
Installation
Install script (recommended)
Download and install the latest pre-built binary:
curl -fsSL https://raw.githubusercontent.com/betta-lab/clawzero/main/install.sh | sh
By default, the binary is installed to ~/.local/bin. You can change this with the INSTALL_DIR environment variable:
INSTALL_DIR=/usr/local/bin curl -fsSL https://raw.githubusercontent.com/betta-lab/clawzero/main/install.sh | sh
Supported targets:
- x86_64-unknown-linux-gnu
- aarch64-unknown-linux-gnu
- x86_64-apple-darwin
- aarch64-apple-darwin
Manual download
Download from GitHub Releases:
# Example: Linux x86_64
curl -LO https://github.com/betta-lab/clawzero/releases/latest/download/clawzero-v0.1.0-x86_64-unknown-linux-gnu.tar.gz
tar xzf clawzero-*.tar.gz
sudo mv clawzero-*/clawzero /usr/local/bin/
Build from source
cargo install --path .
Optional feature flags
# Enable Slack gateway
cargo install --path . --features slack
# Enable Discord gateway
cargo install --path . --features discord
# Enable AWS Bedrock authentication
cargo install --path . --features bedrock
# Enable all features
cargo install --path . --features "slack,discord,bedrock"
Prerequisites
- Rust 1.85+ (edition 2024)
- mise (recommended) — mise install sets up the toolchain
Quick Start
1. Set up your API key
The easiest way to get started is with clawzero init:
clawzero init
This interactively prompts for your API keys and generates ~/.config/clawzero/config.toml.
Alternatively, set your API key via environment variable:
export ANTHROPIC_API_KEY="sk-ant-..."
See Environment Variables for other providers.
2. Run your first prompt
# One-shot mode — sends a prompt, shows the result, and exits
clawzero "Write a fibonacci function in Rust"
3. Start an interactive session
# Interactive REPL with TUI (default when stdin is a TTY)
clawzero chat
4. Use a different model
# Use OpenAI GPT-4o
export OPENAI_API_KEY="sk-..."
clawzero --model openai/gpt-4o "Hello"
# Use a local Ollama model
clawzero --model ollama/llama3 "Hello"
5. Explore further
- CLI reference — All commands and flags
- Configuration — Config files, providers, and defaults
- Tools — Built-in tools the agent can use
CLI
Commands
Initialize configuration
Set up clawzero interactively — select providers, configure API keys, choose a default model, and optionally set up gateways. Generates ~/.config/clawzero/config.toml:
clawzero init
The interactive wizard guides you through:
- Provider selection — Choose from Anthropic, OpenAI, OpenRouter, Ollama (local), Vertex AI (Google Cloud), and Bedrock (AWS)
- Per-provider configuration — API keys (masked input), base URLs, regions, and project IDs as needed
- Default model selection — Pick from models available in your selected providers
- Gateway configuration (optional) — Set up Slack, Discord, or WebUI bot gateways
If an API key is left empty, the config will reference the corresponding environment variable (e.g. ANTHROPIC_API_KEY) instead.
One-shot mode
Send a prompt, get a response, and exit:
clawzero "Write a fibonacci function in Rust"
Interactive chat
Start an interactive REPL session:
clawzero chat
Model selection
Override the default model with --model:
clawzero --model openai/gpt-4o "Hello"
clawzero --model ollama/llama3 chat
The model format is provider/model-name. You can also set the default via the CLAWZERO_MODEL environment variable or in your config file.
Show config
Display the current configuration:
clawzero config
Session management
# List all sessions
clawzero sessions list
# Resume a session (subcommand)
clawzero sessions resume <session-id>
# Resume a session (flag — works with any command)
clawzero --resume <session-id> "Continue from where we left off"
See Session Management for details.
Gateway
Start platform bots:
# Start all configured gateways
clawzero gateway
# Start a specific platform
clawzero gateway slack
clawzero gateway discord
clawzero gateway webui
See Gateway Overview for details.
Global flags
| Flag | Description |
|---|---|
| --model <provider/model> | Override default model |
| --resume <session-id> | Resume an existing session |
| --no-tui | Disable TUI, use plain text mode |
| --version | Show version |
| --help | Show help |
TUI
clawzero features a Claude Code-style inline terminal UI. The TUI grows in-place without taking over the full screen — confirmed output scrolls into terminal history while the live viewport shows only active content.
Keybindings
| Key | Action |
|---|---|
| Enter | Send message |
| Ctrl+J | Insert newline |
| Ctrl+A / Home | Move cursor to beginning of line |
| Ctrl+E / End | Move cursor to end of line |
| Ctrl+K | Delete from cursor to end of line |
| Ctrl+W | Delete word before cursor |
| Ctrl+C | Quit |
| /exit, /quit | Quit |
Auto-detection
The TUI is enabled by default when stdin is a TTY. Piped input automatically uses plain text mode:
# TUI mode (default for interactive use)
clawzero chat
# Plain text mode (piped input)
echo "hello" | clawzero
Disabling the TUI
Use --no-tui to fall back to plain text mode:
clawzero --no-tui chat
Features
- Streaming text display — Responses stream in real-time with Markdown rendering
- Tool call cards — Visual cards showing tool name, parameters, and status
- Spinner animation — Animated indicator during thinking and tool execution
- Multi-line input — Use Ctrl+J to insert newlines in your prompt
- Scrollback — Past output scrolls into terminal history and can be viewed with your terminal’s scrollback
Session Management
clawzero automatically saves each conversation as a session. Sessions are stored as append-only JSONL files with a crash-safe flush after every write.
Listing sessions
clawzero sessions list
Resuming a session
Resume a previous session to continue the conversation:
# Using the subcommand
clawzero sessions resume <session-id>
# Using the --resume flag (works with any command)
clawzero --resume <session-id> "Continue from where we left off"
clawzero --resume <session-id> chat
Storage
Sessions are stored in ~/.local/share/clawzero/sessions/ as .jsonl files. Each file contains the full conversation history including messages, tool calls, and tool results.
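A minimal sketch of this append-only storage pattern, assuming a simplified entry shape (not clawzero's actual schema):

```rust
use std::io::Write;

// Sketch: one JSON object per line, flushed after every write for crash
// safety. The entry fields here are illustrative.
fn append_entry(path: &str, role: &str, text: &str) -> std::io::Result<()> {
    let mut file = std::fs::OpenOptions::new()
        .create(true)
        .append(true)
        .open(path)?;
    // Minimal manual escaping for the illustration.
    let escaped = text.replace('\\', "\\\\").replace('"', "\\\"");
    writeln!(file, "{{\"role\":\"{}\",\"text\":\"{}\"}}", role, escaped)?;
    file.flush() // crash-safe flush after each entry
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("clawzero-demo.jsonl");
    let path = path.to_str().unwrap().to_string();
    let _ = std::fs::remove_file(&path);
    append_entry(&path, "user", "hello")?;
    append_entry(&path, "assistant", "hi there")?;
    assert_eq!(std::fs::read_to_string(&path)?.lines().count(), 2);
    std::fs::remove_file(&path)
}
```

Because each entry is a complete line, resuming a session is a matter of replaying the file line by line.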
Configuration Overview
clawzero uses TOML configuration files with a layered loading strategy.
Config file locations
| File | Scope | Priority |
|---|---|---|
| ~/.config/clawzero/config.toml | Global (user-wide) | Lower |
| clawzero.toml | Project-local (current directory) | Higher |
Project-local settings override global settings.
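The override behavior can be sketched as a field-wise merge in which a project-local value wins whenever it is present (struct and field names illustrative):

```rust
// Sketch of layered config loading: each optional field falls back from
// the project-local layer to the global layer.
#[derive(Clone, Default)]
struct PartialDefaults {
    model: Option<String>,
    max_turns: Option<u32>,
}

fn merge(global: PartialDefaults, project: PartialDefaults) -> PartialDefaults {
    PartialDefaults {
        model: project.model.or(global.model),
        max_turns: project.max_turns.or(global.max_turns),
    }
}

fn main() {
    let global = PartialDefaults {
        model: Some("anthropic/claude-sonnet-4-20250514".into()),
        max_turns: Some(25),
    };
    let project = PartialDefaults {
        model: Some("ollama/llama3".into()),
        max_turns: None, // falls back to the global value
    };
    let merged = merge(global, project);
    assert_eq!(merged.model.as_deref(), Some("ollama/llama3"));
    assert_eq!(merged.max_turns, Some(25));
}
```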
Defaults
[defaults]
model = "anthropic/claude-sonnet-4-20250514"
max_tokens = 8192
max_turns = 25
context_limit = 200000
| Field | Default | Description |
|---|---|---|
| model | anthropic/claude-sonnet-4-20250514 | Default model in provider/model format |
| max_tokens | 8192 | Maximum tokens per response |
| max_turns | 25 | Maximum agent loop turns |
| context_limit | 200000 | Token limit for context window management |
The model can also be overridden via the CLAWZERO_MODEL environment variable or the --model CLI flag.
Full config example
[defaults]
model = "anthropic/claude-sonnet-4-20250514"
max_tokens = 8192
max_turns = 25
context_limit = 200000
[providers.anthropic]
protocol = "anthropic"
base_url = "https://api.anthropic.com"
api_key_env = "ANTHROPIC_API_KEY"
[providers.openai]
protocol = "openai"
base_url = "https://api.openai.com"
api_key_env = "OPENAI_API_KEY"
[gateway.slack]
app_token_env = "SLACK_APP_TOKEN"
bot_token_env = "SLACK_BOT_TOKEN"
[gateway.discord]
bot_token_env = "DISCORD_BOT_TOKEN"
[gateway.webui]
host = "127.0.0.1"
port = 3000
See Providers for all provider configurations and Environment Variables for available env vars.
Providers
clawzero supports multiple LLM providers through two protocol implementations. Adding a new provider requires only a config entry — no code changes.
Anthropic
API Key
[providers.anthropic]
protocol = "anthropic"
base_url = "https://api.anthropic.com"
api_key_env = "ANTHROPIC_API_KEY"
Claude Code setup-token
If you have a Claude Code setup-token (sk-ant-oat01-...), you can use it directly.
[providers.anthropic]
protocol = "anthropic"
base_url = "https://api.anthropic.com"
api_key = "sk-ant-oat01-..."
Run clawzero init and select “Claude Code setup-token” when prompted for the Anthropic authentication method.
OpenAI
[providers.openai]
protocol = "openai"
base_url = "https://api.openai.com"
api_key_env = "OPENAI_API_KEY"
OpenRouter
[providers.openrouter]
protocol = "openai"
base_url = "https://openrouter.ai/api"
api_key_env = "OPENROUTER_API_KEY"
Available models via clawzero init:
- anthropic/claude-opus-4.6, anthropic/claude-sonnet-4.5, anthropic/claude-haiku-4.5
- google/gemini-2.5-pro, google/gemini-2.5-flash
- deepseek/deepseek-r1, deepseek/deepseek-v3.2
- meta-llama/llama-3.3-70b-instruct
- minimax/minimax-m2.5
- moonshotai/kimi-k2.5
- z-ai/glm-5
Ollama (local)
[providers.ollama]
protocol = "openai"
base_url = "http://localhost:11434"
api_key = ""
No API key required for local Ollama.
Vertex AI
Uses gcloud CLI for OAuth2 token authentication:
[providers.vertex-claude]
protocol = "anthropic"
base_url = "https://us-central1-aiplatform.googleapis.com"
auth = "vertex"
project_id = "my-gcp-project"
region = "us-central1"
Requires gcloud CLI to be installed and authenticated (gcloud auth print-access-token). Set GCLOUD_PROJECT env var or configure project_id in config.
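A sketch of this flow, shelling out to the gcloud CLI for a token and turning it into an Authorization header value (helper names are illustrative; error handling is minimal):

```rust
use std::process::Command;

// Sketch of the documented auth flow: run `gcloud auth print-access-token`
// and use the result as a Bearer token.
fn vertex_access_token() -> Result<String, String> {
    let output = Command::new("gcloud")
        .args(["auth", "print-access-token"])
        .output()
        .map_err(|e| format!("failed to run gcloud: {e}"))?;
    if !output.status.success() {
        return Err("gcloud auth print-access-token failed".into());
    }
    Ok(String::from_utf8_lossy(&output.stdout).trim().to_string())
}

fn bearer_header(token: &str) -> String {
    format!("Bearer {}", token.trim())
}

fn main() {
    match vertex_access_token() {
        Ok(token) => println!("header: {}", bearer_header(&token)),
        Err(e) => eprintln!("{e} (is gcloud installed and authenticated?)"),
    }
}
```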
AWS Bedrock
Requires the bedrock feature flag:
cargo install --path . --features bedrock
[providers.bedrock-claude]
protocol = "anthropic"
base_url = "https://bedrock-runtime.us-east-1.amazonaws.com"
auth = "bedrock"
region = "us-east-1"
Requires AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) and optionally AWS_REGION.
Provider config fields
| Field | Type | Description |
|---|---|---|
| protocol | "anthropic" or "openai" | API protocol to use |
| base_url | string | API base URL |
| api_key | string | API key (direct value) |
| api_key_env | string | Env var name for API key |
| auth | "vertex" or "bedrock" | Cloud authentication method |
| project_id | string | GCP project ID (Vertex AI) |
| region | string | Cloud region (Vertex AI / Bedrock) |
| extra_headers | table | Additional HTTP headers |
| models | array | Restrict available models |
Model format
Models are specified in provider/model format:
clawzero --model anthropic/claude-opus-4-6 "Hello"
clawzero --model openai/gpt-4o "Hello"
clawzero --model ollama/llama3 "Hello"
clawzero --model openrouter/meta-llama/llama-3-70b "Hello"
Environment Variables
| Variable | Description |
|---|---|
| ANTHROPIC_API_KEY | Anthropic API key |
| OPENAI_API_KEY | OpenAI API key |
| CLAWZERO_MODEL | Override default model (e.g. openai/gpt-4o) |
| GCLOUD_PROJECT | GCP project ID (for Vertex AI, if not in config) |
| AWS_ACCESS_KEY_ID | AWS credentials (for Bedrock) |
| AWS_SECRET_ACCESS_KEY | AWS credentials (for Bedrock) |
| AWS_REGION | AWS region (for Bedrock, default: us-east-1) |
| SLACK_APP_TOKEN | Slack Socket Mode app token (xapp-...) |
| SLACK_BOT_TOKEN | Slack bot token (xoxb-...) |
| DISCORD_BOT_TOKEN | Discord bot token |
Environment variables can be referenced in config files via api_key_env, app_token_env, and bot_token_env fields. See Providers and Gateway for details.
Built-in Tools
clawzero provides 6 built-in tools that the agent can use autonomously during the Think → ToolCall → Observe cycle.
bash
Execute a bash command and return stdout/stderr.
| Parameter | Type | Required | Description |
|---|---|---|---|
| command | string | Yes | The bash command to execute |
| timeout_ms | integer | No | Timeout in milliseconds (default: 120000) |
Example use: Running build commands, git operations, installing packages.
file_read
Read the contents of a file. Returns the file contents with line numbers.
| Parameter | Type | Required | Description |
|---|---|---|---|
| path | string | Yes | The file path to read |
| offset | integer | No | Line number to start reading from (1-based, default: 1) |
| limit | integer | No | Maximum number of lines to read (default: 2000) |
Example use: Reading source code, configuration files, logs.
file_write
Write content to a file. Creates the file if it doesn’t exist, or overwrites it.
| Parameter | Type | Required | Description |
|---|---|---|---|
| path | string | Yes | The file path to write to |
| content | string | Yes | The content to write |
Example use: Creating new files, writing generated code.
file_edit
Edit a file by replacing a specific text string with new text. The old_text must be unique in the file.
| Parameter | Type | Required | Description |
|---|---|---|---|
| path | string | Yes | The file path to edit |
| old_text | string | Yes | The exact text to find and replace (must be unique in the file) |
| new_text | string | Yes | The text to replace it with |
Example use: Modifying existing code, fixing bugs, refactoring.
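The uniqueness rule above can be sketched as follows (function name illustrative):

```rust
// Sketch: apply the edit only when old_text occurs exactly once;
// otherwise reject it as missing or ambiguous.
fn apply_edit(content: &str, old_text: &str, new_text: &str) -> Result<String, String> {
    match content.matches(old_text).count() {
        0 => Err("old_text not found".into()),
        1 => Ok(content.replacen(old_text, new_text, 1)),
        n => Err(format!("old_text is ambiguous ({n} matches)")),
    }
}

fn main() {
    let src = "let x = 1;\nlet y = 1;\n";
    // "let x = 1;" is unique, so the edit succeeds.
    assert!(apply_edit(src, "let x = 1;", "let x = 2;").is_ok());
    // "= 1;" appears twice, so the edit is rejected.
    assert!(apply_edit(src, "= 1;", "= 2;").is_err());
}
```

Requiring a unique match keeps edits unambiguous: to disambiguate, include more surrounding context in old_text.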
memory_read
Read persistent memory (MEMORY.md). Returns both global and project-local memory content.
No parameters.
Example use: Recalling project conventions, previous decisions, stored context.
memory_write
Write to persistent memory (MEMORY.md). Use this to store information that should persist across sessions.
| Parameter | Type | Required | Description |
|---|---|---|---|
| scope | string | Yes | Where to write: global (user-wide) or project (project-local) |
| content | string | Yes | The markdown content to write to MEMORY.md (replaces the entire file) |
Example use: Saving project conventions, recording decisions, noting important context.
Plugin Tools
Define custom tools in your config file (~/.config/clawzero/config.toml or clawzero.toml). Plugin tools extend the agent’s capabilities without code changes.
HTTP plugin
Make HTTP requests with template substitution:
[[tools]]
name = "weather"
description = "Get current weather for a city"
type = "http"
url = "https://api.weather.example/v1/current?city={{city}}"
method = "GET"
[tools.input_schema.properties.city]
type = "string"
description = "City name"
[tools.input_schema]
required = ["city"]
Bash plugin
Execute bash commands with template substitution:
[[tools]]
name = "deploy"
description = "Deploy to staging"
type = "bash"
command = "cd {{project_dir}} && make deploy-staging"
[tools.input_schema.properties.project_dir]
type = "string"
description = "Project directory path"
Template substitution
Parameters defined in input_schema are substituted into url (HTTP) or command (bash) using {{parameter_name}} syntax.
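A minimal sketch of this substitution, assuming plain string replacement without escaping (the actual implementation may differ):

```rust
// Sketch: replace each {{name}} placeholder with the parameter's value.
fn substitute(template: &str, params: &[(&str, &str)]) -> String {
    let mut result = template.to_string();
    for (name, value) in params {
        // format! escapes {{ and }} to literal braces, yielding "{{name}}".
        result = result.replace(&format!("{{{{{name}}}}}"), value);
    }
    result
}

fn main() {
    let url = substitute(
        "https://api.weather.example/v1/current?city={{city}}",
        &[("city", "Tokyo")],
    );
    assert_eq!(url, "https://api.weather.example/v1/current?city=Tokyo");
}
```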
Input schema
The input_schema follows JSON Schema format:
[tools.input_schema.properties.param_name]
type = "string" # "string", "integer", "number", "boolean"
description = "..."
[tools.input_schema]
required = ["param_name"]
Plugin types
| Type | Fields | Description |
|---|---|---|
| http | url, method | Make an HTTP request, return the response body |
| bash | command | Execute a bash command, return stdout/stderr |
Memory System
clawzero provides a persistent memory system using MEMORY.md files. Memory content is automatically injected into the system prompt, giving the agent access to stored context across sessions.
Memory scopes
| Scope | Location | Use case |
|---|---|---|
| Global | ~/.config/clawzero/MEMORY.md | User-wide preferences, conventions |
| Project | .clawzero/MEMORY.md (project root) | Project-specific context, decisions |
The project root is detected by walking up from the current directory looking for .git or Cargo.toml.
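That detection can be sketched as an upward walk (function name illustrative):

```rust
use std::path::{Path, PathBuf};

// Sketch of the documented detection: walk up from a starting directory
// until a directory containing .git or Cargo.toml is found.
fn find_project_root(start: &Path) -> Option<PathBuf> {
    let mut dir = start;
    loop {
        if dir.join(".git").exists() || dir.join("Cargo.toml").exists() {
            return Some(dir.to_path_buf());
        }
        dir = dir.parent()?; // None once the filesystem root is passed
    }
}

fn main() {
    let cwd = std::env::current_dir().expect("no cwd");
    match find_project_root(&cwd) {
        Some(root) => println!("project root: {}", root.display()),
        None => println!("no project root found; using global memory only"),
    }
}
```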
How it works
- On startup, clawzero reads both global and project MEMORY.md files
- Their contents are concatenated (with headers) and injected into the system prompt
- The agent can read memory at any time using the memory_read tool
- The agent can update memory using the memory_write tool
Tools
memory_read
Returns both global and project memory content, prefixed with # Global Memory and # Project Memory headers.
memory_write
Writes to either global or project memory. The scope parameter selects which file to update. Writing replaces the entire file content.
scope: "global" or "project"
content: "The markdown content to write"
Best practices
- Use global memory for personal preferences and conventions
- Use project memory for project-specific context (architecture decisions, key paths, patterns)
- Keep memory concise — it’s injected into every system prompt
Gateway Overview
The gateway system allows clawzero to run as a bot on multiple platforms simultaneously. Each platform connection maintains session-per-thread isolation with persistent session mapping.
Architecture
clawzero gateway
├─ SlackGateway ──→ AgentFactory + SessionMap ──→ Agent (per thread)
├─ DiscordGateway ─→ AgentFactory + SessionMap ──→ Agent (per thread)
└─ WebuiGateway ──→ AgentFactory + SessionMap ──→ Agent (per connection)
Starting gateways
# Start all configured gateways concurrently
clawzero gateway
# Start a specific platform
clawzero gateway slack
clawzero gateway discord
clawzero gateway webui
Only gateways with valid configuration (tokens set) will start. Missing tokens are reported as warnings.
Shared components
- AgentFactory — Creates Agent instances with shared configuration (model, tools, system prompt)
- SessionMap — Persistent mapping from platform thread IDs to session IDs, stored as JSON (~/.local/share/clawzero/session_map.json)
- BotEventHandler — Converts the AgentEvent stream to text with rate-limited message updates (avoids API throttling)
Session-per-thread
Each platform thread (Slack thread, Discord channel) gets its own session. The session is created on first message and resumed on subsequent messages in the same thread. This provides persistent conversation context per thread.
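A sketch of the get-or-create mapping, with illustrative key and session-id formats:

```rust
use std::collections::HashMap;

// Sketch: a thread key (platform + thread id) resolves to an existing
// session id, or a new one is created and recorded. The id scheme here
// is illustrative, not clawzero's actual format.
fn session_for_thread(
    map: &mut HashMap<String, String>,
    platform: &str,
    thread_id: &str,
) -> String {
    let key = format!("{platform}:{thread_id}");
    let next_id = format!("session-{}", map.len() + 1);
    map.entry(key).or_insert(next_id).clone()
}

fn main() {
    let mut map = HashMap::new();
    let first = session_for_thread(&mut map, "slack", "C123/1700000000.0001");
    let again = session_for_thread(&mut map, "slack", "C123/1700000000.0001");
    let other = session_for_thread(&mut map, "discord", "987654321");
    assert_eq!(first, again); // same thread resumes the same session
    assert_ne!(first, other); // different threads are isolated
}
```

Persisting this map to disk is what lets a thread keep its conversation context across gateway restarts.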
Configuration
Gateway tokens can be set via environment variables or directly in config:
[gateway.slack]
app_token_env = "SLACK_APP_TOKEN"
bot_token_env = "SLACK_BOT_TOKEN"
[gateway.discord]
bot_token_env = "DISCORD_BOT_TOKEN"
[gateway.webui]
host = "127.0.0.1"
port = 3000
See Slack, Discord, and Web UI configuration for platform-specific setup.
Slack Gateway
clawzero connects to Slack via Socket Mode (WebSocket) and responds using the Web API.
Prerequisites
- Create a Slack app at api.slack.com/apps
- Enable Socket Mode in the app settings
- Generate an App-Level Token with the connections:write scope → this is your SLACK_APP_TOKEN (xapp-...)
- Under OAuth & Permissions, add the following bot scopes: chat:write, app_mentions:read, channels:history, groups:history, im:history, mpim:history
- Install the app to your workspace and copy the Bot User OAuth Token → this is your SLACK_BOT_TOKEN (xoxb-...)
- Under Event Subscriptions, subscribe to: message.channels, message.groups, message.im, app_mention
Configuration
Environment variables
export SLACK_APP_TOKEN="xapp-1-..."
export SLACK_BOT_TOKEN="xoxb-..."
Config file
[gateway.slack]
app_token_env = "SLACK_APP_TOKEN"
bot_token_env = "SLACK_BOT_TOKEN"
Or with direct values:
[gateway.slack]
app_token = "xapp-1-..."
bot_token = "xoxb-..."
Running
# Slack only
clawzero gateway slack
# All configured gateways
clawzero gateway
Build
Slack support requires the slack feature flag:
cargo install --path . --features slack
Behavior
- Each Slack thread gets its own agent session
- The bot responds in-thread with streaming message updates
- Message updates are rate-limited to avoid Slack API throttling
- Reactions (emoji) indicate processing status
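The rate limiting can be sketched as a minimum interval between message edits, with intermediate deltas coalesced (the interval value is illustrative, not Slack's actual limit):

```rust
use std::time::{Duration, Instant};

// Sketch: an edit is sent only if enough time has passed since the last
// one; deltas arriving in between are folded into the next edit.
struct RateLimitedUpdater {
    last_sent: Option<Instant>,
    min_interval: Duration,
}

impl RateLimitedUpdater {
    fn should_send(&mut self, now: Instant) -> bool {
        match self.last_sent {
            Some(last) if now.duration_since(last) < self.min_interval => false,
            _ => {
                self.last_sent = Some(now);
                true
            }
        }
    }
}

fn main() {
    let mut updater = RateLimitedUpdater {
        last_sent: None,
        min_interval: Duration::from_millis(800),
    };
    let t0 = Instant::now();
    assert!(updater.should_send(t0)); // first update always goes out
    assert!(!updater.should_send(t0 + Duration::from_millis(100))); // coalesced
    assert!(updater.should_send(t0 + Duration::from_millis(900))); // interval elapsed
}
```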
Discord Gateway
clawzero connects to Discord using the serenity library and responds to messages.
Prerequisites
- Create a Discord application at discord.com/developers/applications
- Under Bot, create a bot and copy the Token → this is your DISCORD_BOT_TOKEN
- Enable Message Content Intent under Bot → Privileged Gateway Intents
- Generate an invite URL under OAuth2 → URL Generator:
  - Scopes: bot
  - Bot Permissions: Send Messages, Read Message History
- Invite the bot to your server using the generated URL
Configuration
Environment variables
export DISCORD_BOT_TOKEN="your-discord-bot-token"
Config file
[gateway.discord]
bot_token_env = "DISCORD_BOT_TOKEN"
Or with a direct value:
[gateway.discord]
bot_token = "your-discord-bot-token"
Running
# Discord only
clawzero gateway discord
# All configured gateways
clawzero gateway
Build
Discord support requires the discord feature flag:
cargo install --path . --features discord
Behavior
- Each Discord channel gets its own agent session
- The bot responds with streaming message edits
- Message updates are rate-limited to avoid Discord API throttling
Architecture Overview
Module structure
src/
├── agent/ # Agent loop (Think → ToolCall → Observe)
│ ├── loop.rs # Core loop with session saving
│ ├── factory.rs # AgentFactory (shared Agent creation)
│ ├── context.rs # Conversation context + compaction
│ ├── event.rs # AgentEvent (UI notification)
│ ├── token.rs # Token estimation (chars/4 heuristic)
│ └── compaction.rs # DropOldest message compaction strategy
├── cli/ # CLI / REPL / TUI
│ ├── args.rs # clap arg definitions
│ ├── repl.rs # Plain text mode (interactive, one-shot, resume)
│ └── tui/ # ratatui-based inline TUI (Viewport::Inline)
│ ├── mod.rs # run_tui_repl(), run_tui_oneshot()
│ ├── app.rs # App state machine
│ ├── event.rs # TuiEvent loop
│ ├── ui.rs # Live viewport layout
│ ├── markdown.rs # Markdown → ratatui spans
│ └── widgets/ # Chat, status, input widgets
├── config/ # Configuration loading
│ ├── types.rs # AppConfig, GatewayConfig, ProviderConfig
│ └── loader.rs # TOML + env var merging
├── gateway/ # Multi-platform bot gateway
│ ├── session_map.rs # ThreadKey → SessionID mapping
│ ├── event_handler.rs # AgentEvent → text with rate limiting
│ ├── slack/ # Slack (Socket Mode + Web API)
│ ├── discord/ # Discord (serenity EventHandler)
│ └── webui/ # Web UI (axum + WebSocket)
├── memory/ # Persistent memory (MEMORY.md)
│ └── store.rs # Global + project memory
├── model/ # Provider-agnostic types
│ ├── message.rs # Message, ContentBlock, Role
│ ├── request.rs # CompletionRequest
│ ├── response.rs # StreamEvent, StopReason, Usage
│ └── tool_schema.rs # ToolDefinition
├── provider/ # LLM provider abstraction
│ ├── traits.rs # Provider trait, EventStream
│ ├── http.rs # Shared HTTP client + SSE parser
│ ├── registry.rs # "provider/model" resolution
│ ├── auth/ # Vertex AI OAuth2, Bedrock SigV4
│ └── protocol/ # Anthropic + OpenAI implementations
├── session/ # JSONL session persistence
│ ├── types.rs # SessionEntry, SessionMetadata
│ └── store.rs # Session store + writer
├── tool/ # Tool system
│ ├── traits.rs # Tool trait, ToolRegistry
│ ├── builtin/ # 6 built-in tools
│ └── plugin/ # Bash / HTTP plugin tools
└── error.rs # ClawError
Design principles
- Two protocols cover all providers: Anthropic Messages API and OpenAI Chat Completions API implementations handle every major provider. OpenRouter, Ollama, vLLM, etc. are OpenAI-compatible.
- Config-driven: Adding a new provider requires only a [providers.xxx] entry in TOML — no code changes.
- Pin<Box<dyn Future>>: Provider and Tool traits use Pin<Box<dyn Future>> instead of async fn for dyn compatibility (even in the Rust 2024 edition, async fn in traits is not dyn-compatible).
- Thin HTTP abstraction: reqwest + eventsource-stream with full control. No heavy framework dependencies.
- No Gateway trait: Each platform is an async function, not a trait implementation. Shared via AgentFactory and SessionMap only.
- Embedded UI: The Web UI is a single HTML file compiled into the binary via include_str!. Zero external assets, zero build step.
Provider System
Two-protocol design
clawzero uses two protocol implementations to cover all major LLM providers:
| Protocol | Implementation | Providers |
|---|---|---|
| Anthropic Messages API | protocol/anthropic.rs | Anthropic, Vertex AI, Bedrock |
| OpenAI Chat Completions API | protocol/openai.rs | OpenAI, OpenRouter, Ollama, vLLM |
This design means adding a new OpenAI-compatible provider (e.g., a new local inference server) requires only a config entry:
[providers.my-server]
protocol = "openai"
base_url = "http://localhost:8080"
api_key = ""
Provider registry
The provider registry (provider/registry.rs) resolves model strings in provider/model format:
- Parse "anthropic/claude-sonnet-4-20250514" → provider name "anthropic", model "claude-sonnet-4-20250514"
- Look up the provider config from the [providers] table
- Select the protocol implementation based on the protocol field
- Wire up authentication if auth is set
- Return a Box<dyn Provider> ready for streaming completion
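The parsing step reduces to a single split on the first '/' (a sketch; the real parser may differ):

```rust
// Sketch: split a "provider/model" spec on the first '/'. Model names may
// themselves contain '/' (e.g. OpenRouter), so only the first separator
// is significant.
fn parse_model(spec: &str) -> Option<(&str, &str)> {
    spec.split_once('/')
}

fn main() {
    assert_eq!(
        parse_model("anthropic/claude-sonnet-4-20250514"),
        Some(("anthropic", "claude-sonnet-4-20250514"))
    );
    // OpenRouter models keep their own slash in the model part:
    assert_eq!(
        parse_model("openrouter/meta-llama/llama-3-70b"),
        Some(("openrouter", "meta-llama/llama-3-70b"))
    );
}
```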
Provider trait
trait Provider {
    fn complete<'a>(
        &'a self,
        request: &'a CompletionRequest,
    ) -> Pin<Box<dyn Future<Output = Result<EventStream>> + Send + 'a>>;
}
The trait uses Pin<Box<dyn Future>> instead of async fn for dyn compatibility. The lifetime 'a ties both &self and the request reference to ensure the stream can borrow from both.
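A minimal sketch of a dyn-compatible implementation, with stand-in request and stream types (not clawzero's actual definitions):

```rust
use std::future::Future;
use std::pin::Pin;

// Stand-in types for illustration only.
struct CompletionRequest;
struct EventStream;

trait Provider {
    fn complete<'a>(
        &'a self,
        request: &'a CompletionRequest,
    ) -> Pin<Box<dyn Future<Output = Result<EventStream, String>> + Send + 'a>>;
}

struct DummyProvider;

impl Provider for DummyProvider {
    fn complete<'a>(
        &'a self,
        _request: &'a CompletionRequest,
    ) -> Pin<Box<dyn Future<Output = Result<EventStream, String>> + Send + 'a>> {
        // Box::pin wraps the async block into the required boxed future,
        // which is what makes `Box<dyn Provider>` possible.
        Box::pin(async move { Ok(EventStream) })
    }
}

fn main() {
    // The trait object compiles precisely because complete() is not `async fn`.
    let _provider: Box<dyn Provider> = Box::new(DummyProvider);
}
```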
Authentication
Cloud providers use the AuthHook trait for authentication:
| Auth type | Method | Provider |
|---|---|---|
| vertex | OAuth2 via gcloud auth print-access-token | Vertex AI |
| bedrock | AWS SigV4 request signing | AWS Bedrock |
Auth hooks modify the HTTP request before sending (adding authorization headers, signing the request).
SSE streaming
All providers use Server-Sent Events (SSE) for streaming:
reqwest response → bytes_stream → eventsource-stream → StreamEvent mapping
Each protocol implementation maps provider-specific SSE events to the common StreamEvent type (TextDelta, ToolCall, Usage, Stop).
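A sketch of that mapping stage, with illustrative event names loosely following the Anthropic protocol (the real mapping parses JSON payloads and covers more variants):

```rust
// Sketch: provider-specific SSE events become the common StreamEvent type.
enum StreamEvent {
    TextDelta(String),
    Stop,
    Ignored,
}

fn map_sse_event(event_name: &str, data: &str) -> StreamEvent {
    match event_name {
        "content_block_delta" => StreamEvent::TextDelta(data.to_string()),
        "message_stop" => StreamEvent::Stop,
        _ => StreamEvent::Ignored, // pings, metadata, etc.
    }
}

fn main() {
    let ev = map_sse_event("content_block_delta", "Hello");
    assert!(matches!(ev, StreamEvent::TextDelta(ref t) if t == "Hello"));
    assert!(matches!(map_sse_event("message_stop", ""), StreamEvent::Stop));
    assert!(matches!(map_sse_event("ping", ""), StreamEvent::Ignored));
}
```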
Contributing
Thank you for your interest in contributing to clawzero! This guide covers the development workflow and conventions.
Prerequisites
- Rust (latest stable)
- mise — manages Rust and mdBook versions
- mdBook (latest, installed via mise)
Setup
git clone https://github.com/betta-lab/clawzero.git
cd clawzero
cargo build
mise automatically provides the correct tool versions.
Test-Driven Development
This project follows TDD (Test-Driven Development):
- Red — Write a failing test first
- Green — Write the minimum code to make the test pass
- Refactor — Clean up while keeping tests green
# Unit tests
cargo test --lib --bins
# Full test suite (includes e2e, requires API keys)
cargo test
Code Quality
All PRs must pass CI checks:
cargo fmt --check # Formatting
cargo clippy -- -D warnings # Linting (warnings = errors)
cargo test --lib --bins # Tests
cargo build --release # Release build
mdbook build docs # Documentation build
Documentation
README.md and docs/ are the Single Source of Truth for documentation. Both must be updated before committing any user-facing changes.
- All content in docs/ must be written in English
- Preview locally: mdbook serve docs
Project Structure
src/
├── main.rs # CLI entry point
├── agent/ # Core agent loop & session management
├── cli/ # CLI, REPL, TUI interface
├── config/ # Configuration (TOML + env)
├── provider/ # Multi-provider abstraction
├── tool/ # Tool system (builtin + plugins)
├── gateway/ # Slack / Discord / Web UI bots
├── memory/ # Persistent memory system
├── model/ # Provider-agnostic types
├── session/ # JSONL session persistence
└── error.rs # Error handling
Pull Request Process
- Fork the repository and create a feature branch from main
- Write tests first (TDD), then implement
- Run all checks: cargo fmt --check && cargo clippy -- -D warnings && cargo test --lib --bins && mdbook build docs
- Update documentation if changes affect user-facing behavior
- Submit a PR against main
Feature Flags
| Flag | Description |
|---|---|
| slack | Slack gateway (tokio-tungstenite) |
| discord | Discord gateway (serenity 0.12) |
| bedrock | AWS Bedrock provider |
cargo build --features slack,discord
Extending clawzero
Adding a Provider
Providers use a protocol-based abstraction:
- AnthropicProtocol — for Anthropic-compatible APIs
- OpenAiProtocol — for OpenAI-compatible APIs
Register new providers in the config-driven registry with "provider/model" format (e.g., anthropic/claude-opus-4-6).
Adding a Tool
Built-in tools implement the Tool trait (src/tool/). The trait uses Pin<Box<dyn Future>> for dyn compatibility.
For simpler integrations, consider Plugin Tools — custom bash/HTTP tools via TOML config, no Rust code required.
Benchmarking
clawzero includes a reproducible Docker-based benchmark environment for performance comparison against Claude Code and OpenClaw.
Quick Start
# Run all tools × all scenarios
docker compose -f bench/docker-compose.yml run bench
# Measure clawzero startup time only
docker compose -f bench/docker-compose.yml run bench --tools clawzero --scenarios startup
# Specify iteration count
docker compose -f bench/docker-compose.yml run bench --iterations 10
Prerequisites
- Docker / Docker Compose
- ANTHROPIC_API_KEY environment variable (required for API scenarios)
- OPENAI_API_KEY environment variable (optional, for OpenClaw)
Metrics
| Metric | Measurement Method |
|---|---|
| Startup time (cold start) | Wall-clock time of --help execution via hyperfine |
| TTFT (Time to First Token) | Time until first stdout byte via custom wrapper |
| E2E completion time | Wall-clock time of prompt execution via hyperfine |
| Memory usage (peak RSS) | Maximum resident set size via /usr/bin/time -v |
| Token throughput | Output characters / E2E time |
Scenarios
| Scenario | Description | API Call |
|---|---|---|
startup | --help execution time | No |
simple | Response to "What is 1+1?" | Yes |
tool_use | File read + line count | Yes |
File Structure
bench/
├── Dockerfile # Multi-stage build
├── docker-compose.yml # Environment variables and volume mounts
├── run.sh # Main benchmark runner
├── adapters/
│ ├── clawzero.sh # clawzero invocation adapter
│ ├── claude-code.sh # Claude Code invocation adapter
│ └── openclaw.sh # OpenClaw invocation adapter
├── measure_ttft.sh # TTFT measurement helper
├── fixtures/
│ └── bench_input.txt # Test file for tool_use scenario
└── results/ # Output directory (.gitignore)
run.sh Options
--tools <t1,t2,...> Tools to benchmark (default: clawzero,claude-code,openclaw)
--scenarios <s1,s2,...> Scenarios to run (default: startup,simple,tool_use)
--iterations <N> Iteration count (default: $BENCH_ITERATIONS or 5)
--results-dir <path> Output directory (default: bench/results)
Environment Variables
| Variable | Description | Default |
|---|---|---|
| ANTHROPIC_API_KEY | Anthropic API key | (required) |
| OPENAI_API_KEY | OpenAI API key | (optional) |
| BENCH_ITERATIONS | Iteration count | 5 |
| BENCH_MODEL | Model used by clawzero | anthropic/claude-sonnet-4-5-20250929 |
Results
Results are saved to bench/results/<timestamp>/:
- results.json — All metrics in JSON format
- <tool>_<scenario>_hyperfine.json — hyperfine raw data
- <tool>_<scenario>_time.txt — /usr/bin/time output
- <tool>_<scenario>_ttft.csv — TTFT CSV data
A summary table is printed to the console when execution completes.
Adding New Adapters
To add a new tool, create bench/adapters/<name>.sh and define the following functions:
TOOL_NAME="my-tool"
cmd_startup() {
my-tool --help
}
cmd_simple() {
my-tool "What is 1+1?"
}
cmd_tool_use() {
my-tool "Read /tmp/bench_input.txt and count the lines"
}
Specify --tools my-tool to automatically load the adapter.