# AtoCore Capture + Context Plugin for OpenClaw

Two-way bridge between OpenClaw agents and AtoCore:

**Capture (since v1)**

- watches user-triggered assistant turns
- POSTs `prompt` + `response` to `POST /interactions`
- sets `client="openclaw"`, `reinforce=true`
- fails open on network or API errors

**Context injection (Phase 7I, v2+)**

- on `before_agent_start`, fetches a context pack from `POST /context/build`
- prepends the pack to the agent's prompt so whatever LLM runs underneath (sonnet, opus, codex, local model — whichever OpenClaw delegates to) answers grounded in what AtoCore already knows
- the original user prompt is still what gets captured later (no recursion)
- fails open: if context is unreachable, the agent runs as before

## Config

```json
{
  "baseUrl": "http://dalidou:8100",
  "minPromptLength": 15,
  "maxResponseLength": 50000,
  "injectContext": true,
  "contextCharBudget": 4000
}
```

- `baseUrl` — defaults to the `ATOCORE_BASE_URL` env var, else `http://dalidou:8100`
- `injectContext` — set to `false` to disable the Phase 7I context injection and make this a pure one-way capture plugin again
- `contextCharBudget` — cap on injected context size. `/context/build` respects it too; this is a client-side safety net. Default: 4000 chars (~1000 tokens).

## Notes

- Project detection is intentionally left empty — AtoCore's extraction pipeline handles unscoped interactions and infers the project from content.
- Extraction is **not** part of this plugin. Interactions are captured here; batch extraction runs via cron on the AtoCore host.
- Context injection only fires for user-triggered turns (not heartbeats or system-only runs).
- Timeouts: the context fetch is 5s (short, so a slow AtoCore never blocks a user turn); the capture POST is 10s.
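As a rough illustration of the injection step, the sketch below shows how a fetched context pack might be prepended to the user's prompt while enforcing `contextCharBudget` client-side. The function name and prompt layout are illustrative assumptions, not the plugin's actual API; only the budget default (4000 chars) and the prepend-with-fallback behavior come from this README.

```typescript
// Hypothetical sketch of the prompt assembly (not the plugin's real API).
// If the context fetch failed (empty pack), the agent runs on the original
// prompt unchanged; otherwise the pack is trimmed to the char budget and
// prepended. The "---" separator is an illustrative choice.
export function buildInjectedPrompt(
  contextPack: string,
  userPrompt: string,
  charBudget: number = 4000, // mirrors the contextCharBudget default
): string {
  if (!contextPack) return userPrompt; // fail open: no pack, no injection
  // Client-side safety net: /context/build respects the budget too,
  // but trim again here in case the server returns more than asked.
  const trimmed =
    contextPack.length > charBudget
      ? contextPack.slice(0, charBudget)
      : contextPack;
  return `${trimmed}\n\n---\n\n${userPrompt}`;
}
```

Note that the original `userPrompt` (not the combined string) is what the capture path should later POST to `/interactions`, which is how the "no recursion" property in the README holds.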
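The fail-open behavior noted above (a 5s context fetch that never blocks a user turn, errors degrading to "agent runs as before") can be sketched as a small wrapper. This is a minimal sketch under assumed names; the real plugin's error handling may differ.

```typescript
// Hypothetical fail-open wrapper: run an async operation (e.g. the
// POST /context/build fetch) with a deadline, and return a fallback
// value on timeout, network error, or API error instead of throwing.
export async function failOpen<T>(
  op: () => Promise<T>,
  fallback: T,
  timeoutMs: number, // e.g. 5000 for context fetch, 10000 for capture
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  // The timeout branch resolves (not rejects) to the fallback, so a
  // slow AtoCore silently degrades rather than surfacing an error.
  const deadline = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), timeoutMs);
  });
  try {
    return await Promise.race([op().catch(() => fallback), deadline]);
  } finally {
    clearTimeout(timer); // don't leave a live timer behind the winner
  }
}
```

Used this way, a failed context fetch yields an empty pack and the agent proceeds with the original prompt, matching the "context unreachable, agent runs as before" guarantee.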