Closes three gaps the user surfaced: (1) OpenClaw agents run blind
without AtoCore context, (2) mobile/desktop chats can't be captured
at all, (3) the wiki UI hadn't kept up with backend capabilities.
Phase 7I — OpenClaw two-way bridge
- Plugin now calls /context/build on before_agent_start and prepends
the context pack to event.prompt, so whatever LLM runs underneath
(sonnet, opus, codex, local model) answers grounded in AtoCore
knowledge. The captured prompt stays the user's original text, and the
call fails open with a 5s timeout. Gated via the injectContext config flag.
- Plugin version 0.0.0 → 0.2.0; README rewritten.
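The injection-with-fallback logic can be sketched in Python (the production plugin is JavaScript; the /context/build request shape and the "context" response field are assumptions here, not the confirmed API):

```python
import json
import urllib.request

def inject_context(prompt: str, base_url: str, timeout: float = 5.0) -> str:
    """Prepend an AtoCore context pack to the prompt; fail open on any error."""
    try:
        req = urllib.request.Request(
            base_url.rstrip("/") + "/context/build",
            data=json.dumps({"query": prompt}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            pack = json.load(resp).get("context", "")
        # The user's original text is preserved; the pack only prepends.
        return f"{pack}\n\n{prompt}" if pack else prompt
    except Exception:
        # Fail open: AtoCore unreachable or slow leaves the prompt untouched.
        return prompt
```

The fail-open branch is what keeps the underlying LLM usable when AtoCore is down.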
UI refresh
- /wiki/capture — paste-to-ingest form for Claude Desktop / web / mobile
/ ChatGPT / other. Goes through the normal /interactions pipeline with
client="claude-desktop|claude-web|claude-mobile|chatgpt|other".
Fixes the rotovap/mushroom-on-phone gap.
- /wiki/memories/{id} (Phase 7E) — full memory detail: content, status,
confidence, refs, valid_until, domain_tags (clickable to domain
pages), project link, source chunk, graduated-to-entity link, full
audit trail, related-by-tag neighbors.
- /wiki/domains/{tag} (Phase 7F) — cross-project view: all active
memories with the given tag grouped by project, sorted by count.
Case-insensitive, whitespace-tolerant. Also surfaces graduated
entities carrying the tag.
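The tag matching described above reduces to a normalized comparison key plus a per-project grouping; a minimal sketch (function names and record fields are illustrative, not the actual route handlers):

```python
from collections import defaultdict

def canon_tag(tag: str) -> str:
    # Case-insensitive, whitespace-tolerant comparison key
    return tag.strip().lower()

def memories_for_tag(memories, tag):
    """Group active memories carrying `tag` by project, largest group first."""
    key = canon_tag(tag)
    groups = defaultdict(list)
    for m in memories:
        if m["status"] == "active" and any(
            canon_tag(t) == key for t in m["domain_tags"]
        ):
            groups[m["project"]].append(m)
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)
```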
- /wiki/activity — autonomous-activity timeline feed. Summary chips
by action (created/promoted/merged/superseded/decayed/canonicalized)
and by actor (auto-dedup-tier1, auto-dedup-tier2, confidence-decay,
phase10-auto-promote, transient-to-durable, tag-canon, human-triage).
Answers "what has the brain been doing while I was away?"
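The summary chips are two frequency counts over the event log; a sketch assuming `action`/`actor` field names on each event:

```python
from collections import Counter

def activity_summary(events):
    """Counts backing the by-action and by-actor chips on /wiki/activity."""
    return {
        "by_action": Counter(e["action"] for e in events),
        "by_actor": Counter(e["actor"] for e in events),
    }
```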
- Home refresh: persistent topnav (Home · Activity · Capture · Triage
· Dashboard), "What the brain is doing" snippet above project cards
showing recent autonomous-actor counts, link to full activity.
Tests: +10 (capture page, memory detail + 404, domain cross-project +
empty + tag normalization, activity feed + groupings, home topnav,
superseded-source detail after merge). 440 → 450.
Known next: capture-browser extension for Claude.ai web (bigger
project, deferred); voice/mobile relay (adjacent).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
OpenClaw scheduled tasks (DXF email watcher, calendar reminder pings)
fire agent sessions with prompts that begin with '[cron:<id> ...]'. These
were all getting captured as AtoCore interactions — 45 out of 50
recent interactions today were cron noise, not real user turns.
Filter at the plugin level: before_agent_start ignores any prompt
starting with '[cron:'. The gateway has been restarted with the
updated plugin.
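The guard itself is a one-line prefix check; in Python terms (the production plugin is JavaScript, and the helper names here are illustrative):

```python
CRON_PREFIX = "[cron:"

def is_cron_prompt(prompt: str) -> bool:
    """True for scheduled-task prompts like '[cron:dxf-watch] check email'."""
    return prompt.startswith(CRON_PREFIX)

def should_capture(prompt: str) -> bool:
    # before_agent_start skips capture entirely for cron-fired sessions
    return not is_cron_prompt(prompt)
```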
Impact: graduation, triage, and context pipelines stop seeing noise
from OpenClaw's own internal automation. Only real user turns (via
chat channels) feed the brain going forward.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Completes the Phase 1 master brain keystone: every LLM interaction
across the ecosystem now pulls context from AtoCore automatically.
Three adapters, one HTTP backend:
1. OpenClaw plugin pull (handler.js):
- Added before_prompt_build hook that calls /context/build and
injects the pack via prependContext
- Existing capture hooks (before_agent_start + llm_output)
unchanged
   - 6s context timeout; fails open when AtoCore is unreachable
- Deployed to T420, gateway restarted, "7 plugins loaded"
2. atocore-proxy (scripts/atocore_proxy.py):
- Stdlib-only OpenAI-compatible HTTP middleware
- Drop-in layer for Codex, Ollama, LiteLLM, any OpenAI-compat client
- Intercepts /chat/completions: extracts query, pulls context,
injects as system message, forwards to upstream, captures back
- Fail-open: AtoCore down = passthrough without injection
- Configurable via env: UPSTREAM, PORT, CLIENT_LABEL, INJECT, CAPTURE
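The core transform on an intercepted /chat/completions body can be sketched as follows, under the assumptions that the query is the last user message and the context pack lands as a leading system message (a simplification of the actual proxy):

```python
def extract_query(body: dict) -> str:
    """Last user message content, or '' if none."""
    for msg in reversed(body.get("messages", [])):
        if msg.get("role") == "user":
            return msg.get("content", "")
    return ""

def inject_pack(body: dict, pack: str) -> dict:
    """Return a copy of the request with the context pack as a system message."""
    if not pack:
        return body  # fail-open passthrough when no context was built
    msgs = [{"role": "system", "content": pack}] + list(body.get("messages", []))
    return {**body, "messages": msgs}
```

Prepending rather than appending the system message keeps the client's own message order intact for the upstream model.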
3. (from prior commit c49363f) atocore-mcp:
- stdio MCP server, stdlib Python, 7 tools exposed
- Registered in Claude Code: "✓ Connected"
Plus quick win:
- Project synthesis moved from Sunday-only to daily cron so wiki /
mirror pages stay fresh (Step C in batch-extract.sh). Lint stays
weekly.
Plus docs:
- docs/universal-consumption.md: configuration guide for all 3 adapters
with registration/env-var tables and verification checklist
Plus housekeeping:
- .gitignore: add .mypy_cache/
Tests: 303/303 passing.
This closes the consumption gap: the reinforcement feedback loop
can now actually work (memories get injected → get referenced →
reinforcement fires → auto-promotion). Every Claude, OpenClaw,
Codex, or Ollama session is automatically AtoCore-grounded.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- openclaw-plugins/atocore-capture/handler.js: simplified version
using before_agent_start + llm_output hooks (survives gateway
restarts). The production copy lives on T420 at
/tmp/atocore-openclaw-capture-plugin/openclaw-plugins/atocore-capture/
- DEV-LEDGER: updated orientation (live_sha b687e7f, capture clients)
and session log for 2026-04-16
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>