OpenClaw scheduled tasks (DXF email watcher, calendar reminder pings)
fire agent sessions with prompts that begin with '[cron:<id> ...]'. These
were all getting captured as AtoCore interactions — 45 out of 50
recent interactions today were cron noise, not real user turns.
Filter at the plugin level: before_agent_start ignores any prompt
starting with '[cron:'. The gateway has been restarted with the
updated plugin.
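The filter is a one-line prefix check. A minimal Python sketch of the logic (the real hook lives in handler.js and is JavaScript; the function names here are illustrative, not from the plugin):

```python
# Prefix that marks prompts fired by OpenClaw's scheduled tasks,
# matching the '[cron:<id> ...]' shape described above.
CRON_PREFIX = "[cron:"

def is_cron_prompt(prompt: str) -> bool:
    """True when the prompt comes from OpenClaw's internal automation."""
    return prompt.lstrip().startswith(CRON_PREFIX)

def should_capture(prompt: str) -> bool:
    """Capture only real user turns; skip scheduled-task noise."""
    return not is_cron_prompt(prompt)
```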
Impact: graduation, triage, and context pipelines stop seeing noise
from OpenClaw's own internal automation. Only real user turns (via
chat channels) feed the brain going forward.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Completes the Phase 1 master brain keystone: every LLM interaction
across the ecosystem now pulls context from AtoCore automatically.
Three adapters, one HTTP backend:
1. OpenClaw plugin pull (handler.js):
- Added before_prompt_build hook that calls /context/build and
injects the pack via prependContext
- Existing capture hooks (before_agent_start + llm_output)
unchanged
- 6s context timeout, fail-open on AtoCore unreachable
- Deployed to T420, gateway restarted, "7 plugins loaded"
2. atocore-proxy (scripts/atocore_proxy.py):
- Stdlib-only OpenAI-compatible HTTP middleware
- Drop-in layer for Codex, Ollama, LiteLLM, or any OpenAI-compatible client
- Intercepts /chat/completions: extracts query, pulls context,
injects as system message, forwards to upstream, captures back
- Fail-open: AtoCore down = passthrough without injection
- Configurable via env: UPSTREAM, PORT, CLIENT_LABEL, INJECT, CAPTURE
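The interception step above can be sketched as follows (helper names are illustrative, not taken from scripts/atocore_proxy.py):

```python
def extract_query(messages: list[dict]) -> str:
    """Treat the last user message as the query for context retrieval."""
    for msg in reversed(messages):
        if msg.get("role") == "user":
            return msg.get("content", "")
    return ""

def inject_context(messages: list[dict], pack: str) -> list[dict]:
    """Prepend the AtoCore context pack as a system message.
    An empty pack means fail-open: forward the messages unchanged."""
    if not pack:
        return messages
    return [{"role": "system", "content": pack}] + messages
```

With these two helpers the proxy loop is: extract the query, pull the pack, inject, forward to `UPSTREAM`, then capture the exchange back to AtoCore.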
3. (from prior commit c49363f) atocore-mcp:
- stdio MCP server, stdlib Python, 7 tools exposed
- Registered in Claude Code: "✓ Connected"
Plus quick win:
- Project synthesis moved from Sunday-only to a daily cron run so
  wiki/mirror pages stay fresh (Step C in batch-extract.sh). Lint stays
  weekly.
Plus docs:
- docs/universal-consumption.md: configuration guide for all 3 adapters
with registration/env-var tables and verification checklist
Plus housekeeping:
- .gitignore: add .mypy_cache/
Tests: 303/303 passing.
This closes the consumption gap: the reinforcement feedback loop
can now actually work (memories get injected → get referenced →
reinforcement fires → auto-promotion). Every Claude, OpenClaw,
Codex, or Ollama session is automatically AtoCore-grounded.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- openclaw-plugins/atocore-capture/handler.js: simplified version
using before_agent_start + llm_output hooks (survives gateway
restarts). The production copy lives on T420 at
/tmp/atocore-openclaw-capture-plugin/openclaw-plugins/atocore-capture/
- DEV-LEDGER: updated orientation (live_sha b687e7f, capture clients)
and session log for 2026-04-16
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>