Codex's review caught that the Claude Code slash command shipped in
Session 2 was a parallel reimplementation of routing logic that the
existing scripts/atocore_client.py already provides. That client was
introduced via the codex/port-atocore-ops-client merge and is
already a comprehensive operator client (auto-context,
detect-project, refresh-project, project-state, audit-query, etc.).
The slash command should have been a thin wrapper from the start.
This commit fixes the shape without expanding scope.
.claude/commands/atocore-context.md
-----------------------------------
Rewritten as a thin Claude Code-specific frontend that shells out
to the shared client:
- explicit project hint -> calls `python scripts/atocore_client.py
context-build "<prompt>" "<project>"`
- no explicit hint -> calls `python scripts/atocore_client.py
auto-context "<prompt>"` which runs the client's detect-project
routing first and falls through to context-build with the match
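The dispatch rule above could be sketched roughly like this (a hypothetical Python rendering for illustration only; the real slash command is a markdown frontend that the agent interprets, and `build_command`/`run_context` are illustrative names, not code in the repo):

```python
from __future__ import annotations

import subprocess
import sys

def build_command(prompt: str, project: str | None = None) -> list[str]:
    """Pick the shared-client subcommand: context-build when the user
    gave an explicit project hint, auto-context otherwise."""
    base = [sys.executable, "scripts/atocore_client.py"]
    if project:
        return base + ["context-build", prompt, project]
    return base + ["auto-context", prompt]

def run_context(prompt: str, project: str | None = None) -> str:
    """Shell out to the shared client and return its JSON output
    unchanged -- the frontend adds no logic of its own."""
    result = subprocess.run(build_command(prompt, project),
                            capture_output=True, text=True, check=True)
    return result.stdout
```

The point of the sketch is that the frontend only translates arguments; all routing stays behind the shared client's subcommands.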
Inherits the client's stable behaviour for free:
- ATOCORE_BASE_URL env var (default http://dalidou:8100)
- fail-open on network errors via ATOCORE_FAIL_OPEN
- consistent JSON output shape
- the same project alias matching the OpenClaw helper uses
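A minimal sketch of what that inherited behaviour amounts to, assuming a simple GET helper (`get_json` and the exact response fields are illustrative, not the client's real API surface):

```python
import json
import os
import urllib.error
import urllib.request

def get_json(path: str) -> dict:
    """GET an AtoCore endpoint. Base URL comes from ATOCORE_BASE_URL
    (default http://dalidou:8100). On a network error, fail open with a
    stub payload if ATOCORE_FAIL_OPEN is set; otherwise re-raise."""
    base = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100")
    try:
        with urllib.request.urlopen(f"{base}{path}", timeout=10) as resp:
            return json.load(resp)
    except urllib.error.URLError:
        if os.environ.get("ATOCORE_FAIL_OPEN"):
            # Fail-open: the caller gets an empty-but-valid JSON shape
            # instead of a crash (field names here are assumptions).
            return {"ok": False, "context": "", "error": "atocore unreachable"}
        raise
```

Because the slash command goes through the shared client, it gets this behaviour without carrying any of the code.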
Removes the speculative `--capture` path that was in the
original draft. Capture/extract/queue/promote/reject are
intentionally NOT in the shared client yet (memory-review
workflow not exercised in real use), so the slash command can't
expose them either.
docs/architecture/llm-client-integration.md
-------------------------------------------
New planning doc that defines the layering rule for AtoCore's
integration with LLM clients:
Three layers:
1. AtoCore HTTP API (universal, src/atocore/api/routes.py)
2. Shared operator client (scripts/atocore_client.py) — the
canonical Python backbone for stable AtoCore operations
3. Per-agent thin frontends (Claude Code slash command,
OpenClaw helper, future Codex skill, future MCP server)
that shell out to the shared client
Three non-negotiable rules:
- every per-agent frontend is a thin wrapper (translate the
agent's command format and render the JSON; nothing else)
- the shared client never duplicates the API (it composes
endpoints; new logic goes in the API first)
- the shared client only exposes stable operations (subcommands
land only after the API has been exercised in a real workflow)
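The composition rule implies a shape like the following, where auto-context is just two existing endpoints chained (endpoint paths, payload fields, and the injectable `post` parameter are assumptions made for the sketch, not the real API):

```python
import json
import urllib.request

BASE_URL = "http://dalidou:8100"  # normally taken from ATOCORE_BASE_URL

def _post(path: str, payload: dict) -> dict:
    """POST JSON to one AtoCore endpoint and decode the response."""
    req = urllib.request.Request(
        f"{BASE_URL}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def auto_context(prompt: str, post=_post) -> dict:
    """Compose two existing endpoints: route the prompt to a project,
    then build context for the match. No new logic lives here; anything
    smarter than chaining belongs in the API first."""
    match = post("/detect-project", {"prompt": prompt})
    return post("/context-build",
                {"prompt": prompt, "project": match.get("project")})
```

If composition like this ever needs branching or fallback heuristics, the rule says that logic moves into the API rather than accreting in the client.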
Doc covers:
- the full table of subcommands currently in scope (project
lifecycle, ingestion, project-state, retrieval, context build,
audit-query, debug-context, health/stats)
- the three deferred families with rationale: memory review
queue (workflow not exercised), backup admin (fail-open
default would hide errors), engineering layer entities (V1
not yet implemented)
- the integration recipe for new agent platforms
- explicit acknowledgement that the OpenClaw helper currently
duplicates routing logic and that the refactor to the shared
client is a queued cross-repo follow-up
- how the layering connects to phase 8 (OpenClaw) and phase 11
(multi-model)
- versioning and stability rules for the shared client surface
- open follow-ups: OpenClaw refactor, memory-review subcommands
when ready, optional backup admin subcommands, engineering
entity subcommands during V1 implementation
master-plan-status.md updated
-----------------------------
- New "LLM Client Integration" subsection that points to the
layering doc and explicitly notes the deferral of memory-review
and engineering-entity subcommands
- Frames the layering as sitting between phase 8 and phase 11
Scope is intentionally narrow per Codex's framing: promote the
existing client to canonical status, refactor the slash command
to use it, document the layering. No new client subcommands
added in this commit. The OpenClaw helper refactor is a
separate cross-repo follow-up. Memory-review and engineering-
entity work stay deferred.
Full suite: 160 tests passing, no behaviour changes.