Codex's review caught that the Claude Code slash command shipped in Session 2
was a parallel reimplementation of routing logic the existing
scripts/atocore_client.py already had. That client was introduced via the
codex/port-atocore-ops-client merge and is already a comprehensive operator
client (auto-context, detect-project, refresh-project, project-state,
audit-query, etc.). The slash command should have been a thin wrapper from
the start. This commit fixes the shape without expanding scope.

.claude/commands/atocore-context.md
-----------------------------------

Rewritten as a thin Claude Code-specific frontend that shells out to the
shared client:

- explicit project hint -> calls
  `python scripts/atocore_client.py context-build "<prompt>" "<project>"`
- no explicit hint -> calls
  `python scripts/atocore_client.py auto-context "<prompt>"`, which runs the
  client's detect-project routing first and falls through to context-build
  with the match

Inherits the client's stable behaviour for free:

- ATOCORE_BASE_URL env var (default http://dalidou:8100)
- fail-open on network errors via ATOCORE_FAIL_OPEN
- consistent JSON output shape
- the same project alias matching the OpenClaw helper uses

Removes the speculative `--capture` path that was in the original draft.
Capture/extract/queue/promote/reject are intentionally NOT in the shared
client yet (the memory-review workflow has not been exercised in real use),
so the slash command cannot expose them either.

docs/architecture/llm-client-integration.md
-------------------------------------------

New planning doc that defines the layering rule for AtoCore's relationship
with LLM client contexts.

Three layers:

1. AtoCore HTTP API (universal, src/atocore/api/routes.py)
2. Shared operator client (scripts/atocore_client.py) — the canonical Python
   backbone for stable AtoCore operations
3. Per-agent thin frontends (Claude Code slash command, OpenClaw helper,
   future Codex skill, future MCP server) that shell out to the shared client

Three non-negotiable rules:

- every per-agent frontend is a thin wrapper (translate the agent's command
  format and render the JSON; nothing else)
- the shared client never duplicates the API (it composes endpoints; new
  logic goes in the API first)
- the shared client only exposes stable operations (subcommands land only
  after the API has been exercised in a real workflow)

Doc covers:

- the full table of subcommands currently in scope (project lifecycle,
  ingestion, project-state, retrieval, context build, audit-query,
  debug-context, health/stats)
- the three deferred families with rationale: memory review queue (workflow
  not exercised), backup admin (fail-open default would hide errors),
  engineering layer entities (V1 not yet implemented)
- the integration recipe for new agent platforms
- explicit acknowledgement that the OpenClaw helper currently duplicates
  routing logic and that the refactor to the shared client is a queued
  cross-repo follow-up
- how the layering connects to phase 8 (OpenClaw) and phase 11 (multi-model)
- versioning and stability rules for the shared client surface
- open follow-ups: OpenClaw refactor, memory-review subcommands when ready,
  optional backup admin subcommands, engineering entity subcommands during
  V1 implementation

master-plan-status.md updated
-----------------------------

- New "LLM Client Integration" subsection that points to the layering doc
  and explicitly notes the deferral of memory-review and engineering-entity
  subcommands
- Frames the layering as sitting between phase 8 and phase 11

Scope is intentionally narrow per codex's framing: promote the existing
client to canonical status, refactor the slash command to use it, document
the layering. No new client subcommands added in this commit. The OpenClaw
helper refactor is a separate cross-repo follow-up. Memory-review and
engineering-entity work stay deferred.

Full suite: 160 passing, no behavior changes.
---
description: Pull a context pack from the live AtoCore service for the current prompt
argument-hint: <prompt text> [project-id]
---
You are about to enrich a user prompt with context from the live AtoCore service. This is the daily-use entry point for AtoCore from inside Claude Code.
The work happens via the shared AtoCore operator client at
scripts/atocore_client.py. That client is the canonical Python
backbone for stable AtoCore operations and is meant to be reused by
every LLM client (OpenClaw helper, future Codex skill, etc.) — see
docs/architecture/llm-client-integration.md for the layering. This
slash command is a thin Claude Code-specific frontend on top of it.
## Step 1 — parse the arguments
The user invoked /atocore-context with:
$ARGUMENTS
Treat the entire argument string as the prompt by default. If the last
whitespace-separated token looks like a registered project id or alias
(`atocore`, `p04`, `p04-gigabit`, `gigabit`, `p05`, `p05-interferometer`,
`interferometer`, `p06`, `p06-polisher`, `polisher`, or any
case-insensitive variant), pull it off and treat it as an explicit
project hint. The remaining tokens become the prompt. Otherwise leave
the project hint empty and the client will try to auto-detect one from
the prompt itself.
## Step 2 — call the shared client
Use the Bash tool. The client respects `ATOCORE_BASE_URL` (default
`http://dalidou:8100`) and is fail-open by default — if AtoCore is
unreachable it returns a `{"status": "unavailable"}` payload and
exits 0, which is what the daily-use loop wants.
If the user passed an explicit project hint, call `context-build`
directly so AtoCore uses exactly that project:

```shell
python scripts/atocore_client.py context-build \
  "<the prompt text>" \
  "<the project hint>"
```

If there is no explicit project hint, call `auto-context`, which runs
the client's detect-project routing first and only calls
`context-build` once it has a match:

```shell
python scripts/atocore_client.py auto-context "<the prompt text>"
```

In both cases the response is the JSON payload from `/context/build`
(or, for the auto-context no-match case, a small
`{"status": "no_project_match"}` envelope).
## Step 3 — present the context pack to the user
The successful response contains at least:
- `formatted_context` — the assembled context block AtoCore would feed an LLM
- `chunks_used`, `total_chars`, `budget`, `budget_remaining`, `duration_ms`
- `chunks` — array of source documents that contributed, each with `source_file`, `heading_path`, `score`
Render in this order:
- A one-line stats banner: `chunks=N, chars=X/budget, duration=Yms`
- The `formatted_context` block verbatim inside a fenced text code block so the user can read what AtoCore would feed an LLM
- The `chunks` array as a small bullet list with `source_file`, `heading_path`, and `score` per chunk
Three special cases:
- `{"status": "no_project_match"}` (from `auto-context`) → Tell the user: "AtoCore could not auto-detect a project from the prompt. Re-run with an explicit project id: `/atocore-context <prompt> <project-id>` (or call without a hint to use the corpus-wide context build)."
- `{"status": "unavailable"}` (fail-open from the client) → Tell the user: "AtoCore is unreachable at `$ATOCORE_BASE_URL`. Check `python scripts/atocore_client.py health` for diagnostics."
- An empty result (`chunks_used: 0` with no project state and no memories) → Tell the user: "AtoCore returned no context for this prompt — either the corpus does not have relevant information for the detected project or the project hint is wrong. Try a different hint or a longer prompt."
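The rendering order and the special cases can be sketched together. This is a hypothetical helper for illustration only — `render_context_pack` is not a real function anywhere; in practice the assistant renders the payload in prose, and the user-facing messages here are abbreviated.

```python
# Hypothetical sketch of the Step 3 presentation logic. Field names
# mirror the /context/build payload described above.
def render_context_pack(payload: dict) -> str:
    status = payload.get("status")
    if status == "no_project_match":
        return ("AtoCore could not auto-detect a project from the prompt. "
                "Re-run with an explicit project id.")
    if status == "unavailable":
        return ("AtoCore is unreachable. Check "
                "`python scripts/atocore_client.py health` for diagnostics.")
    if payload.get("chunks_used", 0) == 0:
        return ("AtoCore returned no context for this prompt. "
                "Try a different hint or a longer prompt.")
    # Happy path: stats banner, then the context block, then the sources.
    banner = (f"chunks={payload['chunks_used']}, "
              f"chars={payload['total_chars']}/{payload['budget']}, "
              f"duration={payload['duration_ms']}ms")
    sources = "\n".join(
        f"- {c['source_file']} :: {c['heading_path']} (score={c['score']})"
        for c in payload.get("chunks", []))
    return f"{banner}\n\n{payload['formatted_context']}\n\n{sources}"
```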
## Step 4 — what about capturing the interaction
Capture (Phase 9 Commit A) and the rest of the reflection loop
(reinforcement, extraction, review queue) are intentionally NOT
exposed by the shared client yet. The contracts are stable but the
workflow ergonomics are not, so the daily-use slash command stays
focused on context retrieval until those review flows have been
exercised in real use. See docs/architecture/llm-client-integration.md
for the deferral rationale.
When capture is added to the shared client, this slash command will
gain a follow-up `/atocore-record-response` companion command that
posts the LLM's response back to the same interaction. That work is
queued.
## Notes for the assistant
- DO NOT bypass the shared client by calling `curl` yourself. The client is the contract between AtoCore and every LLM frontend; if you find a missing capability, the right fix is to extend the client, not to work around it.
- DO NOT silently change `ATOCORE_BASE_URL`. If the env var points at the wrong instance, surface the error so the user can fix it.
- DO NOT hide the formatted context pack from the user. Showing what AtoCore would feed an LLM is the whole point.
- The output goes into the user's working context as background; they may follow up with their actual question, and the AtoCore context pack acts as informal injected knowledge.