---
description: Pull a context pack from the live AtoCore service for the current prompt
argument-hint: [project-id]
---

You are about to enrich a user prompt with context from the live AtoCore service. This is the daily-use entry point for AtoCore from inside Claude Code.

The work happens via the **shared AtoCore operator client** at `scripts/atocore_client.py`. That client is the canonical Python backbone for stable AtoCore operations and is meant to be reused by every LLM client (OpenClaw helper, future Codex skill, etc.) — see `docs/architecture/llm-client-integration.md` for the layering. This slash command is a thin Claude Code-specific frontend on top of it.

## Step 1 — parse the arguments

The user invoked `/atocore-context` with:

```
$ARGUMENTS
```

Treat the **entire argument string** as the prompt by default. If the last whitespace-separated token looks like a registered project id or alias (`atocore`, `p04`, `p04-gigabit`, `gigabit`, `p05`, `p05-interferometer`, `interferometer`, `p06`, `p06-polisher`, `polisher`, or any case-insensitive variant), pull it off and treat it as an explicit project hint. The remaining tokens become the prompt. Otherwise leave the project hint empty and the client will try to auto-detect one from the prompt itself.

## Step 2 — call the shared client

Use the Bash tool. The client respects `ATOCORE_BASE_URL` (default `http://dalidou:8100`) and is fail-open by default — if AtoCore is unreachable it returns a `{"status": "unavailable"}` payload and exits 0, which is what the daily-use loop wants.
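The Step 1 token-peeling rule can be sketched as a small helper. This is a minimal illustration, not part of the shared client: `KNOWN_HINTS` and `split_prompt_and_hint` are hypothetical names, and the alias set simply mirrors the registered ids listed above.

```python
# Hypothetical sketch of the Step 1 argument parsing (not in the client).
# Mirrors the registered project ids and aliases from the list above.
KNOWN_HINTS = {
    "atocore",
    "p04", "p04-gigabit", "gigabit",
    "p05", "p05-interferometer", "interferometer",
    "p06", "p06-polisher", "polisher",
}

def split_prompt_and_hint(arguments: str) -> tuple[str, str]:
    """Return (prompt, project_hint); hint is "" when none was given."""
    tokens = arguments.split()
    # Only the LAST whitespace-separated token is checked, case-insensitively.
    if tokens and tokens[-1].lower() in KNOWN_HINTS:
        return " ".join(tokens[:-1]), tokens[-1].lower()
    # No recognised trailing token: the whole string is the prompt.
    return " ".join(tokens), ""
```

Note the fallback: an unrecognised trailing word stays in the prompt, so the client's own `detect-project` routing gets a chance to find a match.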
**If the user passed an explicit project hint**, call `context-build` directly so AtoCore uses exactly that project:

```bash
python scripts/atocore_client.py context-build \
  "" \
  ""
```

**If no explicit project hint**, call `auto-context`, which runs the client's `detect-project` routing first and only calls `context-build` once it has a match:

```bash
python scripts/atocore_client.py auto-context ""
```

In both cases the response is the JSON payload from `/context/build` (or, for the `auto-context` no-match case, a small `{"status": "no_project_match"}` envelope).

## Step 3 — present the context pack to the user

The successful response contains at least:

- `formatted_context` — the assembled context block AtoCore would feed an LLM
- `chunks_used`, `total_chars`, `budget`, `budget_remaining`, `duration_ms`
- `chunks` — array of source documents that contributed, each with `source_file`, `heading_path`, `score`

Render in this order:

1. A one-line stats banner: `chunks=N, chars=X/budget, duration=Yms`
2. The `formatted_context` block verbatim inside a fenced text code block so the user can read what AtoCore would feed an LLM
3. The `chunks` array as a small bullet list with `source_file`, `heading_path`, and `score` per chunk

Three special cases:

- **`{"status": "no_project_match"}`** (from `auto-context`) → Tell the user: "AtoCore could not auto-detect a project from the prompt. Re-run with an explicit project id: `/atocore-context ` (or call without a hint to use the corpus-wide context build)."
- **`{"status": "unavailable"}`** (fail-open from the client) → Tell the user: "AtoCore is unreachable at `$ATOCORE_BASE_URL`. Check `python scripts/atocore_client.py health` for diagnostics."
- **`chunks_used: 0`** with no project state and no memories → Tell the user: "AtoCore returned no context for this prompt — either the corpus has no relevant information for the detected project or the project hint is wrong. Try a different hint or a longer prompt."
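The Step 3 rendering rules and the first two special cases can be sketched as one function. `render_context_pack` is a hypothetical name for illustration; it assumes `payload` is the parsed JSON returned by the shared client and is not part of the client itself.

```python
# Hypothetical sketch of the Step 3 rendering (not part of the client).
def render_context_pack(payload: dict) -> str:
    status = payload.get("status")
    if status == "no_project_match":
        return ("AtoCore could not auto-detect a project from the prompt. "
                "Re-run with an explicit project id.")
    if status == "unavailable":
        return ("AtoCore is unreachable. Check "
                "`python scripts/atocore_client.py health` for diagnostics.")

    fence = "`" * 3  # fenced text block around the verbatim context
    lines = [
        # 1. one-line stats banner
        f"chunks={payload['chunks_used']}, "
        f"chars={payload['total_chars']}/{payload['budget']}, "
        f"duration={payload['duration_ms']}ms",
        # 2. the formatted context, verbatim
        fence + "text",
        payload["formatted_context"],
        fence,
    ]
    # 3. one bullet per contributing chunk
    for chunk in payload.get("chunks", []):
        lines.append(
            f"- {chunk['source_file']} > {chunk['heading_path']} "
            f"(score={chunk['score']})"
        )
    return "\n".join(lines)
```

The third special case (`chunks_used: 0`) is left to the assistant's judgement here, since it depends on inspecting project state and memories rather than a single status field.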
## Step 4 — what about capturing the interaction

Capture (Phase 9 Commit A) and the rest of the reflection loop (reinforcement, extraction, review queue) are intentionally NOT exposed by the shared client yet. The contracts are stable but the workflow ergonomics are not, so the daily-use slash command stays focused on context retrieval until those review flows have been exercised in real use. See `docs/architecture/llm-client-integration.md` for the deferral rationale.

When capture is added to the shared client, this slash command will gain a follow-up `/atocore-record-response` companion command that posts the LLM's response back to the same interaction. That work is queued.

## Notes for the assistant

- DO NOT bypass the shared client by calling curl yourself. The client is the contract between AtoCore and every LLM frontend; if you find a missing capability, the right fix is to extend the client, not to work around it.
- DO NOT silently change `ATOCORE_BASE_URL`. If the env var points at the wrong instance, surface the error so the user can fix it.
- DO NOT hide the formatted context pack from the user. Showing what AtoCore would feed an LLM is the whole point.
- The output goes into the user's working context as background; the user may follow up with their actual question, and the AtoCore context pack acts as informally injected knowledge.