---
description: Pull a context pack from the live AtoCore service for the current prompt
argument-hint: [project-id]
---

You are about to enrich a user prompt with context from the live AtoCore service. This is the daily-use entry point for AtoCore from inside Claude Code.

The work happens via the **shared AtoCore operator client** at `scripts/atocore_client.py`. That client is the canonical Python backbone for stable AtoCore operations and is meant to be reused by every LLM client (OpenClaw helper, future Codex skill, etc.) — see `docs/architecture/llm-client-integration.md` for the layering. This slash command is a thin Claude Code-specific frontend on top of it.

## Step 1 — parse the arguments

The user invoked `/atocore-context` with:

```
$ARGUMENTS
```

You need to figure out two things:

1. The **prompt text** — what AtoCore will retrieve context for
2. An **optional project hint** — used to scope retrieval to a specific project's trusted state and corpus

The user may have passed a project id or alias as the **last whitespace-separated token**. Don't maintain a hardcoded list of known aliases — let the shared client decide. Use this rule:

- Take the last token of `$ARGUMENTS`. Call it `MAYBE_HINT`.
- Run `python scripts/atocore_client.py detect-project "$MAYBE_HINT"` to ask the registry whether it's a known project id or alias. This call is cheap (it just hits `/projects` and does a regex match) and inherits the client's fail-open behavior.
- If the response has a non-null `matched_project`, the last token was an explicit project hint. `PROMPT_TEXT` is everything except the last token; `PROJECT_HINT` is the matched canonical project id.
- Otherwise the last token is just part of the prompt. `PROMPT_TEXT` is the full `$ARGUMENTS`; `PROJECT_HINT` is empty.

This delegates alias knowledge to the registry instead of embedding a stale list in this markdown file.
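The token-split rule above can be sketched in bash. This is a hypothetical sketch: the `detect-project` call is stubbed out with a hardcoded match, and the sample prompt and the `atocore-main` project id are invented for illustration.

```shell
# Hypothetical sketch of the Step 1 split, with detect-project stubbed.
# The sample prompt and the "atocore-main" project id are invented here.
ARGUMENTS="what changed in backup policy this week atocore-main"

MAYBE_HINT="${ARGUMENTS##* }"   # last whitespace-separated token
REST="${ARGUMENTS% *}"          # everything before the last token

# In the real flow this would come from:
#   python scripts/atocore_client.py detect-project "$MAYBE_HINT"
# reading the matched_project field of its JSON response.
MATCHED="atocore-main"          # stub: pretend the registry matched

if [ -n "$MATCHED" ]; then
  PROMPT_TEXT="$REST"
  PROJECT_HINT="$MATCHED"
else
  PROMPT_TEXT="$ARGUMENTS"
  PROJECT_HINT=""
fi

echo "prompt=$PROMPT_TEXT"
echo "hint=$PROJECT_HINT"
```

One edge case worth guarding in a real implementation: when `$ARGUMENTS` is a single token with no whitespace, `${ARGUMENTS% *}` leaves the string unchanged rather than emptying it, so the prompt-versus-hint split needs an explicit check for that.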
When you add a new project to the registry, the slash command picks it up automatically with no edits here.

## Step 2 — call the shared client for the context pack

The server resolves project hints through the registry before looking up trusted state, so you can pass either the canonical id or any alias to `context-build` and the trusted-state lookup will work either way. (Regression test: `tests/test_context_builder.py::test_alias_hint_resolves_through_registry`.)

**If `PROJECT_HINT` is non-empty**, call `context-build` directly with that hint:

```bash
python scripts/atocore_client.py context-build \
  "$PROMPT_TEXT" \
  "$PROJECT_HINT"
```

**If `PROJECT_HINT` is empty**, do the two-step fallback dance so the user always gets a context pack regardless of whether the prompt implies a project:

```bash
# Try project auto-detection first.
RESULT=$(python scripts/atocore_client.py auto-context "$PROMPT_TEXT")

# If auto-context could not detect a project it returns a small
# {"status": "no_project_match", ...} envelope. In that case fall
# back to a corpus-wide context build with no project hint, which
# is the right behavior for cross-project or generic prompts like
# "what changed in AtoCore backup policy this week?"
if echo "$RESULT" | grep -q '"no_project_match"'; then
  RESULT=$(python scripts/atocore_client.py context-build "$PROMPT_TEXT")
fi

echo "$RESULT"
```

This is the fix for the P2 finding from codex's review: previously the slash command sent every no-hint prompt through `auto-context` and returned `no_project_match` to the user with no context, even though the underlying client's `context-build` subcommand has always supported corpus-wide context builds.

In both branches the response is the JSON payload from `/context/build` (or, in the rare case where even the corpus-wide build fails, a `{"status": "unavailable"}` envelope from the client's fail-open layer).
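One hedged refinement on the fallback check above: `grep` on the raw JSON would also fire if the prompt text or a returned chunk merely contained the string `no_project_match`. A stricter variant tests the top-level `status` field directly. This is a sketch, assuming the envelope shape shown in the comment above and using `python3 -c` rather than any extra tooling:

```shell
# Hypothetical stricter variant of the fallback check. The sample
# envelope is invented; only the "status" field is assumed from above.
RESULT='{"status": "no_project_match", "detail": "no project detected"}'

STATUS=$(echo "$RESULT" | python3 -c 'import json, sys; print(json.load(sys.stdin).get("status", ""))')

if [ "$STATUS" = "no_project_match" ]; then
  # Same fallback as the grep version: corpus-wide context build.
  echo "falling back to corpus-wide context-build"
fi
```

The substring `grep` is fine for the common case; this variant only matters if prompts or corpus chunks can plausibly contain the literal status string.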
## Step 3 — present the context pack to the user

The successful response contains at least:

- `formatted_context` — the assembled context block AtoCore would feed an LLM
- `chunks_used`, `total_chars`, `budget`, `budget_remaining`, `duration_ms`
- `chunks` — array of source documents that contributed, each with `source_file`, `heading_path`, `score`

Render in this order:

1. A one-line stats banner: `chunks=N, chars=X/budget, duration=Yms`
2. The `formatted_context` block verbatim inside a fenced text code block so the user can read what AtoCore would feed an LLM
3. The `chunks` array as a small bullet list with `source_file`, `heading_path`, and `score` per chunk

Two special cases:

- **`{"status": "unavailable"}`** (fail-open from the client) → Tell the user: "AtoCore is unreachable at `$ATOCORE_BASE_URL`. Check `python scripts/atocore_client.py health` for diagnostics."
- **`chunks_used: 0`** with no project state and no memories → Tell the user: "AtoCore returned no context for this prompt — either the corpus does not have relevant information or the project hint is wrong. Try a different hint or a longer prompt."

## Step 4 — what about capturing the interaction?

Capture (Phase 9 Commit A) and the rest of the reflection loop (reinforcement, extraction, review queue) are intentionally NOT exposed by the shared client yet. The contracts are stable but the workflow ergonomics are not, so the daily-use slash command stays focused on context retrieval until those review flows have been exercised in real use. See `docs/architecture/llm-client-integration.md` for the deferral rationale.

When capture is added to the shared client, this slash command will gain a follow-up `/atocore-record-response` companion command that posts the LLM's response back to the same interaction. That work is queued.

## Notes for the assistant

- DO NOT bypass the shared client by calling curl yourself.
  The client is the contract between AtoCore and every LLM frontend; if you find a missing capability, the right fix is to extend the client, not to work around it.
- DO NOT maintain a hardcoded list of project aliases in this file. Use `detect-project` to ask the registry — that's the whole point of having a registry.
- DO NOT silently change `ATOCORE_BASE_URL`. If the env var points at the wrong instance, surface the error so the user can fix it.
- DO NOT hide the formatted context pack from the user. Showing what AtoCore would feed an LLM is the whole point.
- The output goes into the user's working context as background; they may follow up with their actual question, and the AtoCore context pack acts as informal injected knowledge.
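As a closing illustration, the Step 3 rendering order can be sketched end to end. The payload below is invented sample data shaped like the fields Step 3 lists; `python3 -c` stands in for whatever JSON tooling is at hand:

```shell
# Hypothetical end-to-end rendering of a /context/build payload.
# RESULT is invented sample data; only the field names come from Step 3.
RESULT='{"formatted_context": "## AtoCore context\n(example)", "chunks_used": 2, "total_chars": 812, "budget": 4000, "duration_ms": 37, "chunks": [{"source_file": "docs/a.md", "heading_path": "A > B", "score": 0.91}, {"source_file": "docs/b.md", "heading_path": "C", "score": 0.74}]}'

echo "$RESULT" | python3 -c '
import json, sys
pack = json.load(sys.stdin)

# 1. one-line stats banner
print("chunks=%d, chars=%d/%d, duration=%dms"
      % (pack["chunks_used"], pack["total_chars"], pack["budget"], pack["duration_ms"]))

# 2. the formatted context block, verbatim
print(pack["formatted_context"])

# 3. one bullet per contributing chunk
for c in pack["chunks"]:
    print("- %s (%s, score %s)" % (c["source_file"], c["heading_path"], c["score"]))
'
```

In the real flow `RESULT` is whatever Step 2 produced, and the `formatted_context` block should additionally be wrapped in a fenced text code block when shown to the user.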