---
description: Pull a context pack from the live AtoCore service for the current prompt
argument-hint: [prompt text] [project-id]
---

You are about to enrich a user prompt with context from the live AtoCore service. This is the daily-use entry point for AtoCore from inside Claude Code.

## Step 1 — parse the arguments

The user invoked `/atocore-context` with the following arguments:

```
$ARGUMENTS
```

Treat the **entire argument string** as the prompt text by default. If the last whitespace-separated token looks like a registered project id (matches one of `atocore`, `p04-gigabit`, `p04`, `p05-interferometer`, `p05`, `p06-polisher`, `p06`, or any case-insensitive variant), treat it as the project hint and use the rest as the prompt text. Otherwise, leave the project hint empty.

## Step 2 — call the AtoCore /context/build endpoint

Use the Bash tool to call AtoCore. The default endpoint is the live Dalidou instance. Read `ATOCORE_API_BASE` from the environment if set; otherwise default to `http://dalidou:8100`, the AtoCore service port from `pyproject.toml` and `config.py` (not port 3000, which is the gitea host). Build the JSON body with `jq -n` so quoting is safe.
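The Step 1 token split can be sketched in POSIX shell before the body is built (a minimal sketch; the example argument string and variable names are hypothetical, the project-id registry is the one listed in Step 1):

```shell
# Hypothetical example of splitting $ARGUMENTS into prompt text + project hint.
ARGS='fix the polisher feed rate p06'

LAST="${ARGS##* }"    # last whitespace-separated token
REST="${ARGS% *}"     # everything before it
LOWER=$(printf '%s' "$LAST" | tr '[:upper:]' '[:lower:]')

case "$LOWER" in
  atocore|p04-gigabit|p04|p05-interferometer|p05|p06-polisher|p06)
    # Last token matches the registry: use it as the hint, rest is the prompt.
    PROMPT_TEXT="$REST"; PROJECT_HINT="$LOWER" ;;
  *)
    # No match: the whole argument string is the prompt.
    PROMPT_TEXT="$ARGS"; PROJECT_HINT="" ;;
esac

echo "prompt=$PROMPT_TEXT project=$PROJECT_HINT"
```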
Run something like:

```bash
ATOCORE_API_BASE="${ATOCORE_API_BASE:-http://dalidou:8100}"
PROMPT_TEXT='<prompt text from step 1>'
PROJECT_HINT='<project hint from step 1, or empty>'
if [ -n "$PROJECT_HINT" ]; then
  BODY=$(jq -n --arg p "$PROMPT_TEXT" --arg proj "$PROJECT_HINT" \
    '{prompt:$p, project:$proj}')
else
  BODY=$(jq -n --arg p "$PROMPT_TEXT" '{prompt:$p}')
fi
curl -fsS -X POST "$ATOCORE_API_BASE/context/build" \
  -H "Content-Type: application/json" \
  -d "$BODY"
```

If `jq` is not available on the host, fall back to a Python one-liner:

```bash
python -c "import json,sys; print(json.dumps({'prompt': sys.argv[1], 'project': sys.argv[2]} if sys.argv[2] else {'prompt': sys.argv[1]}))" "$PROMPT_TEXT" "$PROJECT_HINT"
```

## Step 3 — present the context pack to the user

The response is JSON with at least these fields: `formatted_context`, `chunks_used`, `total_chars`, `budget`, `budget_remaining`, `duration_ms`, and a `chunks` array. Print the response in a readable summary:

1. Print a one-line stats banner: `chunks=N, chars=X/budget, duration=Yms`
2. Print the `formatted_context` block verbatim inside a fenced text code block so the user can read what AtoCore would feed an LLM
3. Print the `chunks` array as a small bulleted list with `source_file`, `heading_path`, and `score` per chunk

If the response is empty (`chunks_used=0`, no project state, no memories), tell the user explicitly: "AtoCore returned no context for this prompt — either the corpus does not have relevant information or the project hint is wrong. Try `/atocore-context <prompt> <project-id>`."
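The three-part summary can be produced with `jq`, for example (a sketch against a hypothetical response; the field names are the ones listed in Step 3, the sample values are made up):

```shell
# Hypothetical sample of a /context/build response.
RESP='{"formatted_context":"## AtoCore context\n(example)","chunks_used":2,"total_chars":512,"budget":4000,"budget_remaining":3488,"duration_ms":41,"chunks":[{"source_file":"docs/p06.md","heading_path":"P06 > Feed","score":0.91},{"source_file":"docs/p04.md","heading_path":"P04 > Link","score":0.77}]}'

# 1. one-line stats banner
BANNER=$(echo "$RESP" | jq -r '"chunks=\(.chunks_used), chars=\(.total_chars)/\(.budget), duration=\(.duration_ms)ms"')
echo "$BANNER"

# 2. the formatted context block, verbatim
echo "$RESP" | jq -r '.formatted_context'

# 3. one bullet per chunk
echo "$RESP" | jq -r '.chunks[] | "- \(.source_file) :: \(.heading_path) (score \(.score))"'
```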
If the curl call fails:

- Network error → tell the user the AtoCore service may be down at `$ATOCORE_API_BASE` and suggest checking `curl $ATOCORE_API_BASE/health`
- 4xx → print the error body verbatim; the API error message is usually enough
- 5xx → print the error body and suggest checking the service logs

## Step 4 — capture the interaction (optional, opt-in)

If the user has previously asked the assistant to capture interactions into AtoCore (or if the slash command was invoked with the trailing literal `--capture` token), also POST the captured exchange to `/interactions` so the Phase 9 reflection loop sees it. Skip this step silently otherwise. The capture body is:

```json
{
  "prompt": "<the prompt text>",
  "response": "",
  "response_summary": "",
  "project": "<the project hint, if any>",
  "client": "claude-code-slash",
  "session_id": "<session id, if known>",
  "memories_used": ["<memory ids>"],
  "chunks_used": ["<chunk ids>"],
  "context_pack": {"chunks_used": "<N>", "total_chars": "<X>"}
}
```

Note that the `response` field stays empty here — the LLM hasn't actually answered yet at the moment the slash command runs. A separate post-turn hook (not part of this command) would update the same interaction with the response, OR a follow-up `/atocore-record-response <interaction-id>` command would do it. For now, leave that as future work.

## Notes for the assistant

- DO NOT invent project ids that aren't in the registry. If the user passed something that doesn't match, treat it as part of the prompt.
- DO NOT silently fall back to a different endpoint. If `ATOCORE_API_BASE` is wrong, surface the network error and let the user fix the env var.
- DO NOT hide the formatted context pack from the user. The whole point of this command is to show what AtoCore would feed an LLM, so the user can decide if it's relevant.
- The output goes into the user's working context as background — they may follow up with their actual question, and the AtoCore context pack acts as informal injected knowledge.
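For reference, the opt-in capture POST from Step 4 could be sketched as follows (a sketch only; the example values and the `CAPTURE` flag are hypothetical, while the `/interactions` path and the body fields are the ones shown in Step 4):

```shell
ATOCORE_API_BASE="${ATOCORE_API_BASE:-http://dalidou:8100}"
CAPTURE=0   # set to 1 only when the user opted in (e.g. trailing --capture)

# Build the capture body with jq so quoting stays safe, as in Step 2.
BODY=$(jq -n \
  --arg prompt "fix the polisher feed rate" \
  --arg proj "p06" \
  --arg sid "sess-123" \
  '{prompt:$prompt, response:"", response_summary:"", project:$proj,
    client:"claude-code-slash", session_id:$sid,
    memories_used:[], chunks_used:[],
    context_pack:{chunks_used:2, total_chars:512}}')

# Only POST when capture was explicitly requested; skip silently otherwise.
if [ "$CAPTURE" -eq 1 ]; then
  curl -fsS -X POST "$ATOCORE_API_BASE/interactions" \
    -H "Content-Type: application/json" \
    -d "$BODY"
fi
```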