refactor slash command onto shared client + llm-client-integration doc

Codex's review caught that the Claude Code slash command shipped in
Session 2 was a parallel reimplementation of routing logic that
scripts/atocore_client.py already had. That client was introduced via
the codex/port-atocore-ops-client merge and is already a comprehensive
operator client (auto-context, detect-project, refresh-project,
project-state, audit-query, etc.). The slash command should have been a
thin wrapper from the start. This commit fixes the shape without
expanding scope.

.claude/commands/atocore-context.md
-----------------------------------
Rewritten as a thin Claude Code-specific frontend that shells out to
the shared client:
- explicit project hint -> calls
  `python scripts/atocore_client.py context-build "<prompt>" "<project>"`
- no explicit hint -> calls
  `python scripts/atocore_client.py auto-context "<prompt>"`, which runs
  the client's detect-project routing first and falls through to
  context-build with the match

Inherits the client's stable behaviour for free:
- ATOCORE_BASE_URL env var (default http://dalidou:8100)
- fail-open on network errors via ATOCORE_FAIL_OPEN
- consistent JSON output shape
- the same project alias matching the OpenClaw helper uses

Removes the speculative `--capture` path that was in the original
draft. Capture/extract/queue/promote/reject are intentionally NOT in
the shared client yet (the memory-review workflow has not been
exercised in real use), so the slash command can't expose them either.

docs/architecture/llm-client-integration.md
-------------------------------------------
New planning doc that defines the layering rule for AtoCore's
relationship with LLM client contexts.

Three layers:
1. AtoCore HTTP API (universal, src/atocore/api/routes.py)
2. Shared operator client (scripts/atocore_client.py) — the canonical
   Python backbone for stable AtoCore operations
3. Per-agent thin frontends (Claude Code slash command, OpenClaw
   helper, future Codex skill, future MCP server) that shell out to
   the shared client

Three non-negotiable rules:
- every per-agent frontend is a thin wrapper (translate the agent's
  command format and render the JSON; nothing else)
- the shared client never duplicates the API (it composes endpoints;
  new logic goes in the API first)
- the shared client only exposes stable operations (subcommands land
  only after the API has been exercised in a real workflow)

Doc covers:
- the full table of subcommands currently in scope (project lifecycle,
  ingestion, project-state, retrieval, context build, audit-query,
  debug-context, health/stats)
- the three deferred families with rationale: memory review queue
  (workflow not exercised), backup admin (fail-open default would hide
  errors), engineering layer entities (V1 not yet implemented)
- the integration recipe for new agent platforms
- explicit acknowledgement that the OpenClaw helper currently
  duplicates routing logic and that the refactor to the shared client
  is a queued cross-repo follow-up
- how the layering connects to phase 8 (OpenClaw) and phase 11
  (multi-model)
- versioning and stability rules for the shared client surface
- open follow-ups: OpenClaw refactor, memory-review subcommands when
  ready, optional backup admin subcommands, engineering entity
  subcommands during V1 implementation

master-plan-status.md updated
-----------------------------
- New "LLM Client Integration" subsection that points to the layering
  doc and explicitly notes the deferral of memory-review and
  engineering-entity subcommands
- Frames the layering as sitting between phase 8 and phase 11

Scope is intentionally narrow per codex's framing: promote the
existing client to canonical status, refactor the slash command to use
it, document the layering. No new client subcommands added in this
commit. The OpenClaw helper refactor is a separate cross-repo
follow-up. Memory-review and engineering-entity work stay deferred.

Full suite: 160 passing, no behavior changes.
@@ -3,121 +3,122 @@ description: Pull a context pack from the live AtoCore service for the current p
 argument-hint: <prompt text> [project-id]
 ---
 
-You are about to enrich a user prompt with context from the live AtoCore
-service. This is the daily-use entry point for AtoCore from inside Claude
-Code.
+You are about to enrich a user prompt with context from the live
+AtoCore service. This is the daily-use entry point for AtoCore from
+inside Claude Code.
+
+The work happens via the **shared AtoCore operator client** at
+`scripts/atocore_client.py`. That client is the canonical Python
+backbone for stable AtoCore operations and is meant to be reused by
+every LLM client (OpenClaw helper, future Codex skill, etc.) — see
+`docs/architecture/llm-client-integration.md` for the layering. This
+slash command is a thin Claude Code-specific frontend on top of it.
 
 ## Step 1 — parse the arguments
 
-The user invoked `/atocore-context` with the following arguments:
+The user invoked `/atocore-context` with:
 
 ```
 $ARGUMENTS
 ```
 
-Treat the **entire argument string** as the prompt text by default. If the
-last whitespace-separated token looks like a registered project id (matches
-one of `atocore`, `p04-gigabit`, `p04`, `p05-interferometer`, `p05`,
-`p06-polisher`, `p06`, or any case-insensitive variant), treat it as the
-project hint and use the rest as the prompt text. Otherwise, leave the
-project hint empty.
+Treat the **entire argument string** as the prompt by default. If the
+last whitespace-separated token looks like a registered project id or
+alias (`atocore`, `p04`, `p04-gigabit`, `gigabit`, `p05`,
+`p05-interferometer`, `interferometer`, `p06`, `p06-polisher`,
+`polisher`, or any case-insensitive variant), pull it off and treat
+it as an explicit project hint. The remaining tokens become the
+prompt. Otherwise leave the project hint empty and the client will
+try to auto-detect one from the prompt itself.
 
-## Step 2 — call the AtoCore /context/build endpoint
+## Step 2 — call the shared client
 
-Use the Bash tool to call AtoCore. The default endpoint is the live
-Dalidou instance. Read `ATOCORE_API_BASE` from the environment if set,
-otherwise default to `http://dalidou:3000` (the gitea host) — wait,
-no, AtoCore lives on a different port. Default to `http://dalidou:8100`
-which is the AtoCore service port from `pyproject.toml` and `config.py`.
+Use the Bash tool. The client respects `ATOCORE_BASE_URL` (default
+`http://dalidou:8100`) and is fail-open by default — if AtoCore is
+unreachable it returns a `{"status": "unavailable"}` payload and
+exits 0, which is what the daily-use loop wants.
 
-Build the JSON body with `jq -n` so quoting is safe. Run something like:
+**If the user passed an explicit project hint**, call `context-build`
+directly so AtoCore uses exactly that project:
 
 ```bash
-ATOCORE_API_BASE="${ATOCORE_API_BASE:-http://dalidou:8100}"
-PROMPT_TEXT='<the prompt text from step 1>'
-PROJECT_HINT='<the project hint or empty string>'
-
-if [ -n "$PROJECT_HINT" ]; then
-  BODY=$(jq -n --arg p "$PROMPT_TEXT" --arg proj "$PROJECT_HINT" \
-    '{prompt:$p, project:$proj}')
-else
-  BODY=$(jq -n --arg p "$PROMPT_TEXT" '{prompt:$p}')
-fi
-
-curl -fsS -X POST "$ATOCORE_API_BASE/context/build" \
-  -H "Content-Type: application/json" \
-  -d "$BODY"
+python scripts/atocore_client.py context-build \
+  "<the prompt text>" \
+  "<the project hint>"
 ```
 
-If `jq` is not available on the host, fall back to a Python one-liner:
+**If no explicit project hint**, call `auto-context`, which runs the
+client's `detect-project` routing first and only calls
+`context-build` once it has a match:
 
 ```bash
-python -c "import json,sys; print(json.dumps({'prompt': sys.argv[1], 'project': sys.argv[2]} if sys.argv[2] else {'prompt': sys.argv[1]}))" "$PROMPT_TEXT" "$PROJECT_HINT"
+python scripts/atocore_client.py auto-context "<the prompt text>"
 ```
 
+In both cases the response is the JSON payload from `/context/build`
+(or, for the `auto-context` no-match case, a small
+`{"status": "no_project_match"}` envelope).
+
 ## Step 3 — present the context pack to the user
 
-The response is JSON with at least these fields:
-`formatted_context`, `chunks_used`, `total_chars`, `budget`,
-`budget_remaining`, `duration_ms`, and a `chunks` array.
+The successful response contains at least:
 
-Print the response in a readable summary:
+- `formatted_context` — the assembled context block AtoCore would
+  feed an LLM
+- `chunks_used`, `total_chars`, `budget`, `budget_remaining`,
+  `duration_ms`
+- `chunks` — array of source documents that contributed, each with
+  `source_file`, `heading_path`, `score`
 
-1. Print a one-line stats banner: `chunks=N, chars=X/budget, duration=Yms`
-2. Print the `formatted_context` block verbatim inside a fenced text
-   code block so the user can read what AtoCore would feed an LLM
-3. Print the `chunks` array as a small bulleted list with `source_file`,
-   `heading_path`, and `score` per chunk
+Render in this order:
+
+1. A one-line stats banner: `chunks=N, chars=X/budget, duration=Yms`
+2. The `formatted_context` block verbatim inside a fenced text code
+   block so the user can read what AtoCore would feed an LLM
+3. The `chunks` array as a small bullet list with `source_file`,
+   `heading_path`, and `score` per chunk
 
-If the response is empty (`chunks_used=0`, no project state, no
-memories), tell the user explicitly: "AtoCore returned no context for
-this prompt — either the corpus does not have relevant information or
-the project hint is wrong. Try `/atocore-context <prompt> <project-id>`."
+Three special cases:
 
-If the curl call fails:
-- Network error → tell the user the AtoCore service may be down at
-  `$ATOCORE_API_BASE` and suggest checking `curl $ATOCORE_API_BASE/health`
-- 4xx → print the error body verbatim, the API error message is usually
-  enough
-- 5xx → print the error body and suggest checking the service logs
+- **`{"status": "no_project_match"}`** (from `auto-context`)
+  → Tell the user: "AtoCore could not auto-detect a project from the
+  prompt. Re-run with an explicit project id:
+  `/atocore-context <prompt> <project-id>` (or call without a hint
+  to use the corpus-wide context build)."
+- **`{"status": "unavailable"}`** (fail-open from the client)
+  → Tell the user: "AtoCore is unreachable at `$ATOCORE_BASE_URL`.
+  Check `python scripts/atocore_client.py health` for diagnostics."
+- **Empty `chunks_used: 0` with no project state and no memories**
+  → Tell the user: "AtoCore returned no context for this prompt —
+  either the corpus does not have relevant information for the
+  detected project or the project hint is wrong. Try a different
+  hint or a longer prompt."
 
-## Step 4 — capture the interaction (optional, opt-in)
+## Step 4 — what about capturing the interaction
 
-If the user has previously asked the assistant to capture interactions
-into AtoCore (or if the slash command was invoked with the trailing
-literal `--capture` token), also POST the captured exchange to
-`/interactions` so the Phase 9 reflection loop sees it. Skip this step
-silently otherwise. The capture body is:
+Capture (Phase 9 Commit A) and the rest of the reflection loop
+(reinforcement, extraction, review queue) are intentionally NOT
+exposed by the shared client yet. The contracts are stable but the
+workflow ergonomics are not, so the daily-use slash command stays
+focused on context retrieval until those review flows have been
+exercised in real use. See `docs/architecture/llm-client-integration.md`
+for the deferral rationale.
 
-```json
-{
-  "prompt": "<user prompt>",
-  "response": "",
-  "response_summary": "",
-  "project": "<project hint or empty>",
-  "client": "claude-code-slash",
-  "session_id": "<a stable id for this Claude Code session>",
-  "memories_used": ["<from chunks array if available>"],
-  "chunks_used": ["<chunk_id from chunks array>"],
-  "context_pack": {"chunks_used": <N>, "total_chars": <X>}
-}
-```
-
-Note that the response field stays empty here — the LLM hasn't actually
-answered yet at the moment the slash command runs. A separate post-turn
-hook (not part of this command) would update the same interaction with
-the response, OR a follow-up `/atocore-record-response <interaction-id>`
-command would do it. For now, leave that as future work.
+When capture is added to the shared client, this slash command will
+gain a follow-up `/atocore-record-response` companion command that
+posts the LLM's response back to the same interaction. That work is
+queued.
 
 ## Notes for the assistant
 
 - DO NOT invent project ids that aren't in the registry. If the user
   passed something that doesn't match, treat it as part of the prompt.
-- DO NOT silently fall back to a different endpoint. If `ATOCORE_API_BASE`
-  is wrong, surface the network error and let the user fix the env var.
-- DO NOT hide the formatted context pack from the user. The whole point
-  of this command is to show what AtoCore would feed an LLM, so the user
-  can decide if it's relevant.
-- The output goes into the user's working context as background — they
-  may follow up with their actual question, and the AtoCore context pack
-  acts as informal injected knowledge.
+- DO NOT bypass the shared client by calling curl yourself. The
+  client is the contract between AtoCore and every LLM frontend; if
+  you find a missing capability, the right fix is to extend the
+  client, not to work around it.
+- DO NOT silently change `ATOCORE_BASE_URL`. If the env var points at
+  the wrong instance, surface the error so the user can fix it.
+- DO NOT hide the formatted context pack from the user. Showing
+  what AtoCore would feed an LLM is the whole point.
+- The output goes into the user's working context as background;
+  they may follow up with their actual question, and the AtoCore
+  context pack acts as informal injected knowledge.
docs/architecture/llm-client-integration.md (new file, 323 lines)
@@ -0,0 +1,323 @@
# LLM Client Integration (the layering)

## Why this document exists

AtoCore must be reachable from many different LLM client contexts:

- **OpenClaw** on the T420 (already integrated via the read-only
  helper skill at `/home/papa/clawd/skills/atocore-context/`)
- **Claude Code** on the laptop (via the slash command shipped in
  this repo at `.claude/commands/atocore-context.md`)
- **Codex** sessions (future)
- **Direct API consumers** — scripts, Python code, ad-hoc curl
- **The eventual MCP server**, when it's worth building

Without an explicit layering rule, every new client tends to
reimplement the same routing logic (project detection, context
build, retrieval audit, project-state inspection) in slightly
different ways. That is exactly what almost happened in the first
draft of the Claude Code slash command, which started as a curl +
jq script that duplicated capabilities the existing operator client
already had.

This document defines the layering so future clients don't repeat
that mistake.

## The layering

Three layers, top to bottom:

```
+---------------------------------------------------+
| Per-agent thin frontends                          |
|                                                   |
| - Claude Code slash command                       |
|   (.claude/commands/atocore-context.md)           |
| - OpenClaw helper skill                           |
|   (/home/papa/clawd/skills/atocore-context/)      |
| - Codex skill (future)                            |
| - MCP server (future)                             |
+---------------------------------------------------+
          |  shells out to / imports
          v
+---------------------------------------------------+
| Shared operator client                            |
| scripts/atocore_client.py                         |
|                                                   |
| - subcommands for stable AtoCore operations       |
| - fail-open on network errors                     |
| - consistent JSON output across all subcommands   |
| - environment-driven configuration                |
|   (ATOCORE_BASE_URL, ATOCORE_TIMEOUT_SECONDS,     |
|    ATOCORE_REFRESH_TIMEOUT_SECONDS,               |
|    ATOCORE_FAIL_OPEN)                             |
+---------------------------------------------------+
          |  HTTP
          v
+---------------------------------------------------+
| AtoCore HTTP API                                  |
| src/atocore/api/routes.py                         |
|                                                   |
| - the universal interface to AtoCore              |
| - everything else above is glue                   |
+---------------------------------------------------+
```
## The non-negotiable rules

These rules are what make the layering work.

### Rule 1 — every per-agent frontend is a thin wrapper

A per-agent frontend exists to do exactly two things:

1. **Translate the agent platform's command/skill format** into an
   invocation of the shared client (or a small sequence of them)
2. **Render the JSON response** into whatever shape the agent
   platform wants (markdown for Claude Code, plaintext for
   OpenClaw, an MCP tool result for an MCP server, etc.)

Everything else — talking to AtoCore, project detection, retrieval
audit, fail-open behavior, configuration — is the **shared
client's** job.

If a per-agent frontend grows logic beyond the two responsibilities
above, that logic is in the wrong place. It belongs in the shared
client, where every other frontend gets to use it.

### Rule 2 — the shared client never duplicates the API

The shared client is allowed to **compose** API calls (e.g.
`auto-context` calls `detect-project` then `context-build`), but
it never reimplements API logic. If a useful operation can't be
expressed via the existing API endpoints, the right fix is to
extend the API, not to embed the logic in the client.

This rule keeps the API as the single source of truth for what
AtoCore can do.

### Rule 3 — the shared client only exposes stable operations

A subcommand only makes it into the shared client when:

- the API endpoint behind it has been exercised by at least one
  real workflow
- the request and response shapes are unlikely to change
- the operation is one that more than one frontend will plausibly
  want

This rule keeps the client surface stable so frontends don't have
to chase changes. New endpoints land in the API first, get
exercised in real use, and only then get a client subcommand.
## What's in scope for the shared client today

The currently shipped scope (per `scripts/atocore_client.py`):

| Subcommand | Purpose | API endpoint(s) |
|---|---|---|
| `health` | service status, mount + source readiness | `GET /health` |
| `sources` | enabled source roots and their existence | `GET /sources` |
| `stats` | document/chunk/vector counts | `GET /stats` |
| `projects` | registered projects | `GET /projects` |
| `project-template` | starter shape for a new project | `GET /projects/template` |
| `propose-project` | preview a registration | `POST /projects/proposal` |
| `register-project` | persist a registration | `POST /projects/register` |
| `update-project` | update an existing registration | `PUT /projects/{name}` |
| `refresh-project` | re-ingest a project's roots | `POST /projects/{name}/refresh` |
| `project-state` | list trusted state for a project | `GET /project/state/{name}` |
| `project-state-set` | curate trusted state | `POST /project/state` |
| `project-state-invalidate` | supersede trusted state | `DELETE /project/state` |
| `query` | raw retrieval | `POST /query` |
| `context-build` | full context pack | `POST /context/build` |
| `auto-context` | detect-project then context-build | composes `/projects` + `/context/build` |
| `detect-project` | match a prompt to a registered project | composes `/projects` + local regex |
| `audit-query` | retrieval-quality audit with classification | composes `/query` + local labelling |
| `debug-context` | last context pack inspection | `GET /debug/context` |
| `ingest-sources` | ingest configured source dirs | `POST /ingest/sources` |

That covers everything in the "stable operations" set today:
project lifecycle, ingestion, project-state curation, retrieval and
context build, retrieval-quality audit, health and stats inspection.
## What's intentionally NOT in scope today

Three families of operations are explicitly **deferred** until
their workflows have been exercised in real use:

### 1. Memory review queue and reflection loop

Phase 9 Commit C shipped these endpoints:

- `POST /interactions` (capture)
- `POST /interactions/{id}/reinforce`
- `POST /interactions/{id}/extract`
- `GET /memory?status=candidate`
- `POST /memory/{id}/promote`
- `POST /memory/{id}/reject`

The contracts are stable, but the **workflow ergonomics** are not.
Until a real human has exercised the capture → extract → review →
promote/reject loop a few times and we know what feels right,
exposing those operations through the shared client would
prematurely freeze a UX that is still being designed.

Once the loop has been exercised in real use and the right
subcommand shapes are known, the shared client gains:

- `capture <prompt> <response> [--project P] [--client C]`
- `extract <interaction-id> [--persist]`
- `queue` (list the candidate review queue)
- `promote <memory-id>`
- `reject <memory-id>`

At that point the Claude Code slash command can grow a companion
`/atocore-record-response` command and the OpenClaw helper can be
extended with the same flow.

### 2. Backup and restore admin operations

Phase 9 Commit B shipped these endpoints:

- `POST /admin/backup` (with `include_chroma`)
- `GET /admin/backup` (list)
- `GET /admin/backup/{stamp}/validate`

The backup endpoints are stable, but the documented operational
procedure (`docs/backup-restore-procedure.md`) intentionally uses
direct curl rather than the shared client: backup operations are
*administrative* and benefit from being explicit about which
instance they target, with no fail-open behavior. The shared
client's fail-open default would hide a real backup failure.

If we later add backup commands to the shared client, they would
set `ATOCORE_FAIL_OPEN=false` for the duration of the call so the
operator gets a real error on failure rather than a silent
fail-open envelope.
### 3. Engineering layer entity operations

The engineering layer is in planning, not implementation. When
V1 ships per `engineering-v1-acceptance.md`, the shared client
will gain entity, relationship, conflict, and Mirror commands.
None of those exist as stable contracts yet, so they are not in
the shared client today.
## How a new agent platform integrates

When a new LLM client needs AtoCore (e.g. Codex, a ChatGPT custom
GPT, a Cursor extension), the integration recipe is:

1. **Don't reimplement.** Don't write a new HTTP client. Use the
   shared client.
2. **Write a thin frontend** that translates the platform's
   command/skill format into a shell call to
   `python scripts/atocore_client.py <subcommand> <args...>`.
3. **Render the JSON response** in the platform's preferred shape.
4. **Inherit fail-open and env-var behavior** from the shared
   client. Don't override unless the platform explicitly needs
   to (e.g. an admin tool that wants to see real errors).
5. **If a needed capability is missing**, propose adding it to
   the shared client. If the underlying API endpoint also doesn't
   exist, propose adding it to the API first. Don't add the logic
   to your frontend.

The Claude Code slash command in this repo is a worked example:
~50 lines of markdown that does argument parsing, calls the
shared client, and renders the result. It contains zero AtoCore
business logic of its own.
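To make "render the JSON response" concrete, here is a minimal sketch of the rendering half of a thin frontend. The function name and the plain-text layout are illustrative assumptions; the field names follow the `context-build` response documented for the slash command.

```python
def render_context_pack(payload: dict) -> str:
    """Turn the shared client's JSON into a readable summary.
    Handles the two envelope cases before the happy path."""
    if payload.get("status") == "unavailable":
        return "AtoCore is unreachable; try `python scripts/atocore_client.py health`."
    if payload.get("status") == "no_project_match":
        return "No project auto-detected; re-run with an explicit project id."
    lines = [
        # One-line stats banner, then the context block, then the sources.
        f"chunks={payload['chunks_used']}, "
        f"chars={payload['total_chars']}/{payload['budget']}, "
        f"duration={payload['duration_ms']}ms",
        "--- context ---",
        payload["formatted_context"],
        "--- sources ---",
    ]
    for chunk in payload.get("chunks", []):
        lines.append(
            f"- {chunk['source_file']} ({chunk['heading_path']}, score={chunk['score']})"
        )
    return "\n".join(lines)
```

Note what is absent: no HTTP, no project detection, no retries. The frontend's whole job is the translation in and the rendering out.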
## How OpenClaw fits

OpenClaw's helper skill at `/home/papa/clawd/skills/atocore-context/`
on the T420 currently has its own implementation of `auto-context`,
`detect-project`, and the project lifecycle commands. It predates
this layering doc.

The right long-term shape is to **refactor the OpenClaw helper to
shell out to the shared client** instead of duplicating the
routing logic. This isn't urgent because:

- OpenClaw's helper works today and is in active use
- the duplication is on the OpenClaw side; AtoCore itself is not
  affected
- the shared client and the OpenClaw helper live in different
  repos (AtoCore vs the OpenClaw clawd repo), so the refactor is
  a cross-repo coordination

The refactor is queued as a follow-up. Until then, **the OpenClaw
helper and the Claude Code slash command are parallel
implementations** of the same idea. The shared client is the
canonical backbone going forward; new clients should follow the
new pattern even though the existing OpenClaw helper still has
its own.
## How this connects to the master plan

| Layer | Phase home | Status |
|---|---|---|
| AtoCore HTTP API | Phases 0/0.5/1/2/3/5/7/9 | shipped |
| Shared operator client (`scripts/atocore_client.py`) | implicitly Phase 8 (OpenClaw integration) infrastructure | shipped via the codex/port-atocore-ops-client merge |
| OpenClaw helper skill (T420) | Phase 8 — partial | shipped (own implementation, refactor queued) |
| Claude Code slash command (this repo) | precursor to Phase 11 (multi-model) | shipped (refactored to use the shared client) |
| Codex skill | Phase 11 | future |
| MCP server | Phase 11 | future |
| Web UI / dashboard | Phase 11+ | future |

The shared client is the **substrate Phase 11 will build on**.
Every new client added in Phase 11 should be a thin frontend on
the shared client, not a fresh reimplementation.
## Versioning and stability

The shared client's subcommand surface is **stable**. Adding new
subcommands is non-breaking. Changing or removing existing
subcommands is breaking and requires a coordinated update of
every frontend that depends on them.

The current shared client has no explicit version constant; the
implicit contract is "the subcommands and JSON shapes documented
in this file". When the client surface meaningfully changes, add
a `CLIENT_VERSION = "x.y.z"` constant to
`scripts/atocore_client.py` and bump it per semver:

- patch: bug fixes, no surface change
- minor: new subcommands or new optional fields
- major: removed subcommands, renamed fields, changed defaults
## Open follow-ups

1. **Refactor the OpenClaw helper** to shell out to the shared
   client. Cross-repo coordination; not blocking anything in
   AtoCore itself.
2. **Add memory-review subcommands** when the Phase 9 review
   workflow has been exercised in real use.
3. **Add backup admin subcommands** if and when we decide the
   shared client should be the canonical backup operator
   interface (with fail-open disabled for admin commands).
4. **Add engineering-layer entity subcommands** as part of the
   engineering V1 implementation sprint, per
   `engineering-v1-acceptance.md`.
5. **Tag a `CLIENT_VERSION` constant** the next time the shared
   client surface meaningfully changes. Today's surface is the
   v0.1.0 baseline.
## TL;DR

- The AtoCore HTTP API is the universal interface.
- `scripts/atocore_client.py` is the canonical shared Python
  backbone for stable AtoCore operations.
- Per-agent frontends (Claude Code slash command, OpenClaw
  helper, future Codex skill, future MCP server) are thin
  wrappers that shell out to the shared client.
- The shared client today covers project lifecycle, ingestion,
  retrieval, context build, project-state, and retrieval audit.
- Memory-review and engineering-entity commands are deferred
  until their workflows are exercised.
- The OpenClaw helper is currently a parallel implementation;
  the refactor to the shared client is a queued follow-up.
- New LLM clients should never reimplement HTTP calls — they
  follow the shell-out pattern documented here.
master-plan-status.md
@@ -72,6 +72,22 @@ active project set.
 The next concrete next step is the V1 implementation sprint, which
 should follow engineering-v1-acceptance.md as its checklist.
 
+### LLM Client Integration
+
+A separate but related architectural concern: how AtoCore is reachable
+from many different LLM client contexts (OpenClaw, Claude Code, future
+Codex skills, future MCP server). The layering rule is documented in:
+
+- [llm-client-integration.md](architecture/llm-client-integration.md) —
+  three-layer shape: HTTP API → shared operator client
+  (`scripts/atocore_client.py`) → per-agent thin frontends; the
+  shared client is the canonical backbone every new client should
+  shell out to instead of reimplementing HTTP calls
+
+This sits implicitly between Phase 8 (OpenClaw) and Phase 11
+(multi-model). Memory-review and engineering-entity commands are
+deferred from the shared client until their workflows are exercised.
+
 ## What Is Real Today
 
 - canonical AtoCore runtime on Dalidou