Codex's review caught that the Claude Code slash command shipped in Session 2 was a parallel reimplementation of routing logic the existing scripts/atocore_client.py already had. That client was introduced via the codex/port-atocore-ops-client merge and is already a comprehensive operator client (auto-context, detect-project, refresh-project, project-state, audit-query, etc.). The slash command should have been a thin wrapper from the start. This commit fixes the shape without expanding scope.

.claude/commands/atocore-context.md
-----------------------------------

Rewritten as a thin Claude Code-specific frontend that shells out to the shared client:

- explicit project hint -> calls `python scripts/atocore_client.py context-build "<prompt>" "<project>"`
- no explicit hint -> calls `python scripts/atocore_client.py auto-context "<prompt>"`, which runs the client's detect-project routing first and falls through to context-build with the match

Inherits the client's stable behaviour for free:

- ATOCORE_BASE_URL env var (default http://dalidou:8100)
- fail-open on network errors via ATOCORE_FAIL_OPEN
- consistent JSON output shape
- the same project alias matching the OpenClaw helper uses

Removes the speculative `--capture` path that was in the original draft. Capture/extract/queue/promote/reject are intentionally NOT in the shared client yet (the memory-review workflow has not been exercised in real use), so the slash command can't expose them either.

docs/architecture/llm-client-integration.md
-------------------------------------------

New planning doc that defines the layering rule for AtoCore's relationship with LLM client contexts.

Three layers:

1. AtoCore HTTP API (universal, src/atocore/api/routes.py)
2. Shared operator client (scripts/atocore_client.py) — the canonical Python backbone for stable AtoCore operations
3. Per-agent thin frontends (Claude Code slash command, OpenClaw helper, future Codex skill, future MCP server) that shell out to the shared client

Three non-negotiable rules:

- every per-agent frontend is a thin wrapper (translate the agent's command format and render the JSON; nothing else)
- the shared client never duplicates the API (it composes endpoints; new logic goes in the API first)
- the shared client only exposes stable operations (subcommands land only after the API has been exercised in a real workflow)

Doc covers:

- the full table of subcommands currently in scope (project lifecycle, ingestion, project-state, retrieval, context build, audit-query, debug-context, health/stats)
- the three deferred families with rationale: memory review queue (workflow not exercised), backup admin (fail-open default would hide errors), engineering layer entities (V1 not yet implemented)
- the integration recipe for new agent platforms
- explicit acknowledgement that the OpenClaw helper currently duplicates routing logic and that the refactor to the shared client is a queued cross-repo follow-up
- how the layering connects to phase 8 (OpenClaw) and phase 11 (multi-model)
- versioning and stability rules for the shared client surface
- open follow-ups: OpenClaw refactor, memory-review subcommands when ready, optional backup admin subcommands, engineering entity subcommands during V1 implementation

master-plan-status.md updated
-----------------------------

- New "LLM Client Integration" subsection that points to the layering doc and explicitly notes the deferral of memory-review and engineering-entity subcommands
- Frames the layering as sitting between phase 8 and phase 11

Scope is intentionally narrow per codex's framing: promote the existing client to canonical status, refactor the slash command to use it, document the layering. No new client subcommands added in this commit. The OpenClaw helper refactor is a separate cross-repo follow-up. Memory-review and engineering-entity work stay deferred.

Full suite: 160 passing, no behavior changes.
AtoCore Master Plan Status
Current Position
AtoCore is currently between Phase 7 and Phase 8.
The platform is no longer just a proof of concept. The local engine exists, the core correctness pass is complete, Dalidou hosts the canonical runtime and machine database, and OpenClaw on the T420 can consume AtoCore safely in read-only additive mode.
Phase Status
Completed
- Phase 0 - Foundation
- Phase 0.5 - Proof of Concept
- Phase 1 - Ingestion
Baseline Complete
- Phase 2 - Memory Core
- Phase 3 - Retrieval
- Phase 5 - Project State
- Phase 7 - Context Builder
Partial
- Phase 4 - Identity / Preferences
- Phase 8 - OpenClaw Integration
Baseline Complete
- Phase 9 - Reflection (all three foundation commits landed: A capture, B reinforcement, C candidate extraction + review queue)
Not Yet Complete In The Intended Sense
- Phase 6 - AtoDrive
- Phase 10 - Write-back
- Phase 11 - Multi-model
- Phase 12 - Evaluation
- Phase 13 - Hardening
Engineering Layer Planning Sprint
Status: complete. All 8 sprint architecture docs are drafted (the list below also includes the two docs carried over from the previous planning wave). The engineering layer is now ready for V1 implementation against the active project set.
- engineering-query-catalog.md — the 20 v1-required queries the engineering layer must answer
- memory-vs-entities.md — canonical home split between memory and entity tables
- promotion-rules.md — Layer 0 → Layer 2 pipeline, triggers, review queue mechanics
- conflict-model.md — detection, representation, and resolution of contradictory facts
- tool-handoff-boundaries.md — KB-CAD / KB-FEM one-way mirror stance, ingest endpoints, drift handling
- representation-authority.md — canonical home matrix across PKM / KB / repos / AtoCore for 22 fact kinds
- human-mirror-rules.md — templates, regeneration triggers, edit flow, "do not edit" enforcement
- engineering-v1-acceptance.md — measurable done definition with 23 acceptance criteria
- engineering-knowledge-hybrid-architecture.md — the 5-layer model (from the previous planning wave)
- engineering-ontology-v1.md — the initial V1 object and relationship inventory (previous wave)
The next concrete step is the V1 implementation sprint, which should follow engineering-v1-acceptance.md as its checklist.
LLM Client Integration
A separate but related architectural concern: how AtoCore is reachable from many different LLM client contexts (OpenClaw, Claude Code, future Codex skills, future MCP server). The layering rule is documented in:
- llm-client-integration.md — three-layer shape: HTTP API → shared operator client (scripts/atocore_client.py) → per-agent thin frontends; the shared client is the canonical backbone every new client should shell out to instead of reimplementing HTTP calls
This sits implicitly between Phase 8 (OpenClaw) and Phase 11 (multi-model). Memory-review and engineering-entity commands are deferred from the shared client until their workflows are exercised.
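The fail-open contract the shared client gives every frontend can be illustrated with a short sketch. This is not the real client code; the env var names (ATOCORE_BASE_URL, ATOCORE_FAIL_OPEN) and the default base URL come from this document, while `fetch_json`, the assumed fail-open default, and the fallback JSON shape are illustrative.

```python
import json
import os
import urllib.error
import urllib.request

# Illustrative sketch (not the real client): base URL and failure mode come
# from the environment, and network errors degrade to an empty context pack
# instead of breaking the calling agent. Fail-open-by-default is an
# assumption here.
BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100")
FAIL_OPEN = os.environ.get("ATOCORE_FAIL_OPEN", "1") != "0"


def fetch_json(path: str, timeout: float = 5.0) -> dict:
    try:
        with urllib.request.urlopen(BASE_URL + path, timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError):
        if FAIL_OPEN:
            # a consistent JSON shape even on failure, so frontends can
            # render the result without special-casing errors
            return {"ok": False, "context": "", "sources": []}
        raise
```

The design point is that the fallback has the same shape as a success response, so a thin frontend never needs its own error handling.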
What Is Real Today
- canonical AtoCore runtime on Dalidou
- canonical machine DB and vector store on Dalidou
- project registry with:
- template
- proposal preview
- register
- update
- refresh
- read-only additive OpenClaw helper on the T420
- seeded project corpus for:
- p04-gigabit
- p05-interferometer
- p06-polisher
- conservative Trusted Project State for those active projects
- first operational backup foundation for SQLite + project registry
- implementation-facing architecture notes for future engineering knowledge work
- first organic routing layer in OpenClaw via:
- detect-project
- auto-context
Now
These are the current practical priorities.
- Finish practical OpenClaw integration
- make the helper lifecycle feel natural in daily use
- use the new organic routing layer for project-knowledge questions
- confirm fail-open behavior remains acceptable
- keep AtoCore clearly additive
- Tighten retrieval quality
- reduce cross-project competition
- improve ranking on short or ambiguous prompts
- add only a few anchor docs where retrieval is still weak
- Continue controlled ingestion
- deepen active projects selectively
- avoid noisy bulk corpus growth
- Strengthen operational boringness
- backup and restore procedure
- Chroma rebuild / backup policy
- retention and restore validation
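The SQLite side of the backup-and-restore priority above can lean on the stdlib's online backup API, which copies a live database safely page by page. A minimal sketch, with illustrative paths rather than the real AtoCore layout:

```python
import sqlite3
import tempfile
from pathlib import Path

# Minimal sketch of an online SQLite backup using the stdlib backup API,
# the kind of primitive a boring backup/restore procedure relies on.
def backup_sqlite(src_path: str, dest_path: str) -> None:
    """Copy a live SQLite database safely, even while it is being written."""
    with sqlite3.connect(src_path) as src, sqlite3.connect(dest_path) as dest:
        src.backup(dest)  # page-by-page online copy

# usage: create a toy db, back it up, verify the copy
with tempfile.TemporaryDirectory() as tmp:
    src = str(Path(tmp) / "atocore.db")
    dst = str(Path(tmp) / "atocore.backup.db")
    con = sqlite3.connect(src)
    con.execute("CREATE TABLE projects (slug TEXT)")
    con.execute("INSERT INTO projects VALUES ('p06-polisher')")
    con.commit()
    con.close()
    backup_sqlite(src, dst)
    rows = sqlite3.connect(dst).execute("SELECT slug FROM projects").fetchall()
    print(rows)  # [('p06-polisher',)]
```

Restore validation then reduces to opening the copy and running the same integrity checks as against the canonical database.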
Next
These are the next major layers after the current practical pass.
- Clarify AtoDrive as a real operational truth layer
- Mature identity / preferences handling
- Improve observability for:
- retrieval quality
- context-pack inspection
- comparison of behavior with and without AtoCore
Later
These are the deliberate future expansions already supported by the architecture direction, but not yet ready for immediate implementation.
- Minimal engineering knowledge layer
- driven by docs/architecture/engineering-knowledge-hybrid-architecture.md
- guided by docs/architecture/engineering-ontology-v1.md
- Minimal typed objects and relationships
- Evidence-linking and provenance-rich structured records
- Human mirror generation from structured state
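What "typed objects with evidence-linking and provenance" could look like in miniature is sketched below. Every name here is hypothetical, not the eventual AtoCore schema; the point is only the shape: a structured fact that carries its supporting evidence with it.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a minimal typed engineering record with
# evidence-linked provenance. All class and field names are illustrative.
@dataclass(frozen=True)
class Evidence:
    source_doc: str   # path or id of the ingested source document
    excerpt: str      # the text span the fact was extracted from


@dataclass
class TypedFact:
    subject: str      # e.g. "p06-polisher/spindle"
    predicate: str    # e.g. "rated_speed_rpm"
    value: str
    evidence: list[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A structured record is only trustworthy if it cites evidence."""
        return bool(self.evidence)


fact = TypedFact(
    "p06-polisher/spindle", "rated_speed_rpm", "3000",
    [Evidence("docs/polisher/spec.md", "spindle rated at 3000 rpm")],
)
print(fact.is_supported())  # True
```

Human mirror generation then becomes a pure rendering pass over records like these, which is what keeps the "do not edit" rule enforceable.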
Not Yet
These remain intentionally deferred.
- automatic write-back from OpenClaw into AtoCore
- automatic memory promotion
- reflection loop integration
- replacing OpenClaw's own memory system
- live machine-DB sync between machines
- full ontology / graph expansion before the current baseline is stable
Working Rule
The next sensible implementation threshold for the engineering ontology work is:
- after the current ingestion, retrieval, registry, OpenClaw helper, organic routing, and backup baseline feels boring and dependable
Until then, the architecture docs should shape decisions, not force premature schema work.