78d4e979e5e3092e8e710503bf572ee021b93d23
Codex's review caught that the Claude Code slash command shipped in Session 2 was a parallel reimplementation of routing logic that the existing scripts/atocore_client.py already had. That client was introduced via the codex/port-atocore-ops-client merge and is already a comprehensive operator client (auto-context, detect-project, refresh-project, project-state, audit-query, etc.). The slash command should have been a thin wrapper from the start. This commit fixes the shape without expanding scope.

.claude/commands/atocore-context.md
-----------------------------------

Rewritten as a thin Claude Code-specific frontend that shells out to the shared client:

- explicit project hint -> calls `python scripts/atocore_client.py context-build "<prompt>" "<project>"`
- no explicit hint -> calls `python scripts/atocore_client.py auto-context "<prompt>"`, which runs the client's detect-project routing first and falls through to context-build with the match

Inherits the client's stable behaviour for free:

- ATOCORE_BASE_URL env var (default http://dalidou:8100)
- fail-open on network errors via ATOCORE_FAIL_OPEN
- consistent JSON output shape
- the same project alias matching the OpenClaw helper uses

Removes the speculative `--capture` path that was in the original draft. Capture/extract/queue/promote/reject are intentionally NOT in the shared client yet (the memory-review workflow has not been exercised in real use), so the slash command cannot expose them either.

docs/architecture/llm-client-integration.md
-------------------------------------------

New planning doc that defines the layering rule for AtoCore's relationship with LLM client contexts.

Three layers:

1. AtoCore HTTP API (universal, src/atocore/api/routes.py)
2. Shared operator client (scripts/atocore_client.py) — the canonical Python backbone for stable AtoCore operations
3. Per-agent thin frontends (Claude Code slash command, OpenClaw helper, future Codex skill, future MCP server) that shell out to the shared client

Three non-negotiable rules:

- every per-agent frontend is a thin wrapper (translate the agent's command format and render the JSON; nothing else)
- the shared client never duplicates the API (it composes endpoints; new logic goes in the API first)
- the shared client only exposes stable operations (subcommands land only after the API has been exercised in a real workflow)

Doc covers:

- the full table of subcommands currently in scope (project lifecycle, ingestion, project-state, retrieval, context build, audit-query, debug-context, health/stats)
- the three deferred families with rationale: memory review queue (workflow not exercised), backup admin (fail-open default would hide errors), engineering layer entities (V1 not yet implemented)
- the integration recipe for new agent platforms
- explicit acknowledgement that the OpenClaw helper currently duplicates routing logic and that the refactor to the shared client is a queued cross-repo follow-up
- how the layering connects to phase 8 (OpenClaw) and phase 11 (multi-model)
- versioning and stability rules for the shared client surface
- open follow-ups: OpenClaw refactor, memory-review subcommands when ready, optional backup admin subcommands, engineering entity subcommands during V1 implementation

master-plan-status.md updated
-----------------------------

- New "LLM Client Integration" subsection that points to the layering doc and explicitly notes the deferral of the memory-review and engineering-entity subcommands
- Frames the layering as sitting between phase 8 and phase 11

Scope is intentionally narrow per Codex's framing: promote the existing client to canonical status, refactor the slash command to use it, document the layering. No new client subcommands are added in this commit. The OpenClaw helper refactor is a separate cross-repo follow-up. Memory-review and engineering-entity work stay deferred.

Full suite: 160 passing, no behavior changes.
AtoCore
Personal context engine that enriches LLM interactions with durable memory, structured context, and project knowledge.
Quick Start
pip install -e .
uvicorn src.atocore.main:app --port 8100
Usage
# Ingest markdown files
curl -X POST http://localhost:8100/ingest \
-H "Content-Type: application/json" \
-d '{"path": "/path/to/notes"}'
# Build enriched context for a prompt
curl -X POST http://localhost:8100/context/build \
-H "Content-Type: application/json" \
-d '{"prompt": "What is the project status?", "project": "myproject"}'
# CLI ingestion
python scripts/ingest_folder.py --path /path/to/notes
# Live operator client
python scripts/atocore_client.py health
python scripts/atocore_client.py audit-query "gigabit" 5
API Endpoints
| Method | Path | Description |
|---|---|---|
| POST | /ingest | Ingest markdown file or folder |
| POST | /query | Retrieve relevant chunks |
| POST | /context/build | Build full context pack |
| GET | /health | Health check |
| GET | /debug/context | Inspect last context pack |
Architecture
FastAPI (port 8100)
|- Ingestion: markdown -> parse -> chunk -> embed -> store
|- Retrieval: query -> embed -> vector search -> rank
|- Context Builder: retrieve -> boost -> budget -> format
|- SQLite (documents, chunks, memories, projects, interactions)
'- ChromaDB (vector embeddings)
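As an illustration of the chunk stage in the ingestion pipeline above, a greedy paragraph packer respecting the 800-char default might look like this. It is a minimal sketch of the shape of that stage, not AtoCore's actual chunker.

```python
def chunk_markdown(text: str, max_size: int = 800) -> list[str]:
    """Greedily pack paragraphs into chunks of at most max_size chars.

    Paragraphs longer than max_size are emitted as their own oversized
    chunk rather than split mid-sentence (a simplification).
    """
    chunks: list[str] = []
    current = ""
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        if not current:
            current = para
        elif len(current) + 2 + len(para) <= max_size:
            # +2 accounts for the blank line rejoining the paragraphs
            current += "\n\n" + para
        else:
            chunks.append(current)
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Each resulting chunk would then be embedded and stored, with metadata rows in SQLite and vectors in ChromaDB.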
Configuration
Set via environment variables (prefix ATOCORE_):
| Variable | Default | Description |
|---|---|---|
| ATOCORE_DEBUG | false | Enable debug logging |
| ATOCORE_PORT | 8100 | Server port |
| ATOCORE_CHUNK_MAX_SIZE | 800 | Max chunk size (chars) |
| ATOCORE_CONTEXT_BUDGET | 3000 | Context pack budget (chars) |
| ATOCORE_EMBEDDING_MODEL | paraphrase-multilingual-MiniLM-L12-v2 | Embedding model |
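The contract the table implies can be sketched with the standard library alone. AtoCore's real settings layer may well use a framework such as pydantic; this is only an illustration of the ATOCORE_ prefix convention and the defaults listed above.

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    debug: bool = False
    port: int = 8100
    chunk_max_size: int = 800
    context_budget: int = 3000
    embedding_model: str = "paraphrase-multilingual-MiniLM-L12-v2"

    @classmethod
    def from_env(cls, env=os.environ) -> "Settings":
        """Read ATOCORE_-prefixed variables, falling back to defaults."""
        def get(name, default, cast):
            raw = env.get(f"ATOCORE_{name}")
            return default if raw is None else cast(raw)
        return cls(
            debug=get("DEBUG", cls.debug, lambda v: v.lower() == "true"),
            port=get("PORT", cls.port, int),
            chunk_max_size=get("CHUNK_MAX_SIZE", cls.chunk_max_size, int),
            context_budget=get("CONTEXT_BUDGET", cls.context_budget, int),
            embedding_model=get("EMBEDDING_MODEL", cls.embedding_model, str),
        )
```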
Testing
pip install -e ".[dev]"
pytest
Operations
scripts/atocore_client.py provides a live API client for project refresh, project-state inspection, and retrieval-quality audits. docs/operations.md captures the current operational priority order: retrieval quality, Wave 2 trusted-operational ingestion, AtoDrive scoping, and restore validation.
Architecture Notes
Implementation-facing architecture notes live under docs/architecture/.
Current additions:
docs/architecture/engineering-knowledge-hybrid-architecture.md — 5-layer hybrid model
docs/architecture/engineering-ontology-v1.md — V1 object and relationship inventory
docs/architecture/engineering-query-catalog.md — 20 v1-required queries
docs/architecture/memory-vs-entities.md — canonical home split
docs/architecture/promotion-rules.md — Layer 0 to Layer 2 pipeline
docs/architecture/conflict-model.md — contradictory facts detection and resolution