# AtoCore
Personal context engine that enriches LLM interactions with durable memory, structured context, and project knowledge.
## Quick Start

```bash
pip install -e .
uvicorn src.atocore.main:app --port 8100
```
## Usage

```bash
# Ingest markdown files
curl -X POST http://localhost:8100/ingest \
  -H "Content-Type: application/json" \
  -d '{"path": "/path/to/notes"}'

# Build enriched context for a prompt
curl -X POST http://localhost:8100/context/build \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the project status?", "project": "myproject"}'

# CLI ingestion
python scripts/ingest_folder.py --path /path/to/notes

# Live operator client
python scripts/atocore_client.py health
python scripts/atocore_client.py audit-query "gigabit" 5
```
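The same `/context/build` call can also be made from standard-library Python, which is handy for scripting against a running instance:

```python
import json
import urllib.request

# Build an enriched context pack for a prompt, mirroring the curl example above.
payload = json.dumps(
    {"prompt": "What is the project status?", "project": "myproject"}
).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8100/context/build",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    context_pack = json.loads(resp.read())
print(context_pack)
```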
## API Endpoints
| Method | Path | Description |
|---|---|---|
| POST | /ingest | Ingest markdown file or folder |
| POST | /query | Retrieve relevant chunks |
| POST | /context/build | Build full context pack |
| GET | /health | Health check |
| GET | /debug/context | Inspect last context pack |
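As a quick smoke test against a running server, the two GET endpoints can be exercised from standard-library Python; this sketch assumes both return JSON bodies:

```python
import json
import urllib.request

# Hit the read-only endpoints and print whatever the server returns.
for path in ("/health", "/debug/context"):
    with urllib.request.urlopen(f"http://localhost:8100{path}") as resp:
        print(path, json.loads(resp.read()))
```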
## Architecture

```
FastAPI (port 8100)
|- Ingestion: markdown -> parse -> chunk -> embed -> store
|- Retrieval: query -> embed -> vector search -> rank
|- Context Builder: retrieve -> boost -> budget -> format
|- SQLite (documents, chunks, memories, projects, interactions)
'- ChromaDB (vector embeddings)
```
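For illustration, the chunking stage of the ingestion pipeline could look roughly like the sketch below, which packs parsed markdown paragraphs into chunks up to a maximum size (mirroring `ATOCORE_CHUNK_MAX_SIZE`). The names and behaviour here are illustrative assumptions, not the actual AtoCore internals:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    text: str

def chunk_markdown(doc_id: str, text: str, max_size: int = 800) -> list[Chunk]:
    """Greedy paragraph-based chunker (sketch only).

    Paragraphs are packed into chunks of at most max_size characters; a single
    paragraph longer than max_size still becomes one oversized chunk.
    """
    chunks: list[Chunk] = []
    current = ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_size:
            chunks.append(Chunk(doc_id, current.strip()))
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(Chunk(doc_id, current.strip()))
    return chunks
```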
## Configuration

Set via environment variables (prefix `ATOCORE_`):
| Variable | Default | Description |
|---|---|---|
| ATOCORE_DEBUG | false | Enable debug logging |
| ATOCORE_PORT | 8100 | Server port |
| ATOCORE_CHUNK_MAX_SIZE | 800 | Max chunk size (chars) |
| ATOCORE_CONTEXT_BUDGET | 3000 | Context pack budget (chars) |
| ATOCORE_EMBEDDING_MODEL | paraphrase-multilingual-MiniLM-L12-v2 | Embedding model |
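A minimal sketch of reading these variables with the defaults above; the actual settings loader inside AtoCore may differ (it may, for example, use a settings library rather than raw `os.getenv`):

```python
import os

# Defaults mirror the table above; values are overridden from the environment if set.
DEBUG = os.getenv("ATOCORE_DEBUG", "false").lower() == "true"
PORT = int(os.getenv("ATOCORE_PORT", "8100"))
CHUNK_MAX_SIZE = int(os.getenv("ATOCORE_CHUNK_MAX_SIZE", "800"))
CONTEXT_BUDGET = int(os.getenv("ATOCORE_CONTEXT_BUDGET", "3000"))
EMBEDDING_MODEL = os.getenv(
    "ATOCORE_EMBEDDING_MODEL", "paraphrase-multilingual-MiniLM-L12-v2"
)
```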
## Testing

```bash
pip install -e ".[dev]"
pytest
```
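An illustrative smoke test for the `/health` endpoint; it assumes the `app` object importable from `src.atocore.main` (as in the Quick Start command) and that the `[dev]` extras pull in what FastAPI's `TestClient` needs:

```python
# tests/test_health.py (illustrative)
from fastapi.testclient import TestClient

from src.atocore.main import app

client = TestClient(app)

def test_health_returns_ok():
    # The exact response schema is not documented here; only the status code is checked.
    response = client.get("/health")
    assert response.status_code == 200
```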
## Operations

`scripts/atocore_client.py` provides a live API client for project refresh, project-state inspection, and retrieval-quality audits. `docs/operations.md` captures the current operational priority order: retrieval quality, Wave 2 trusted-operational ingestion, AtoDrive scoping, and restore validation.
## Architecture Notes

Implementation-facing architecture notes live under `docs/architecture/`. Current additions:

- `docs/architecture/engineering-knowledge-hybrid-architecture.md` — 5-layer hybrid model
- `docs/architecture/engineering-ontology-v1.md` — V1 object and relationship inventory
- `docs/architecture/engineering-query-catalog.md` — 20 v1-required queries
- `docs/architecture/memory-vs-entities.md` — canonical home split
- `docs/architecture/promotion-rules.md` — Layer 0 to Layer 2 pipeline
- `docs/architecture/conflict-model.md` — contradictory facts detection and resolution