# AtoCore

Personal context engine that enriches LLM interactions with durable memory, structured context, and project knowledge.

## Quick Start

```bash
pip install -e .
uvicorn src.atocore.main:app --port 8100
```

## Usage

```bash
# Ingest markdown files
curl -X POST http://localhost:8100/ingest \
  -H "Content-Type: application/json" \
  -d '{"path": "/path/to/notes"}'

# Build enriched context for a prompt
curl -X POST http://localhost:8100/context/build \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the project status?", "project": "myproject"}'

# CLI ingestion
python scripts/ingest_folder.py --path /path/to/notes

# Live operator client
python scripts/atocore_client.py health
python scripts/atocore_client.py audit-query "gigabit" 5
```

## API Endpoints

| Method | Path | Description |
|--------|------|-------------|
| POST | /ingest | Ingest markdown file or folder |
| POST | /query | Retrieve relevant chunks |
| POST | /context/build | Build full context pack |
| GET | /health | Health check |
| GET | /debug/context | Inspect last context pack |

## Architecture

```text
FastAPI (port 8100)
|- Ingestion: markdown -> parse -> chunk -> embed -> store
|- Retrieval: query -> embed -> vector search -> rank
|- Context Builder: retrieve -> boost -> budget -> format
|- SQLite (documents, chunks, memories, projects, interactions)
'- ChromaDB (vector embeddings)
```

## Configuration

Set via environment variables (prefix `ATOCORE_`):

| Variable | Default | Description |
|----------|---------|-------------|
| ATOCORE_DEBUG | false | Enable debug logging |
| ATOCORE_PORT | 8100 | Server port |
| ATOCORE_CHUNK_MAX_SIZE | 800 | Max chunk size (chars) |
| ATOCORE_CONTEXT_BUDGET | 3000 | Context pack budget (chars) |
| ATOCORE_EMBEDDING_MODEL | paraphrase-multilingual-MiniLM-L12-v2 | Embedding model |

## Testing

```bash
pip install -e ".[dev]"
pytest
```

## Operations

- `scripts/atocore_client.py` provides a live API client for project refresh, project-state
inspection, and retrieval-quality audits.
- `docs/operations.md` captures the current operational priority order: retrieval quality, Wave 2 trusted-operational ingestion, AtoDrive scoping, and restore validation.

## Architecture Notes

Implementation-facing architecture notes live under `docs/architecture/`. Current additions:

- `docs/architecture/engineering-knowledge-hybrid-architecture.md` — 5-layer hybrid model
- `docs/architecture/engineering-ontology-v1.md` — V1 object and relationship inventory
- `docs/architecture/engineering-query-catalog.md` — 20 v1-required queries
- `docs/architecture/memory-vs-entities.md` — canonical home split
- `docs/architecture/promotion-rules.md` — Layer 0 to Layer 2 pipeline
- `docs/architecture/conflict-model.md` — contradictory facts detection and resolution
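## Examples

The `retrieve -> boost -> budget -> format` pipeline in the Architecture section can be illustrated with a minimal sketch of the budgeting step. This is a hypothetical illustration, not AtoCore's actual implementation: the function name, the `(score, text)` chunk shape, and the greedy skip-on-overflow policy are all assumptions; only the 3000-character default mirrors `ATOCORE_CONTEXT_BUDGET`.

```python
# Hypothetical sketch of the context-builder budgeting step.
# Names and data shapes are illustrative; AtoCore's real code may differ.

def build_context_pack(chunks, budget=3000):
    """Greedily pack ranked chunks until the character budget is spent.

    chunks: list of (score, text) tuples.
    budget: max total characters (mirrors ATOCORE_CONTEXT_BUDGET's default).
    """
    parts, used = [], 0
    # Highest-scoring chunks are considered first.
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        if used + len(text) > budget:
            continue  # skip any chunk that would overflow the budget
        parts.append(text)
        used += len(text)
    return "\n\n".join(parts)

chunks = [(0.92, "A" * 1500), (0.85, "B" * 1200), (0.40, "C" * 900)]
pack = build_context_pack(chunks, budget=3000)
# The first two chunks use 2700 chars; the third (900 chars) would
# exceed the 3000-char budget, so it is skipped.
```

Under this greedy policy a lower-scoring but smaller chunk could still fit after a larger one is skipped; whether AtoCore does that is not documented here.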
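The `curl` call to `/context/build` shown in Usage can also be made from Python using only the standard library. The request fields (`prompt`, `project`) come from this README; the response schema is not documented here, so the sketch below returns the parsed JSON as-is, and callers should inspect it before relying on specific fields.

```python
# Programmatic equivalent of the /context/build curl example,
# using only the standard library. The response schema is an
# assumption; inspect the returned dict before depending on it.
import json
from urllib import request

def context_build_payload(prompt: str, project: str) -> bytes:
    """Encode the documented /context/build request body."""
    return json.dumps({"prompt": prompt, "project": project}).encode("utf-8")

def build_context(prompt: str, project: str,
                  base_url: str = "http://localhost:8100") -> dict:
    """POST to /context/build on a running AtoCore server."""
    req = request.Request(
        f"{base_url}/context/build",
        data=context_build_payload(prompt, project),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Raises urllib.error.URLError if the server from Quick Start is not up.
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Usage mirrors the curl example: `build_context("What is the project status?", "myproject")` against a server started as in Quick Start.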