# AtoCore

Personal context engine that enriches LLM interactions with durable memory, structured context, and project knowledge.

## Quick Start

```bash
pip install -e .
uvicorn src.atocore.main:app --port 8100
```

## Usage

```bash
# Ingest markdown files
curl -X POST http://localhost:8100/ingest \
  -H "Content-Type: application/json" \
  -d '{"path": "/path/to/notes"}'

# Build enriched context for a prompt
curl -X POST http://localhost:8100/context/build \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the project status?", "project": "myproject"}'

# CLI ingestion
python scripts/ingest_folder.py --path /path/to/notes

# Live operator client
python scripts/atocore_client.py health
python scripts/atocore_client.py audit-query "gigabit" 5
```

## API Endpoints

| Method | Path | Description |
|--------|------|-------------|
| POST | /ingest | Ingest a markdown file or folder |
| POST | /query | Retrieve relevant chunks |
| POST | /context/build | Build a full context pack |
| GET | /health | Health check |
| GET | /debug/context | Inspect the last context pack |

## Architecture

```
FastAPI (port 8100)
├── Ingestion: markdown → parse → chunk → embed → store
├── Retrieval: query → embed → vector search → rank
├── Context Builder: retrieve → boost → budget → format
├── SQLite (documents, chunks, memories, projects, interactions)
└── ChromaDB (vector embeddings)
```

## Configuration

Set via environment variables (prefix `ATOCORE_`):

| Variable | Default | Description |
|----------|---------|-------------|
| ATOCORE_DEBUG | false | Enable debug logging |
| ATOCORE_PORT | 8100 | Server port |
| ATOCORE_CHUNK_MAX_SIZE | 800 | Max chunk size (chars) |
| ATOCORE_CONTEXT_BUDGET | 3000 | Context pack budget (chars) |
| ATOCORE_EMBEDDING_MODEL | paraphrase-multilingual-MiniLM-L12-v2 | Embedding model |

## Testing

```bash
pip install -e ".[dev]"
pytest
```

## Operations

- `scripts/atocore_client.py` provides a live API client for project refresh, project-state inspection, and retrieval-quality audits.
- `docs/operations.md` captures the current operational priority order: retrieval quality, Wave 2 trusted-operational ingestion, AtoDrive scoping, and restore validation.
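The `ATOCORE_`-prefixed configuration above can be exercised without the server. The sketch below shows one way such prefixed variables might be read into typed settings; the `load_settings` helper and its defaults mirror the configuration table but are illustrative, not AtoCore's actual settings loader.

```python
import os

# Defaults taken from the configuration table; the caster turns the raw
# environment string into the expected Python type.
DEFAULTS = {
    "DEBUG": ("false", lambda v: v.lower() in ("1", "true", "yes")),
    "PORT": ("8100", int),
    "CHUNK_MAX_SIZE": ("800", int),
    "CONTEXT_BUDGET": ("3000", int),
    "EMBEDDING_MODEL": ("paraphrase-multilingual-MiniLM-L12-v2", str),
}

def load_settings(env=os.environ, prefix="ATOCORE_"):
    """Read each setting from prefixed env vars, falling back to defaults.

    Hypothetical helper for illustration only.
    """
    settings = {}
    for key, (default, cast) in DEFAULTS.items():
        raw = env.get(prefix + key, default)
        settings[key.lower()] = cast(raw)
    return settings
```

For example, `ATOCORE_PORT=9000 ATOCORE_DEBUG=true` would yield `port=9000` and `debug=True`, while an empty environment falls back to the documented defaults.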