# AtoCore
Personal context engine that enriches LLM interactions with durable memory, structured context, and project knowledge.
## Quick Start
```bash
# Install in editable mode, then start the API server
pip install -e .
uvicorn src.atocore.main:app --port 8100
```
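Once the server is running, the health endpoint (see API Endpoints below) gives a quick liveness check:
```bash
# Returns a success response if the server is up
curl http://localhost:8100/health
```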
## Usage
```bash
# Ingest markdown files
curl -X POST http://localhost:8100/ingest \
  -H "Content-Type: application/json" \
  -d '{"path": "/path/to/notes"}'

# Build enriched context for a prompt
curl -X POST http://localhost:8100/context/build \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the project status?", "project": "myproject"}'

# CLI ingestion
python scripts/ingest_folder.py --path /path/to/notes
```
## API Endpoints
| Method | Path | Description |
|--------|------|-------------|
| POST | /ingest | Ingest markdown file or folder |
| POST | /query | Retrieve relevant chunks |
| POST | /context/build | Build full context pack |
| GET | /health | Health check |
| GET | /debug/context | Inspect last context pack |
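The `/query` endpoint returns raw retrieval results without the context-pack formatting. A minimal sketch; the field names used here (`query`, `top_k`) are assumptions, since the request schema isn't documented in this README:
```bash
# Retrieve the top-ranked chunks for a search string (field names are illustrative)
curl -X POST http://localhost:8100/query \
  -H "Content-Type: application/json" \
  -d '{"query": "deployment checklist", "top_k": 5}'
```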
## Architecture
```text
FastAPI (port 8100)
|- Ingestion: markdown -> parse -> chunk -> embed -> store
|- Retrieval: query -> embed -> vector search -> rank
|- Context Builder: retrieve -> boost -> budget -> format
|- SQLite (documents, chunks, memories, projects, interactions)
'- ChromaDB (vector embeddings)
```
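To see what the Context Builder produced for the most recent request, the `/debug/context` endpoint from the table above returns the last pack it assembled:
```bash
# Inspect the last context pack (output of retrieve -> boost -> budget -> format)
curl http://localhost:8100/debug/context
```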
## Configuration
Set via environment variables (prefix `ATOCORE_`):
| Variable | Default | Description |
|----------|---------|-------------|
| ATOCORE_DEBUG | false | Enable debug logging |
| ATOCORE_PORT | 8100 | Server port |
| ATOCORE_CHUNK_MAX_SIZE | 800 | Max chunk size (chars) |
| ATOCORE_CONTEXT_BUDGET | 3000 | Context pack budget (chars) |
| ATOCORE_EMBEDDING_MODEL | paraphrase-multilingual-MiniLM-L12-v2 | Embedding model |
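For example, to enable debug logging and raise the context budget for a single run (assuming settings are read once at process startup):
```bash
# Override defaults before launching; values apply for this shell session
export ATOCORE_DEBUG=true
export ATOCORE_CONTEXT_BUDGET=5000
uvicorn src.atocore.main:app --port 8100
```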
## Testing
```bash
pip install -e ".[dev]"
pytest
```
## Architecture Notes
Implementation-facing architecture notes live under `docs/architecture/`.
Current additions:
- `docs/architecture/engineering-knowledge-hybrid-architecture.md`
- `docs/architecture/engineering-ontology-v1.md`