# AtoCore

Personal context engine that enriches LLM interactions with durable memory, structured context, and project knowledge.
## Quick Start

```bash
pip install -e .
uvicorn src.atocore.main:app --port 8100
```
## Usage

```bash
# Ingest markdown files
curl -X POST http://localhost:8100/ingest \
  -H "Content-Type: application/json" \
  -d '{"path": "/path/to/notes"}'

# Build enriched context for a prompt
curl -X POST http://localhost:8100/context/build \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the project status?", "project": "myproject"}'

# CLI ingestion
python scripts/ingest_folder.py --path /path/to/notes
```
## API Endpoints

| Method | Path | Description |
|---|---|---|
| POST | /ingest | Ingest markdown file or folder |
| POST | /query | Retrieve relevant chunks |
| POST | /context/build | Build full context pack |
| GET | /health | Health check |
| GET | /debug/context | Inspect last context pack |
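For scripted access, the endpoints above can also be called from Python with the standard library. A minimal sketch, mirroring the `curl` example; anything beyond the request shapes shown above (e.g. the response body) is an assumption:

```python
import json
import urllib.request

# Build a POST request against /context/build (assumes the server from
# Quick Start is running on port 8100).
payload = {"prompt": "What is the project status?", "project": "myproject"}
req = urllib.request.Request(
    "http://localhost:8100/context/build",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```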
## Architecture

```
FastAPI (port 8100)
|- Ingestion: markdown -> parse -> chunk -> embed -> store
|- Retrieval: query -> embed -> vector search -> rank
|- Context Builder: retrieve -> boost -> budget -> format
|- SQLite (documents, chunks, memories, projects, interactions)
'- ChromaDB (vector embeddings)
```
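To illustrate the chunking step of the ingestion pipeline, here is a hedged sketch (the helper name and boundary strategy are hypothetical, not the actual AtoCore implementation): split markdown into chunks no longer than `ATOCORE_CHUNK_MAX_SIZE` characters, preferring paragraph boundaries.

```python
def chunk_markdown(text: str, max_size: int = 800) -> list[str]:
    """Greedily pack paragraphs into chunks of at most max_size chars.

    Hypothetical sketch: a single paragraph longer than max_size is kept
    as one oversized chunk rather than split mid-sentence.
    """
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        # Start a new chunk if adding this paragraph would exceed the budget.
        if current and len(current) + 2 + len(para) > max_size:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```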
## Configuration

Set via environment variables (prefix `ATOCORE_`):
| Variable | Default | Description |
|---|---|---|
| ATOCORE_DEBUG | false | Enable debug logging |
| ATOCORE_PORT | 8100 | Server port |
| ATOCORE_CHUNK_MAX_SIZE | 800 | Max chunk size (chars) |
| ATOCORE_CONTEXT_BUDGET | 3000 | Context pack budget (chars) |
| ATOCORE_EMBEDDING_MODEL | paraphrase-multilingual-MiniLM-L12-v2 | Embedding model |
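The prefixed-variable scheme above can be sketched as follows (assumed semantics only; the real app may use a settings library such as pydantic, and values here are kept as strings):

```python
import os

def load_settings(environ=os.environ) -> dict:
    """Read ATOCORE_-prefixed settings, falling back to the documented
    defaults. Hypothetical helper for illustration."""
    defaults = {
        "DEBUG": "false",
        "PORT": "8100",
        "CHUNK_MAX_SIZE": "800",
        "CONTEXT_BUDGET": "3000",
        "EMBEDDING_MODEL": "paraphrase-multilingual-MiniLM-L12-v2",
    }
    return {key: environ.get(f"ATOCORE_{key}", default)
            for key, default in defaults.items()}
```

For example, exporting `ATOCORE_PORT=9000` before launch would override only the port while the other defaults stand.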
## Testing

```bash
pip install -e ".[dev]"
pytest
```
## Architecture Notes

Implementation-facing architecture notes live under `docs/architecture/`.

Current additions:

- docs/architecture/engineering-knowledge-hybrid-architecture.md
- docs/architecture/engineering-ontology-v1.md