0dfecb3c14c7297fb45e5560fb3d937f19f766f4
Closes the graduation UX loop: no more SSH required to populate the
entity graph from memories. Click button → host watcher picks up
→ graduation runs → entity candidates appear in the same triage UI.
New API endpoints (src/atocore/api/routes.py):
- POST /admin/graduation/request: takes {project, limit}, writes flag
to project_state. Host watcher picks up within 2 min.
- GET /admin/graduation/status: returns requested/running/last_result
fields for UI polling.
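A hedged sketch of the client side of these two endpoints (base URL and any
request/response field names beyond {project, limit} are assumptions; in
practice the triage UI issues these calls from the browser):

```python
import requests

BASE = "http://localhost:8100"  # assumed local port, see Quick Start below

# Ask the host watcher to graduate up to 30 memories for one project.
resp = requests.post(
    f"{BASE}/admin/graduation/request",
    json={"project": "myproject", "limit": 30},
)
resp.raise_for_status()

# Inspect what the watcher has recorded so far.
print(requests.get(f"{BASE}/admin/graduation/status").json())
```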
Triage UI (src/atocore/engineering/triage_ui.py):
- Graduation bar with:
- 🎓 Graduate memories button
- Project selector populated from registry (or "all projects")
- Limit number input (default 30, max 200)
- Status message area
- Poll every 10s until is_running=false, then auto-reload the page to
show new entity candidates in the Entity section below (poll loop
sketched after this list)
- Graduation bar appears on both populated and empty triage page
states so you can kick off graduation from either
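The poll-and-reload behavior boils down to a loop like this (illustrative
Python rather than the page's own JavaScript; the status field name follows
the bullet above and may differ in the real response):

```python
import time
import requests

BASE = "http://localhost:8100"  # assumed local port

while True:
    status = requests.get(f"{BASE}/admin/graduation/status").json()
    if not status.get("is_running"):
        break          # graduation finished (or was never running)
    time.sleep(10)     # the UI polls every 10s
# ...at this point the page reloads so new entity candidates appear
# in the Entity section.
```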
Host watcher (deploy/dalidou/graduation-watcher.sh):
- Mirrors auto-triage-watcher.sh pattern: poll, lock, clear flag,
run, record result, unlock (same sequence sketched after this list)
- Parses {project, limit} JSON from the flag payload
- Runs graduate_memories.py with those args
- Records graduation_running/started/finished/last_result in project
state for the UI to display
- Lock file prevents concurrent runs
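For readability, the same poll / lock / clear / run / record / unlock
sequence sketched in Python (the real watcher is the bash script above;
the flag path, lock path, and CLI flags here are hypothetical):

```python
import json
import subprocess
import sys
from pathlib import Path

FLAG = Path("/srv/storage/atocore/state/graduation_request.json")  # hypothetical
LOCK = Path("/tmp/graduation-watcher.lock")                        # hypothetical

if LOCK.exists() or not FLAG.exists():
    sys.exit(0)  # a run is already in progress, or nothing was requested

LOCK.touch()
try:
    req = json.loads(FLAG.read_text())  # {"project": ..., "limit": ...}
    FLAG.unlink()                       # clear the flag before running
    subprocess.run(
        ["python", "graduate_memories.py",
         "--project", req["project"], "--limit", str(req["limit"])],
        check=True,
    )
    # ...record graduation_running/started/finished/last_result in
    # project state here so the UI can display it.
finally:
    LOCK.unlink()
```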
Install on host (one-time, via cron):
*/2 * * * * /srv/storage/atocore/app/deploy/dalidou/graduation-watcher.sh \
>> /home/papa/atocore-logs/graduation-watcher.log 2>&1
This completes the Phase 5 self-service loop: queue triage happens
autonomously via the 3-tier escalation (shipped in 3ca1972); entity
graph population happens autonomously via a button click. No shell
required for daily use.
Tests: 366 passing (no new tests — UI + shell are integration-level).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
AtoCore
Personal context engine that enriches LLM interactions with durable memory, structured context, and project knowledge.
Quick Start
pip install -e .
uvicorn src.atocore.main:app --port 8100
Usage
# Ingest markdown files
curl -X POST http://localhost:8100/ingest \
-H "Content-Type: application/json" \
-d '{"path": "/path/to/notes"}'
# Build enriched context for a prompt
curl -X POST http://localhost:8100/context/build \
-H "Content-Type: application/json" \
-d '{"prompt": "What is the project status?", "project": "myproject"}'
# CLI ingestion
python scripts/ingest_folder.py --path /path/to/notes
# Live operator client
python scripts/atocore_client.py health
python scripts/atocore_client.py audit-query "gigabit" 5
API Endpoints
| Method | Path | Description |
|---|---|---|
| POST | /ingest | Ingest markdown file or folder |
| POST | /query | Retrieve relevant chunks |
| POST | /context/build | Build full context pack |
| GET | /health | Health check |
| GET | /debug/context | Inspect last context pack |
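Usage above shows /ingest and /context/build; a /query call looks roughly
like this (the request body fields here are assumptions, check the route
definition for the actual schema):

```python
import requests

resp = requests.post(
    "http://localhost:8100/query",
    # hypothetical field names; mirror whatever the /query route expects
    json={"prompt": "What is the project status?", "project": "myproject"},
)
print(resp.json())
```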
Architecture
FastAPI (port 8100)
|- Ingestion: markdown -> parse -> chunk -> embed -> store
|- Retrieval: query -> embed -> vector search -> rank
|- Context Builder: retrieve -> boost -> budget -> format
|- SQLite (documents, chunks, memories, projects, interactions)
'- ChromaDB (vector embeddings)
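As an illustration of the last stage, the budget step amounts to keeping
ranked chunks, in order, until the configured character budget is spent
(a sketch, not the actual implementation):

```python
def apply_budget(chunks: list[str], budget: int = 3000) -> list[str]:
    """Keep ranked chunks, in order, until the character budget runs out."""
    kept, used = [], 0
    for chunk in chunks:
        if used + len(chunk) > budget:
            break
        kept.append(chunk)
        used += len(chunk)
    return kept
```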
Configuration
Set via environment variables (prefix ATOCORE_):
| Variable | Default | Description |
|---|---|---|
| ATOCORE_DEBUG | false | Enable debug logging |
| ATOCORE_PORT | 8100 | Server port |
| ATOCORE_CHUNK_MAX_SIZE | 800 | Max chunk size (chars) |
| ATOCORE_CONTEXT_BUDGET | 3000 | Context pack budget (chars) |
| ATOCORE_EMBEDDING_MODEL | paraphrase-multilingual-MiniLM-L12-v2 | Embedding model |
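These are plain environment variables, so reading them reduces to something
like the following (the app itself may use a settings library; this only
illustrates the prefix convention and the defaults in the table above):

```python
import os

# Defaults mirror the configuration table.
debug = os.environ.get("ATOCORE_DEBUG", "false").lower() == "true"
port = int(os.environ.get("ATOCORE_PORT", "8100"))
chunk_max_size = int(os.environ.get("ATOCORE_CHUNK_MAX_SIZE", "800"))
context_budget = int(os.environ.get("ATOCORE_CONTEXT_BUDGET", "3000"))
embedding_model = os.environ.get(
    "ATOCORE_EMBEDDING_MODEL", "paraphrase-multilingual-MiniLM-L12-v2"
)
```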
Testing
pip install -e ".[dev]"
pytest
Operations
scripts/atocore_client.py provides a live API client for project refresh, project-state inspection, and retrieval-quality audits. docs/operations.md captures the current operational priority order: retrieval quality, Wave 2 trusted-operational ingestion, AtoDrive scoping, and restore validation.
Architecture Notes
Implementation-facing architecture notes live under docs/architecture/.
Current additions:
- docs/architecture/engineering-knowledge-hybrid-architecture.md — 5-layer hybrid model
- docs/architecture/engineering-ontology-v1.md — V1 object and relationship inventory
- docs/architecture/engineering-query-catalog.md — 20 v1-required queries
- docs/architecture/memory-vs-entities.md — canonical home split
- docs/architecture/promotion-rules.md — Layer 0 to Layer 2 pipeline
- docs/architecture/conflict-model.md — contradictory facts detection and resolution