271ee25d99e680cce53614ae1dc3f6922c21e779
Adds an "Auto-process queue" button to /admin/triage that lets the
user kick off a full LLM triage pass without SSH. Bridges the gap
between web UI (in container) and claude CLI (host-only).
Architecture:
- UI button POSTs to /admin/triage/request-drain
- Endpoint writes atocore/config/auto_triage_requested_at flag
- Host-side watcher cron (every 2 min) checks for the flag
- When found: clears flag, acquires lock, runs auto_triage.py,
records progress via atocore/status/* entries
- UI polls /admin/triage/drain-status every 10s to show progress,
auto-reloads when done
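A minimal Python sketch of the two web-side endpoints, assuming a FastAPI router and the flag/status paths named above (response shapes and helper layout are illustrative, not the actual implementation):
from datetime import datetime, timezone
from pathlib import Path
from fastapi import APIRouter
FLAG_PATH = Path("atocore/config/auto_triage_requested_at")  # drain request flag
STATUS_DIR = Path("atocore/status")                          # progress entries written by the host run
router = APIRouter(prefix="/admin/triage")
@router.post("/request-drain")
def request_drain():
    # Write the flag; the host-side watcher cron picks it up within ~2 minutes.
    FLAG_PATH.parent.mkdir(parents=True, exist_ok=True)
    FLAG_PATH.write_text(datetime.now(timezone.utc).isoformat())
    return {"requested": True}
@router.get("/drain-status")
def drain_status():
    # Read-only: report whether a run is still pending plus any status entries.
    entries = {}
    if STATUS_DIR.exists():
        entries = {p.name: p.read_text().strip() for p in STATUS_DIR.iterdir() if p.is_file()}
    return {"pending": FLAG_PATH.exists(), "status": entries}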
Safety:
- Lock file prevents concurrent runs on host
- Flag cleared before run so duplicate clicks queue at most one re-run
- Fail-open: watcher errors are only logged and never break anything
- Status endpoint stays read-only
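The real watcher is the shell script installed below; this Python sketch only illustrates the lock / clear-flag / run sequence under assumed paths:
import fcntl
import subprocess
import sys
from pathlib import Path
FLAG = Path("/srv/storage/atocore/config/auto_triage_requested_at")  # assumed flag location
LOCK = Path("/tmp/auto_triage.lock")                                 # assumed lock location
def main() -> None:
    if not FLAG.exists():
        return  # nothing requested, exit quietly
    try:
        with open(LOCK, "w") as lock:
            # Non-blocking exclusive lock: if another run holds it, skip this cycle.
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
            FLAG.unlink(missing_ok=True)  # clear flag first so duplicate clicks queue at most one re-run
            subprocess.run([sys.executable, "auto_triage.py"], check=True)
    except BlockingIOError:
        pass  # lock held: a run is already in progress
    except Exception as exc:
        # Fail-open: log and never break the host cron.
        print(f"auto-triage watcher error: {exc}", file=sys.stderr)
if __name__ == "__main__":
    main()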
Installation on host (one-time):
*/2 * * * * /srv/storage/atocore/app/deploy/dalidou/auto-triage-watcher.sh \
>> /home/papa/atocore-logs/auto-triage-watcher.log 2>&1
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
AtoCore
Personal context engine that enriches LLM interactions with durable memory, structured context, and project knowledge.
Quick Start
pip install -e .
uvicorn src.atocore.main:app --port 8100
Usage
# Ingest markdown files
curl -X POST http://localhost:8100/ingest \
-H "Content-Type: application/json" \
-d '{"path": "/path/to/notes"}'
# Build enriched context for a prompt
curl -X POST http://localhost:8100/context/build \
-H "Content-Type: application/json" \
-d '{"prompt": "What is the project status?", "project": "myproject"}'
# CLI ingestion
python scripts/ingest_folder.py --path /path/to/notes
# Live operator client
python scripts/atocore_client.py health
python scripts/atocore_client.py audit-query "gigabit" 5
API Endpoints
| Method | Path | Description |
|---|---|---|
| POST | /ingest | Ingest markdown file or folder |
| POST | /query | Retrieve relevant chunks |
| POST | /context/build | Build full context pack |
| GET | /health | Health check |
| GET | /debug/context | Inspect last context pack |
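The /query endpoint has no example above; a client-side call might look like the following sketch (field names are assumed by analogy with the /context/build example and may differ from the actual schema):
import requests
resp = requests.post(
    "http://localhost:8100/query",
    json={"prompt": "What is the project status?", "project": "myproject"},
    timeout=10,
)
resp.raise_for_status()
for chunk in resp.json().get("chunks", []):
    print(chunk)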
Architecture
FastAPI (port 8100)
|- Ingestion: markdown -> parse -> chunk -> embed -> store
|- Retrieval: query -> embed -> vector search -> rank
|- Context Builder: retrieve -> boost -> budget -> format
|- SQLite (documents, chunks, memories, projects, interactions)
'- ChromaDB (vector embeddings)
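As a rough illustration of the budget step in the context builder, the pack can be trimmed by keeping ranked chunks until the configured character budget is exhausted (a sketch, not the actual code):
def apply_budget(ranked_chunks: list[str], budget_chars: int = 3000) -> str:
    # Keep the highest-ranked chunks that still fit within the character budget.
    parts: list[str] = []
    used = 0
    for chunk in ranked_chunks:
        if used + len(chunk) > budget_chars:
            break
        parts.append(chunk)
        used += len(chunk)
    return "\n\n".join(parts)
# Example: only the chunks that fit within 3000 characters end up in the pack.
pack = apply_budget(["chunk one ...", "chunk two ...", "chunk three ..."])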
Configuration
Set via environment variables (prefix ATOCORE_):
| Variable | Default | Description |
|---|---|---|
| ATOCORE_DEBUG | false | Enable debug logging |
| ATOCORE_PORT | 8100 | Server port |
| ATOCORE_CHUNK_MAX_SIZE | 800 | Max chunk size (chars) |
| ATOCORE_CONTEXT_BUDGET | 3000 | Context pack budget (chars) |
| ATOCORE_EMBEDDING_MODEL | paraphrase-multilingual-MiniLM-L12-v2 | Embedding model |
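For reference, a common way to load such prefixed variables is pydantic-settings with an env_prefix; whether AtoCore actually uses pydantic-settings is an assumption, and the defaults below simply mirror the table above:
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="ATOCORE_")
    debug: bool = False
    port: int = 8100
    chunk_max_size: int = 800
    context_budget: int = 3000
    embedding_model: str = "paraphrase-multilingual-MiniLM-L12-v2"
settings = Settings()  # e.g. ATOCORE_PORT=9000 overrides the default port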
Testing
pip install -e ".[dev]"
pytest
Operations
scripts/atocore_client.py provides a live API client for project refresh, project-state inspection, and retrieval-quality audits. docs/operations.md captures the current operational priority order: retrieval quality, Wave 2 trusted-operational ingestion, AtoDrive scoping, and restore validation.
Architecture Notes
Implementation-facing architecture notes live under docs/architecture/.
Current additions:
- docs/architecture/engineering-knowledge-hybrid-architecture.md - 5-layer hybrid model
- docs/architecture/engineering-ontology-v1.md - V1 object and relationship inventory
- docs/architecture/engineering-query-catalog.md - 20 v1-required queries
- docs/architecture/memory-vs-entities.md - canonical home split
- docs/architecture/promotion-rules.md - Layer 0 to Layer 2 pipeline
- docs/architecture/conflict-model.md - contradictory facts detection and resolution