Compare commits
codex/open...86637f8eee (55 commits)
| SHA1 |
|---|
| 86637f8eee |
| c49363fccc |
| 33a6c61ca6 |
| 33a106732f |
| 3011aa77da |
| ba36a28453 |
| 999788b790 |
| 775960c8c8 |
| b687e7fa6f |
| 4d4d5f437a |
| 5b114baa87 |
| c2e7064238 |
| dc9fdd3a38 |
| 58ea21df80 |
| 8c0f1ff6f3 |
| 3db1dd99b5 |
| 57b64523fb |
| a13ea3b9d1 |
| 3f23ca1bc6 |
| c1f5b3bdee |
| 761c483474 |
| c57617f611 |
| 3f18ba3b35 |
| 8527c369ee |
| bd3dc50100 |
| 700e3ca2c2 |
| ccc49d3a8f |
| 3e0a357441 |
| dc20033a93 |
| b86181eb6c |
| 9118f824fa |
| db89978871 |
| 4ac4e5cc44 |
| a6ae6166a4 |
| 4f8bec7419 |
| 52380a233e |
| 8b77e83f0a |
| dbb8f915e2 |
| e5e9a9931e |
| 144dbbd700 |
| 7650c339a2 |
| 69c971708a |
| 8951c624fe |
| 1a2ee5e07f |
| 9b149d4bfd |
| abc8af5f7e |
| ac7f77d86d |
| 719ff649a8 |
| 8af8af90d0 |
| cd0fd390a8 |
| c67bec095c |
| bcb7675a0d |
| 54d84b52cb |
| b790e7eb30 |
| e2895b5d2b |
.gitignore (vendored, 1 addition)
@@ -6,6 +6,7 @@ __pycache__/
dist/
build/
.pytest_cache/
.mypy_cache/
htmlcov/
.coverage
venv/
DEV-LEDGER.md

@@ -6,14 +6,23 @@

## Orientation

-- **live_sha** (Dalidou `/health` build_sha): `39d73e9`
-- **last_updated**: 2026-04-12 by Claude (Wave 2 ingestion + R6 fix deployed)
-- **main_tip**: `39d73e9`
-- **test_count**: 280 passing
-- **harness**: `16/18 PASS` (p06-firmware-interface = R7 ranking tie; p06-tailscale = chunk bleed)
-- **active_memories**: 36 (p06-polisher 16, p05-interferometer 6, p04-gigabit 5, atocore 5, other 4)
-- **project_state_entries**: p04=6, p05=7, p06=7 (Wave 2 added 8 new entries)
-- **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron env `ATOCORE_BACKUP_RSYNC`, verified
+- **live_sha** (Dalidou `/health` build_sha): `775960c` (verified 2026-04-16 via /health, build_time 2026-04-16T17:59:30Z)
+- **last_updated**: 2026-04-16 by Claude ("Make It Actually Useful" sprint — observability + Phase 10)
+- **main_tip**: `999788b`
+- **test_count**: 303 (4 new Phase 10 tests)
+- **harness**: `17/18 PASS` on live Dalidou (p04-constraints expects "Zerodur" — retrieval content gap, not regression)
+- **vectors**: 33,253
+- **active_memories**: 84 (31 project, 23 knowledge, 10 episodic, 8 adaptation, 7 preference, 5 identity)
+- **candidate_memories**: 2
+- **interactions**: 234 total (192 claude-code, 38 openclaw, 4 test)
+- **registered_projects**: atocore, p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, abb-space (aliased p08)
+- **project_state_entries**: 110 total (atocore=47, p06=19, p05=18, p04=15, abb=6, atomizer=5)
+- **entities**: 35 (engineering knowledge graph, Layer 2)
+- **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron, verified
+- **nightly_pipeline**: backup → cleanup → rsync → OpenClaw import → vault refresh → extract → auto-triage → **auto-promote/expire (NEW)** → weekly synth/lint Sundays → **retrieval harness (NEW)** → **pipeline summary (NEW)**
+- **capture_clients**: claude-code (Stop hook + cwd project inference), openclaw (before_agent_start + llm_output plugin, verified live)
+- **wiki**: http://dalidou:8100/wiki (browse), /wiki/projects/{id}, /wiki/entities/{id}, /wiki/search
+- **dashboard**: http://dalidou:8100/admin/dashboard (now shows pipeline health, interaction totals by client, all registered projects)

## Active Plan

@@ -121,14 +130,19 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha

| id | finder | severity | file:line | summary | status | owner | opened_at | resolved_by |
|-----|--------|----------|------------------------------------|-------------------------------------------------------------------------|--------------|--------|------------|-------------|
-| R1 | Codex | P1 | deploy/hooks/capture_stop.py:76-85 | Live Claude capture still omits `extract`, so "loop closed both sides" remains overstated in practice even though the API supports it | acknowledged | Claude | 2026-04-11 | |
+| R1 | Codex | P1 | deploy/hooks/capture_stop.py:76-85 | Live Claude capture still omits `extract`, so "loop closed both sides" remains overstated in practice even though the API supports it | fixed | Claude | 2026-04-11 | c67bec0 |
| R2 | Codex | P1 | src/atocore/context/builder.py | Project memories excluded from pack | fixed | Claude | 2026-04-11 | 8ea53f4 |
-| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | open | Claude | 2026-04-11 | |
+| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | declined | Claude | 2026-04-11 | see 2026-04-14 session log |
| R4 | Codex | P2 | DEV-LEDGER.md:11 | Orientation `main_tip` was stale versus `HEAD` / `origin/main` | fixed | Codex | 2026-04-11 | 81307ce |
-| R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | acknowledged | Claude | 2026-04-12 | |
-| R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | fixed | Claude | 2026-04-12 | this commit |
-| R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | open | Claude | 2026-04-12 | |
-| R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | open | Claude | 2026-04-12 | |
+| R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | fixed | Claude | 2026-04-12 | c67bec0 |
+| R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | fixed | Claude | 2026-04-12 | 39d73e9 |
+| R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | fixed | Claude | 2026-04-12 | 8951c62 |
+| R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | fixed | Claude | 2026-04-12 | 69c9717 |
+| R9 | Codex | P2 | src/atocore/memory/extractor_llm.py:258-259 | The R6 fallback only repairs empty project output. A wrong non-empty model project still overrides the interaction's known scope, so project attribution is improved but not yet trust-preserving. | fixed | Claude | 2026-04-12 | e5e9a99 |
+| R10 | Codex | P2 | docs/master-plan-status.md:31-33 | "Phase 8 - OpenClaw Integration" is fair as a baseline milestone, but not as a "primary" integration claim. `t420-openclaw/atocore.py` currently covers a narrow read-oriented subset (13 request shapes vs 32 API routes) plus fail-open health, while memory/interactions/admin write paths remain out of surface. | fixed | Claude | 2026-04-12 | (pending) |
+| R11 | Codex | P2 | src/atocore/api/routes.py:773-845 | `POST /admin/extract-batch` still accepts `mode="llm"` inside the container and returns a successful 0-candidate result instead of surfacing that host-only LLM extraction is unavailable from this runtime. That is a misleading API contract for operators. | fixed | Claude | 2026-04-12 | (pending) |
+| R12 | Codex | P2 | scripts/batch_llm_extract_live.py:39-190 | The host-side extractor duplicates the LLM system prompt and JSON parsing logic from `src/atocore/memory/extractor_llm.py`. It works today, but this is now a prompt/parser drift risk across the container and host implementations. | fixed | Claude | 2026-04-12 | (pending) |
+| R13 | Codex | P2 | DEV-LEDGER.md:12 | The new `286 passing` test-count claim is not reproducibly auditable from the current audit environments: neither Dalidou nor the clean worktree has `pytest` available. The claim may be true in Claude's dev shell, but it remains unverified in this audit. | fixed | Claude | 2026-04-12 | (pending) |

## Recent Decisions

@@ -146,10 +160,28 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha

## Session Log

- **2026-04-23 Codex** Phase 2 policy/doc cleanup for the OpenClaw x AtoCore operating model. Normalized the 5 Phase 1 docs to clean ASCII, removed Screenpipe from V1 active scope, added `docs/openclaw-atocore-v1-proof-runbook.md`, and added a non-applied shared-client consolidation preview at `docs/openclaw-atocore-shared-client-consolidation-preview.md`. Also updated OpenClaw governance text in `/home/papa/clawd/AGENTS.md` and `/home/papa/clawd/skills/atocore-context/SKILL.md` so Discord-originated AtoCore actions are read-only by default and mutating actions require explicit current-thread/session approval. No code/runtime/schema changes, no deploy, no tests run.

- **2026-04-23 Codex** Phase 1 OpenClaw × AtoCore operating-model audit/design/doc pass only. Read AGENTS/CLAUDE/DEV-LEDGER plus requested integration docs, verified OpenClaw helper surface vs shared operator client, confirmed live fail-open read path, confirmed discrawl presence, and confirmed Screenpipe was not installed locally. Wrote 5 new docs: `docs/openclaw-atocore-audit-note.md`, `docs/openclaw-atocore-v1-architecture.md`, `docs/openclaw-atocore-write-policy-matrix.md`, `docs/openclaw-atocore-promotion-pipeline.md`, `docs/openclaw-atocore-nightly-screener-runbook.md`. No code/runtime/skill changes, no deploy, no tests run.

- **2026-04-16 Claude** `b687e7f..999788b` **"Make It Actually Useful" sprint.** Two-part session: ops fixes then consolidation sprint.

  **Part 1 — Ops fixes:** Deployed `b687e7f` (project inference from cwd). Fixed cron logging (was `/dev/null` — redirected to `~/atocore-logs/`). Fixed OpenClaw gateway crash-loop (`discord.replyToMode: "any"` invalid → `"all"`). Deployed `atocore-capture` plugin on T420 OpenClaw using `before_agent_start` + `llm_output` hooks — verified end-to-end: 38 `client=openclaw` interactions captured. Backfilled project tags on 179/181 unscoped interactions (165 atocore, 8 p06, 6 p04).

  **Part 2 — Sprint (Phase A+C):** Pipeline observability: retrieval harness now runs nightly (Step E), pipeline summary persisted to project state (Step F), dashboard enhanced with interaction totals by client + pipeline health section + dynamic project list. Phase 10 landed: `auto_promote_reinforced()` (candidate→active when reference_count≥3, confidence≥0.7) + `expire_stale_candidates()` (14-day unreinforced→auto-reject), both wired into nightly cron Step B2. Seeding script created (26 entries across 6 projects — all already existed from prior session). Tests 299→303. Harness 17/18 on live Dalidou (p04-constraints expects "Zerodur" — retrieval content gap, not regression). Deployed `775960c`.

- **2026-04-15 Claude (pm)** Closed the last harness failure honestly. **p06-tailscale fixed: 18/18 PASS.** Root-caused: not a retrieval bug — the p06 `ARCHITECTURE.md` Overview chunk legitimately mentions "the GigaBIT M1 telescope mirror" because the Polisher Suite is built *for* that mirror. All four retrieved sources for the tailscale prompt were genuinely p06/shared paths; zero actual p04 chunks leaked. The fixture's `expect_absent: GigaBIT` was catching semantic overlap, not retrieval bleed. Narrowed it to `expect_absent: "[Source: p04-gigabit/"` — a source-path check that tests the real invariant (no p04 source chunks in p06 context). Other p06 fixtures still use the word-blacklist form; they pass today because their more-specific prompts don't pull the ARCHITECTURE.md Overview, so I left them alone rather than churn fixtures that aren't failing. Did NOT change retrieval/ranking — no code change, fixture-only fix. Tests unchanged at 299.

- **2026-04-15 Claude** Deploy + doc debt sweep. Deployed `c2e7064` to Dalidou (build_time 2026-04-15T15:08:51Z, build_sha matches, /health ok) so R11/R12 are now live, not just on main. **R11 verified on live**: `POST /admin/extract-batch {"mode":"llm"}` against http://127.0.0.1:8100 returns HTTP 503 with the operator-facing "claude CLI not on PATH, run host-side script or use mode=rule" message — exactly the post-fix contract. **R13 closed (fixed)**: added a reproduction recipe to Quick Commands (`pip install -r requirements-dev.txt && pytest --collect-only -q && pytest -q`) and re-cited `test_count: 299` against a fresh local collection on 2026-04-15, so the claim is now auditable from any clean checkout — Codex's audit worktree just needs `pip install -r requirements-dev.txt`. **R10 closed (fixed)**: rewrote the `docs/master-plan-status.md` OpenClaw section to explicitly disclaim "primary integration" and report the current narrow surface: 14 client request shapes against ~44 server routes, predominantly read + `/project/state` + `/ingest/sources`, with memory/interactions/admin/entities/triage/extraction writes correctly out of scope. Open findings now: none blocking. Next natural move: the last harness failure `p06-tailscale` (chunk bleed).

- **2026-04-14 Claude (pm)** Closed R11+R12, declined R3. **R11 (fixed):** `POST /admin/extract-batch` with `mode="llm"` now returns 503 when the `claude` CLI is not on PATH, with a message pointing at the host-side script. Previously it silently returned a success-0 payload, masking host-vs-container truth. 2 new tests in `test_extraction_pipeline.py` cover the 503 path and the rule-mode-still-works path. **R12 (fixed):** extracted shared `SYSTEM_PROMPT` + `parse_llm_json_array` + `normalize_candidate_item` + `build_user_message` into stdlib-only `src/atocore/memory/_llm_prompt.py`. Both `src/atocore/memory/extractor_llm.py` (container) and `scripts/batch_llm_extract_live.py` (host) now import from it. The host script uses `sys.path` to reach the stdlib-only module without needing the full atocore package. Project-attribution policy stays path-specific (container uses registry-check; host defers to server). **R3 (declined):** rule cues not firing on conversational LLM text is by design now — the LLM extractor (llm-0.4.0) is the production path for conversational content as of the Day 4 gate (2026-04-12). Expanding rules to match conversational prose risks the FP blowup Day 2 already showed. Rule extractor stays narrow for structural PKM text. Tests 297 → 299. Live `/health` still `58ea21d`; this session's changes need deploy.

- **2026-04-14 Claude** MAJOR session: Engineering knowledge layer V1 (Layer 2) built — entity + relationship tables, 15 types, 12 relationship kinds, 35 bootstrapped entities across p04/p05/p06. Human Mirror (Layer 3) — GET /projects/{name}/mirror.html + navigable wiki at /wiki with search. Karpathy-inspired upgrades: contradiction detection in triage, weekly lint pass, weekly synthesis pass producing "current state" paragraphs at top of project pages. Auto-detection of new projects from extraction. Registry persistence fix (ATOCORE_PROJECT_REGISTRY_DIR env var). abb-space/p08 aliases added, atomizer-v2 ingested (568 docs, +12,472 vectors). Identity/preference seed (6 new), signal-aggressive extractor rewrite (llm-0.4.0), auto vault refresh in cron. **OpenClaw one-way pull importer** built per codex proposal — reads /home/papa/clawd SOUL.md, USER.md, MEMORY.md, MODEL-ROUTING.md, memory/*.md via SSH, hash-delta import, pipeline triages. First import: 10 candidates → 10 promoted with lenient triage rule. Active memories 47→84. State entries 61→78. Tests 290→297. Dashboard at /admin/dashboard. Wiki at /wiki.

- **2026-04-12 Claude** `4f8bec7..4ac4e5c` Session close. Merged OpenClaw capture plugin, ingested atomizer-v2 (568 docs, 12,472 new vectors → 33,253 total), seeded Phase 4 identity/preference memories (6 new, 47 total active), added deeper Wave 2 state entries (p05 +3, p06 +3), fixed R9 project trust hierarchy (7 case tests), built auto-triage pipeline, observability dashboard at /admin/dashboard. Updated master-plan-status.md and DEV-LEDGER.md to reflect full current state. 7/14 phases baseline complete. All P1s closed. Nightly pipeline runs unattended with both Claude Code and OpenClaw feeding the reflection loop.

- **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`)** added a minimal external OpenClaw plugin at `openclaw-plugins/atocore-capture/` that mirrors Claude Code capture semantics: user-triggered assistant turns are POSTed to AtoCore `/interactions` with `client="openclaw"` and `reinforce=true`, fail-open, no extraction in-path. For live verification, temporarily added the local plugin load path to OpenClaw config and restarted the gateway so the plugin can load. Branch truth is ready; end-to-end verification still needs one fresh post-restart OpenClaw user turn to confirm new `client=openclaw` interactions appear on Dalidou.

- **2026-04-12 Claude** Batch 3 (R9 fix): `144dbbd..e5e9a99`. Trust hierarchy for project attribution — interaction scope always wins when set, model project only used for unscoped interactions + registered check. 7 case tests (A-G) cover every combination. Harness 17/18 (no regression). Tests 286->290. Before: wrong registered project could silently override interaction scope. After: interaction.project is the strongest signal; model project is only a fallback for unscoped captures. Not yet guaranteed: nothing prevents the *same* project's model output from being semantically wrong within that project. R9 marked fixed.

- **2026-04-12 Codex (audit branch `codex/audit-batch2`)** audited `69c9717..origin/main` against the current branch tip and live Dalidou. Verified: live build is `8951c62`, retrieval harness improved to **17/18 PASS**, candidate queue is now empty, active memories rose to **41**, and `python3 scripts/auto_triage.py --dry-run --base-url http://127.0.0.1:8100` runs cleanly on Dalidou but only exercised the empty-queue path. Updated R7 to **fixed** (`8951c62`) and R8 to **fixed** (`69c9717`). Kept R9 **open** because project trust-preservation still allows a wrong non-empty registered project from the model to override the interaction scope. Added R13 because the new `286 passing` claim could not be independently reproduced in this audit: `pytest` is absent on both Dalidou and the clean audit worktree. Also corrected stale Orientation fields (live SHA, main tip, harness, active/candidate memory counts).

- **2026-04-12 Codex (audit branch `codex/audit-2026-04-12-extraction`)** audited `54d84b5..ac7f77d` with live Dalidou verification. Confirmed the host-side LLM extraction pipeline is operational: nightly cron points at `deploy/dalidou/cron-backup.sh`, Step 4 calls `deploy/dalidou/batch-extract.sh`, the batch script exists/executable on Dalidou, and a manual host-side run produced candidates successfully. Updated R1 and R5 to **fixed** (`c67bec0`) because extraction now runs unattended off-container. Live state during audit: build `39d73e9`, active memories **36**, candidate queue **29** (16 existing + 13 added by manual verification run), and `last_extract_batch_run` populated in AtoCore project state. Added R11-R12 for the misleading container `mode=llm` no-op and host/container prompt-parser duplication. Security note: CLI positional prompt/response text is visible in process args while `claude -p` runs; acceptable on a single-user home host, but worth remembering if Dalidou's trust boundary changes.

- **2026-04-12 Codex (audit branch `codex/audit-2026-04-12-final`)** audited `c5bad99..e2895b5` against origin/main, live Dalidou, and the OpenClaw client script. Live state checked: build `39d73e9`, harness reproducible at **16/18 PASS**, active memories **36**, and `t420-openclaw/atocore.py health` fails open correctly with `fail_open=true`. Spot-checks of Wave 2 project-state entries matched their cited vault docs. Updated R5-R8 status reality (R6 fixed by `39d73e9`), added R9-R10, and corrected Orientation `main_tip` to `e2895b5` because the ledger had drifted behind origin/main. Note: live Dalidou is still on `39d73e9`, so branch-truth and deploy-truth are not the same yet.

- **2026-04-12 Claude** Wave 2 trusted operational ingestion + codex audit response. Read 6 vault docs, created 8 new Trusted Project State entries (p04 +2, p05 +3, p06 +3). Fixed R6 (project fallback in LLM extractor) per codex audit. Fixed misscoped p06 offline memory on live Dalidou. Merged codex/audit-2026-04-12. Switched default LLM model from haiku to sonnet. Harness 15/18 -> 16/18. Tests 278 -> 280. main_tip 146f2e4 -> 39d73e9.

- **2026-04-12 Codex (audit branch `codex/audit-2026-04-12`)** audited `c5bad99..146f2e4` against code, live Dalidou, and the 36 active memories. Confirmed: `claude -p` invocation is not shell-injection-prone (`subprocess.run(args)` with no shell), off-host backup wiring matches the ledger, and R1 remains unresolved in practice. Added R5-R8. Corrected Orientation `main_tip` (`146f2e4`, not `5c69f77`) and tightened the harness note: p06-firmware-interface is a ranking-tie issue, p06-offline-design comes from a project-scope miss in live triage, and p06-tailscale is retrieved-chunk bleed rather than memory-band budget contention.
@@ -188,4 +220,9 @@ git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/d
python scripts/atocore_client.py batch-extract '' '' 200 false  # preview
python scripts/atocore_client.py batch-extract '' '' 200 true   # persist
python scripts/atocore_client.py triage

# Reproduce the ledger's test_count claim from a clean checkout
pip install -r requirements-dev.txt
pytest --collect-only -q | tail -1   # -> "N tests collected"
pytest -q                            # -> "N passed"
```
@@ -38,7 +38,7 @@
  },
  {
    "id": "p06-polisher",
-    "aliases": ["p06", "polisher"],
+    "aliases": ["p06", "polisher", "p11", "polisher-fullum", "P11-Polisher-Fullum"],
    "description": "Active P06 polisher corpus from PKM, software-suite notes, and selected repo context.",
    "ingest_roots": [
      {
@@ -47,6 +47,30 @@
        "label": "P06 staged project docs"
      }
    ]
  },
+  {
+    "id": "abb-space",
+    "aliases": ["abb", "abb-mirror", "p08", "p08-abb-space", "p08-abb-space-mirror"],
+    "description": "ABB Space mirror - lead/proposition for Atomaste. Also tracked as P08.",
+    "ingest_roots": [
+      {
+        "source": "vault",
+        "subpath": "incoming/projects/abb-space",
+        "label": "ABB Space docs"
+      }
+    ]
+  },
+  {
+    "id": "atomizer-v2",
+    "aliases": ["atomizer", "aom", "aom-v2"],
+    "description": "Atomizer V2 parametric optimization platform",
+    "ingest_roots": [
+      {
+        "source": "vault",
+        "subpath": "incoming/projects/atomizer-v2/repo",
+        "label": "Atomizer V2 repo"
+      }
+    ]
+  }
  ]
}
deploy/dalidou/batch-extract.sh (new file, 153 lines)
@@ -0,0 +1,153 @@
#!/usr/bin/env bash
#
# deploy/dalidou/batch-extract.sh
# --------------------------------
# Host-side LLM batch extraction for Dalidou.
#
# The claude CLI is available on the Dalidou HOST but NOT inside the
# Docker container. This script runs on the host, fetches recent
# interactions from the AtoCore API, runs the LLM extractor locally
# (claude -p sonnet), and posts candidates back to the API.
#
# Intended to be called from cron-backup.sh after backup/cleanup/rsync,
# or manually via:
#
#   bash /srv/storage/atocore/app/deploy/dalidou/batch-extract.sh
#
# Environment variables:
#   ATOCORE_URL            default http://127.0.0.1:8100
#   ATOCORE_EXTRACT_LIMIT  default 50

set -euo pipefail

ATOCORE_URL="${ATOCORE_URL:-http://127.0.0.1:8100}"
LIMIT="${ATOCORE_EXTRACT_LIMIT:-50}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
APP_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
TIMESTAMP="$(date -u +%Y-%m-%dT%H:%M:%SZ)"

log() { printf '[%s] %s\n' "$TIMESTAMP" "$*"; }

# The Python script needs the atocore source on PYTHONPATH
export PYTHONPATH="$APP_DIR/src:${PYTHONPATH:-}"

log "=== AtoCore batch extraction + triage starting ==="
log "URL=$ATOCORE_URL LIMIT=$LIMIT"

# --- Pipeline stats accumulator ---
EXTRACT_OUT=""
TRIAGE_OUT=""
HARNESS_OUT=""

# Step A: Extract candidates from recent interactions
log "Step A: LLM extraction"
EXTRACT_OUT=$(python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
    --base-url "$ATOCORE_URL" \
    --limit "$LIMIT" \
    2>&1) || {
    log "WARN: batch extraction failed (non-blocking)"
}
echo "$EXTRACT_OUT"

# Step B: Auto-triage candidates in the queue
log "Step B: auto-triage"
TRIAGE_OUT=$(python3 "$APP_DIR/scripts/auto_triage.py" \
    --base-url "$ATOCORE_URL" \
    2>&1) || {
    log "WARN: auto-triage failed (non-blocking)"
}
echo "$TRIAGE_OUT"

# Step B2: Auto-promote reinforced candidates + expire stale ones
log "Step B2: auto-promote + expire"
python3 "$APP_DIR/scripts/auto_promote_reinforced.py" \
    2>&1 || {
    log "WARN: auto-promote/expire failed (non-blocking)"
}

# Step C: Daily project synthesis (keeps wiki/mirror pages fresh)
log "Step C: project synthesis (daily)"
python3 "$APP_DIR/scripts/synthesize_projects.py" \
    --base-url "$ATOCORE_URL" \
    2>&1 || {
    log "WARN: synthesis failed (non-blocking)"
}

# Step D: Weekly lint pass (Sundays only — heavier, not needed daily)
if [[ "$(date -u +%u)" == "7" ]]; then
    log "Step D: weekly lint pass"
    python3 "$APP_DIR/scripts/lint_knowledge_base.py" \
        --base-url "$ATOCORE_URL" \
        2>&1 || true
fi

# Step E: Retrieval harness (daily)
log "Step E: retrieval harness"
HARNESS_OUT=$(python3 "$APP_DIR/scripts/retrieval_eval.py" \
    --json \
    --base-url "$ATOCORE_URL" \
    2>&1) || {
    log "WARN: retrieval harness failed (non-blocking)"
}
echo "$HARNESS_OUT"

# Step F: Persist pipeline summary to project state
log "Step F: pipeline summary"
python3 -c "
import json, urllib.request, re, sys

base = '$ATOCORE_URL'
ts = '$TIMESTAMP'

def post_state(key, value):
    body = json.dumps({
        'project': 'atocore', 'category': 'status',
        'key': key, 'value': value, 'source': 'nightly pipeline',
    }).encode()
    req = urllib.request.Request(
        f'{base}/project/state', data=body,
        headers={'Content-Type': 'application/json'}, method='POST',
    )
    try:
        urllib.request.urlopen(req, timeout=10)
    except Exception as e:
        print(f'WARN: failed to persist {key}: {e}', file=sys.stderr)

# Parse harness JSON
harness = {}
try:
    harness = json.loads('''$HARNESS_OUT''')
    post_state('retrieval_harness_result', json.dumps({
        'passed': harness.get('passed', 0),
        'total': harness.get('total', 0),
        'failures': [f['name'] for f in harness.get('fixtures', []) if not f.get('ok')],
        'run_at': ts,
    }))
    p, t = harness.get('passed', '?'), harness.get('total', '?')
    print(f'Harness: {p}/{t}')
except Exception:
    print('WARN: could not parse harness output')

# Parse triage counts from stdout
triage_out = '''$TRIAGE_OUT'''
promoted = len(re.findall(r'promoted', triage_out, re.IGNORECASE))
rejected = len(re.findall(r'rejected', triage_out, re.IGNORECASE))
needs_human = len(re.findall(r'needs.human', triage_out, re.IGNORECASE))

# Build summary
summary = {
    'run_at': ts,
    'harness_passed': harness.get('passed', -1),
    'harness_total': harness.get('total', -1),
    'triage_promoted': promoted,
    'triage_rejected': rejected,
    'triage_needs_human': needs_human,
}
post_state('pipeline_last_run', ts)
post_state('pipeline_summary', json.dumps(summary))
print(f'Pipeline summary persisted: {json.dumps(summary)}')
" 2>&1 || {
    log "WARN: pipeline summary persistence failed (non-blocking)"
}

log "=== AtoCore batch extraction + triage complete ==="
deploy/dalidou/cron-backup.sh

@@ -82,4 +82,48 @@ else
    log "Step 3: ATOCORE_BACKUP_RSYNC not set, skipping off-host copy"
fi

# Step 3a: Pull OpenClaw state from clawdbot (one-way import of
# SOUL.md, USER.md, MODEL-ROUTING.md, MEMORY.md, recent memory/*.md).
# Loose coupling: OpenClaw's internals don't need to change.
# Fail-open: importer failure never blocks the pipeline.
log "Step 3a: pull OpenClaw state"
OPENCLAW_IMPORT="${ATOCORE_OPENCLAW_IMPORT:-true}"
if [[ "$OPENCLAW_IMPORT" == "true" ]]; then
    python3 "$SCRIPT_DIR/../../scripts/import_openclaw_state.py" \
        --base-url "$ATOCORE_URL" \
        2>&1 | while IFS= read -r line; do log " $line"; done || {
        log " WARN: OpenClaw import failed (non-blocking)"
    }
else
    log " skipped (ATOCORE_OPENCLAW_IMPORT != true)"
fi

# Step 3b: Auto-refresh vault sources so new PKM files flow in
# automatically. Fail-open: never blocks the rest of the pipeline.
log "Step 3b: auto-refresh vault sources"
REFRESH_RESULT=$(curl -sf -X POST --max-time 600 \
    "$ATOCORE_URL/ingest/sources" 2>&1) && {
    log "Sources refresh complete"
} || {
    log "WARN: sources refresh failed (non-blocking): $REFRESH_RESULT"
}

# Step 4: Batch LLM extraction on recent interactions (optional).
# Runs HOST-SIDE because claude CLI is on the host, not inside the
# Docker container. The script fetches interactions from the API,
# runs claude -p locally, and POSTs candidates back.
# Fail-open: extraction failure never blocks backup.
EXTRACT="${ATOCORE_EXTRACT_BATCH:-true}"
if [[ "$EXTRACT" == "true" ]]; then
    SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    log "Step 4: running host-side batch LLM extraction"
    bash "$SCRIPT_DIR/batch-extract.sh" 2>&1 && {
        log "Extraction complete"
    } || {
        log "WARN: batch extraction failed (this is non-blocking)"
    }
else
    log "Step 4: ATOCORE_EXTRACT_BATCH not set to true, skipping extraction"
fi

log "=== AtoCore daily backup complete ==="
deploy/hooks/capture_stop.py

@@ -166,10 +166,19 @@ def _extract_last_user_prompt(transcript_path: str) -> str:
# Project inference from working directory.
# Maps known repo paths to AtoCore project IDs. The user can extend
# this table or replace it with a registry lookup later.
+_VAULT = "C:\\Users\\antoi\\antoine\\My Libraries\\Antoine Brain Extension"
+
_PROJECT_PATH_MAP: dict[str, str] = {
-    # Add mappings as needed, e.g.:
-    # "C:\\Users\\antoi\\gigabit": "p04-gigabit",
-    # "C:\\Users\\antoi\\interferometer": "p05-interferometer",
+    f"{_VAULT}\\2-Projects\\P04-GigaBIT-M1": "p04-gigabit",
+    f"{_VAULT}\\2-Projects\\P10-Interferometer": "p05-interferometer",
+    f"{_VAULT}\\2-Projects\\P11-Polisher-Fullum": "p06-polisher",
+    f"{_VAULT}\\2-Projects\\P08-ABB-Space-Mirror": "abb-space",
+    f"{_VAULT}\\2-Projects\\I01-Atomizer": "atomizer-v2",
+    f"{_VAULT}\\2-Projects\\I02-AtoCore": "atocore",
+    "C:\\Users\\antoi\\ATOCore": "atocore",
+    "C:\\Users\\antoi\\Polisher-Sim": "p06-polisher",
+    "C:\\Users\\antoi\\Fullum-Interferometer": "p05-interferometer",
+    "C:\\Users\\antoi\\Atomizer-V2": "atomizer-v2",
}
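
For orientation, a sketch of how a hook might consume this table; the real inference function lives elsewhere in `capture_stop.py` and may differ:

```python
import os

def infer_project(cwd: str, path_map: dict[str, str]) -> str:
    """Longest-prefix match of the session cwd against known roots."""
    norm = os.path.normcase(os.path.normpath(cwd))
    best, best_len = "", -1
    for root, project in path_map.items():
        root_n = os.path.normcase(os.path.normpath(root))
        # Prefix match; a trailing-separator check would harden this.
        if norm.startswith(root_n) and len(root_n) > best_len:
            best, best_len = project, len(root_n)
    return best  # "" when no known root matches
```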
docs/MASTER-BRAIN-PLAN.md (new file, 284 lines)
@@ -0,0 +1,284 @@
# AtoCore Master Brain Plan

> Vision: AtoCore becomes the **single source of truth** that grounds every LLM
> interaction across the entire ecosystem (Claude, OpenClaw, Codex, Ollama, future
> agents). Every prompt is automatically enriched with full project context. The
> brain self-grows from daily work, auto-organizes its metadata, and stays
> flawlessly reliable.

## The Core Insight

AtoCore today is a **well-architected capture + curation system with a critical
gap on the consumption side**. We pour water into the bucket (capture from
Claude Code Stop hook + OpenClaw message hooks) but nothing is drinking from it
at prompt time. Fixing that gap is the single highest-leverage move.

**Once every LLM call is AtoCore-grounded automatically, the feedback loop
closes**: LLMs use the context → produce better responses → those responses
reference the injected memories → reinforcement fires → knowledge curates
itself. The capture side is already working. The pull side is what's missing.

## Universal Consumption Strategy

MCP is great for Claude (Claude Desktop, Claude Code, Cursor, Zed, Windsurf) but
is **not universal**. OpenClaw has its own plugin SDK. Codex, Ollama, and GPT
don't natively support MCP. The right strategy:

**HTTP API is the truth; every client gets the thinnest possible adapter.**

```
                ┌─────────────────────┐
                │  AtoCore HTTP API   │  ← canonical interface
                │  /context/build     │
                │  /query             │
                │  /memory            │
                │  /project/state     │
                └──────────┬──────────┘
                           │
   ┌───────────┬───────────┼──────────┬───────────┐
   │           │           │          │           │
┌──┴───┐  ┌────┴────┐  ┌───┴───┐  ┌───┴────┐  ┌───┴────┐
│ MCP  │  │OpenClaw │  │Claude │  │ Codex  │  │ Ollama │
│server│  │ plugin  │  │ Code  │  │ skill  │  │ proxy  │
│      │  │ (pull)  │  │ hook  │  │        │  │        │
└──┬───┘  └────┬────┘  └───┬───┘  └────┬───┘  └────┬───┘
   │           │           │           │           │
 Claude    OpenClaw    Claude Code  Codex CLI   Ollama
 Desktop,    agent                              local
 Cursor,                                        models
 Zed,
 Windsurf
```

Each adapter's only job: accept a prompt, call AtoCore HTTP, prepend the
returned context pack. The adapter itself carries no logic.

## Three Integration Tiers

### Tier 1: MCP-native clients (Claude ecosystem)
Build **atocore-mcp** — a standalone MCP server that wraps the HTTP API. Exposes:
- `context(query, project)` → context pack
- `search(query)` → raw retrieval
- `remember(type, content, project)` → create candidate memory
- `recall(project, key)` → project state lookup
- `list_projects()` → registered projects

Works with Claude Desktop, Claude Code (via `claude mcp add atocore`), Cursor,
Zed, Windsurf without any per-client work beyond config.
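
A minimal Python sketch of the tool bodies such a server would wrap, using only the routes named above; the base URL and the payload field names are assumptions, not the verified API contract:

```python
# Sketch only: thin wrappers over the documented AtoCore HTTP routes.
import json
import urllib.request

BASE = "http://dalidou:8100"  # assumed base URL, matching the wiki/dashboard

def _call(method: str, path: str, body: dict | None = None) -> dict:
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(f"{BASE}{path}", data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def context(query: str, project: str = "") -> dict:
    # context(query, project) -> context pack
    return _call("POST", "/context/build", {"query": query, "project": project})

def search(query: str) -> dict:
    # search(query) -> raw retrieval
    return _call("POST", "/query", {"query": query})

def remember(type_: str, content: str, project: str = "") -> dict:
    # remember(type, content, project) -> create a candidate memory
    return _call("POST", "/memory", {"type": type_, "content": content,
                                     "project": project, "status": "candidate"})
```

The MCP layer then only has to register these functions as tools over stdio; no retrieval logic lives in the adapter.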

### Tier 2: Custom plugin ecosystems (OpenClaw)
Extend the existing `atocore-capture` plugin on T420 to also register a
**`before_prompt_build`** hook that pulls context from AtoCore and injects it
into the agent's system prompt. The plugin already has the HTTP client, the
authentication, the fail-open pattern. This is ~30 lines of added code.

### Tier 3: Everything else (Codex, Ollama, custom agents)
For clients without plugin/hook systems, ship a **thin proxy/middleware** the
user configures as the LLM endpoint:
- `atocore-proxy` listens on `localhost:PORT`
- Intercepts OpenAI-compatible chat/completion calls
- Pulls context from AtoCore, injects into system prompt
- Forwards to the real model endpoint (OpenAI, Ollama, Anthropic, etc.)
- Returns the response, then captures the interaction back to AtoCore

This makes AtoCore a "drop-in" layer for anything that speaks
OpenAI-compatible HTTP — which is nearly every modern LLM runtime.
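
A minimal sketch of the proxy's request path, assuming an OpenAI-style `messages` payload and the `/context/build` route; the ports, upstream target, and response field names are illustrative:

```python
# atocore-proxy sketch: intercept, enrich with AtoCore context, forward.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ATOCORE = "http://dalidou:8100"      # assumed AtoCore base URL
UPSTREAM = "http://localhost:11434"  # e.g. an OpenAI-compatible Ollama server

def _post(url: str, body: dict) -> bytes:
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=120) as resp:
        return resp.read()

class Proxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        user_text = body.get("messages", [{}])[-1].get("content", "")
        # Pull a context pack from AtoCore; fail open on any error.
        try:
            pack = json.loads(_post(f"{ATOCORE}/context/build",
                                    {"query": user_text}))
            context_text = pack.get("context", "")
        except Exception:
            context_text = ""
        # Inject the pack as a leading system message, then forward.
        if context_text:
            body["messages"].insert(0, {"role": "system",
                                        "content": context_text})
        out = _post(f"{UPSTREAM}{self.path}", body)
        # (Capturing the exchange back to /interactions is omitted here.)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8111), Proxy).serve_forever()
```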

## Knowledge Density Plan

The brain is only as smart as what it knows. Current state: 80 active memories
across 6 projects, 324 candidates in the queue being processed. Target:
**1,000+ curated memories** to become a real master brain.

Mechanisms:
1. **Finish the current triage pass** (324 → ~80 more promotions expected).
2. **Re-extract with stronger prompt on existing 236 interactions** — tune the
   LLM extractor system prompt to pull more durable facts and fewer ephemeral
   snapshots.
3. **Ingest all drive/vault documents as memory candidates** (not just chunks).
   Every structured markdown section with a decision/fact/requirement header
   becomes a candidate memory (see the sketch after this list).
4. **Multi-source triangulation**: same fact in 3+ sources = auto-promote to
   confidence 0.95.
5. **Cross-project synthesis**: facts appearing in multiple project contexts
   get promoted to global domain knowledge.
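
A sketch of mechanism 3, assuming `## Decision:` / `## Fact:` / `## Requirement:` style section headers (the cue set echoes the rule-extractor cues noted in the ledger; the candidate shape is an assumption):

```python
# Scan a markdown file for decision/fact/requirement sections and turn
# each one into a memory-candidate dict. Sketch only.
import re
from pathlib import Path

CUE = re.compile(r"^#{2,4}\s+(Decision|Fact|Requirement)\b[:\s]*(.*)", re.I)

def candidate_sections(md_path: Path) -> list[dict]:
    candidates, current = [], None
    for line in md_path.read_text(encoding="utf-8").splitlines():
        m = CUE.match(line)
        if m:                       # a cue header opens a new candidate
            if current:
                candidates.append(current)
            current = {"type": m.group(1).lower(),
                       "title": m.group(2).strip(), "body": []}
        elif line.startswith("#") and current:
            candidates.append(current)   # any other header closes it
            current = None
        elif current:
            current["body"].append(line)
    if current:
        candidates.append(current)
    return candidates
```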

## Auto-Organization of Metadata

Currently: `type`, `project`, `confidence`, `status`, `reference_count`. For
master brain we need more structure, inferred automatically:

| Addition | Purpose | Mechanism |
|---|---|---|
| **Domain tags** (optics, mechanics, firmware, business…) | Cross-cutting retrieval | LLM inference during triage |
| **Temporal scope** (permanent, valid_until_X, transient) | Avoid stale truth | LLM classifies during triage |
| **Source refs** (chunk_id[], interaction_id[]) | Provenance for every fact | Enforced at creation time |
| **Relationships** (contradicts, updates, depends_on) | Memory graph | Triage infers during review |
| **Semantic clusters** | Detect duplicates, find gaps | Weekly HDBSCAN pass on embeddings |

Layer these in progressively — none of them require schema rewrites, just
additional fields and batch jobs.

## Self-Growth Mechanisms

Four loops that make AtoCore grow autonomously:

### 1. Drift detection (nightly)
Compare new chunk embeddings to the existing vector distribution. A new chunk
more than X cosine distance from every existing centroid signals a new
knowledge area. Log to dashboard; human decides if it's noise or a domain
worth curating.
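
A sketch of the nightly check with numpy; the 0.35 threshold is an illustrative stand-in for the ">X" knob, not a tuned value:

```python
import numpy as np

def drift_candidates(new_vecs: np.ndarray, centroids: np.ndarray,
                     max_cos_dist: float = 0.35) -> np.ndarray:
    """Indices of new chunk embeddings far from every existing centroid."""
    # Normalize rows so a dot product is cosine similarity.
    new_n = new_vecs / np.linalg.norm(new_vecs, axis=1, keepdims=True)
    cen_n = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = new_n @ cen_n.T                    # (n_new, n_centroids)
    dist_to_nearest = 1.0 - sims.max(axis=1)  # distance to best centroid
    return np.where(dist_to_nearest > max_cos_dist)[0]
```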

### 2. Gap identification (continuous)
Every `/context/build` logs `query + chunks_returned + memories_returned`.
Weekly report: "top 10 queries with weak coverage." Those are targeted
curation opportunities.
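
A sketch of the weekly report, assuming one JSON record per line carrying the three logged fields:

```python
import json
from collections import defaultdict

def weak_coverage_report(log_path: str, top_n: int = 10) -> list[tuple[str, float]]:
    """Queries with the lowest average (chunks + memories) returned."""
    per_query = defaultdict(list)
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            rec = json.loads(line)
            coverage = rec.get("chunks_returned", 0) + rec.get("memories_returned", 0)
            per_query[rec["query"]].append(coverage)
    avg = {q: sum(v) / len(v) for q, v in per_query.items()}
    return sorted(avg.items(), key=lambda kv: kv[1])[:top_n]
```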

### 3. Multi-source triangulation (weekly)
Scan memory content similarity across sources. When a fact appears in 3+
independent sources (vault doc + drive doc + interaction), auto-promote to
high confidence and mark as "triangulated."
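
A sketch of the weekly pass; the `source_type` field and the similarity callback are assumptions, and the 0.95 target follows the density plan above:

```python
from typing import Callable

def triangulate(memories: list[dict],
                similar: Callable[[str, str], bool],
                min_sources: int = 3) -> None:
    """Promote any fact echoed by 3+ independent source types (O(n^2) scan)."""
    for mem in memories:
        sources = {m["source_type"] for m in memories
                   if m is not mem and similar(m["content"], mem["content"])}
        sources.add(mem["source_type"])
        if len(sources) >= min_sources:
            mem["confidence"] = max(mem.get("confidence", 0.0), 0.95)
            mem.setdefault("tags", []).append("triangulated")
```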

### 4. Active learning prompts (monthly)
Surface "you have 200 p06 memories but only 15 p04 memories. Spend 30 min
curating p04?" via dashboard digest.

## Robustness Strategy (Flawless Operation Bar)

Current: nightly backup, off-host rsync, health endpoint, 303 tests, harness,
enhanced dashboard with pipeline health (this session).

To reach "flawless":

| Gap | Fix | Priority |
|---|---|---|
| Silent pipeline failures | Alerting webhook on harness drop / pipeline skip | P1 |
| Memory mutations untracked | Append-only audit log table | P1 |
| Integrity drift | Nightly FK + vector-chunk parity checks | P1 |
| Schema migrations ad-hoc | Formal migration framework with rollback | P2 |
| Single point of failure | Daily backup to user's main computer (new) | P1 |
| No hot standby | Second instance following primary via WAL | P3 |
| No temporal history | Memory audit + valid_until fields | P2 |

### Daily Backup to Main Computer

Currently: Dalidou → T420 (192.168.86.39) via rsync.

Add: Dalidou → main computer via a pull (main computer runs the rsync,
pulls from Dalidou). Pull-based is simpler than push — no need for SSH
keys on Dalidou to reach the Windows machine.

```bash
# On main computer, daily scheduled task:
rsync -a papa@dalidou:/srv/storage/atocore/backups/snapshots/ \
    /path/to/local/atocore-backups/
```

Configure via Windows Task Scheduler or a cron-like runner. Verify weekly
that the latest snapshot is present.

## Human Interface Auto-Evolution

Current: wiki at `/wiki`, regenerates on every request from DB. Synthesis
(the "current state" paragraph at top of project pages) runs **weekly on
Sundays only**. That's why it feels stalled.

Fixes:
1. **Run synthesis daily, not weekly.** It's cheap (one claude call per
   project) and keeps the human-readable overview fresh.
2. **Trigger synthesis on major events** — when 5+ new memories land for a
   project, regenerate its synthesis.
3. **Add "What's New" feed** — wiki homepage shows recent additions across all
   projects (last 7 days of memory promotions, state entries, entities).
4. **Memory timeline view** — project page gets a chronological list of what
   we learned when.

## Phased Roadmap (8-10 weeks)

### Phase 1 (week 1-2): Universal Consumption
**Goal: every LLM call is AtoCore-grounded automatically.**

- [ ] Build `atocore-mcp` server (wraps HTTP API, stdio transport)
- [ ] Publish to npm / or run via `pipx` / stdlib HTTP
- [ ] Configure in Claude Desktop (`~/.claude/mcp_servers.json`)
- [ ] Configure in Claude Code (`claude mcp add atocore …`)
- [ ] Extend OpenClaw plugin with `before_prompt_build` PULL
- [ ] Write `atocore-proxy` middleware for Codex/Ollama/generic clients
- [ ] Document configuration for each client

**Success:** open a fresh Claude Code session, ask a project question, verify
the response references AtoCore memories without manual context commands.

### Phase 2 (week 2-3): Knowledge Density + Wiki Evolution
- [ ] Finish current triage pass (324 candidates → active)
- [ ] Tune extractor prompt for higher promotion rate on durable facts
- [ ] Daily synthesis in cron (not just Sundays)
- [ ] Event-triggered synthesis on significant project changes
- [ ] Wiki "What's New" feed
- [ ] Memory timeline per project

**Target:** 300+ active memories, wiki feels alive daily.

### Phase 3 (week 3-4): Auto-Organization
- [ ] Schema: add `domain_tags`, `valid_until`, `source_refs`, `triangulated_count`
- [ ] Triage prompt upgraded: infer tags + temporal scope + relationships
- [ ] Weekly HDBSCAN clustering of embeddings → dup detection + gap reports
- [ ] Relationship edges in a new `memory_relationships` table

### Phase 4 (week 4-5): Robustness Hardening
- [ ] Append-only `memory_audit` table + retrofit mutations
- [ ] Nightly integrity checks (FK validation, orphan detection, parity)
- [ ] Alerting webhook (Discord/email) on pipeline anomalies
- [ ] Daily backup to user's main computer (pull-based)
- [ ] Formal migration framework

### Phase 5 (week 6-7): Engineering V1 Implementation
Execute the 23 acceptance criteria in `docs/architecture/engineering-v1-acceptance.md`
against p06-polisher as the test bed. The ontology and queries are designed;
this phase implements them.

### Phase 6 (week 8-9): Self-Growth Loops
- [ ] Drift detection (nightly)
- [ ] Gap identification from `/context/build` logs
- [ ] Multi-source triangulation
- [ ] Active learning digest (monthly)
- [ ] Cross-project synthesis

### Phase 7 (ongoing): Scale & Polish
- [ ] Multi-model validation (sonnet triages, opus cross-checks on disagreements)
- [ ] AtoDrive integration (Google Drive as trusted source)
- [ ] Hot standby when real production dependence materializes
- [ ] More MCP tools (write-back, memory search, entity queries)

## Success Criteria

AtoCore is a master brain when:

1. **Zero manual context commands.** A fresh Claude/OpenClaw session answers
   a project question without being told "use AtoCore context."
2. **1,000+ active memories** with >90% provenance coverage (every fact
   traceable to a source).
3. **Every project has a current, human-readable overview** updated within 24h
   of significant changes.
4. **Harness stays >95%** across 20+ fixtures covering all active projects.
5. **Zero silent pipeline failures** for 30 consecutive days (all failures
   surface via alert within the hour).
6. **Claude on any task knows what we know** — user asks "what did we decide
   about X?" and the answer is grounded in AtoCore, not reconstructed from
   scratch.

## Where We Are Now (2026-04-16)

- ✅ Core infrastructure: HTTP API, SQLite, Chroma, deploy pipeline
- ✅ Capture pipes: Claude Code Stop hook, OpenClaw message hooks
- ✅ Nightly pipeline: backup, extract, triage, synthesis, lint, harness, summary
- ✅ Phase 10: auto-promotion from reinforcement + candidate expiry
- ✅ Dashboard shows pipeline health + interaction totals + all projects
- ⚡ 324 candidates being triaged (down from 439), ~80 active memories, growing
- ❌ No consumption at prompt time (capture-only)
- ❌ Wiki auto-evolves only on Sundays (synthesis cadence)
- ❌ No MCP adapter
- ❌ No daily backup to main computer
- ❌ Engineering V1 not implemented
- ❌ No alerting on pipeline failures

The path is clear. Phase 1 is the keystone.
docs/architecture/knowledge-architecture.md (new file, 206 lines)
@@ -0,0 +1,206 @@
# AtoCore Knowledge Architecture

## The Problem

Engineering work produces two kinds of knowledge simultaneously:

1. **Applied knowledge** — specific to the project being worked on
   ("the p04 support pad layout is driven by CTE gradient analysis")
2. **Domain knowledge** — generalizable insight earned through that work
   ("Zerodur CTE gradient dominates WFE at fast focal ratios")

A system that only stores applied knowledge loses the general insight.
A system that mixes them pollutes project context with cross-project
noise. AtoCore needs both — separated, but both growing organically
from the same conversations.

## The Quality Bar

**AtoCore stores earned insight, not information.**

The test: "Would a competent engineer need experience to know this,
or could they find it in 30 seconds?"

| Store | Don't store |
|-------|-------------|
| "Preston removal model breaks down below 5N because the contact assumption fails" | "Preston's equation relates removal rate to pressure and velocity" |
| "m=1 (coma) is NOT correctable by force modulation (score 0.09)" | "Zernike polynomials describe wavefront aberrations" |
| "At F/1.2, CTE gradient costs ~3nm WFE and drives pad placement" | "Zerodur CTE is 0.05 ppm/K" |
| "Quilting limit for 16-inch tool is 234N" | "Quilting is a mid-spatial-frequency artifact in polishing" |

The bar is enforced in the LLM extraction system prompt
(`src/atocore/memory/extractor_llm.py`) and the auto-triage prompt
(`scripts/auto_triage.py`). Both explicitly list examples of what
qualifies and what doesn't.

## Architecture

### Five-tier context assembly

When AtoCore builds a context pack for any LLM query, it assembles
five tiers in strict trust order:

```
Tier 1: Trusted Project State      [project-specific, highest trust]
        Curated key-value entries from the project state API.
        Example: "decision/vendor_path: Twyman-Green preferred, 4D
        technical lead but cost-challenged"

Tier 2: Identity / Preferences     [global, always included]
        Who the user is and how they work.
        Example: "Antoine Letarte, mechanical/optical engineer at
        Atomaste" / "No API keys — uses OAuth exclusively"

Tier 3: Project Memories           [project-specific]
        Reinforced memories from the reflection loop, scoped to the
        queried project. Example: "Firmware interface contract is
        invariant: controller-job.v1 in, run-log.v1 out"

Tier 4: Domain Knowledge           [cross-project]
        Earned engineering insight with project="" and a domain tag.
        Surfaces in ALL project packs when query-relevant.
        Example: "[materials] Zerodur CTE gradient dominates WFE at
        fast focal ratios — costs ~3nm at F/1.2"

Tier 5: Retrieved Chunks           [project-boosted, lowest trust]
        Vector-similarity search over the ingested document corpus.
        Project-hinted but not filtered — cross-project docs can
        appear at lower rank.
```

### Budget allocation (at default 3000 chars)

| Tier | Budget ratio | Approx chars | Entries |
|------|-------------|-------------|---------|
| Project State | 20% | 600 | all curated entries |
| Identity/Preferences | 5% | 150 | 1 memory |
| Project Memories | 25% | 750 | 2-3 memories |
| Domain Knowledge | 10% | 300 | 1-2 memories |
| Retrieved Chunks | 40% | 1200 | 2-4 chunks |

Trim order when budget is tight: chunks first, then domain knowledge,
then project memories, then identity, then project state last.
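
A sketch of how a builder might apply these numbers; the ratios come from the table above, but the trim loop is illustrative rather than the real `src/atocore/context/builder.py` logic:

```python
RATIOS = {"project_state": 0.20, "identity": 0.05, "project_memories": 0.25,
          "domain_knowledge": 0.10, "chunks": 0.40}
TRIM_ORDER = ["chunks", "domain_knowledge", "project_memories",
              "identity", "project_state"]

def assemble(tiers: dict[str, list[str]], budget: int = 3000) -> str:
    # Give each tier its fixed share of the character budget.
    parts = {name: "\n".join(tiers.get(name, []))[:int(budget * ratio)]
             for name, ratio in RATIOS.items()}
    pack = "\n\n".join(p for p in parts.values() if p)
    # If the joined pack still overflows, drop tiers in trim order.
    for name in TRIM_ORDER:
        if len(pack) <= budget:
            break
        parts[name] = ""
        pack = "\n\n".join(p for p in parts.values() if p)
    return pack
```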

### Knowledge domains

The LLM extractor tags domain knowledge with one of these domains:

| Domain | What qualifies |
|--------|---------------|
| `physics` | Optical physics, wave propagation, diffraction, thermal effects |
| `materials` | Material properties in context, CTE behavior, stress limits |
| `optics` | Lens/mirror design, aberration analysis, metrology techniques |
| `mechanics` | Structural FEA insights, support system design, kinematics |
| `manufacturing` | Polishing, grinding, machining, process control |
| `metrology` | Measurement systems, interferometry, calibration techniques |
| `controls` | PID tuning, force control, servo systems, real-time constraints |
| `software` | Architecture patterns, testing strategies, deployment insights |
| `math` | Numerical methods, optimization, statistical analysis |
| `finance` | Cost modeling, procurement strategy, budget optimization |

New domains can be added by updating the system prompt in
`extractor_llm.py` and `batch_llm_extract_live.py`.

### How domain knowledge is stored

Domain tags are embedded as a prefix in the memory content:

```
memory_type: knowledge
project: ""            ← empty = cross-project
content: "[materials] Zerodur CTE gradient dominates WFE at F/1.2"
```

The `[domain]` prefix is a lightweight encoding that avoids a schema
migration. The context builder's query-relevance ranking matches on
domain terms naturally (a query about "materials" or "CTE" will rank
a `[materials]` memory higher). A future migration can parse the
prefix into a proper `domain` column.
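
A sketch of that future parsing step:

```python
import re

_PREFIX = re.compile(r"^\[([a-z]+)\]\s*(.*)$", re.S)

def split_domain(content: str) -> tuple[str, str]:
    """Split "[materials] Zerodur ..." into ("materials", "Zerodur ...")."""
    m = _PREFIX.match(content)
    return (m.group(1), m.group(2)) if m else ("", content)
```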
|
||||
|
||||
## How knowledge flows
|
||||
|
||||
### Capture → Extract → Triage → Surface
|
||||
|
||||
```
|
||||
1. CAPTURE
|
||||
Claude Code (Stop hook) or OpenClaw (plugin)
|
||||
→ POST /interactions with reinforce=true
|
||||
→ Interaction stored on Dalidou
|
||||
|
||||
2. EXTRACT (nightly cron, 03:00 UTC)
|
||||
batch_llm_extract_live.py runs claude -p sonnet
|
||||
→ For each interaction, the LLM decides:
|
||||
- Is this project-specific? → candidate with project=X
|
||||
- Is this generalizable insight? → candidate with domain=Y, project=""
|
||||
- Is it both? → TWO candidates emitted
|
||||
- Is it common knowledge? → skip (quality bar)
|
||||
→ Candidates persisted as status=candidate
|
||||
|
||||
3. TRIAGE (nightly, immediately after extraction)
|
||||
auto_triage.py runs claude -p sonnet
|
||||
→ Each candidate classified: promote / reject / needs_human
|
||||
→ Auto-promote at confidence ≥ 0.8 + no duplicate
|
||||
→ Auto-reject stale snapshots, duplicates, common knowledge
|
||||
→ Only needs_human reaches the operator
|
||||
|
||||
4. SURFACE (every context/build query)
|
||||
→ Project-specific memories appear in Tier 3
|
||||
→ Domain knowledge appears in Tier 4 (regardless of project)
|
||||
→ Both are query-ranked by overlap-density
|
||||
```

### Example: knowledge earned on p04 surfaces on p06

Working on p04-gigabit, you discover that Zerodur CTE gradient is
the dominant WFE contributor at fast focal ratios. The extraction
produces:

```json
[
  {"type": "project",
   "content": "CTE gradient analysis drove the M1 support pad layout — 2nd largest WFE contributor after gravity",
   "project": "p04-gigabit", "domain": "", "confidence": 0.6},

  {"type": "knowledge",
   "content": "Zerodur CTE gradient dominates WFE contribution at fast focal ratios (F/1.2 = ~3nm)",
   "project": "", "domain": "materials", "confidence": 0.6}
]
```

Two weeks later, working on p06-polisher (which also uses Zerodur):

```
Query: "thermal effects on polishing accuracy"
Project: p06-polisher

Tier 3 (Project Memories):
  [project] Calibration loop adjusts Preston kp from surface measurements...

Tier 4 (Domain Knowledge):
  [materials] Zerodur CTE gradient dominates WFE contribution at fast
  focal ratios — THIS CAME FROM P04 WORK
```

The insight crosses over without any manual curation.

## Future directions

### Personal knowledge branch

The same architecture supports personal domains (health, finance,
personal) by adding new domain tags and a trust boundary so
Atomaste project data never leaks into personal packs. The domain
system is domain-agnostic — it doesn't care whether the domain is
"optics" or "nutrition".

### Multi-model extraction

Different models can specialize: sonnet for extraction, opus or
Gemini for triage review. Independent validation reduces correlated
blind spots on what qualifies as "earned insight" vs "common
knowledge."

### Reinforcement-based domain promotion

A domain-knowledge memory that gets reinforced across multiple
projects (its content echoed in p04, p05, and p06 responses)
accumulates confidence faster than a project-specific memory.
High-confidence domain memories could auto-promote to a "verified
knowledge" tier above regular domain knowledge.

@@ -24,10 +24,30 @@ read-only additive mode.

- Phase 5 - Project State
- Phase 7 - Context Builder

### Partial
### Baseline Complete

- Phase 4 - Identity / Preferences
- Phase 8 - OpenClaw Integration
- Phase 4 - Identity / Preferences. As of 2026-04-12: 3 identity
  memories (role, projects, infrastructure) and 3 preference memories
  (no API keys, multi-model collab, action-over-discussion) seeded
  on live Dalidou. The identity/preference band surfaces in context packs
  at a 5% budget ratio. Future identity/preference extraction happens
  organically via the nightly LLM extraction pipeline.

- Phase 8 - OpenClaw Integration (baseline only, not primary surface).
  As of 2026-04-15 the T420 OpenClaw helper (`t420-openclaw/atocore.py`)
  is verified end-to-end against live Dalidou: health check, auto-context
  with project detection, Trusted Project State surfacing, project-memory
  band, fail-open on unreachable host. Tested from both the development
  machine and the T420 via SSH. Scope is narrow: **14 request shapes
  against ~44 server routes**, predominantly read-oriented plus
  `POST/DELETE /project/state` and `POST /ingest/sources`. Memory
  management, interactions capture (covered separately by the OpenClaw
  capture plugin), admin/backup, entities, triage, and extraction write
  paths remain out of this client's surface by design — they are scoped
  to the operator client (`scripts/atocore_client.py`) per the
  read-heavy additive integration model. "Primary integration" is
  therefore an overclaim; "baseline read + project-state write helper" is
  the accurate framing.

### Baseline Complete

@@ -106,59 +126,58 @@ This sits implicitly between Phase 8 (OpenClaw) and Phase 11
(multi-model). Memory-review and engineering-entity commands are
deferred from the shared client until their workflows are exercised.

## What Is Real Today
## What Is Real Today (updated 2026-04-16)

- canonical AtoCore runtime on Dalidou
- canonical machine DB and vector store on Dalidou
- project registry with:
  - template
  - proposal preview
  - register
  - update
  - refresh
- read-only additive OpenClaw helper on the T420
- seeded project corpus for:
  - `p04-gigabit`
  - `p05-interferometer`
  - `p06-polisher`
- conservative Trusted Project State for those active projects
- first operational backup foundation for SQLite + project registry
- implementation-facing architecture notes for future engineering knowledge work
- first organic routing layer in OpenClaw via:
  - `detect-project`
  - `auto-context`
- canonical AtoCore runtime on Dalidou (`775960c`, deploy.sh verified)
- 33,253 vectors across 6 registered projects
- 234 captured interactions (192 claude-code, 38 openclaw, 4 test)
- 6 registered projects:
  - `p04-gigabit` (483 docs, 15 state entries)
  - `p05-interferometer` (109 docs, 18 state entries)
  - `p06-polisher` (564 docs, 19 state entries)
  - `atomizer-v2` (568 docs, 5 state entries)
  - `abb-space` (6 state entries)
  - `atocore` (drive source, 47 state entries)
- 110 Trusted Project State entries across all projects (decisions, requirements, facts, contacts, milestones)
- 84 active memories (31 project, 23 knowledge, 10 episodic, 8 adaptation, 7 preference, 5 identity)
- context pack assembly with 4 tiers: Trusted Project State > identity/preference > project memories > retrieved chunks
- query-relevance memory ranking with overlap-density scoring
- retrieval eval harness: 18 fixtures, 17/18 passing on live
- 303 tests passing
- nightly pipeline: backup → cleanup → rsync → OpenClaw import → vault refresh → extract → triage → **auto-promote/expire** → weekly synth/lint → **retrieval harness** → **pipeline summary to project state**
- Phase 10 operational: reinforcement-based auto-promotion (ref_count ≥ 3, confidence ≥ 0.7) + stale candidate expiry (14 days unreinforced) (see the sketch after this list)
- pipeline health visible in dashboard: interaction totals by client, pipeline last_run, harness results, triage stats
- off-host backup to clawdbot (T420) via rsync
- both Claude Code and OpenClaw capture interactions to AtoCore (OpenClaw via `before_agent_start` + `llm_output` plugin, verified live)
- DEV-LEDGER.md as shared operating memory between Claude and Codex
- observability dashboard at GET /admin/dashboard
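
The Phase 10 pass reduces to two thresholds. A minimal sketch, assuming hypothetical field names (`ref_count`, `confidence`, `last_reinforced_at`); the real logic lives in the nightly pipeline scripts:

```python
from datetime import datetime, timedelta, timezone

PROMOTE_REFS = 3            # ref_count ≥ 3
PROMOTE_CONF = 0.7          # confidence ≥ 0.7
EXPIRY = timedelta(days=14)

def promote_or_expire(candidate: dict, now: datetime | None = None) -> str:
    """Return the candidate's next status: 'active', 'expired', or 'candidate'."""
    now = now or datetime.now(timezone.utc)
    if candidate["ref_count"] >= PROMOTE_REFS and candidate["confidence"] >= PROMOTE_CONF:
        return "active"     # reinforced enough: graduates without review
    if now - candidate["last_reinforced_at"] > EXPIRY:
        return "expired"    # 14 days unreinforced: dropped from the queue
    return "candidate"      # stays in the queue for another night
```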

## Now

These are the current practical priorities.

1. Finish practical OpenClaw integration
   - make the helper lifecycle feel natural in daily use
   - use the new organic routing layer for project-knowledge questions
   - confirm fail-open behavior remains acceptable
   - keep AtoCore clearly additive
2. Tighten retrieval quality
   - reduce cross-project competition
   - improve ranking on short or ambiguous prompts
   - add only a few anchor docs where retrieval is still weak
3. Continue controlled ingestion
   - deepen active projects selectively
   - avoid noisy bulk corpus growth
4. Strengthen operational boringness
   - backup and restore procedure
   - Chroma rebuild / backup policy
   - retention and restore validation
1. **Observe the enhanced pipeline** — let the nightly pipeline run for a
   week with the new harness + summary + auto-promote steps. Check the
   dashboard daily. Verify the pipeline summary populates correctly.
2. **Knowledge density** — run batch extraction over the full 234
   interactions (`--since 2026-01-01`) to mine the backlog for knowledge.
   Target: 100+ active memories.
3. **Multi-model triage** (Phase 11 entry) — switch auto-triage to a
   different model than the extractor for independent validation.
4. **Fix p04-constraints harness failure** — retrieval doesn't surface
   "Zerodur" for p04 constraint queries. Investigate whether it's a missing
   memory or a retrieval-ranking issue.

## Next

These are the next major layers after the current practical pass.
These are the next major layers after the current stabilization pass.

1. Clarify AtoDrive as a real operational truth layer
2. Mature identity / preferences handling
3. Improve observability for:
   - retrieval quality
   - context-pack inspection
   - comparison of behavior with and without AtoCore
1. Phase 6 AtoDrive — clarify Google Drive as a trusted operational
   source and ingest from it
2. Phase 13 Hardening — Chroma backup policy, monitoring, alerting,
   failure visibility beyond log files
3. Engineering V1 implementation sprint — once knowledge density is
   sufficient and the pipeline feels boring and dependable

## Later

@@ -176,11 +195,17 @@ direction, but not yet ready for immediate implementation.

These remain intentionally deferred.

- automatic write-back from OpenClaw into AtoCore
- automatic memory promotion
- ~~reflection loop integration~~ — baseline now in (capture→reinforce
  auto, extract batch/manual). Extractor tuning and scheduled batch
  extraction still open.
- ~~automatic write-back from OpenClaw into AtoCore~~ — the OpenClaw capture
  plugin now exists (`openclaw-plugins/atocore-capture/`) and interactions
  flow. Write-back of promoted memories into OpenClaw's own memory
  system is still deferred.
- ~~automatic memory promotion~~ — Phase 10 complete: auto-triage handles
  extraction candidates, reinforcement-based auto-promotion graduates
  candidates referenced 3+ times to active, and stale candidates expire
  after 14 days unreinforced.
- ~~reflection loop integration~~ — fully operational: capture (both
  clients) → reinforce (automatic) → extract (nightly cron, sonnet) →
  auto-triage (nightly, sonnet) → only needs_human reaches the user.
- replacing OpenClaw's own memory system
- live machine-DB sync between machines
- full ontology / graph expansion before the current baseline is stable
@@ -1,317 +0,0 @@

# OpenClaw x AtoCore V1 Audit Note

## Scope

This note is the Phase 1 audit for a safe OpenClaw x AtoCore operating model.
It covers only what was directly verified in `/home/papa/ATOCore` and `/home/papa/clawd` on 2026-04-23, plus explicit assumptions called out as assumptions.

This phase does not change code, runtime behavior, skills, helpers, or automation.

## Files requested and verified

The following requested AtoCore files were present and reviewed:

- `docs/openclaw-integration-contract.md`
- `docs/architecture/llm-client-integration.md`
- `docs/architecture/representation-authority.md`
- `docs/operating-model.md`
- `docs/current-state.md`
- `docs/master-plan-status.md`
- `docs/operations.md`
- `AGENTS.md`
- `CLAUDE.md`
- `DEV-LEDGER.md`

No requested files were missing.

## What was directly verified

### 1. OpenClaw instruction surface

In `/home/papa/clawd/AGENTS.md`, OpenClaw is currently instructed to:

- use the `atocore-context` skill for project-dependent work
- treat AtoCore as additive and fail-open
- prefer `auto-context` for project knowledge questions
- prefer `project-state` for trusted current truth
- use `refresh-project` if the human explicitly asked to refresh or ingest project changes
- use `discrawl` automatically when Antoine asks about prior Discord discussions

This is already close to the intended additive read path, but it also exposes mutating project operations in a general operator workflow.

### 2. OpenClaw helper skill surface

The current helper skill is:

- `/home/papa/clawd/skills/atocore-context/SKILL.md`
- `/home/papa/clawd/skills/atocore-context/scripts/atocore.sh`

The skill describes AtoCore as a read-only additive context service, but the helper script currently exposes the following commands:

- `health`
- `sources`
- `stats`
- `projects`
- `project-template`
- `detect-project`
- `auto-context`
- `debug-context`
- `propose-project`
- `register-project`
- `update-project`
- `refresh-project`
- `project-state`
- `query`
- `context-build`
- `ingest-sources`

That means the helper is not actually read-only. It can drive registry mutation and ingestion-related operations.

### 3. AtoCore shared operator client surface

The shared operator client in `/home/papa/ATOCore/scripts/atocore_client.py` exposes a broader surface than the OpenClaw helper, including:

- all of the project and context operations above
- `project-state-set`
- `project-state-invalidate`
- `capture`
- `extract`
- `reinforce-interaction`
- `list-interactions`
- `get-interaction`
- `queue`
- `promote`
- `reject`
- `batch-extract`
- `triage`

This matches the architectural intent in `docs/architecture/llm-client-integration.md`: a shared operator client should be the canonical reusable surface for multiple frontends.

### 4. Actual layering status today

The intended layering is documented in `docs/architecture/llm-client-integration.md` as:

- AtoCore HTTP API
- shared operator client
- thin per-agent frontends

But the current OpenClaw helper is still its own Bash implementation. It does not shell out to the shared operator client today.

So the shared-client pattern is documented, but not yet applied to OpenClaw.
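
For reference, the intended layering would make the OpenClaw helper a thin wrapper. A sketch of that pattern (hypothetical; nothing like this exists in `/home/papa/clawd` today):

```python
import subprocess

def atocore(*args: str) -> str:
    """Thin frontend: delegate to the shared operator client, fail-open."""
    try:
        result = subprocess.run(
            ["python3", "/home/papa/ATOCore/scripts/atocore_client.py", *args],
            capture_output=True, text=True, timeout=30,
        )
    except (OSError, subprocess.TimeoutExpired):
        return ""  # fail-open: an unreachable AtoCore must not block OpenClaw
    return result.stdout if result.returncode == 0 else ""

# e.g. atocore("auto-context", "what's the interferometer error budget?", "3000")
```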

### 5. AtoCore availability and fail-open behavior

The OpenClaw helper successfully reached the live AtoCore instance during this audit.

Verified live behavior:

- `health` worked
- `projects` worked
- the helper still has fail-open logic when network access fails

This part is consistent with the stated additive and fail-open stance.

### 6. Discrawl availability

The `discrawl` CLI is installed locally and available.

Verified during audit:

- binary present
- version `0.3.0`
- OpenClaw workspace instructions explicitly route project-history recall through `discrawl`

This supports the desired framing of Discord and Discrawl as an evidence stream.

### 7. Screenpipe status

`screenpipe` was not present as a local command in this environment during the audit.

For V1, Screenpipe is deferred and out of scope. No active Screenpipe input lane was verified or adopted in the final V1 policy.

## Current implementation shape

### What OpenClaw can do safely right now

The current safe, directly verified OpenClaw -> AtoCore path is:

- project detection
- context build
- query and retrieval
- project-state read
- service inspection
- fail-open fallback

That is the mature part of the integration.

### What OpenClaw can also do today, but should be treated as controlled operator actions

The current helper also exposes:

- project proposal preview
- project registration
- project update
- project refresh
- ingest-sources

These should not be treated as background or conversational automation. They are operator actions and need an explicit approval policy.

### What exists in AtoCore but is not exposed through the OpenClaw helper

The shared operator client already supports:

- interaction capture
- candidate extraction
- queue review
- promote or reject
- trusted project-state write and invalidate

The current OpenClaw helper does not expose that surface.

This is important for V1 design: the write-capable lanes already exist in AtoCore, but they are not yet safely shaped for Discord-originated automation.

## Conflicts with the target V1 stance

The following conflicts are real and should be named explicitly.

### Conflict 1 - the OpenClaw helper is described as read-only, but it is not read-only

`SKILL.md` frames the integration as read-only additive context.
`atocore.sh` exposes mutating operations:

- `register-project`
- `update-project`
- `refresh-project`
- `ingest-sources`

That mismatch needs a policy fix in Phase 2. For Phase 1 it must be documented as a conflict.

### Conflict 2 - OpenClaw duplicates client logic instead of using the shared operator client

The architecture docs prefer a shared operator client reused across frontends.
The OpenClaw helper currently reimplements request logic and project detection in Bash.

That is a direct conflict with the preferred shared-client pattern.

### Conflict 3 - mutating project operations are too close to the conversational surface

The helper makes registry and ingestion operations reachable from the OpenClaw side without a dedicated Discord-specific approval gate.

Even if the human explicitly asks for a refresh, the current shape does not yet distinguish between:

- a direct trusted operator action in a controlled session
- a Discord-originated conversational path that should require an explicit human approval step before mutation

The Phase 2 V1 policy needs that distinction.

### Conflict 4 - current docs overstate or blur write capabilities

`docs/current-state.md` says OpenClaw can seed AtoCore through project-scoped memory entries and staged document ingestion.
That was not directly verified through the current OpenClaw helper surface in `/home/papa/clawd`.

The helper script does not expose:

- `capture`
- `extract`
- `promote`
- `reject`
- `project-state-set`

So there is at least a documentation and runtime-surface mismatch.

### Conflict 5 - there was no single OpenClaw-facing evidence lane description before this doc set

The target architecture needs a clean distinction between:

- raw evidence
- reviewable candidates
- active memories and entities
- trusted project_state

Today that distinction exists conceptually across several AtoCore docs, but before this Phase 1 doc set there was no single OpenClaw-facing operating model that told an operator exactly where Discord and Discrawl signals are allowed to land.

That is the main gap this doc set closes.

## What is already aligned with the target V1 stance

Several important pieces are already aligned.

### Aligned 1 - additive plus fail-open

Both AtoCore and OpenClaw docs consistently say AtoCore should be additive and fail-open from the OpenClaw side.
That is the right baseline and was verified live.

### Aligned 2 - project_state is already treated as special and curated

AtoCore architecture docs already treat `project_state` as the highest-trust curated layer.
This supports the rule that raw signals must not directly auto-write trusted project state.

### Aligned 3 - canonical-home thinking already exists

`docs/architecture/representation-authority.md` already establishes that each fact type needs one canonical home.
That is exactly the right foundation for the Discord and Discrawl design.

### Aligned 4 - reflection and candidate lifecycle already exist in AtoCore

The shared operator client and AtoCore docs already have a candidate workflow:

- capture
- extract
- queue
- promote or reject

That means V1 does not need to invent a new trust model. It needs to apply the existing one correctly to Discord and Discrawl signals.

## Recommended V1 operating interpretation

Until implementation work begins, the safest V1 operating interpretation is:

1. Discord and Discrawl are evidence sources, not truth sources.
2. OpenClaw is the orchestrator and operator, not canonical storage.
3. AtoCore memories may hold reviewed episodic, personal, and loose project signal.
4. Future AtoCore entities should hold reviewed structured decisions, requirements, and constraints.
5. `project_state` remains manual or tightly gated only.
6. Registry mutation, refresh, ingestion, and candidate promotion or rejection require explicit human approval on Discord-originated paths.
7. The shared operator client should become the only write-capable operator surface reused by OpenClaw and other frontends.
8. Screenpipe remains deferred and out of V1 scope.

## Assumption log

The following points were not directly verified and must stay labeled as assumptions.

1. Screenpipe integration shape is unverified and deferred.
   - The `screenpipe` command was not present locally.
   - No verified Screenpipe pipeline files were found in the inspected workspaces.
   - V1 therefore excludes Screenpipe from active policy and runtime scope.

2. No direct Discord -> AtoCore auto-mutation path was verified in code.
   - The OpenClaw workspace clearly contains read and query context behavior and a Discrawl retrieval rule.
   - It does not clearly expose a verified Discord-triggered path that auto-calls `project-state-set`, `promote`, `reject`, or `register-project`.
   - The risk is therefore one of policy and command proximity, not a proven live mutation bug.

3. OpenClaw runtime use of the shared operator client was not verified because it is not implemented yet.
   - The shared client exists in the AtoCore repo.
   - The OpenClaw helper is still its own Bash implementation.

4. A dedicated evidence store was not verified as a first-class AtoCore schema layer.
   - Existing AtoCore surfaces clearly support interactions and candidate memories.
   - This V1 model therefore uses evidence artifacts, interactions, and archive bundles as an architectural lane, without claiming a new implemented table already exists.

5. Future entities remain future.
   - The entity layer is architected in AtoCore docs.
   - This audit did not verify a production entity promotion flow being used by OpenClaw.

## Bottom line

The good news is that the trust foundations already exist.

The main conclusion is that the current system is closest to a safe V1 when interpreted this way:

- keep AtoCore additive and fail-open
- treat Discord and Discrawl as evidence only
- route reviewed signal into memory candidates first
- reserve `project_state` for explicit curation only
- move OpenClaw toward the shared operator client instead of maintaining a separate write-capable helper surface
- keep Screenpipe out of V1

That gives a coherent path to Phase 2 without pretending the current implementation is already there.
@@ -1,224 +0,0 @@

commit 80bd99aaea1bcab2ea5ea732df2f749e84d84318
Author: Anto01 <antoine.letarte@gmail.com>
Date:   Thu Apr 23 15:59:59 2026 +0000

    Tighten OpenClaw AtoCore governance policy

diff --git a/AGENTS.md b/AGENTS.md
index 1da3385..ea4d103 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -105,7 +105,7 @@ Reactions are lightweight social signals. Humans use them constantly — they sa

 ## Tools

-When a task is contextual and project-dependent, use the `atocore-context` skill to query Dalidou-hosted AtoCore for trusted project state, retrieval, context-building, registered project refresh, or project registration discovery when that will improve accuracy. Treat AtoCore as additive and fail-open; do not replace OpenClaw's own memory with it. Prefer `projects` and `refresh-project <id>` when a known project needs a clean source refresh, and use `project-template` when proposing a new project registration, and `propose-project ...` when you want a normalized preview before editing the registry manually.
+When a task is contextual and project-dependent, use the `atocore-context` skill to query Dalidou-hosted AtoCore for trusted project-state reads, retrieval, and context-building when that will improve accuracy. Treat AtoCore as additive and fail-open; do not replace OpenClaw's own memory with it.

 ### Organic AtoCore Routing

@@ -116,14 +116,60 @@ Use AtoCore first when the prompt:
 - asks about architecture, constraints, status, requirements, vendors, planning, prior decisions, or current project truth
 - would benefit from cross-source context instead of only the local repo

-Preferred flow:
+Preferred read path:
 1. `auto-context "<prompt>" 3000` for most project knowledge questions
 2. `project-state <project>` when the user is clearly asking for trusted current truth
-3. `refresh-project <id>` before answering if the user explicitly asked to refresh or ingest project changes
+3. fall back to normal OpenClaw tools and memory if AtoCore returns `no_project_match` or is unavailable

 Do not force AtoCore for purely local coding actions like fixing a function, editing one file, or running tests, unless broader project context is likely to matter.

-If `auto-context` returns `no_project_match` or AtoCore is unavailable, continue normally with OpenClaw's own tools and memory.
+### AtoCore Governance
+
+Default Discord posture for AtoCore is read-only and additive.
+
+Discord-originated or Discrawl-originated context may inform:
+- evidence collection
+- retrieval
+- context building
+- candidate review preparation
+
+It must not directly perform AtoCore mutating actions.
+
+Mutating AtoCore actions include:
+- `register-project`
+- `update-project`
+- `refresh-project`
+- `ingest-sources`
+- `project-state-set`
+- `project-state-invalidate`
+- `promote`
+- `reject`
+- any future trusted-state or review mutation
+
+These actions require explicit human approval for the specific action in the current thread or session.
+Do not infer approval from:
+- prior Discord discussion
+- Discrawl archive recall
+- screener output
+- vague intent like "we should probably refresh this"
+
+Hard rules:
+- no direct Discord -> `project_state`
+- no direct Discord -> register / update / refresh / ingest / promote / reject
+- no hidden mutation inside screening or review-prep flows
+- PKM notes are not the main operator instruction surface for AtoCore behavior
+
+### Discord Archive Retrieval (discrawl)
+
+When Antoine asks in natural language about prior project discussions, decisions, thread history, answers, or whether something was already discussed in Discord, use the local `discrawl` archive automatically.
+
+Rules:
+- Antoine should not need to remember or type `discrawl` commands.
+- Treat Discord history as a normal background retrieval source, like memory or project docs.
+- Use `discrawl` silently when it will materially improve recall or confidence.
+- Prefer this for prompts like "what did we decide", "did we discuss", "summarize the thread", "what were the open questions", or anything clearly anchored in prior Discord conversation.
+- If both AtoCore and Discord history are relevant, use both and synthesize.
+- If `discrawl` is stale or unavailable, say so briefly and continue with the best available context.

 Skills provide your tools. When you need one, check its `SKILL.md`. Keep local notes (camera names, SSH details, voice preferences) in `TOOLS.md`.

diff --git a/skills/atocore-context/SKILL.md b/skills/atocore-context/SKILL.md
index e42a7b7..fa23207 100644
--- a/skills/atocore-context/SKILL.md
+++ b/skills/atocore-context/SKILL.md
@@ -1,12 +1,11 @@
 ---
 name: atocore-context
-description: Use Dalidou-hosted AtoCore as a read-only external context service for project state, retrieval, and context-building without touching OpenClaw's own memory.
+description: Use Dalidou-hosted AtoCore as an additive external context service for project-state reads, retrieval, and context-building without replacing OpenClaw's own memory.
 ---

 # AtoCore Context

-Use this skill when you need trusted project context, retrieval help, or AtoCore
-health/status from the canonical Dalidou instance.
+Use this skill when you need trusted project context, retrieval help, or AtoCore health and status from the canonical Dalidou instance.

 ## Purpose

@@ -14,7 +13,7 @@ AtoCore is an additive external context service.

 - It does not replace OpenClaw's own memory.
 - It should be used for contextual work, not trivial prompts.
-- It is read-only in this first integration batch.
+- The default posture is read-only and fail-open.
 - If AtoCore is unavailable, continue normally.

 ## Canonical Endpoint
@@ -31,27 +30,22 @@ Override with:
 ATOCORE_BASE_URL=http://host:port
 ```

-## Safe Usage
+## V1 scope

-Use AtoCore for:
-- project-state checks
+Use this skill in V1 for:
+
+- project-state reads
 - automatic project detection for normal project questions
-- retrieval over ingested project/ecosystem docs
+- retrieval over ingested project and ecosystem docs
 - context-building for complex project prompts
 - verifying current AtoCore hosting and architecture state
-- listing registered projects and refreshing a known project source set
-- inspecting the project registration template before proposing a new project entry
-- generating a proposal preview for a new project registration without writing it
-- registering an approved project entry when explicitly requested
-- updating an existing registered project when aliases or description need refinement
+- inspecting project registrations and proposal previews when operator review is needed

-Do not use AtoCore for:
-- automatic memory write-back
-- replacing OpenClaw memory
-- silent ingestion of broad new corpora without approval
-- mutating the registry automatically without human approval
+Screenpipe is out of V1 scope. Do not treat it as an active input lane or dependency for this skill.
+
+## Read path commands

-## Commands
+These are the normal additive commands:

 ```bash
 ~/clawd/skills/atocore-context/scripts/atocore.sh health
@@ -62,15 +56,56 @@ Do not use AtoCore for:
 ~/clawd/skills/atocore-context/scripts/atocore.sh detect-project "what's the interferometer error budget?"
 ~/clawd/skills/atocore-context/scripts/atocore.sh auto-context "what's the interferometer error budget?" 3000
 ~/clawd/skills/atocore-context/scripts/atocore.sh debug-context
-~/clawd/skills/atocore-context/scripts/atocore.sh propose-project p07-example "p07,example-project" vault incoming/projects/p07-example "Example project" "Primary staged project docs"
-~/clawd/skills/atocore-context/scripts/atocore.sh register-project p07-example "p07,example-project" vault incoming/projects/p07-example "Example project" "Primary staged project docs"
-~/clawd/skills/atocore-context/scripts/atocore.sh update-project p05 "Curated staged docs for the P05 interferometer architecture, vendors, and error-budget project."
-~/clawd/skills/atocore-context/scripts/atocore.sh refresh-project p05
 ~/clawd/skills/atocore-context/scripts/atocore.sh project-state atocore
 ~/clawd/skills/atocore-context/scripts/atocore.sh query "What is AtoDrive?"
 ~/clawd/skills/atocore-context/scripts/atocore.sh context-build "Need current AtoCore architecture" atocore 3000
 ```

+## Approved operator actions only
+
+The helper currently exposes some mutating commands, but they are not normal background behavior.
+Treat them as approved operator actions only:
+
+```bash
+~/clawd/skills/atocore-context/scripts/atocore.sh propose-project ...
+~/clawd/skills/atocore-context/scripts/atocore.sh register-project ...
+~/clawd/skills/atocore-context/scripts/atocore.sh update-project ...
+~/clawd/skills/atocore-context/scripts/atocore.sh refresh-project ...
+~/clawd/skills/atocore-context/scripts/atocore.sh ingest-sources
+```
+
+Do not use these from a Discord-originated path unless the human explicitly approves the specific action in the current thread or session.
+
+## Explicit approval rule
+
+Explicit approval means all of the following:
+
+- the human directly instructs the specific mutating action
+- the instruction is in the current thread or current session
+- the approval is for that specific action
+- the approval is not inferred from Discord evidence, Discrawl recall, screener output, or vague intent
+
+Examples of explicit approval:
+
+- "refresh p05 now"
+- "register this project"
+- "update the aliases"
+
+Non-examples:
+
+- "we should probably refresh this"
+- archived discussion suggesting a refresh
+- a screener note recommending promotion or ingestion
+
+## Do not use AtoCore for
+
+- automatic memory write-back
+- replacing OpenClaw memory
+- silent ingestion of broad new corpora without approval
+- automatic registry mutation
+- direct Discord-originated mutation of trusted or operator state
+- direct Discord-originated promote or reject actions
+
 ## Contract

 - prefer AtoCore only when additional context is genuinely useful
@@ -79,10 +114,6 @@ Do not use AtoCore for:
 - cite when information came from AtoCore rather than local OpenClaw memory
 - for normal project knowledge questions, prefer `auto-context "<prompt>" 3000` before answering
 - use `detect-project "<prompt>"` when you want to inspect project inference explicitly
-- use `debug-context` right after `auto-context` or `context-build` when you want
-  to inspect the exact last AtoCore context pack
-- prefer `projects` plus `refresh-project <id>` over long ad hoc ingest instructions when the project is already registered
-- use `project-template` when preparing a new project registration proposal
-- use `propose-project ...` to draft a normalized entry and review collisions first
-- use `register-project ...` only after the proposal has been reviewed and approved
-- use `update-project ...` when a registered project's description or aliases need refinement before refresh
+- use `debug-context` right after `auto-context` or `context-build` when you want to inspect the exact last AtoCore context pack
+- use `project-template` and `propose-project ...` when preparing a reviewed registration proposal
+- use `register-project ...`, `update-project ...`, `refresh-project ...`, and `ingest-sources` only after explicit approval

docs/openclaw-atocore-integration-proposal.md (new file, 56 lines)
@@ -0,0 +1,56 @@

# OpenClaw -> AtoCore Integration Proposal

One-way pull is the right pattern.

**Stable surface to pull**
- Durable files in the OpenClaw workspace:
  - `SOUL.md`
  - `USER.md`
  - `MODEL-ROUTING.md`
  - `MEMORY.md`
  - `memory/YYYY-MM-DD.md`
  - `memory/heartbeat-state.json`
  - `HEARTBEAT.md` only as operational state, not long-term truth
- These are explicitly documented in `t420-openclaw/AGENTS.md` as the continuity layer OpenClaw reads every session.

**Volatile vs durable**
- Durable:
  - `SOUL.md`, `USER.md`, `MODEL-ROUTING.md`, `MEMORY.md`
  - dated memory notes under `memory/`
  - explicit JSON state like `memory/heartbeat-state.json`
- Volatile:
  - in-session context
  - ephemeral heartbeat work
  - transient orchestration state
  - platform response buffers
- Semi-durable:
  - `HEARTBEAT.md` and operational notes; useful for importer hints, but not canonical identity/memory truth

**Formats**
- Mostly Markdown
- Some JSON (`heartbeat-state.json`)
- No stable OpenClaw-local DB or API surface is visible in this snapshot

**How pull should work**
- Start with cron-based filesystem reads, not an OpenClaw HTTP API.
- Read the durable files on a schedule, hash them, and import only deltas (see the sketch after this list).
- Map them by type:
  - `SOUL.md` / `USER.md` -> identity/preferences review candidates
  - `MEMORY.md` -> curated long-term memory candidates
  - `memory/YYYY-MM-DD.md` -> interaction/episodic import stream
  - `heartbeat-state.json` -> low-priority ops metadata only if useful
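
A minimal sketch of that hash-and-delta read, assuming the workspace path above and a hypothetical local state file; the importer itself does not exist yet:

```python
import hashlib
import json
from pathlib import Path

WORKSPACE = Path("/home/papa/clawd")                      # documented workspace
STATE_FILE = Path.home() / ".atocore-import-state.json"   # hypothetical
DURABLE = ["SOUL.md", "USER.md", "MODEL-ROUTING.md", "MEMORY.md"]

def changed_files() -> list[Path]:
    """Return durable files whose sha256 moved since the last run."""
    seen = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    candidates = [WORKSPACE / name for name in DURABLE]
    if (WORKSPACE / "memory").is_dir():
        candidates += sorted((WORKSPACE / "memory").glob("*.md"))  # dated notes
    changed = []
    for path in candidates:
        if not path.exists():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if seen.get(str(path)) != digest:
            changed.append(path)        # import this delta, then remember it
            seen[str(path)] = digest
    STATE_FILE.write_text(json.dumps(seen, indent=2))
    return changed
```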

**Discord**
- I do not see a documented durable Discord message store in the OpenClaw workspace snapshot.
- `AGENTS.md` references Discord behavior, but not a canonical local log/database.
- Treat Discord as transient unless OpenClaw exposes an explicit export/log file later.

**Biggest risk**
- Importing raw OpenClaw files as truth will blur the line between curated memory and noisy session chatter.
- Mitigation: the importer should classify by source tier, preserve provenance, and default to candidate/episodic ingestion rather than active-memory promotion.

**Recommendation**
- Do not build two-way sync.
- Do not require OpenClaw to change architecture.
- Build one importer against the file continuity layer first.
- Add a formal export surface later only if the importer becomes too heuristic.
@@ -1,354 +0,0 @@

# OpenClaw x AtoCore Nightly Screener Runbook

## Purpose

The nightly screener is the V1 bridge between broad evidence capture and narrow trusted state.

Its job is to:

- gather raw evidence from approved V1 sources
- reduce noise
- produce reviewable candidate material
- prepare operator review work
- never silently create trusted truth

## Scope

The nightly screener is a screening and preparation job.
It is not a trusted-state writer.
It is not a registry operator.
It is not a hidden reviewer.

V1 active inputs are:

- Discord and Discrawl evidence
- OpenClaw interaction evidence
- PKM, repos, and KB references
- read-only AtoCore context for comparison and deduplication

## Explicit approval rule

If the screener output points at a mutating operator action, that action still requires:

- direct human instruction
- in the current thread or current session
- for that specific action
- with no inference from evidence or screener output alone

The screener may recommend review. It may not manufacture approval.

## Inputs

The screener may consume the following inputs when available.

### 1. Discord and Discrawl evidence

Examples:

- recent archived Discord messages
- thread excerpts relevant to known projects
- conversation clusters around decisions, requirements, constraints, or repeated questions

### 2. OpenClaw interaction evidence

Examples:

- captured interactions
- recent operator conversations relevant to projects
- already-logged evidence bundles

### 3. Read-only AtoCore context inputs

Examples:

- project registry lookup for project matching
- project_state read for comparison only
- memory or entity lookups for deduplication only

These reads may help the screener rank or classify candidates, but they must not be used as a write side effect.

### 4. Optional canonical-source references

Examples:

- PKM notes
- repo docs
- KB-export summaries

These may be consulted to decide whether a signal appears to duplicate or contradict already-canonical truth.

## Outputs

The screener should produce output in four buckets.

### 1. Nightly screener report

A compact report describing:

- inputs seen
- items skipped
- candidate counts
- project match confidence distribution
- failures or unavailable sources
- items requiring human review

### 2. Evidence bundle or manifest

A structured bundle of the source snippets that justified each candidate or unresolved item.
This is the reviewer's provenance package.

### 3. Candidate manifests

Separate candidate manifests for:

- memory candidates
- entity candidates later
- unresolved "needs canonical-source update first" items

### 4. Operator action queue

A short list of items needing explicit human action, such as:

- review these candidates
- decide whether to refresh project X
- decide whether to curate project_state
- decide whether a Discord-originated claim should first be reflected in PKM, repo, or KB

## Required non-output

The screener must not directly produce any of the following:

- active memories without review
- active entities without review
- project_state writes
- registry mutation
- refresh operations
- ingestion operations
- promote or reject decisions

## Nightly procedure

### Step 1 - load last-run checkpoint

Read the last successful screener checkpoint so the run knows:

- what time range to inspect
- what evidence was already processed
- which items were already dropped or bundled

If no checkpoint exists, use a conservative bounded time window and mark the run as bootstrap mode.

### Step 2 - gather evidence

Collect available evidence from each configured source.

Per-source rule:

- source unavailable -> note it, continue
- source empty -> note it, continue
- source noisy -> keep raw capture bounded and deduplicated

### Step 3 - normalize and deduplicate

For each collected item:

- normalize timestamps, source ids, and project hints
- remove exact duplicates
- group repeated or near-identical evidence when practical
- keep provenance pointers intact

The goal is to avoid flooding review with repeated copies of the same conversation.

### Step 4 - attempt project association

For each evidence item, try to associate it with:

- a registered project id, or
- `unassigned` if confidence is low

Rules:

- high confidence match -> attach project id
- low confidence match -> mark as uncertain
- no good match -> leave unassigned

Do not force a project assignment just to make the output tidier.

### Step 5 - classify signal type

Classify each normalized item into one of these buckets:

- noise / ignore
- evidence only
- memory candidate
- entity candidate
- needs canonical-source update first
- needs explicit operator decision

If the classification is uncertain, choose the lower-trust bucket.
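
A minimal sketch of that downgrade rule. The trust ordering of the buckets is an assumption made for illustration; the screener itself is not implemented here:

```python
from enum import IntEnum

class Lane(IntEnum):
    """Screening lanes, ordered from lowest to highest trust (assumed order)."""
    NOISE = 0
    EVIDENCE_ONLY = 1
    MEMORY_CANDIDATE = 2
    ENTITY_CANDIDATE = 3
    NEEDS_CANONICAL_UPDATE = 4
    NEEDS_OPERATOR_DECISION = 5

def resolve(possible: set[Lane]) -> Lane:
    """Uncertainty resolves downward: pick the lowest-trust plausible lane."""
    return min(possible)

# An item that might be an entity candidate but might be mere evidence
# stays in the evidence lane until a human review says otherwise.
assert resolve({Lane.ENTITY_CANDIDATE, Lane.EVIDENCE_ONLY}) == Lane.EVIDENCE_ONLY
```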

### Step 6 - compare against higher-trust layers

For non-noise items, compare against the current higher-trust landscape.

Check for:

- already-active equivalent memory
- already-active equivalent entity later
- existing project_state answer
- obvious duplication of canonical source truth
- obvious contradiction with canonical source truth

This comparison is read-only.
It is used only to rank and annotate output.

### Step 7 - build candidate bundles

For each candidate:

- include the candidate text or shape
- include provenance snippets
- include source type
- include project association confidence
- include reason for candidate classification
- include conflict or duplicate notes if found

### Step 8 - build unresolved operator queue

Some items should not become candidates yet.
Examples:

- "This looks like current truth but should first be updated in PKM, repo, or KB."
- "This Discord-originated request asks for refresh or ingest."
- "This might be a decision, but confidence is too low."

These belong in a small operator queue, not in trusted state.

### Step 9 - persist report artifacts only

Persist only:

- screener report
- evidence manifests
- candidate manifests
- checkpoint metadata

If candidate persistence into AtoCore is enabled later, it still remains a candidate-only path and must not skip review.

### Step 10 - exit fail-open

If the screener could not reach AtoCore or some source system:

- write the failure or skip into the report
- keep the checkpoint conservative
- do not fake success
- do not silently mutate anything elsewhere

## Failure modes

### Failure mode 1 - AtoCore unavailable

Behavior:

- continue in fail-open mode if possible
- write a report that the run was evidence-only or degraded
- do not attempt write-side recovery actions

### Failure mode 2 - Discrawl unavailable or stale

Behavior:

- note Discord archive input unavailable or stale
- continue with other sources
- do not invent Discord evidence summaries

### Failure mode 3 - candidate explosion

Behavior:

- rank candidates
- keep only a bounded top set for review
- put the remainder into a dropped or deferred manifest
- do not overwhelm the reviewer queue

### Failure mode 4 - low-confidence project mapping

Behavior:

- leave items unassigned or uncertain
- do not force them into a project-specific truth lane

### Failure mode 5 - contradiction with trusted truth

Behavior:

- flag the contradiction in the report
- keep the evidence or candidate for review if useful
- do not overwrite project_state

### Failure mode 6 - direct operator-action request found in evidence

Examples:

- "register this project"
- "refresh this source"
- "promote this memory"

Behavior:

- place the item into the operator action queue
- require explicit human approval
- do not perform the mutation as part of the screener

## Review handoff format

Each screener run should hand off a compact review package containing:

1. a run summary
2. candidate counts by type and project
3. top candidates with provenance
4. unresolved items needing explicit operator choice
5. unavailable-source notes
6. checkpoint status

The handoff should be short enough for a human to review without reading the entire raw archive.

## Safety rules

The screener must obey these rules every night.

1. No direct project_state writes.
2. No direct registry mutation.
3. No direct refresh or ingest.
4. No direct promote or reject.
5. No treating Discord or Discrawl as trusted truth.
6. No hiding source uncertainty.
7. No inventing missing integrations.
8. No bringing deferred sources into V1 through policy drift or hidden dependency.

## Minimum useful run

A useful screener run can still succeed even if it only does this:

- gathers available Discord and OpenClaw evidence
- filters obvious noise
- produces a small candidate manifest
- notes unavailable archive inputs if any
- leaves trusted state untouched

That is still a correct V1 run.

## Deferred from V1

Screenpipe is deferred from V1. It is not an active input, not a required dependency, and not part of the runtime behavior of this V1 screener.

## Bottom line

The nightly screener is not the brain of the system.
It is the filter.

Its purpose is to make human review easier while preserving the trust hierarchy:

- broad capture in
- narrow reviewed truth out
- no hidden mutations in the middle
@@ -1,360 +0,0 @@

# OpenClaw x AtoCore V1 Promotion Pipeline

## Purpose

This document defines the V1 promotion pipeline for signals coming from Discord, Discrawl, OpenClaw, PKM, and repos.

The rule is simple:

- raw capture is evidence
- screening turns evidence into candidate material
- review promotes candidates into canonical homes
- trusted state is curated explicitly, not inferred automatically

## V1 scope

V1 active inputs are:

- Discord live conversation
- Discrawl archive retrieval
- OpenClaw interaction logs and evidence bundles
- PKM notes
- repos, KB exports, and repo docs

Read-only AtoCore context may be consulted for comparison and deduplication.

## Explicit approval rule

When this pipeline refers to approval or review for a mutating action, it means:

- the human directly instructs the specific action
- the instruction is in the current thread or current session
- the approval is for that specific action
- the approval is not inferred from evidence, archives, or screener output

## Pipeline summary

```text
raw capture
  -> evidence bundle
  -> nightly screening
  -> candidate queue
  -> human review
  -> canonical home
  -> optional trusted-state curation
```

## Stage 0 - raw capture

### Inputs

Raw capture may come from:

- Discord live conversation
- Discrawl archive retrieval
- OpenClaw interaction logs
- PKM notes
- repos / KB exports / repo docs

### Rule at this stage

Nothing captured here is trusted truth yet.
Everything is either:

- raw evidence, or
- a pointer to an already-canonical source

## Stage 1 - evidence bundle

The first durable V1 destination for raw signals is the evidence lane.

Examples of evidence bundle forms:

- AtoCore interaction records
- Discrawl retrieval result sets
- nightly screener input bundles
- local archived artifacts or manifests
- optional source snapshots used only for review preparation

### What evidence is for

Evidence exists so the operator can later answer:

- what did we actually see?
- where did this claim come from?
- what context supported the candidate?
- what should the reviewer inspect before promoting anything?

### What evidence is not for

Evidence is not:

- active memory
- active entity
- trusted project_state
- registry truth

## Stage 2 - screening

The nightly screener or an explicit review flow reads evidence and classifies it.

### Screening outputs

Each observed signal should be classified into one of these lanes:

1. Ignore / noise
   - chatter
   - duplicate archive material
   - ambiguous fragments
   - low-signal scraps

2. Keep as evidence only
   - useful context, but too ambiguous or too raw to promote

3. Memory candidate
   - stable enough to review as episodic, personal, or loose project signal

4. Entity candidate
   - structured enough to review as a future decision, requirement, constraint, or validation fact

5. Needs canonical-source update first
   - appears to assert current trusted truth but should first be reflected in the real canonical home, such as PKM, repo, or KB tool

### Key screening rule

If the screener cannot confidently tell whether a signal is:

- raw evidence,
- a loose durable memory,
- or a structured project truth,

then it must pick the lower-trust lane.

In V1, uncertainty resolves downward.

## Stage 3 - candidate queue

Only screened outputs may enter the candidate queue.

### Memory-candidate lane

Use this lane for reviewed-signal candidates such as:

- preferences
- episodic facts
- identity facts
- loose stable project signal that is useful to remember but not yet a formal structured entity

Examples:

- "Antoine prefers operator summaries without extra ceremony."
- "The team discussed moving OpenClaw toward a shared operator client."
- "Discord history is useful as evidence but not as direct truth."

### Entity-candidate lane

Use this lane for future structured facts such as:

- decisions
- requirements
- constraints
- validation claims

Examples:

- "Decision: use the shared operator client instead of duplicated frontend logic."
- "Constraint: Discord-originated paths must not directly mutate project_state."

### What cannot enter directly from raw capture

The following must not be created directly from raw Discord or Discrawl evidence without a screening step:

- active memories
- active entities
- project_state entries
- registry mutations
- promote or reject decisions

## Stage 4 - human review

This is the load-bearing stage.

A human reviewer, mediated by OpenClaw and eventually using the shared operator client, decides whether the candidate:

- should be promoted
- should be rejected
- should stay pending
- should first be rewritten into the actual canonical source
- should become project_state only after stronger curation

### Review questions

For every candidate, the reviewer should ask:

1. Is this actually stable enough to preserve?
2. Is this fact ambiguous, historical, or current?
3. What is the one canonical home for this fact type?
4. Is memory the right home, or should this be an entity later?
5. Is project_state justified, or is this still only evidence or candidate material?
6. Does the source prove current truth, or only past conversation?

## Stage 5 - canonical promotion

After review, the signal can move into exactly one canonical home.

## Promotion rules by fact shape

### A. Personal, episodic, or loose project signal

Promotion destination:

- AtoCore memory

Use when the fact is durable and useful, but not a formal structured engineering record.

### B. Structured engineering fact

Promotion destination:

- future AtoCore entity

Use when the fact is really a:

- decision
- requirement
- constraint
- validation claim

### C. Current trusted project answer

Promotion destination:

- AtoCore project_state

But only after explicit curation.

A candidate does not become project_state just because it looks important.
The reviewer must decide that it now represents the trusted current answer.

### D. Human or tool source truth

Promotion destination:

- PKM / repo / KB tool of origin

If a Discord-originated signal claims current truth but the canonical home is not AtoCore memory or entity, the right move may be:

1. update the canonical source first
2. then optionally refresh or ingest, with explicit approval if the action is mutating
3. then optionally curate a project_state answer

This prevents Discord from becoming the hidden source of truth.
|
||||
|
||||
## Stage 6 - optional trusted-state curation
|
||||
|
||||
`project_state` is not the general destination for important facts.
|
||||
It is the curated destination for current trusted project answers.
|
||||
|
||||
Examples that may justify explicit project_state curation:
|
||||
|
||||
- current selected architecture
|
||||
- current next milestone
|
||||
- current status summary
|
||||
- current trusted decision outcome
|
||||
|
||||
Examples that usually do not justify immediate project_state curation:
|
||||
|
||||
- a raw Discord debate
|
||||
- a speculative suggestion
|
||||
- a historical conversation retrieved through Discrawl
|
||||
|
||||
## Discord-originated pipeline examples
|
||||
|
||||
### Example 1 - raw discussion about operator-client refactor
|
||||
|
||||
1. Discord message enters the evidence lane.
|
||||
2. Nightly screener marks it as either evidence-only or decision candidate.
|
||||
3. Human review checks whether it is an actual decision or just discussion.
|
||||
4. If stable and approved, it becomes a memory or future entity.
|
||||
5. It reaches project_state only if explicitly curated as the trusted current answer.
|
||||
|
||||
### Example 2 - Discord thread says "refresh this project now"
|
||||
|
||||
1. Discord message is evidence of operator intent.
|
||||
2. It does not auto-trigger refresh.
|
||||
3. OpenClaw asks for or recognizes explicit human approval.
|
||||
4. Approved operator action invokes the shared operator client.
|
||||
5. Refresh result may later influence candidates or trusted state, but the raw Discord message never performed the mutation by itself.
|
||||
|
||||
### Example 3 - archived thread says a requirement might be current
|
||||
|
||||
1. Discrawl retrieval enters the evidence lane.
|
||||
2. Screener marks it as evidence-only or a requirement candidate.
|
||||
3. Human review checks the canonical source alignment.
|
||||
4. If accepted later, it becomes an entity candidate or active entity.
|
||||
5. project_state remains a separate explicit curation step.
|
||||
|
||||
## Promotion invariants
|
||||
|
||||
The pipeline must preserve these invariants.
|
||||
|
||||
### Invariant 1 - raw evidence is not trusted truth
|
||||
|
||||
No raw Discord or Discrawl signal can directly become trusted project_state.
|
||||
|
||||
### Invariant 2 - unreviewed signals can at most become candidates
|
||||
|
||||
Automatic processing stops at evidence or candidate creation.
|
||||
|
||||
### Invariant 3 - each fact has one canonical home
|
||||
|
||||
A fact may be supported by many evidence items, but after review it belongs in one canonical place.
|
||||
|
||||
### Invariant 4 - operator mutations require explicit approval
|
||||
|
||||
Registry mutation, refresh, ingest, promote, reject, and project_state writes are operator actions.
|
||||
|
||||
### Invariant 5 - OpenClaw orchestrates; it does not become storage
|
||||
|
||||
OpenClaw should coordinate the pipeline, not silently become the canonical data layer.
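
A self-contained, pytest-style sketch of how two of these invariants could be checked. The toy screener, function names, and artifact shape are illustrative assumptions, not the repo's actual test suite:

```python
# Toy screener and invariant checks; everything here is illustrative.
CANDIDATE_LEVEL = {"evidence_artifact", "memory_candidate", "entity_candidate"}

class ApprovalRequired(Exception):
    pass

def run_screener(messages):
    # Toy stand-in: automatic processing only ever emits evidence.
    return [{"kind": "evidence_artifact", "text": m} for m in messages]

def refresh_project(project_id, approved=False):
    # Invariant 4: operator mutations need explicit approval.
    if not approved:
        raise ApprovalRequired(f"refresh-project {project_id}")
    return f"refreshed {project_id}"

def test_unreviewed_signals_stop_at_candidates():
    # Invariants 1 and 2: nothing above candidate level appears.
    for artifact in run_screener(["we shipped R7", "maybe update the spec"]):
        assert artifact["kind"] in CANDIDATE_LEVEL

def test_mutations_require_explicit_approval():
    try:
        refresh_project("p05")
    except ApprovalRequired:
        pass
    else:
        raise AssertionError("mutation ran without explicit approval")
```
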
## Decision table

| Observed signal type | Default pipeline outcome | Canonical destination if accepted |
|---|---|---|
| Ambiguous or raw conversation | Evidence only | none |
| Historical archive context | Evidence only or candidate | memory or entity only after review |
| Personal preference | Memory candidate | AtoCore memory |
| Episodic fact | Memory candidate | AtoCore memory |
| Loose stable project signal | Memory candidate | AtoCore memory |
| Structured decision / requirement / constraint | Entity candidate | future AtoCore entity |
| Claimed current trusted answer | Needs explicit curation | project_state, but only after review |
| Tool-origin engineering fact | Canonical source update first | repo / KB / PKM tool of origin |

## What the pipeline deliberately prevents

This V1 pipeline deliberately prevents these bad paths:

- Discord -> project_state directly
- Discrawl archive -> project_state directly
- Discord -> registry mutation directly
- Discord -> refresh or ingest directly without explicit approval
- raw chat -> promote or reject directly
- OpenClaw turning evidence into truth without a review gate

## Deferred from V1

Screenpipe is deferred from V1. It is not an active input lane and not a runtime dependency of this pipeline. If it is revisited later, it should be handled as a separate future design, not treated as an implicit part of this pipeline.

## Bottom line

The promotion pipeline is intentionally conservative.

Its job is not to maximize writes.
Its job is to preserve trust while still letting Discord, Discrawl, OpenClaw, PKM, and repos contribute useful signal.

That means the safe default path is:

- capture broadly
- trust narrowly
- promote deliberately

@@ -1,96 +0,0 @@

# OpenClaw x AtoCore Shared-Client Consolidation Preview

## Status

Proposal only. Not applied.

## Why this exists

The current OpenClaw helper script duplicates AtoCore-calling logic that already exists in the shared operator client:

- request handling
- fail-open behavior
- project detection
- project lifecycle command surface

The preferred direction is to consolidate OpenClaw toward the shared operator client pattern documented in `docs/architecture/llm-client-integration.md`.

## Goal

Keep the OpenClaw skill and operator policy in OpenClaw, but stop maintaining a separate Bash implementation of the AtoCore client surface when the shared client already exists in `/home/papa/ATOCore/scripts/atocore_client.py`.

## Non-goals for this preview

- no implementation in this phase
- no runtime change in this phase
- no new helper command in this phase
- no change to approval policy in this preview

## Preview diff

This is a conceptual diff preview only.
It is not applied.

```diff
--- a/skills/atocore-context/scripts/atocore.sh
+++ b/skills/atocore-context/scripts/atocore.sh
@@
-#!/usr/bin/env bash
-set -euo pipefail
-
-BASE_URL="${ATOCORE_BASE_URL:-http://dalidou:8100}"
-TIMEOUT="${ATOCORE_TIMEOUT_SECONDS:-30}"
-REFRESH_TIMEOUT="${ATOCORE_REFRESH_TIMEOUT_SECONDS:-1800}"
-FAIL_OPEN="${ATOCORE_FAIL_OPEN:-true}"
-
-request() {
-  # local curl-based request logic
-}
-
-detect_project() {
-  # local project detection logic
-}
-
-case "$cmd" in
-  health) request GET /health ;;
-  projects) request GET /projects ;;
-  auto-context) ... ;;
-  register-project) ... ;;
-  refresh-project) ... ;;
-  ingest-sources) ... ;;
-esac
+#!/usr/bin/env bash
+set -euo pipefail
+
+CLIENT="${ATOCORE_SHARED_CLIENT:-/home/papa/ATOCore/scripts/atocore_client.py}"
+
+if [[ ! -f "$CLIENT" ]]; then
+  echo "Shared AtoCore client not found: $CLIENT" >&2
+  exit 1
+fi
+
+exec python3 "$CLIENT" "$@"
```

## Recommended implementation shape later

If and when this is implemented, the safer shape is:

1. keep policy and approval guidance in OpenClaw instructions and skill text
2. delegate actual AtoCore client behavior to the shared operator client
3. avoid adding any new helper command unless explicitly approved
4. keep read-path and approved-operator-path distinctions in the OpenClaw guidance layer

## Risk notes

Potential follow-up concerns to handle before applying:

- path dependency on `/home/papa/ATOCore/scripts/atocore_client.py`
- what should happen if the AtoCore repo is unavailable from the OpenClaw machine
- whether a thin compatibility wrapper is needed for help text or argument normalization
- ensuring OpenClaw policy still blocks unapproved Discord-originated mutations even if the shared client exposes them

## Bottom line

The duplication is real and consolidation is still the right direction.
But in this phase it remains a proposal only.

@@ -1,362 +0,0 @@

# OpenClaw x AtoCore V1 Architecture

## Purpose

This document defines the safe V1 operating model for how Discord, Discrawl, OpenClaw, PKM, repos, and AtoCore work together.

The goal is to let these systems contribute useful signal into AtoCore without turning AtoCore into a raw dump and without blurring trust boundaries.

## V1 scope

V1 active inputs are:

- Discord and Discrawl evidence
- OpenClaw interaction evidence
- PKM, repos, and KB sources
- read-only AtoCore context for comparison and deduplication

## Core stance

The V1 stance is simple:

- Discord and Discrawl are evidence streams.
- OpenClaw is the operator and orchestrator.
- PKM, repos, and KB tools remain the canonical human and tool truth.
- AtoCore memories hold reviewed episodic, personal, and loose project signal.
- AtoCore project_state holds the current trusted project answer, manually curated or tightly gated only.
- Future AtoCore entities hold reviewed structured decisions, requirements, constraints, and related facts.

## Architectural principles

1. AtoCore remains additive and fail-open from the OpenClaw side.
2. Every fact type has exactly one canonical home.
3. Raw evidence is not trusted truth.
4. Unreviewed signals become evidence or candidates, not active truth.
5. Discord-originated paths never directly mutate project_state, registry state, refresh state, ingestion state, or review decisions without explicit human approval.
6. OpenClaw is not canonical storage. It retrieves, compares, summarizes, requests approval, and performs approved operator actions.
7. The shared operator client is the canonical mutating operator surface. Frontends should reuse it instead of reimplementing AtoCore-calling logic.

## Explicit approval rule

In this V1 policy, explicit approval means all of the following:

- the human directly instructs the specific mutating action
- the instruction appears in the current thread or current session
- the approval is for that specific action, not vague intent
- the approval is not inferred from Discord evidence, Discrawl recall, screener output, or general discussion

Examples of explicit approval:

- "refresh p05 now"
- "register this project"
- "promote that candidate"
- "write this to project_state"

Examples that are not explicit approval:

- "we should probably refresh this sometime"
- "I think this is the current answer"
- archived discussion saying a mutation might be useful
- a screener report recommending a mutation
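
A minimal sketch of this rule as a predicate, assuming a hypothetical `Approval` record (the real gate lives in OpenClaw's policy layer, not in a single function):

```python
# Hypothetical approval record; field names are assumptions.
from dataclasses import dataclass

@dataclass
class Approval:
    action: str              # e.g. "refresh-project p05"
    source: str              # "human", "screener", "discrawl", ...
    in_current_session: bool

def is_explicit_approval(approval: Approval, requested_action: str) -> bool:
    """All conditions of the V1 explicit-approval rule must hold."""
    return (
        approval.source == "human"               # directly instructed by the human
        and approval.in_current_session          # current thread or session
        and approval.action == requested_action  # that exact action, not vague intent
        # inferred signals (screener, archive, discussion) never carry
        # source == "human" in this sketch, so the first check rejects them
    )
```

For example, `is_explicit_approval(Approval("refresh-project p05", "human", True), "refresh-project p05")` holds, while the same record with `source="screener"` does not.
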

## System roles

### Discord

Discord is a live conversational source.
It contains fresh context, discussion, uncertainty, and project language grounded in real work.
It is not authoritative by itself.

Discord-originated material should be treated as:

- raw evidence
- candidate material after screening
- possible justification for a later human-reviewed promotion into a canonical home

Discord should never be treated as direct trusted project truth just because someone said it in chat.

### Discrawl

Discrawl is a retrieval and archive layer over Discord history.
It turns prior conversation into searchable evidence.
That is useful for recall, context building, and finding prior decisions or open questions.

Discrawl is still evidence, not authority.
A retrieved Discord thread may show what people thought or said. It does not by itself become trusted project_state.

### OpenClaw

OpenClaw is the orchestrator and operator.
It is where the human interacts, where approvals happen, and where cross-source reasoning happens.

OpenClaw's job is to:

- retrieve
- compare
- summarize
- ask for approval when mutation is requested
- call the shared operator client for approved writes
- fail open when AtoCore is unavailable

OpenClaw is not the canonical place where project facts live long-term.

### PKM

PKM is a canonical human-authored prose source.
It is where notes, thinking, and ongoing project writing live.

PKM is the canonical home for:

- project prose notes
- working notes
- long-form summaries
- journal-style project history

PKM is not the place where OpenClaw should be taught how to operate AtoCore. Operator instructions belong in repo docs and OpenClaw instructions and skills.

### Repos and KB tools

Repos and KB tools are canonical human and tool truth for code and structured engineering artifacts.

They are the canonical home for:

- source code
- repo design docs
- structured tool outputs
- KB-CAD and KB-FEM facts where those systems are the tool of origin

### AtoCore memories

AtoCore memories are for reviewed, durable machine-usable signal that is still loose enough to belong in memory rather than in a stricter structured layer.

Examples:

- episodic facts
- preferences
- identity facts
- reviewed loose project facts

AtoCore memories are not a place to dump raw Discord capture.

### AtoCore project_state

AtoCore project_state is the trusted current-answer layer.
It is the place for questions like:

- what is the current selected architecture?
- what is the current next focus?
- what is the trusted status answer right now?

Because this layer answers current-truth questions, it must remain manually curated or tightly gated.

### Future AtoCore entities

Future entities are the canonical home for structured engineering facts that deserve stronger representation than freeform memory.

Examples:

- decisions
- requirements
- constraints
- validation claims
- structured relationships later

These should be promoted from evidence or candidates only after review.

## Logical flow

```text
Discord live chat --.
Discrawl archive ----+--> evidence bundle / interactions / screener input
OpenClaw evidence ---'
                                  |
                                  v
                          nightly screener
                                  |
                         .--------+--------.
                         v                 v
                memory candidates   entity candidates (later)
                         |                 |
                         '--------+--------'
                                  v
                        human review in OpenClaw
                                  |
                .-----------------+-----------------.
                v                 v                 v
         active memory      active entity    explicit curation
                                                    |
                                                    v
                                              project_state
```

The load-bearing rule is that review happens before trust.

## Canonical-home table

Every named fact type below has exactly one canonical home.

| Fact type | Canonical home | Why |
|---|---|---|
| Raw Discord message | Discord / Discrawl archive | It is conversational evidence, not normalized truth |
| Archived Discord thread history | Discrawl archive | It is the retrieval form of Discord evidence |
| OpenClaw operator instructions | OpenClaw repo docs / skills / instructions | Operating behavior should live in code-adjacent instructions, not PKM |
| Project prose notes | PKM | Human-authored project prose belongs in PKM |
| Source code | Repo | Code truth lives in version control |
| Repo design or architecture doc | Repo | The documentation belongs with the code or system it describes |
| Structured KB-CAD / KB-FEM fact | KB tool of origin | Tool-managed structured engineering facts belong in their tool of origin |
| Personal identity fact | AtoCore memory (`identity`) | AtoCore memory is the durable machine-usable home |
| Preference fact | AtoCore memory (`preference`) | Same reason |
| Episodic fact | AtoCore memory (`episodic`) | It is durable recall, not project_state |
| Loose reviewed project signal | AtoCore memory (`project`) | Good fit for reviewed but not fully structured project signal |
| Engineering decision | Future AtoCore entity (`Decision`) | Decisions need structured lifecycle and supersession |
| Requirement | Future AtoCore entity (`Requirement`) | Requirements need structured management |
| Constraint | Future AtoCore entity (`Constraint`) | Constraints need structured management |
| Current trusted project answer | AtoCore `project_state` | This layer is explicitly for current trusted truth |
| Project registration metadata | AtoCore project registry | Registry state is its own canonical operator layer |
| Review action (promote / reject / invalidate) | AtoCore audit trail / operator action log | Review decisions are operator events, not source facts |

## What this means for Discord-originated facts

A Discord-originated signal can end up in more than one place, but not directly.

### If the signal is conversational, ambiguous, or historical

It stays in the evidence lane:

- Discord
- Discrawl archive
- optional screener artifact
- optional candidate queue

It does not become trusted project_state.

### If the signal is a stable personal or episodic fact

It may be promoted to AtoCore memory after review.

Examples:

- "Antoine prefers concise operator summaries."
- "We decided in discussion to keep AtoCore additive."

These belong in reviewed memory, not in project_state.

### If the signal expresses a structured engineering fact

It may become an entity candidate and later an active entity.

Examples:

- a requirement
- a decision
- a constraint

Again, not directly from raw chat. The chat is evidence for the candidate.

### If the signal is the current trusted answer

It still should not jump directly from Discord into project_state.
Instead, a human should explicitly curate it into project_state after checking it against the right canonical home.

That canonical home may be:

- PKM for prose and project notes
- repo for code and design docs
- KB tools for structured engineering facts
- active entity if the engineering layer is the canonical home

## Approval boundaries

### Reads

The following may be invoked automatically when useful:

- `health`
- `projects`
- `detect-project`
- `auto-context`
- `query`
- `project-state` read
- Discrawl retrieval

These are additive and fail-open.

### Mutations requiring explicit human approval

The following are operator actions, not conversational automation:

- `register-project`
- `update-project`
- `refresh-project`
- `ingest-sources`
- `project-state-set`
- `project-state-invalidate`
- `capture` when used as a durable write outside conservative logging policy
- `extract` with persistence
- `promote`
- `reject`
- future entity promotion or rejection

For Discord-originated paths, approval must satisfy the explicit approval rule above.
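
As a sketch, the boundary can be expressed as two command sets and one gate. The command names come from the lists above; the dispatcher and `run` stub are illustrative:

```python
# Sketch of the V1 approval boundary; the dispatcher is illustrative.
READS = {
    "health", "projects", "detect-project", "auto-context",
    "query", "project-state",  # project-state read
}
MUTATIONS = {
    "register-project", "update-project", "refresh-project",
    "ingest-sources", "project-state-set", "project-state-invalidate",
    "promote", "reject",
}

class ApprovalRequired(Exception):
    pass

def run(command: str):
    """Stub: the real system would call the shared operator client here."""

def dispatch(command: str, explicitly_approved: bool = False):
    if command in READS:
        return run(command)                  # additive, fail-open
    if command in MUTATIONS:
        if not explicitly_approved:
            raise ApprovalRequired(command)  # operator action, not automation
        return run(command)
    raise ValueError(f"unknown command: {command}")
```
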

## Shared operator client rule

The preferred V1 architecture is:

- AtoCore HTTP API as system interface
- shared operator client as reusable mutating surface
- OpenClaw as a thin frontend and operator around that client

That avoids duplicating:

- project detection logic
- request logic
- failure handling
- mutation surface behavior
- approval wrappers

OpenClaw should keep its own high-level operating instructions, but it should not keep growing a parallel AtoCore mutation implementation.

## V1 boundary summary

### Allowed automatic behavior

- read-only retrieval
- context build
- Discrawl recall
- evidence collection
- nightly screening into reviewable output
- fail-open fallback when AtoCore is unavailable

### Allowed only after explicit human review or approval

- candidate persistence from evidence
- candidate promotion or rejection
- project refresh or ingestion
- registry mutation
- trusted project_state writes

### Not allowed as automatic behavior

- direct Discord -> project_state writes
- direct Discord -> register / update / refresh / ingest / promote / reject
- hidden mutation inside the screener
- treating PKM as the main operator-instruction layer for AtoCore behavior

## Deferred from V1

Screenpipe is deferred.
It is not an active input lane in V1 and it must not become a runtime, skill, or policy dependency in V1.
If it is revisited later, it must be treated as a separate future design decision, not as an implicit V1 extension.

## Bottom line

The safe V1 architecture is not "everything can write into AtoCore."
It is a layered system where:

- evidence comes in broadly
- trust rises slowly
- canonical homes stay singular
- OpenClaw remains the operator
- AtoCore remains the additive machine-memory and trusted-state layer
- the shared operator client becomes the one reusable write-capable surface

@@ -1,207 +0,0 @@

# OpenClaw x AtoCore V1 Proof Runbook

## Purpose

This is the concise proof and operator runbook for the final V1 policy.
It shows, in concrete paths, that:

- a Discord-originated signal cannot reach `project_state` without candidate or review gating
- Discord cannot directly execute `register-project`, `update-project`, `refresh-project`, `ingest-sources`, `promote`, or `reject` without explicit approval

## Explicit approval definition

For V1, explicit approval means:

- the human directly instructs the specific mutating action
- the instruction is in the current thread or current session
- the approval is for that exact action
- the approval is not inferred from evidence, archives, or screener output

Examples:

- "refresh p05 now"
- "register this project"
- "promote that candidate"
- "write this to project_state"

Non-examples:

- "this looks like the current answer"
- "we should probably refresh this"
- an old Discord thread saying a refresh might help
- a screener report recommending a mutation

## Proof 1 - Discord cannot directly reach project_state

Required path:

```text
Discord message
  -> evidence
  -> optional candidate
  -> review
  -> optional explicit curation
  -> project_state
```

What is blocked:

- Discord -> project_state directly
- Discrawl archive -> project_state directly
- screener output -> project_state directly

What is allowed:

1. Discord message enters the evidence lane.
2. It may become a memory or entity candidate after screening.
3. A human reviews the candidate.
4. If the fact is truly the current trusted answer, the human may explicitly curate it into `project_state`.

Conclusion:

`project_state` is reachable only after review and explicit curation. There is no direct Discord-originated write path.

## Proof 2 - Discord cannot directly execute mutating operator actions

Blocked direct actions:

- `register-project`
- `update-project`
- `refresh-project`
- `ingest-sources`
- `promote`
- `reject`
- `project-state-set`
- `project-state-invalidate`

Blocked path:

```text
Discord message
  -> evidence or operator request context
  -X-> direct mutation
```

Allowed path:

```text
Discord message
  -> OpenClaw recognizes requested operator action
  -> explicit approval check
  -> approved operator action
  -> shared operator client or helper call
```

Conclusion:

Discord can request or justify a mutation, but it cannot perform it on its own.

## Proof 3 - Discrawl does not create approval

Discrawl is evidence retrieval.
It may surface:

- prior discussions
- earlier decisions
- unresolved questions
- prior suggestions to mutate state

It does not create approval for mutation.

Blocked path:

```text
Discrawl recall
  -X-> refresh-project
  -X-> promote
  -X-> project_state write
```

Allowed path:

```text
Discrawl recall
  -> evidence for human review
  -> explicit approval in current thread/session if mutation is desired
  -> approved operator action
```

Conclusion:

Archive recall informs review. It does not authorize writes.

## Proof 4 - Screener has no hidden mutation lane

The screener may:

- gather evidence
- classify evidence
- prepare candidates
- prepare operator queues
- report contradictions or missing context

The screener may not:

- write `project_state`
- mutate registry state
- refresh or ingest directly
- promote or reject directly

Blocked path:

```text
screener output
  -X-> hidden mutation
```

Allowed path:

```text
screener output
  -> review queue or operator queue
  -> explicit approval if mutation is wanted
  -> approved operator action
```

Conclusion:

The screener is a filter, not a hidden writer.

## Minimal operator decision table

| Situation | Allowed next step | Blocked next step |
|---|---|---|
| Discord says "this is the current answer" | evidence, then review, then possible explicit curation | direct `project_state` write |
| Discord says "refresh p05" without direct instruction | ask for explicit approval | direct `refresh-project` |
| Discord says "refresh p05 now" | approved operator action may run | none, if approval is explicit |
| Discrawl finds an old thread asking for registration | use as review context only | direct `register-project` |
| Screener recommends promotion | ask for explicit review decision | direct `promote` |

## Practical runbook

### Case A - current-truth claim from Discord

1. Treat the message as evidence.
2. Check the canonical home.
3. If needed, prepare a candidate or review note.
4. Do not write `project_state` unless the human explicitly approves that curation step.

### Case B - requested refresh from Discord

1. Determine whether the message is a direct instruction or only discussion.
2. If not explicit, ask for approval.
3. Only perform `refresh-project` after explicit approval in the current thread or session.

### Case C - candidate promotion request

1. Candidate exists or is proposed.
2. Review the evidence and the candidate text.
3. Only perform `promote` or `reject` after an explicit review decision.

## Bottom line

The V1 rule is easy to test:

If the path starts from Discord or Discrawl and ends in trusted or operator state, there must be a visible approval or review step in the middle.

If that visible step is missing, the action is not allowed.

@@ -1,184 +0,0 @@

# OpenClaw x AtoCore V1 Write-Policy Matrix

## Purpose

This matrix defines what each source is allowed to write to each target in V1.

Policy meanings:

- `auto-write` = allowed automatically without a human approval gate
- `candidate-only` = may create reviewable candidate material, but not active truth
- `human-review` = allowed only after explicit human review or explicit human approval
- `never-auto-write` = never allowed as an automatic write path
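
A sketch of the matrix as data. The `(source, target)` keys below mirror a few rows of the table that follows; the lookup helper and its restrictive default are illustrative:

```python
# Illustrative encoding of a few matrix rows; the full table below is
# the authoritative statement.
from enum import Enum

class Policy(Enum):
    AUTO_WRITE = "auto-write"
    CANDIDATE_ONLY = "candidate-only"
    HUMAN_REVIEW = "human-review"
    NEVER_AUTO_WRITE = "never-auto-write"

WRITE_POLICY = {
    ("discord_live_message", "evidence_artifacts"): Policy.AUTO_WRITE,
    ("discord_live_message", "memory_candidates"): Policy.CANDIDATE_ONLY,
    ("discord_live_message", "trusted_project_state"): Policy.HUMAN_REVIEW,
    ("openclaw_read_query", "trusted_project_state"): Policy.NEVER_AUTO_WRITE,
}

def policy_for(source: str, target: str) -> Policy:
    # Default to the most restrictive policy for any unlisted pair.
    return WRITE_POLICY.get((source, target), Policy.NEVER_AUTO_WRITE)
```
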
## Explicit approval rule

In this matrix, `human-review` is concrete, not vague.
For Discord-originated or Discrawl-originated paths it means:

- the human directly instructs the specific mutating action
- the instruction is in the current thread or current session
- the approval is for that specific action
- the approval is not inferred from evidence, archives, screener output, or general discussion

Examples of explicit approval:

- "refresh p05 now"
- "register this project"
- "promote this candidate"
- "write this to project_state"

Non-examples:

- "this looks important"
- "we should probably refresh this"
- archived discussion that once mentioned a similar mutation
- a screener note recommending promotion

## V1 scope note

V1 active inputs are:

- Discord and Discrawl
- OpenClaw interaction evidence
- PKM, repos, and KB sources
- read-only AtoCore context for comparison and deduplication

## Targets

The targets below are the only ones that matter for this policy.

- Evidence artifacts
- Memory candidates
- Active memories
- Entity candidates
- Active entities
- Trusted project_state
- Registry / refresh / ingest mutations
- Review actions

## Matrix

| Source | Target | Policy | Notes / gate |
|---|---|---|---|
| Discord live message | Evidence artifacts | auto-write | Safe evidence capture or archive only |
| Discord live message | Memory candidates | candidate-only | Only after screening or extraction; never direct active write |
| Discord live message | Active memories | human-review | Promote only after review of the candidate and evidence |
| Discord live message | Entity candidates | candidate-only | Only when structured signal is extracted from evidence |
| Discord live message | Active entities | human-review | Review required before promotion |
| Discord live message | Trusted project_state | human-review | Only via explicit curation; never directly from raw chat |
| Discord live message | Registry / refresh / ingest mutations | human-review | Requires explicit approval in the current thread or session |
| Discord live message | Review actions | human-review | Discord cannot silently promote or reject on its own |
| Discrawl archive result | Evidence artifacts | auto-write | Archive or search result is evidence by design |
| Discrawl archive result | Memory candidates | candidate-only | Extract reviewed signal from archived conversation |
| Discrawl archive result | Active memories | human-review | Promotion required |
| Discrawl archive result | Entity candidates | candidate-only | Archived discussion may justify candidate creation |
| Discrawl archive result | Active entities | human-review | Promotion required |
| Discrawl archive result | Trusted project_state | human-review | Must be explicitly curated; never inferred directly from archive |
| Discrawl archive result | Registry / refresh / ingest mutations | human-review | Archive recall cannot directly mutate operator state |
| Discrawl archive result | Review actions | human-review | Archive evidence informs review; it does not perform review |
| OpenClaw read/query flow | Evidence artifacts | auto-write | Conservative interaction or evidence logging is acceptable |
| OpenClaw read/query flow | Memory candidates | candidate-only | Only through explicit extraction path |
| OpenClaw read/query flow | Active memories | human-review | Requires operator review |
| OpenClaw read/query flow | Entity candidates | candidate-only | Future extraction path |
| OpenClaw read/query flow | Active entities | human-review | Requires operator review |
| OpenClaw read/query flow | Trusted project_state | never-auto-write | Read/query flow must stay additive |
| OpenClaw read/query flow | Registry / refresh / ingest mutations | never-auto-write | Read/query automation must not mutate operator state |
| OpenClaw read/query flow | Review actions | never-auto-write | Read automation cannot silently promote or reject |
| OpenClaw approved operator action | Evidence artifacts | auto-write | May create operator or audit artifacts |
| OpenClaw approved operator action | Memory candidates | human-review | Candidate persistence is itself an approved operator action |
| OpenClaw approved operator action | Active memories | human-review | Promotion allowed only through reviewed operator action |
| OpenClaw approved operator action | Entity candidates | human-review | Same rule for future entities |
| OpenClaw approved operator action | Active entities | human-review | Promotion allowed only through reviewed operator action |
| OpenClaw approved operator action | Trusted project_state | human-review | Allowed only as explicit curation |
| OpenClaw approved operator action | Registry / refresh / ingest mutations | human-review | Explicit approval required |
| OpenClaw approved operator action | Review actions | human-review | Explicit review required |
| PKM note | Evidence artifacts | human-review | Snapshotting into evidence is optional, not the primary path |
| PKM note | Memory candidates | candidate-only | Extraction from PKM is allowed into the candidate lane |
| PKM note | Active memories | human-review | Promotion required |
| PKM note | Entity candidates | candidate-only | Extract structured signal into the candidate lane |
| PKM note | Active entities | human-review | Promotion required |
| PKM note | Trusted project_state | human-review | Only via explicit curation of current truth |
| PKM note | Registry / refresh / ingest mutations | human-review | A human may choose to refresh based on PKM changes |
| PKM note | Review actions | human-review | PKM may support the decision, but not execute it automatically |
| Repo / KB source | Evidence artifacts | human-review | Optional audit or screener snapshot only |
| Repo / KB source | Memory candidates | candidate-only | Extract loose durable signal if useful |
| Repo / KB source | Active memories | human-review | Promotion required |
| Repo / KB source | Entity candidates | candidate-only | Strong future path for structured facts |
| Repo / KB source | Active entities | human-review | Promotion required |
| Repo / KB source | Trusted project_state | human-review | Explicit curation only |
| Repo / KB source | Registry / refresh / ingest mutations | human-review | A human may refresh or ingest based on source changes |
| Repo / KB source | Review actions | human-review | Source can justify review; it does not perform review |
| AtoCore active memory | Evidence artifacts | never-auto-write | Active memory is already above the evidence layer |
| AtoCore active memory | Memory candidates | never-auto-write | Do not recursively re-candidate active memory |
| AtoCore active memory | Active memories | never-auto-write | Already active |
| AtoCore active memory | Entity candidates | human-review | Graduation proposal only with review |
| AtoCore active memory | Active entities | human-review | Requires graduation plus promotion |
| AtoCore active memory | Trusted project_state | human-review | A human may explicitly curate current truth from memory |
| AtoCore active memory | Registry / refresh / ingest mutations | never-auto-write | Memory must not mutate registry or ingestion state |
| AtoCore active memory | Review actions | human-review | Human reviewer decides |
| AtoCore active entity | Evidence artifacts | never-auto-write | Already above the evidence layer |
| AtoCore active entity | Memory candidates | never-auto-write | Do not backflow structured truth into memory candidates automatically |
| AtoCore active entity | Active memories | never-auto-write | Canonical home is the entity, not a new memory |
| AtoCore active entity | Entity candidates | never-auto-write | Already active |
| AtoCore active entity | Active entities | never-auto-write | Already active |
| AtoCore active entity | Trusted project_state | human-review | Explicit curation may publish the current trusted answer |
| AtoCore active entity | Registry / refresh / ingest mutations | never-auto-write | Entities do not operate the registry |
| AtoCore active entity | Review actions | human-review | Human reviewer decides |

## Discord-originated trace examples

### Example 1 - conversational decision in Discord

Allowed path:

1. Discord live message -> Evidence artifacts (`auto-write`)
2. Evidence artifacts -> Memory candidates or Entity candidates (`candidate-only`)
3. Candidate -> Active memory or Active entity (`human-review`)
4. If it becomes the current trusted answer, a human may explicitly curate it into Trusted project_state (`human-review`)

There is no direct Discord -> project_state automatic path.

### Example 2 - archived Discord thread via Discrawl

Allowed path:

1. Discrawl result -> Evidence artifacts (`auto-write`)
2. Discrawl result -> Memory candidates or Entity candidates (`candidate-only`)
3. Human review decides promotion
4. Optional explicit curation into project_state later

Again, there is no direct archive -> trusted-truth path.

### Example 3 - Discord request to refresh a project

Allowed path:

1. Discord message is evidence of requested operator intent
2. No mutation happens automatically
3. OpenClaw requires explicit approval in the current thread or session for `refresh-project`
4. Only then may OpenClaw perform the approved operator action

There is no direct Discord -> refresh path without explicit approval.

## V1 interpretation rules

1. Evidence can flow in broadly.
2. Truth can only rise through review.
3. project_state is the narrowest lane.
4. Registry and ingestion operations are operator actions, not evidence effects.
5. Discord-originated paths can inform operator actions, but they cannot silently execute them.
6. Deferred sources that are out of V1 scope have no automatic or manual role in this V1 matrix.

## Deferred from V1

Screenpipe is deferred and intentionally omitted from this V1 matrix.

## Bottom line

If a source is noisy, conversational, or archived, its maximum automatic privilege in V1 is:

- evidence capture, or
- candidate creation

Everything above that requires explicit human review or explicit human approval.

274 docs/universal-consumption.md Normal file
@@ -0,0 +1,274 @@

# Universal Consumption — Connecting LLM Clients to AtoCore

Phase 1 of the Master Brain plan. Every LLM interaction across the ecosystem
pulls context from AtoCore automatically, without the user or agent having
to remember to ask for it.

## Architecture

```
           ┌─────────────────────┐
           │  AtoCore HTTP API   │ ← single source of truth
           │ http://dalidou:8100 │
           └──────────┬──────────┘
                      │
      ┌───────────────┼────────────────┐
      │               │                │
  ┌───┴────┐     ┌────┴─────┐     ┌────┴────┐
  │  MCP   │     │ OpenClaw │     │  HTTP   │
  │ server │     │  plugin  │     │  proxy  │
  └───┬────┘     └────┬─────┘     └────┬────┘
      │               │                │
Claude/Cursor/     OpenClaw      Codex/Ollama/
 Zed/Windsurf              any OpenAI-compat client
```

Three adapters, one HTTP backend. Each adapter is a thin passthrough — no
business logic duplicated.

---

## Adapter 1: MCP Server (Claude Desktop, Claude Code, Cursor, Zed, Windsurf)

The MCP server is `scripts/atocore_mcp.py` — stdlib-only Python, stdio
transport, wraps the HTTP API. Claude-family clients see AtoCore as built-in
tools just like `Read` or `Bash`.

### Tools exposed

- **`atocore_context`** (most important): Full context pack for a query —
  Trusted Project State + memories + retrieved chunks. Use at the start of
  any project-related conversation to ground it.
- **`atocore_search`**: Semantic search over ingested documents (top-K chunks).
- **`atocore_memory_list`**: List active memories, filterable by project + type.
- **`atocore_memory_create`**: Propose a candidate memory (enters triage queue).
- **`atocore_project_state`**: Get Trusted Project State entries by category.
- **`atocore_projects`**: List registered projects + aliases.
- **`atocore_health`**: Service status check.

### Registration

#### Claude Code (CLI)
```bash
claude mcp add atocore -- python C:/Users/antoi/ATOCore/scripts/atocore_mcp.py
claude mcp list   # verify: "atocore ... ✓ Connected"
```

#### Claude Desktop (GUI)
Edit `~/Library/Application Support/Claude/claude_desktop_config.json`
(macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "atocore": {
      "command": "python",
      "args": ["C:/Users/antoi/ATOCore/scripts/atocore_mcp.py"],
      "env": {
        "ATOCORE_URL": "http://dalidou:8100"
      }
    }
  }
}
```
Restart Claude Desktop.

#### Cursor / Zed / Windsurf
Similar JSON config in each tool's MCP settings. Consult their docs —
the config schema is standard MCP.

### Configuration

Environment variables the MCP server honors:

| Var | Default | Purpose |
|---|---|---|
| `ATOCORE_URL` | `http://dalidou:8100` | Where to reach AtoCore |
| `ATOCORE_TIMEOUT` | `10` | Per-request HTTP timeout (seconds) |

### Behavior

- Fail-open: if Dalidou is unreachable, tools return "AtoCore unavailable"
  error messages but don't crash the client.
- Zero business logic: every tool is a direct HTTP passthrough.
- stdlib only: no MCP SDK dependency.
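
The fail-open passthrough shape is roughly the following. This is a sketch, not the actual `atocore_mcp.py` source; the error wording is an assumption:

```python
# Sketch of the fail-open HTTP passthrough pattern described above.
import json
import os
import urllib.error
import urllib.request

ATOCORE_URL = os.environ.get("ATOCORE_URL", "http://dalidou:8100")
TIMEOUT = float(os.environ.get("ATOCORE_TIMEOUT", "10"))

def atocore_get(path: str) -> str:
    """GET an AtoCore endpoint; return an error payload instead of raising."""
    try:
        with urllib.request.urlopen(ATOCORE_URL + path, timeout=TIMEOUT) as resp:
            return resp.read().decode("utf-8")
    except (urllib.error.URLError, OSError) as exc:
        # Fail-open: the client sees a message, never a crash.
        return json.dumps({"error": f"AtoCore unavailable: {exc}"})
```
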

---

## Adapter 2: OpenClaw Plugin (`openclaw-plugins/atocore-capture/handler.js`)

The plugin on T420 OpenClaw has two responsibilities:

1. **CAPTURE**: On `before_agent_start` + `llm_output`, POST completed turns
   to AtoCore `/interactions` (existing).
2. **PULL**: On `before_prompt_build`, call `/context/build` and inject the
   context pack via `prependContext` so the agent's system prompt includes
   AtoCore knowledge.

### Deployment

The plugin is loaded from
`/tmp/atocore-openclaw-capture-plugin/openclaw-plugins/atocore-capture/`
on the T420 (per OpenClaw's plugin config at `~/.openclaw/openclaw.json`).

To update:
```bash
scp openclaw-plugins/atocore-capture/handler.js \
  papa@192.168.86.39:/tmp/atocore-openclaw-capture-plugin/openclaw-plugins/atocore-capture/index.js
ssh papa@192.168.86.39 'systemctl --user restart openclaw-gateway'
```

Verify in gateway logs: look for "ready (7 plugins: acpx, atocore-capture, ...)"

### Configuration (env vars set on T420)

| Var | Default | Purpose |
|---|---|---|
| `ATOCORE_BASE_URL` | `http://dalidou:8100` | AtoCore HTTP endpoint |
| `ATOCORE_PULL_DISABLED` | (unset) | Set to `1` to disable context pull |

### Behavior

- Fail-open: AtoCore unreachable = no injection, no capture, agent runs
  normally.
- 6s timeout on context pull, 10s on capture — won't stall the agent.
- Context pack prepended as a clearly-bracketed block so the agent can see
  it's auto-injected grounding info.

---

## Adapter 3: HTTP Proxy (`scripts/atocore_proxy.py`)

A stdlib-only OpenAI-compatible HTTP proxy. It sits between any
OpenAI-API-speaking client and the real provider, and enriches every
`/chat/completions` request with AtoCore context.

Works with:

- **Codex CLI** (OpenAI-compatible endpoint)
- **Ollama** (has an OpenAI-compatible `/v1` endpoint since 0.1.24)
- **LiteLLM**, **llama.cpp server**, custom agents
- anything that can be pointed at a custom base URL

### Start it

```bash
# For Ollama (local models):
ATOCORE_UPSTREAM=http://localhost:11434/v1 \
  python scripts/atocore_proxy.py

# For OpenAI cloud:
ATOCORE_UPSTREAM=https://api.openai.com/v1 \
ATOCORE_CLIENT_LABEL=codex \
  python scripts/atocore_proxy.py

# Test:
curl http://127.0.0.1:11435/healthz
```

### Point a client at it

Set the client's OpenAI base URL to `http://127.0.0.1:11435/v1`.

#### Ollama example:
```bash
OPENAI_BASE_URL=http://127.0.0.1:11435/v1 \
  some-openai-client --model llama3:8b
```

#### Codex CLI:
Set `OPENAI_BASE_URL=http://127.0.0.1:11435/v1` in your codex config.

### Configuration

| Var | Default | Purpose |
|---|---|---|
| `ATOCORE_URL` | `http://dalidou:8100` | AtoCore HTTP endpoint |
| `ATOCORE_UPSTREAM` | (required) | Real provider base URL |
| `ATOCORE_PROXY_PORT` | `11435` | Proxy listen port |
| `ATOCORE_PROXY_HOST` | `127.0.0.1` | Proxy bind address |
| `ATOCORE_CLIENT_LABEL` | `proxy` | Client id in captures |
| `ATOCORE_INJECT` | `1` | Inject context (set `0` to disable) |
| `ATOCORE_CAPTURE` | `1` | Capture interactions (set `0` to disable) |

### Behavior

- GET requests (model listing etc.) pass through unchanged
- POST to `/chat/completions` (or `/v1/chat/completions`) gets enriched:
  1. Last user message extracted as query
  2. AtoCore `/context/build` called with 6s timeout
  3. Pack injected as system message (or prepended to existing system)
  4. Enriched body forwarded to upstream
  5. After success, interaction POSTed to `/interactions` in background
- Fail-open: AtoCore unreachable = pass through without injection
- Streaming responses: currently buffered (not true streaming). Good enough for
  most cases; can be upgraded later if needed.
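
The injection step itself is small. A sketch, assuming the standard OpenAI chat-request body shape (this is not the actual `atocore_proxy.py` source):

```python
# Sketch of step 3 above: inject the context pack as a system message.
# `pack` is whatever /context/build returned as text.
def inject_context(body: dict, pack: str) -> dict:
    """Prepend AtoCore context to an OpenAI-style chat request body."""
    messages = list(body.get("messages", []))
    if messages and messages[0].get("role") == "system":
        # Prepend to the existing system prompt.
        messages[0] = {**messages[0],
                       "content": pack + "\n\n" + messages[0]["content"]}
    else:
        # No system message yet: insert one.
        messages.insert(0, {"role": "system", "content": pack})
    return {**body, "messages": messages}
```
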

### Running as a service

On Linux, create `~/.config/systemd/user/atocore-proxy.service`:
```ini
[Unit]
Description=AtoCore HTTP proxy

[Service]
Environment=ATOCORE_UPSTREAM=http://localhost:11434/v1
Environment=ATOCORE_CLIENT_LABEL=ollama
ExecStart=/usr/bin/python3 /path/to/scripts/atocore_proxy.py
Restart=on-failure

[Install]
WantedBy=default.target
```
Then: `systemctl --user enable --now atocore-proxy`

On Windows, register via Task Scheduler (similar pattern to the backup task)
or use NSSM to install it as a service.

---

## Verification Checklist

Fresh end-to-end test to confirm Phase 1 is working:

### For Claude Code (MCP)
1. Open a new Claude Code session (not this one).
2. Ask: "what do we know about p06 polisher's control architecture?"
3. Claude should invoke `atocore_context` or `atocore_project_state`
   on its own and answer grounded in AtoCore data.

### For OpenClaw (plugin pull)
1. Send a Discord message to OpenClaw: "what's the status on p04?"
2. Check T420 logs: `journalctl --user -u openclaw-gateway --since "1 min ago" | grep atocore-pull`
3. Expect: `atocore-pull:injected project=p04-gigabit chars=NNN`

### For proxy (any OpenAI-compat client)
1. Start the proxy with the appropriate upstream
2. Run a client query through it
3. Check stderr: `[atocore-proxy] inject: project=... chars=...`
4. Check `curl http://127.0.0.1:8100/interactions?client=proxy` — should
   show the captured turn

---

## Why not just MCP everywhere?

MCP is great for Claude-family clients but:

- Not supported natively by Codex CLI, Ollama, or OpenAI's own API
- No universal "attach MCP" mechanism in all LLM runtimes
- HTTP APIs are truly universal

The HTTP API is the truth; each adapter is the thinnest possible shim for its
ecosystem. When new adapters are needed (Gemini CLI, Claude Code plugin
system, etc.), they follow the same pattern.

---

## Future enhancements

- **Streaming passthrough** in the proxy (currently buffered for simplicity)
- **Response grounding check**: parse assistant output for references to
  injected context, count reinforcement events
- **Per-client metrics** in the dashboard: how often each client pulls,
  context pack size, injection rate
- **Smart project detection**: today we use keyword matching; could use
  AtoCore's own project resolver endpoint
140 docs/windows-backup-setup.md Normal file
@@ -0,0 +1,140 @@
|
||||
# Windows Main-Computer Backup Setup
|
||||
|
||||
The AtoCore backup pipeline runs nightly on Dalidou and already pushes snapshots
|
||||
off-host to the T420 (`papa@192.168.86.39`). This doc sets up a **second**,
|
||||
pull-based daily backup to your Windows main computer at
|
||||
`C:\Users\antoi\Documents\ATOCore_Backups\`.
|
||||
|
||||
Pull-based means the Windows machine pulls from Dalidou. This is simpler than
|
||||
push because Dalidou doesn't need SSH keys to reach Windows, and the backup
|
||||
only runs when the Windows machine is powered on and can reach Dalidou.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Windows 10/11 with OpenSSH client (built-in since Win10 1809)
|
||||
- SSH key-based auth to `papa@dalidou` already working (you're using it today)
|
||||
- `C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1` present
|
||||
|
||||
## Test the script manually
|
||||
|
||||
```powershell
|
||||
powershell.exe -ExecutionPolicy Bypass -File `
|
||||
C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1
|
||||
```
|
||||
|
||||
Expected output:
|
||||
```
|
||||
[timestamp] === AtoCore backup pull starting ===
|
||||
[timestamp] Dalidou reachable.
|
||||
[timestamp] Pulling snapshots via scp...
|
||||
[timestamp] Pulled N snapshots successfully (total X MB, latest: ...)
|
||||
[timestamp] === backup complete ===
|
||||
```
|
||||
|
||||
Target directory: `C:\Users\antoi\Documents\ATOCore_Backups\snapshots\`
|
||||
Logs: `C:\Users\antoi\Documents\ATOCore_Backups\_logs\backup-*.log`
|
||||
|
||||
## Register the Task Scheduler task
|
||||
|
||||
### Option A — automatic registration (recommended)
|
||||
|
||||
Run this PowerShell command **as your user** (no admin needed — uses HKCU task):
|
||||
|
||||
```powershell
|
||||
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
|
||||
-Argument '-ExecutionPolicy Bypass -NonInteractive -WindowStyle Hidden -File C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1'
|
||||
|
||||
# Run daily at 10:00 local time; if missed (computer off), run at next logon
|
||||
$trigger = New-ScheduledTaskTrigger -Daily -At 10:00AM
|
||||
$trigger.StartBoundary = (Get-Date -Format 'yyyy-MM-ddTHH:mm:ss')
|
||||
|
||||
$settings = New-ScheduledTaskSettingsSet `
|
||||
-AllowStartIfOnBatteries `
|
||||
-DontStopIfGoingOnBatteries `
|
||||
-StartWhenAvailable `
|
||||
-ExecutionTimeLimit (New-TimeSpan -Minutes 10) `
|
||||
-RestartCount 2 `
|
||||
-RestartInterval (New-TimeSpan -Minutes 30)
|
||||
|
||||
Register-ScheduledTask -TaskName 'AtoCore Backup Pull' `
|
||||
-Description 'Daily pull of AtoCore backup snapshots from Dalidou' `
|
||||
-Action $action -Trigger $trigger -Settings $settings `
|
||||
-User $env:USERNAME
|
||||
```
|
||||
|
||||
Key settings:

- `-StartWhenAvailable`: if the computer was off at 10:00, run as soon as it comes online
- `-AllowStartIfOnBatteries`: works on laptop battery too
- `-ExecutionTimeLimit 10min`: kill hung tasks
- `-RestartCount 2`: retry twice if it fails (Dalidou temporarily unreachable)

### Option B — Task Scheduler GUI

1. Open Task Scheduler (`taskschd.msc`)
2. Create Basic Task -> name: `AtoCore Backup Pull`
3. Trigger: Daily, 10:00 AM, recur every 1 day
4. Action: Start a program
   - Program: `powershell.exe`
   - Arguments: `-ExecutionPolicy Bypass -NonInteractive -WindowStyle Hidden -File "C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1"`
5. Finish, then edit the task:
   - Settings tab: check "Run task as soon as possible after a scheduled start is missed"
   - Settings tab: "If the task fails, restart every 30 minutes, up to 2 times"
   - Conditions tab: uncheck "Start only if computer is on AC power" (if you want it on battery)
## Verify

After the first scheduled run:

```powershell
# Most recent log
Get-ChildItem C:\Users\antoi\Documents\ATOCore_Backups\_logs\ |
    Sort-Object Name -Descending |
    Select-Object -First 1 |
    Get-Content

# Latest snapshot present?
Get-ChildItem C:\Users\antoi\Documents\ATOCore_Backups\snapshots\ |
    Sort-Object Name -Descending |
    Select-Object -First 3
```

## Unregister (if needed)

```powershell
Unregister-ScheduledTask -TaskName 'AtoCore Backup Pull' -Confirm:$false
```
## How it behaves

- **Computer on, Dalidou reachable**: pulls latest snapshots silently in ~15s
- **Computer on, Dalidou unreachable** (remote work, network down): fail-open, exits without error, logs "Dalidou unreachable"
- **Computer off at scheduled time**: Task Scheduler runs it as soon as the computer wakes up
- **Many days off**: one run catches up; scp only transfers files not already present (snapshots are date-stamped directories, idempotent overwrites); see the sketch below
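The catch-up bullet reduces to "list the remote date-stamped directories, copy any that are missing locally." Below is a minimal Python sketch of that logic, for clarity only: the real script is PowerShell, the helper names are made up, and the remote path is taken from the tier table further down.

```python
# Illustrative sketch of the skip-if-present catch-up logic, NOT the
# actual atocore-backup-pull.ps1. Helper names are hypothetical.
import subprocess
from pathlib import Path

LOCAL = Path(r"C:\Users\antoi\Documents\ATOCore_Backups\snapshots")
REMOTE_HOST = "papa@dalidou"
REMOTE_DIR = "/srv/storage/atocore/backups/snapshots"

def list_remote_snapshots() -> list[str]:
    # One ssh round-trip; each entry is a date-stamped snapshot directory.
    out = subprocess.run(
        ["ssh", REMOTE_HOST, "ls", REMOTE_DIR],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def pull_missing() -> int:
    pulled = 0
    for name in list_remote_snapshots():
        if (LOCAL / name).exists():
            continue  # already pulled: date-stamped dirs make re-runs idempotent
        subprocess.run(
            ["scp", "-r", f"{REMOTE_HOST}:{REMOTE_DIR}/{name}", str(LOCAL / name)],
            check=True,
        )
        pulled += 1
    return pulled
```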
## What gets backed up

The snapshots tree contains:

- `YYYYMMDDTHHMMSSZ/config/` — project registry, AtoCore config
- `YYYYMMDDTHHMMSSZ/db/` — SQLite snapshot of all memory, state, interactions
- `YYYYMMDDTHHMMSSZ/backup-metadata.json` — SHA, timestamp, source info (see the verification sketch below)

Chroma vectors are **not** in the snapshot by default (`ATOCORE_BACKUP_CHROMA=false` on Dalidou). They can be rebuilt from the source documents if lost. To include them, set `ATOCORE_BACKUP_CHROMA=true` in the Dalidou cron environment.
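A quick way to confirm a restore point is intact is to read the newest snapshot's metadata. A hedged sketch follows; the `sha` and `timestamp` field names are guesses from the description above, not a documented schema.

```python
# Hedged sketch: print the newest pulled snapshot and its recorded SHA.
# Field names ("sha", "timestamp") are assumptions, not a documented schema.
import json
from pathlib import Path

snapshots = Path(r"C:\Users\antoi\Documents\ATOCore_Backups\snapshots")
latest = max(p for p in snapshots.iterdir() if p.is_dir())  # names are UTC stamps, so max() = newest
meta = json.loads((latest / "backup-metadata.json").read_text())
print(latest.name, meta.get("sha"), meta.get("timestamp"))
```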
## Three-tier backup summary

After this setup:

| Tier | Location | Cadence | Purpose |
|---|---|---|---|
| Live | Dalidou `/srv/storage/atocore/backups/snapshots/` | Nightly 03:00 UTC | Fast restore |
| Off-host | T420 `papa@192.168.86.39:/home/papa/atocore-backups/` | Nightly after Dalidou | Survives Dalidou dying |
| User machine | `C:\Users\antoi\Documents\ATOCore_Backups\` | Daily 10:00 local | Survives full home-network failure |

Three independent copies. Any two can be lost simultaneously without data loss.
29 openclaw-plugins/atocore-capture/README.md (Normal file)
@@ -0,0 +1,29 @@
# AtoCore Capture Plugin for OpenClaw

Minimal OpenClaw plugin that mirrors Claude Code's `capture_stop.py` behavior:

- watches user-triggered assistant turns
- POSTs `prompt` + `response` to `POST /interactions`
- sets `client="openclaw"`
- sets `reinforce=true`
- fails open on network or API errors

## Config

Optional plugin config:

```json
{
  "baseUrl": "http://dalidou:8100",
  "minPromptLength": 15,
  "maxResponseLength": 50000
}
```

If `baseUrl` is omitted, the plugin uses `ATOCORE_BASE_URL` or defaults to `http://dalidou:8100`. The POST itself is sketched under Smoke test below.

## Notes

- Project detection is intentionally left empty for now. Unscoped capture is acceptable because AtoCore's extraction pipeline handles unscoped interactions.
- Extraction is **not** part of the capture path. This plugin only records interactions and lets AtoCore reinforcement run automatically.
- The plugin captures only user-triggered turns, not heartbeats or system-only runs.
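## Smoke test

The POST the plugin performs can be reproduced by hand to verify the endpoint. A minimal stdlib-Python sketch; the payload values are placeholders, and only the endpoint and field names mirror the plugin.

```python
# Manual smoke test for /interactions — mirrors the plugin's capture POST.
# Prompt/response values are placeholders.
import json
import urllib.request

payload = {
    "prompt": "smoke-test prompt (>= 15 chars)",
    "response": "smoke-test response",
    "client": "openclaw",
    "reinforce": True,
}
req = urllib.request.Request(
    "http://dalidou:8100/interactions",
    data=json.dumps(payload).encode("utf-8"),
    method="POST",
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.status, resp.read().decode("utf-8")[:200])
```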
146 openclaw-plugins/atocore-capture/handler.js (Normal file)
@@ -0,0 +1,146 @@
/**
 * AtoCore OpenClaw plugin — capture + pull.
 *
 * Two responsibilities:
 *
 * 1. CAPTURE (existing): On before_agent_start, buffer the user prompt.
 *    On llm_output, POST prompt+response to AtoCore /interactions.
 *    This is the "write" side — OpenClaw turns feed AtoCore's memory.
 *
 * 2. PULL (Phase 1 master brain): On before_prompt_build, call AtoCore
 *    /context/build and inject the returned context via prependContext.
 *    Every OpenClaw response is automatically grounded in what AtoCore
 *    knows (project state, memories, relevant chunks).
 *
 * Fail-open throughout: AtoCore unreachable = no injection, no capture,
 * never blocks the agent.
 */

import { definePluginEntry } from "openclaw/plugin-sdk/core";

const BASE_URL = process.env.ATOCORE_BASE_URL || "http://dalidou:8100";
const MIN_LEN = 15;
const MAX_RESP = 50000;
const CONTEXT_TIMEOUT_MS = 6000;
const CAPTURE_TIMEOUT_MS = 10000;

function trim(v) { return typeof v === "string" ? v.trim() : ""; }
function trunc(t, m) { return !t || t.length <= m ? t : t.slice(0, m) + "\n\n[truncated]"; }

function detectProject(prompt) {
  const lower = (prompt || "").toLowerCase();
  const hints = [
    ["p04", "p04-gigabit"],
    ["gigabit", "p04-gigabit"],
    ["p05", "p05-interferometer"],
    ["interferometer", "p05-interferometer"],
    ["p06", "p06-polisher"],
    ["polisher", "p06-polisher"],
    ["fullum", "p06-polisher"],
    ["abb", "abb-space"],
    ["atomizer", "atomizer-v2"],
    ["atocore", "atocore"],
  ];
  for (const [token, proj] of hints) {
    if (lower.includes(token)) return proj;
  }
  return "";
}

export default definePluginEntry({
  register(api) {
    const log = api.logger;
    let lastPrompt = null;

    // --- PULL: inject AtoCore context into every prompt ---
    api.on("before_prompt_build", async (event, ctx) => {
      if (process.env.ATOCORE_PULL_DISABLED === "1") return;
      const prompt = trim(event?.prompt || "");
      if (prompt.length < MIN_LEN) return;

      const project = detectProject(prompt);

      try {
        const res = await fetch(BASE_URL.replace(/\/$/, "") + "/context/build", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ prompt, project }),
          signal: AbortSignal.timeout(CONTEXT_TIMEOUT_MS),
        });
        if (!res.ok) {
          log.info("atocore-pull:http_error", { status: res.status });
          return;
        }
        const data = await res.json();
        const contextPack = data.formatted_context || "";
        if (!contextPack.trim()) return;

        log.info("atocore-pull:injected", {
          project: project || "(none)",
          chars: contextPack.length,
        });

        return {
          prependContext:
            "--- AtoCore Context (auto-injected) ---\n" +
            contextPack +
            "\n--- End AtoCore Context ---\n",
        };
      } catch (err) {
        log.info("atocore-pull:error", { error: String(err).slice(0, 200) });
      }
    });

    // --- CAPTURE: buffer user prompts on agent start ---
    api.on("before_agent_start", async (event, ctx) => {
      const prompt = trim(event?.prompt || event?.cleanedBody || "");
      if (prompt.length < MIN_LEN || prompt.startsWith("<")) {
        lastPrompt = null;
        return;
      }
      lastPrompt = { text: prompt, sessionKey: ctx?.sessionKey || "", ts: Date.now() };
      log.info("atocore-capture:prompt_buffered", { len: prompt.length });
    });

    // --- CAPTURE: send completed turns to AtoCore ---
    api.on("llm_output", async (event, ctx) => {
      if (!lastPrompt) return;
      const texts = Array.isArray(event?.assistantTexts) ? event.assistantTexts : [];
      const response = trunc(trim(texts.join("\n\n")), MAX_RESP);
      if (!response) return;

      const prompt = lastPrompt.text;
      const sessionKey = lastPrompt.sessionKey || ctx?.sessionKey || "";
      const project = detectProject(prompt);
      lastPrompt = null;

      log.info("atocore-capture:posting", {
        promptLen: prompt.length,
        responseLen: response.length,
        project: project || "(none)",
      });

      fetch(BASE_URL.replace(/\/$/, "") + "/interactions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          prompt,
          response,
          client: "openclaw",
          session_id: sessionKey,
          project,
          reinforce: true,
        }),
        signal: AbortSignal.timeout(CAPTURE_TIMEOUT_MS),
      }).then(res => {
        log.info("atocore-capture:posted", { status: res.status });
      }).catch(err => {
        log.warn("atocore-capture:post_error", { error: String(err).slice(0, 200) });
      });
    });

    api.on("session_end", async () => {
      lastPrompt = null;
    });
  }
});
94 openclaw-plugins/atocore-capture/index.js (Normal file)
@@ -0,0 +1,94 @@
import { definePluginEntry } from "openclaw/plugin-sdk/core";

const DEFAULT_BASE_URL = process.env.ATOCORE_BASE_URL || "http://dalidou:8100";
const DEFAULT_MIN_PROMPT_LENGTH = 15;
const DEFAULT_MAX_RESPONSE_LENGTH = 50_000;

function trimText(value) {
  return typeof value === "string" ? value.trim() : "";
}

function truncateResponse(text, maxLength) {
  if (!text || text.length <= maxLength) return text;
  return `${text.slice(0, maxLength)}\n\n[truncated]`;
}

function shouldCapturePrompt(prompt, minLength) {
  const text = trimText(prompt);
  if (!text) return false;
  if (text.startsWith("<")) return false;
  return text.length >= minLength;
}

async function postInteraction(baseUrl, payload, logger) {
  try {
    const res = await fetch(`${baseUrl.replace(/\/$/, "")}/interactions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
      signal: AbortSignal.timeout(10_000)
    });
    if (!res.ok) {
      logger?.debug?.("atocore_capture_post_failed", { status: res.status });
      return false;
    }
    return true;
  } catch (error) {
    logger?.debug?.("atocore_capture_post_error", {
      error: error instanceof Error ? error.message : String(error)
    });
    return false;
  }
}

export default definePluginEntry({
  register(api) {
    const logger = api.logger;
    const pendingBySession = new Map();

    api.on("before_agent_start", async (event, ctx) => {
      if (ctx?.trigger && ctx.trigger !== "user") return;
      const config = api.getConfig?.() || {};
      const minPromptLength = Number(config.minPromptLength || DEFAULT_MIN_PROMPT_LENGTH);
      const prompt = trimText(event?.prompt || "");
      if (!shouldCapturePrompt(prompt, minPromptLength)) {
        pendingBySession.delete(ctx.sessionId);
        return;
      }
      pendingBySession.set(ctx.sessionId, {
        prompt,
        sessionId: ctx.sessionId,
        sessionKey: ctx.sessionKey || "",
        project: ""
      });
    });

    api.on("llm_output", async (event, ctx) => {
      if (ctx?.trigger && ctx.trigger !== "user") return;
      const pending = pendingBySession.get(ctx.sessionId);
      if (!pending) return;

      const assistantTexts = Array.isArray(event?.assistantTexts) ? event.assistantTexts : [];
      const response = truncateResponse(
        trimText(assistantTexts.join("\n\n")),
        Number((api.getConfig?.() || {}).maxResponseLength || DEFAULT_MAX_RESPONSE_LENGTH)
      );
      if (!response) return;

      const config = api.getConfig?.() || {};
      const baseUrl = trimText(config.baseUrl) || DEFAULT_BASE_URL;
      const payload = {
        prompt: pending.prompt,
        response,
        client: "openclaw",
        session_id: pending.sessionKey || pending.sessionId,
        project: pending.project || "",
        reinforce: true
      };

      await postInteraction(baseUrl, payload, logger);
      pendingBySession.delete(ctx.sessionId);
    });

    api.on("session_end", async (event) => {
      if (event?.sessionId) pendingBySession.delete(event.sessionId);
    });
  }
});
29 openclaw-plugins/atocore-capture/openclaw.plugin.json (Normal file)
@@ -0,0 +1,29 @@
{
  "id": "atocore-capture",
  "name": "AtoCore Capture",
  "description": "Captures completed OpenClaw assistant turns to AtoCore interactions for reinforcement.",
  "configSchema": {
    "type": "object",
    "properties": {
      "baseUrl": {
        "type": "string",
        "description": "Override AtoCore base URL. Defaults to ATOCORE_BASE_URL or http://dalidou:8100"
      },
      "minPromptLength": {
        "type": "integer",
        "minimum": 1,
        "description": "Minimum user prompt length required before capture"
      },
      "maxResponseLength": {
        "type": "integer",
        "minimum": 100,
        "description": "Maximum assistant response length to store"
      }
    },
    "additionalProperties": false
  },
  "uiHints": {
    "category": "automation",
    "displayName": "AtoCore Capture"
  }
}
7 openclaw-plugins/atocore-capture/package.json (Normal file)
@@ -0,0 +1,7 @@
{
  "name": "@atomaste/atocore-openclaw-capture",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "description": "OpenClaw plugin that captures assistant turns to AtoCore interactions"
}
@@ -16,6 +16,7 @@ dependencies = [
    "pydantic>=2.6.0",
    "pydantic-settings>=2.1.0",
    "structlog>=24.1.0",
    "markdown>=3.5.0",
]

[project.optional-dependencies]

@@ -6,3 +6,4 @@ sentence-transformers>=2.5.0
pydantic>=2.6.0
pydantic-settings>=2.1.0
structlog>=24.1.0
markdown>=3.5.0
479 scripts/atocore_mcp.py (Normal file)
@@ -0,0 +1,479 @@
#!/usr/bin/env python3
"""AtoCore MCP server — stdio transport, stdlib-only.

Exposes the AtoCore HTTP API as MCP tools so any MCP-aware client
(Claude Desktop, Claude Code, Cursor, Zed, Windsurf) can pull
context + memories automatically at prompt time.

Design:
- stdlib only (no mcp SDK dep) — MCP protocol is simple JSON-RPC
  over stdio, and AtoCore's philosophy prefers stdlib.
- Thin wrapper: every tool is a direct pass-through to an HTTP
  endpoint. Zero business logic here — the AtoCore server is
  the single source of truth.
- Fail-open: if AtoCore is unreachable, tools return a graceful
  "unavailable" message rather than crashing the client.

Protocol: MCP 2024-11-05 / 2025-03-26 compatible
https://spec.modelcontextprotocol.io/specification/

Usage (standalone test):
    echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"0"}}}' | python atocore_mcp.py

Register with Claude Code:
    claude mcp add atocore -- python /path/to/atocore_mcp.py

Environment:
    ATOCORE_URL      base URL of the AtoCore HTTP API (default http://dalidou:8100)
    ATOCORE_TIMEOUT  per-request HTTP timeout seconds (default 10)
"""

from __future__ import annotations

import json
import os
import sys
import urllib.error
import urllib.parse
import urllib.request

# --- Configuration ---

ATOCORE_URL = os.environ.get("ATOCORE_URL", "http://dalidou:8100").rstrip("/")
HTTP_TIMEOUT = float(os.environ.get("ATOCORE_TIMEOUT", "10"))
SERVER_NAME = "atocore"
SERVER_VERSION = "0.1.0"
PROTOCOL_VERSION = "2024-11-05"


# --- stderr logging (stdout is reserved for JSON-RPC) ---

def log(msg: str) -> None:
    print(f"[atocore-mcp] {msg}", file=sys.stderr, flush=True)


# --- HTTP helpers ---

def http_get(path: str, params: dict | None = None) -> dict:
    """GET a JSON response from AtoCore. Raises on HTTP error."""
    url = ATOCORE_URL + path
    if params:
        # Drop empty params so the URL stays clean
        clean = {k: v for k, v in params.items() if v not in (None, "", [], {})}
        if clean:
            url += "?" + urllib.parse.urlencode(clean)
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=HTTP_TIMEOUT) as resp:
        return json.loads(resp.read().decode("utf-8"))


def http_post(path: str, body: dict) -> dict:
    url = ATOCORE_URL + path
    data = json.dumps(body).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, method="POST",
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=HTTP_TIMEOUT) as resp:
        return json.loads(resp.read().decode("utf-8"))


def safe_call(fn, *args, **kwargs) -> tuple[dict | None, str | None]:
    """Run an HTTP call, return (result, error_message_or_None)."""
    try:
        return fn(*args, **kwargs), None
    except urllib.error.HTTPError as e:
        try:
            body = e.read().decode("utf-8", errors="replace")
        except Exception:
            body = ""
        return None, f"AtoCore HTTP {e.code}: {body[:200]}"
    except urllib.error.URLError as e:
        return None, f"AtoCore unreachable at {ATOCORE_URL}: {e.reason}"
    except Exception as e:
        return None, f"AtoCore error: {type(e).__name__}: {str(e)[:200]}"


# --- Tool definitions ---
# Each tool: name, description, inputSchema (JSON Schema), handler

def _tool_context(args: dict) -> str:
    """Build a full context pack for a query — state + memories + retrieved chunks."""
    query = (args.get("query") or "").strip()
    project = args.get("project") or ""
    if not query:
        return "Error: 'query' is required."
    result, err = safe_call(http_post, "/context/build", {
        "prompt": query, "project": project,
    })
    if err:
        return f"AtoCore context unavailable: {err}"
    pack = result.get("formatted_context", "") or ""
    if not pack.strip():
        return "(AtoCore returned an empty context pack — no matching state, memories, or chunks.)"
    return pack


def _tool_search(args: dict) -> str:
    """Retrieval only — raw chunks ranked by semantic similarity."""
    query = (args.get("query") or "").strip()
    project = args.get("project") or ""
    top_k = int(args.get("top_k") or 5)
    if not query:
        return "Error: 'query' is required."
    result, err = safe_call(http_post, "/query", {
        "prompt": query, "project": project, "top_k": top_k,
    })
    if err:
        return f"AtoCore search unavailable: {err}"
    chunks = result.get("results", []) or []
    if not chunks:
        return "No results."
    lines = []
    for i, c in enumerate(chunks, 1):
        src = c.get("source_file") or c.get("title") or "unknown"
        heading = c.get("heading_path") or ""
        snippet = (c.get("content") or "")[:300]
        score = c.get("score", 0.0)
        head_str = f" ({heading})" if heading else ""
        lines.append(f"[{i}] score={score:.3f} source={src}{head_str}\n{snippet}")
    return "\n\n".join(lines)


def _tool_memory_list(args: dict) -> str:
    """List active memories, optionally filtered by project and type."""
    params = {
        "status": "active",
        "limit": int(args.get("limit") or 20),
    }
    if args.get("project"):
        params["project"] = args["project"]
    if args.get("memory_type"):
        params["memory_type"] = args["memory_type"]
    result, err = safe_call(http_get, "/memory", params=params)
    if err:
        return f"AtoCore memory list unavailable: {err}"
    memories = result.get("memories", []) or []
    if not memories:
        return "No memories match."
    lines = []
    for m in memories:
        mt = m.get("memory_type", "?")
        proj = m.get("project") or "(global)"
        conf = m.get("confidence", 0.0)
        refs = m.get("reference_count", 0)
        content = (m.get("content") or "")[:250]
        lines.append(f"[{mt}/{proj}] conf={conf:.2f} refs={refs}\n  {content}")
    return "\n\n".join(lines)


def _tool_memory_create(args: dict) -> str:
    """Create a candidate memory (enters the triage queue)."""
    memory_type = (args.get("memory_type") or "").strip()
    content = (args.get("content") or "").strip()
    project = args.get("project") or ""
    confidence = float(args.get("confidence") or 0.5)
    if not memory_type or not content:
        return "Error: 'memory_type' and 'content' are required."
    valid_types = ["identity", "preference", "project", "episodic", "knowledge", "adaptation"]
    if memory_type not in valid_types:
        return f"Error: memory_type must be one of {valid_types}."
    result, err = safe_call(http_post, "/memory", {
        "memory_type": memory_type,
        "content": content,
        "project": project,
        "confidence": confidence,
        "status": "candidate",
    })
    if err:
        return f"AtoCore memory create failed: {err}"
    mid = result.get("id", "?")
    return f"Candidate memory created: id={mid} type={memory_type} project={project or '(global)'}"


def _tool_project_state(args: dict) -> str:
    """Get Trusted Project State entries for a project."""
    project = (args.get("project") or "").strip()
    category = args.get("category") or ""
    if not project:
        return "Error: 'project' is required."
    path = f"/project/state/{urllib.parse.quote(project)}"
    params = {"category": category} if category else None
    result, err = safe_call(http_get, path, params=params)
    if err:
        return f"AtoCore project state unavailable: {err}"
    entries = result.get("entries", []) or result.get("state", []) or []
    if not entries:
        return f"No state entries for project '{project}'."
    lines = []
    for e in entries:
        cat = e.get("category", "?")
        key = e.get("key", "?")
        value = (e.get("value") or "")[:300]
        src = e.get("source") or ""
        lines.append(f"[{cat}/{key}] (source: {src})\n  {value}")
    return "\n\n".join(lines)


def _tool_projects(args: dict) -> str:
    """List registered AtoCore projects."""
    result, err = safe_call(http_get, "/projects")
    if err:
        return f"AtoCore projects unavailable: {err}"
    projects = result.get("projects", []) or []
    if not projects:
        return "No projects registered."
    lines = []
    for p in projects:
        pid = p.get("project_id") or p.get("id") or p.get("name") or "?"
        aliases = p.get("aliases", []) or []
        alias_str = f" (aliases: {', '.join(aliases)})" if aliases else ""
        lines.append(f"- {pid}{alias_str}")
    return "\n".join(lines)


def _tool_health(args: dict) -> str:
    """Check AtoCore service health."""
    result, err = safe_call(http_get, "/health")
    if err:
        return f"AtoCore unreachable: {err}"
    sha = result.get("build_sha", "?")[:8]
    vectors = result.get("vectors_count", "?")
    env = result.get("env", "?")
    return f"AtoCore healthy: sha={sha} vectors={vectors} env={env}"


TOOLS = [
    {
        "name": "atocore_context",
        "description": (
            "Get the full AtoCore context pack for a user query. Returns "
            "Trusted Project State (high trust), relevant memories, and "
            "retrieved source chunks formatted for prompt injection. "
            "Use this FIRST on any project-related query to ground the "
            "conversation in what AtoCore already knows."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The user's question or task"},
                "project": {"type": "string", "description": "Project hint (e.g. 'p04-gigabit'); optional"},
            },
            "required": ["query"],
        },
        "handler": _tool_context,
    },
    {
        "name": "atocore_search",
        "description": (
            "Semantic search over AtoCore's ingested source documents. "
            "Returns top-K ranked chunks. Use this when you need raw "
            "references rather than a full context pack."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "project": {"type": "string", "description": "optional project filter"},
                "top_k": {"type": "integer", "minimum": 1, "maximum": 20, "default": 5},
            },
            "required": ["query"],
        },
        "handler": _tool_search,
    },
    {
        "name": "atocore_memory_list",
        "description": (
            "List active memories (curated facts, decisions, preferences). "
            "Filter by project and/or memory_type. Use this to inspect what "
            "AtoCore currently remembers about a topic."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "project": {"type": "string"},
                "memory_type": {
                    "type": "string",
                    "enum": ["identity", "preference", "project", "episodic", "knowledge", "adaptation"],
                },
                "limit": {"type": "integer", "minimum": 1, "maximum": 100, "default": 20},
            },
        },
        "handler": _tool_memory_list,
    },
    {
        "name": "atocore_memory_create",
        "description": (
            "Propose a new memory for AtoCore. Creates a CANDIDATE that "
            "enters the triage queue for human/auto review — not immediately "
            "active. Use this to capture durable facts/decisions that "
            "should persist across sessions. Do NOT use for transient state "
            "or session-specific notes."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "memory_type": {
                    "type": "string",
                    "enum": ["identity", "preference", "project", "episodic", "knowledge", "adaptation"],
                },
                "content": {"type": "string", "description": "The fact/decision/preference to remember"},
                "project": {"type": "string", "description": "project id if project-scoped; empty for global"},
                "confidence": {"type": "number", "minimum": 0, "maximum": 1, "default": 0.5},
            },
            "required": ["memory_type", "content"],
        },
        "handler": _tool_memory_create,
    },
    {
        "name": "atocore_project_state",
        "description": (
            "Get Trusted Project State entries for a given project — the "
            "highest-trust tier with curated decisions, requirements, "
            "facts, contacts, milestones. Use this to look up authoritative "
            "project info."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "project": {"type": "string"},
                "category": {
                    "type": "string",
                    "enum": ["status", "decision", "requirement", "contact", "milestone", "fact", "config"],
                },
            },
            "required": ["project"],
        },
        "handler": _tool_project_state,
    },
    {
        "name": "atocore_projects",
        "description": "List all registered AtoCore projects (id + aliases).",
        "inputSchema": {"type": "object", "properties": {}},
        "handler": _tool_projects,
    },
    {
        "name": "atocore_health",
        "description": "Check AtoCore service health (build SHA, vector count, env).",
        "inputSchema": {"type": "object", "properties": {}},
        "handler": _tool_health,
    },
]


# --- JSON-RPC handlers ---

def handle_initialize(params: dict) -> dict:
    return {
        "protocolVersion": PROTOCOL_VERSION,
        "capabilities": {
            "tools": {"listChanged": False},
        },
        "serverInfo": {"name": SERVER_NAME, "version": SERVER_VERSION},
    }


def handle_tools_list(params: dict) -> dict:
    return {
        "tools": [
            {"name": t["name"], "description": t["description"], "inputSchema": t["inputSchema"]}
            for t in TOOLS
        ]
    }


def handle_tools_call(params: dict) -> dict:
    tool_name = params.get("name", "")
    args = params.get("arguments", {}) or {}
    tool = next((t for t in TOOLS if t["name"] == tool_name), None)
    if tool is None:
        return {
            "content": [{"type": "text", "text": f"Unknown tool: {tool_name}"}],
            "isError": True,
        }
    try:
        text = tool["handler"](args)
    except Exception as e:
        log(f"tool {tool_name} raised: {e}")
        return {
            "content": [{"type": "text", "text": f"Tool error: {type(e).__name__}: {e}"}],
            "isError": True,
        }
    return {"content": [{"type": "text", "text": text}]}


def handle_ping(params: dict) -> dict:
    return {}


METHODS = {
    "initialize": handle_initialize,
    "tools/list": handle_tools_list,
    "tools/call": handle_tools_call,
    "ping": handle_ping,
}


# --- stdio main loop ---

def send(obj: dict) -> None:
    """Write a single-line JSON message to stdout and flush."""
    sys.stdout.write(json.dumps(obj, ensure_ascii=False) + "\n")
    sys.stdout.flush()


def make_response(req_id, result=None, error=None) -> dict:
    resp = {"jsonrpc": "2.0", "id": req_id}
    if error is not None:
        resp["error"] = error
    else:
        resp["result"] = result if result is not None else {}
    return resp


def main() -> int:
    log(f"starting (AtoCore at {ATOCORE_URL})")
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        try:
            msg = json.loads(line)
        except json.JSONDecodeError as e:
            log(f"parse error: {e}")
            continue

        method = msg.get("method", "")
        req_id = msg.get("id")
        params = msg.get("params", {}) or {}

        # Notifications (no id) don't need a response
        if req_id is None:
            if method == "notifications/initialized":
                log("client initialized")
            continue

        handler = METHODS.get(method)
        if handler is None:
            send(make_response(req_id, error={
                "code": -32601,
                "message": f"Method not found: {method}",
            }))
            continue

        try:
            result = handler(params)
            send(make_response(req_id, result=result))
        except Exception as e:
            log(f"handler {method} raised: {e}")
            send(make_response(req_id, error={
                "code": -32603,
                "message": f"Internal error: {type(e).__name__}: {e}",
            }))

    log("stdin closed, exiting")
    return 0


if __name__ == "__main__":
    sys.exit(main())
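The stdio loop above is easiest to sanity-check with a small driver that speaks the same line-delimited JSON-RPC framing. A hedged Python sketch follows; it is a test harness only, not part of the repo, and the script path is an assumption.

```python
# Test-harness sketch: drive atocore_mcp.py over stdio like an MCP client
# would — initialize, then tools/list. Script path is an assumption.
import json
import subprocess

proc = subprocess.Popen(
    ["python3", "scripts/atocore_mcp.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def rpc(req_id: int, method: str, params: dict) -> dict:
    # One request per line, one response per line — matching send() above.
    proc.stdin.write(json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    ) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

init = rpc(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "test", "version": "0"},
})
print(init["result"]["serverInfo"])
tools = rpc(2, "tools/list", {})
print([t["name"] for t in tools["result"]["tools"]])
proc.stdin.close()
proc.wait(timeout=5)
```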
321 scripts/atocore_proxy.py (Normal file)
@@ -0,0 +1,321 @@
#!/usr/bin/env python3
"""AtoCore Proxy — OpenAI-compatible HTTP middleware.

Acts as a drop-in layer for any client that speaks the OpenAI Chat
Completions API (Codex, Ollama, LiteLLM, custom agents). Sits between
the client and the real model provider:

    client -> atocore_proxy -> real_provider (OpenAI, Ollama, Anthropic, ...)

For each chat completion request:
    1. Extract the user's last message as the "query"
    2. Call AtoCore /context/build to get a context pack
    3. Inject the pack as a system message (or prepend to existing system)
    4. Forward the enriched request to the real provider
    5. Capture the full interaction back to AtoCore /interactions

Fail-open: if AtoCore is unreachable, the request passes through
unchanged. If the real provider fails, the error is propagated to the
client as-is.

Configuration (env vars):
    ATOCORE_URL           AtoCore base URL (default http://dalidou:8100)
    ATOCORE_UPSTREAM      real provider base URL (e.g. http://localhost:11434/v1 for Ollama)
    ATOCORE_PROXY_PORT    port to listen on (default 11435)
    ATOCORE_PROXY_HOST    bind address (default 127.0.0.1)
    ATOCORE_CLIENT_LABEL  client id recorded in captures (default "proxy")
    ATOCORE_CAPTURE       "1" to capture interactions back (default "1")
    ATOCORE_INJECT        "1" to inject context (default "1")

Usage:
    # Proxy for Ollama:
    ATOCORE_UPSTREAM=http://localhost:11434/v1 python atocore_proxy.py

    # Then point your client at http://localhost:11435/v1 instead of the
    # real provider.

Stdlib only — deliberate to keep the dependency footprint at zero.
"""

from __future__ import annotations

import http.server
import json
import os
import socketserver
import sys
import threading
import urllib.error
import urllib.parse
import urllib.request
from typing import Any

ATOCORE_URL = os.environ.get("ATOCORE_URL", "http://dalidou:8100").rstrip("/")
UPSTREAM_URL = os.environ.get("ATOCORE_UPSTREAM", "").rstrip("/")
PROXY_PORT = int(os.environ.get("ATOCORE_PROXY_PORT", "11435"))
PROXY_HOST = os.environ.get("ATOCORE_PROXY_HOST", "127.0.0.1")
CLIENT_LABEL = os.environ.get("ATOCORE_CLIENT_LABEL", "proxy")
CAPTURE_ENABLED = os.environ.get("ATOCORE_CAPTURE", "1") == "1"
INJECT_ENABLED = os.environ.get("ATOCORE_INJECT", "1") == "1"
ATOCORE_TIMEOUT = float(os.environ.get("ATOCORE_TIMEOUT", "6"))
UPSTREAM_TIMEOUT = float(os.environ.get("ATOCORE_UPSTREAM_TIMEOUT", "300"))

PROJECT_HINTS = [
    ("p04-gigabit", ["p04", "gigabit"]),
    ("p05-interferometer", ["p05", "interferometer"]),
    ("p06-polisher", ["p06", "polisher", "fullum"]),
    ("abb-space", ["abb"]),
    ("atomizer-v2", ["atomizer"]),
    ("atocore", ["atocore", "dalidou"]),
]


def log(msg: str) -> None:
    print(f"[atocore-proxy] {msg}", file=sys.stderr, flush=True)


def detect_project(text: str) -> str:
    lower = (text or "").lower()
    for proj, tokens in PROJECT_HINTS:
        if any(t in lower for t in tokens):
            return proj
    return ""


def get_last_user_message(body: dict) -> str:
    messages = body.get("messages", []) or []
    for m in reversed(messages):
        if m.get("role") == "user":
            content = m.get("content", "")
            if isinstance(content, list):
                # OpenAI multi-part content: extract text parts
                parts = [p.get("text", "") for p in content if p.get("type") == "text"]
                return "\n".join(parts)
            return str(content)
    return ""


def get_assistant_text(response: dict) -> str:
    """Extract assistant text from an OpenAI-style completion response."""
    choices = response.get("choices", []) or []
    if not choices:
        return ""
    msg = choices[0].get("message", {}) or {}
    content = msg.get("content", "")
    if isinstance(content, list):
        parts = [p.get("text", "") for p in content if p.get("type") == "text"]
        return "\n".join(parts)
    return str(content)


def fetch_context(query: str, project: str) -> str:
    """Pull a context pack from AtoCore. Returns '' on any failure."""
    if not INJECT_ENABLED or not query:
        return ""
    try:
        data = json.dumps({"prompt": query, "project": project}).encode("utf-8")
        req = urllib.request.Request(
            ATOCORE_URL + "/context/build",
            data=data,
            method="POST",
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=ATOCORE_TIMEOUT) as resp:
            result = json.loads(resp.read().decode("utf-8"))
        return result.get("formatted_context", "") or ""
    except Exception as e:
        log(f"context fetch failed: {type(e).__name__}: {e}")
        return ""


def capture_interaction(prompt: str, response: str, project: str) -> None:
    """POST the completed turn back to AtoCore. Fire-and-forget."""
    if not CAPTURE_ENABLED or not prompt or not response:
        return

    def _post():
        try:
            data = json.dumps({
                "prompt": prompt,
                "response": response,
                "client": CLIENT_LABEL,
                "project": project,
                "reinforce": True,
            }).encode("utf-8")
            req = urllib.request.Request(
                ATOCORE_URL + "/interactions",
                data=data,
                method="POST",
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=ATOCORE_TIMEOUT)
        except Exception as e:
            log(f"capture failed: {type(e).__name__}: {e}")

    threading.Thread(target=_post, daemon=True).start()


def inject_context(body: dict, context_pack: str) -> dict:
    """Prepend the AtoCore context as a system message, or augment existing."""
    if not context_pack.strip():
        return body
    header = "--- AtoCore Context (auto-injected) ---\n"
    footer = "\n--- End AtoCore Context ---\n"
    injection = header + context_pack + footer

    messages = list(body.get("messages", []) or [])
    if messages and messages[0].get("role") == "system":
        # Augment existing system message
        existing = messages[0].get("content", "") or ""
        if isinstance(existing, list):
            # multi-part: prepend a text part
            messages[0]["content"] = [{"type": "text", "text": injection}] + existing
        else:
            messages[0]["content"] = injection + "\n" + str(existing)
    else:
        messages.insert(0, {"role": "system", "content": injection})

    body["messages"] = messages
    return body


def forward_to_upstream(body: dict, headers: dict[str, str], path: str) -> tuple[int, dict]:
    """Forward the enriched body to the upstream provider. Returns (status, response_dict)."""
    if not UPSTREAM_URL:
        return 503, {"error": {"message": "ATOCORE_UPSTREAM not configured"}}
    url = UPSTREAM_URL + path
    data = json.dumps(body).encode("utf-8")
    # Strip hop-by-hop / host-specific headers
    fwd_headers = {"Content-Type": "application/json"}
    for k, v in headers.items():
        lk = k.lower()
        if lk in ("authorization", "x-api-key", "anthropic-version"):
            fwd_headers[k] = v
    req = urllib.request.Request(url, data=data, method="POST", headers=fwd_headers)
    try:
        with urllib.request.urlopen(req, timeout=UPSTREAM_TIMEOUT) as resp:
            return resp.status, json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as e:
        try:
            body_bytes = e.read()
            payload = json.loads(body_bytes.decode("utf-8"))
        except Exception:
            payload = {"error": {"message": f"upstream HTTP {e.code}"}}
        return e.code, payload
    except Exception as e:
        log(f"upstream error: {e}")
        return 502, {"error": {"message": f"upstream unreachable: {e}"}}


class ProxyHandler(http.server.BaseHTTPRequestHandler):
    # Silence default request logging (we log what matters ourselves)
    def log_message(self, format: str, *args: Any) -> None:
        pass

    def _read_body(self) -> dict:
        length = int(self.headers.get("Content-Length", "0") or "0")
        if length <= 0:
            return {}
        raw = self.rfile.read(length)
        try:
            return json.loads(raw.decode("utf-8"))
        except Exception:
            return {}

    def _send_json(self, status: int, payload: dict) -> None:
        body = json.dumps(payload).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(body)

    def do_OPTIONS(self) -> None:  # CORS preflight
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "POST, GET, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type, Authorization, X-API-Key")
        self.end_headers()

    def do_GET(self) -> None:
        parsed = urllib.parse.urlparse(self.path)
        if parsed.path == "/healthz":
            self._send_json(200, {
                "status": "ok",
                "atocore": ATOCORE_URL,
                "upstream": UPSTREAM_URL or "(not configured)",
                "inject": INJECT_ENABLED,
                "capture": CAPTURE_ENABLED,
            })
            return
        # Pass through GET to upstream (model listing etc)
        if not UPSTREAM_URL:
            self._send_json(503, {"error": {"message": "ATOCORE_UPSTREAM not configured"}})
            return
        try:
            req = urllib.request.Request(
                UPSTREAM_URL + parsed.path + (f"?{parsed.query}" if parsed.query else "")
            )
            for k in ("Authorization", "X-API-Key"):
                v = self.headers.get(k)
                if v:
                    req.add_header(k, v)
            with urllib.request.urlopen(req, timeout=UPSTREAM_TIMEOUT) as resp:
                data = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type", resp.headers.get("Content-Type", "application/json"))
                self.send_header("Content-Length", str(len(data)))
                self.end_headers()
                self.wfile.write(data)
        except Exception as e:
            self._send_json(502, {"error": {"message": f"upstream error: {e}"}})

    def do_POST(self) -> None:
        parsed = urllib.parse.urlparse(self.path)
        body = self._read_body()

        # Only enrich chat completions; other endpoints pass through
        if parsed.path.endswith("/chat/completions") or parsed.path == "/v1/chat/completions":
            prompt = get_last_user_message(body)
            project = detect_project(prompt)
            context = fetch_context(prompt, project) if prompt else ""
            if context:
                log(f"inject: project={project or '(none)'} chars={len(context)}")
                body = inject_context(body, context)

            status, response = forward_to_upstream(body, dict(self.headers), parsed.path)
            self._send_json(status, response)

            if status == 200:
                assistant_text = get_assistant_text(response)
                capture_interaction(prompt, assistant_text, project)
        else:
            # Non-chat endpoints (embeddings, completions, etc.) — pure passthrough
            status, response = forward_to_upstream(body, dict(self.headers), parsed.path)
            self._send_json(status, response)


class ThreadedServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
    daemon_threads = True
    allow_reuse_address = True


def main() -> int:
    if not UPSTREAM_URL:
        log("WARNING: ATOCORE_UPSTREAM not set. Chat completions will fail.")
        log("Example: ATOCORE_UPSTREAM=http://localhost:11434/v1 for Ollama")
    server = ThreadedServer((PROXY_HOST, PROXY_PORT), ProxyHandler)
    log(f"listening on {PROXY_HOST}:{PROXY_PORT}")
    log(f"AtoCore: {ATOCORE_URL} inject={INJECT_ENABLED} capture={CAPTURE_ENABLED}")
    log(f"Upstream: {UPSTREAM_URL or '(not configured)'}")
    log(f"Client label: {CLIENT_LABEL}")
    log("Ready. Point your OpenAI-compatible client at /v1/chat/completions")
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        log("stopping")
        server.server_close()
    return 0


if __name__ == "__main__":
    sys.exit(main())
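One way to exercise the proxy end to end is to point a plain stdlib client at it instead of the real provider. A hedged sketch; the model id is a placeholder for whatever your upstream actually serves.

```python
# Sketch: one chat completion through the proxy. The proxy injects the
# AtoCore context pack before forwarding; "llama3" is a placeholder model id.
import json
import urllib.request

body = {
    "model": "llama3",  # placeholder — use whatever the upstream serves
    "messages": [{"role": "user", "content": "Where does p06 polishing stand?"}],
}
req = urllib.request.Request(
    "http://127.0.0.1:11435/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    method="POST",
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=300) as resp:
    out = json.loads(resp.read().decode("utf-8"))
print(out["choices"][0]["message"]["content"])
```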
79 scripts/auto_promote_reinforced.py (Normal file)
@@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""Auto-promote reinforced candidates + expire stale ones.

Phase 10: reinforcement-based auto-promotion. Candidates referenced
by 3+ interactions with confidence >= 0.7 graduate to active.
Candidates unreinforced for 14+ days are auto-rejected.

Usage:
    python3 scripts/auto_promote_reinforced.py [--dry-run] [--min-refs N] [--min-confidence X] [--expire-days D]
"""

from __future__ import annotations

import argparse
import os
import sys

# Allow importing from src/ when run from repo root
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))

from atocore.memory.service import auto_promote_reinforced, expire_stale_candidates


def main() -> None:
    parser = argparse.ArgumentParser(description="Auto-promote + expire candidates")
    parser.add_argument("--dry-run", action="store_true", help="Report only, don't change anything")
    parser.add_argument("--min-refs", type=int, default=3, help="Min reference_count for promotion")
    parser.add_argument("--min-confidence", type=float, default=0.7, help="Min confidence for promotion")
    parser.add_argument("--expire-days", type=int, default=14, help="Days before unreinforced candidates expire")
    args = parser.parse_args()

    if args.dry_run:
        print("DRY RUN — no changes will be made")
        # For dry-run, query directly and report
        from atocore.models.database import get_connection
        from datetime import datetime, timedelta, timezone

        cutoff_promote = (datetime.now(timezone.utc) - timedelta(days=args.expire_days)).strftime("%Y-%m-%d %H:%M:%S")
        cutoff_expire = cutoff_promote

        with get_connection() as conn:
            promotable = conn.execute(
                "SELECT id, content, memory_type, project, confidence, reference_count "
                "FROM memories WHERE status = 'candidate' "
                "AND COALESCE(reference_count, 0) >= ? AND confidence >= ? "
                "AND last_referenced_at >= ?",
                (args.min_refs, args.min_confidence, cutoff_promote),
            ).fetchall()
            expirable = conn.execute(
                "SELECT id, content, memory_type, project "
                "FROM memories WHERE status = 'candidate' "
                "AND COALESCE(reference_count, 0) = 0 AND created_at < ?",
                (cutoff_expire,),
            ).fetchall()

        print(f"\nWould promote {len(promotable)} candidates:")
        for r in promotable:
            print(f"  [{r['memory_type']}] refs={r['reference_count']} conf={r['confidence']:.2f} | {r['content'][:80]}...")
        print(f"\nWould expire {len(expirable)} stale candidates:")
        for r in expirable:
            print(f"  [{r['memory_type']}] {r['project'] or 'global'} | {r['content'][:80]}...")
        return

    promoted = auto_promote_reinforced(
        min_reference_count=args.min_refs,
        min_confidence=args.min_confidence,
    )
    expired = expire_stale_candidates(max_age_days=args.expire_days)

    print(f"promoted={len(promoted)} expired={len(expired)}")
    if promoted:
        print(f"Promoted IDs: {promoted}")
    if expired:
        print(f"Expired IDs: {expired}")


if __name__ == "__main__":
    main()
284 scripts/auto_triage.py (Normal file)
@@ -0,0 +1,284 @@
"""Auto-triage: LLM second-pass over candidate memories.
|
||||
|
||||
Fetches all status=candidate memories from the AtoCore API, asks
|
||||
a triage model (via claude -p) to classify each as promote / reject /
|
||||
needs_human, and executes the verdict via the promote/reject endpoints.
|
||||
Only needs_human candidates remain in the queue for manual review.
|
||||
|
||||
Trust model:
|
||||
- Auto-promote: model says promote AND confidence >= 0.8 AND no
|
||||
duplicate content in existing active memories
|
||||
- Auto-reject: model says reject
|
||||
- needs_human: everything else stays in queue
|
||||
|
||||
Runs host-side (same as batch extraction) because it needs the
|
||||
claude CLI. Intended to be called after batch-extract.sh in the
|
||||
nightly cron, or manually.
|
||||
|
||||
Usage:
|
||||
|
||||
python3 scripts/auto_triage.py --base-url http://localhost:8100
|
||||
python3 scripts/auto_triage.py --dry-run # preview without executing
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
import time
|
||||
import tempfile
|
||||
import urllib.error
|
||||
import urllib.parse
|
||||
import urllib.request
|
||||
|
||||
DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
|
||||
DEFAULT_MODEL = os.environ.get("ATOCORE_TRIAGE_MODEL", "sonnet")
|
||||
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_TRIAGE_TIMEOUT_S", "60"))
|
||||
AUTO_PROMOTE_MIN_CONFIDENCE = 0.8
|
||||
|
||||
TRIAGE_SYSTEM_PROMPT = """You are a memory triage reviewer for a personal context engine called AtoCore. You review candidate memories extracted from LLM conversations and decide whether each should be promoted to active status, rejected, or flagged for human review.
|
||||
|
||||
You will receive:
|
||||
- The candidate memory content and type
|
||||
- A list of existing active memories for the same project (to check for duplicates)
|
||||
|
||||
For each candidate, output exactly one JSON object:
|
||||
|
||||
{"verdict": "promote|reject|needs_human|contradicts", "confidence": 0.0-1.0, "reason": "one sentence", "conflicts_with": "id of existing memory if contradicts"}
|
||||
|
||||
Rules:
|
||||
|
||||
1. PROMOTE when the candidate states a durable architectural fact, ratified decision, standing rule, or engineering constraint that is NOT already covered by an existing active memory. Confidence should reflect how certain you are this is worth keeping.
|
||||
|
||||
2. REJECT when the candidate is:
|
||||
- A stale point-in-time snapshot ("live SHA is X", "36 active memories")
|
||||
- An implementation detail too granular to be useful as standalone context
|
||||
- A planned-but-not-implemented feature description
|
||||
- A duplicate or near-duplicate of an existing active memory
|
||||
- A session observation or conversational filler
|
||||
- A process rule that belongs in DEV-LEDGER.md or AGENTS.md, not memory
|
||||
|
||||
3. CONTRADICTS when the candidate *conflicts* with an existing active memory (not a duplicate, but states something that can't both be true). Set `conflicts_with` to the existing memory id. This flags the tension for human review instead of silently rejecting or double-storing. Examples: "Option A selected" vs "Option B selected" for the same decision; "uses material X" vs "uses material Y" for the same component.
|
||||
|
||||
4. OPENCLAW-CURATED content (candidate content starts with "From OpenClaw/"): apply a MUCH LOWER bar. OpenClaw's SOUL.md, USER.md, MEMORY.md, MODEL-ROUTING.md, and dated memory/*.md files are ALREADY curated by OpenClaw as canonical continuity. Promote unless clearly wrong or a genuine duplicate. Do NOT reject OpenClaw content as "process rule belongs elsewhere" or "session log" — that's exactly what AtoCore wants to absorb. Session events, project updates, stakeholder notes, and decisions from OpenClaw daily memory files ARE valuable context and should promote.
|
||||
|
||||
5. NEEDS_HUMAN when you're genuinely unsure — the candidate might be valuable but you can't tell without domain knowledge. This should be rare (< 20% of candidates).
|
||||
|
||||
6. Output ONLY the JSON object. No prose, no markdown, no explanation outside the reason field."""
|
||||
|
||||
_sandbox_cwd = None
|
||||
|
||||
|
||||
def get_sandbox_cwd():
|
||||
global _sandbox_cwd
|
||||
    if _sandbox_cwd is None:
        _sandbox_cwd = tempfile.mkdtemp(prefix="ato-triage-")
    return _sandbox_cwd


def api_get(base_url, path, timeout=10):
    req = urllib.request.Request(f"{base_url}{path}")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


def api_post(base_url, path, body=None, timeout=10):
    data = json.dumps(body or {}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}{path}", method="POST",
        headers={"Content-Type": "application/json"}, data=data,
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


def fetch_active_memories_for_project(base_url, project):
    """Fetch active memories for dedup checking."""
    params = "active_only=true&limit=50"
    if project:
        params += f"&project={urllib.parse.quote(project)}"
    result = api_get(base_url, f"/memory?{params}")
    return result.get("memories", [])


def triage_one(candidate, active_memories, model, timeout_s):
    """Ask the triage model to classify one candidate."""
    if not shutil.which("claude"):
        return {"verdict": "needs_human", "confidence": 0.0, "reason": "claude CLI not available"}

    active_summary = "\n".join(
        f"- [{m['memory_type']}] {m['content'][:150]}"
        for m in active_memories[:20]
    ) or "(no active memories for this project)"

    user_message = (
        f"CANDIDATE TO TRIAGE:\n"
        f"  type: {candidate['memory_type']}\n"
        f"  project: {candidate.get('project') or '(none)'}\n"
        f"  content: {candidate['content']}\n\n"
        f"EXISTING ACTIVE MEMORIES FOR THIS PROJECT:\n{active_summary}\n\n"
        f"Return the JSON verdict now."
    )

    args = [
        "claude", "-p",
        "--model", model,
        "--append-system-prompt", TRIAGE_SYSTEM_PROMPT,
        "--disable-slash-commands",
        user_message,
    ]

    # Retry with exponential backoff on transient failures (rate limits etc)
    last_error = ""
    for attempt in range(3):
        if attempt > 0:
            time.sleep(2 ** attempt)  # 2s, 4s
        try:
            completed = subprocess.run(
                args, capture_output=True, text=True,
                timeout=timeout_s, cwd=get_sandbox_cwd(),
                encoding="utf-8", errors="replace",
            )
        except subprocess.TimeoutExpired:
            last_error = "triage model timed out"
            continue
        except Exception as exc:
            last_error = f"subprocess error: {exc}"
            continue

        if completed.returncode == 0:
            raw = (completed.stdout or "").strip()
            return parse_verdict(raw)

        # Capture stderr for diagnostics (truncate to 200 chars)
        stderr = (completed.stderr or "").strip()[:200]
        last_error = f"claude exit {completed.returncode}: {stderr}" if stderr else f"claude exit {completed.returncode}"

    return {"verdict": "needs_human", "confidence": 0.0, "reason": last_error}


def parse_verdict(raw):
    """Parse the triage model's JSON verdict."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`")
        nl = text.find("\n")
        if nl >= 0:
            text = text[nl + 1:]
        if text.endswith("```"):
            text = text[:-3]
        text = text.strip()

    if not text.lstrip().startswith("{"):
        start = text.find("{")
        end = text.rfind("}")
        if start >= 0 and end > start:
            text = text[start:end + 1]

    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return {"verdict": "needs_human", "confidence": 0.0, "reason": "failed to parse triage output"}

    verdict = str(parsed.get("verdict", "needs_human")).strip().lower()
    if verdict not in {"promote", "reject", "needs_human", "contradicts"}:
        verdict = "needs_human"

    confidence = parsed.get("confidence", 0.5)
    try:
        confidence = max(0.0, min(1.0, float(confidence)))
    except (TypeError, ValueError):
        confidence = 0.5

    reason = str(parsed.get("reason", "")).strip()[:200]
    conflicts_with = str(parsed.get("conflicts_with", "")).strip()
    return {
        "verdict": verdict,
        "confidence": confidence,
        "reason": reason,
        "conflicts_with": conflicts_with,
    }

def main():
    parser = argparse.ArgumentParser(description="Auto-triage candidate memories")
    parser.add_argument("--base-url", default=DEFAULT_BASE_URL)
    parser.add_argument("--model", default=DEFAULT_MODEL)
    parser.add_argument("--dry-run", action="store_true", help="preview without executing")
    args = parser.parse_args()

    # Fetch candidates
    result = api_get(args.base_url, "/memory?status=candidate&limit=100")
    candidates = result.get("memories", [])
    print(f"candidates: {len(candidates)} model: {args.model} dry_run: {args.dry_run}")

    if not candidates:
        print("queue empty, nothing to triage")
        return

    # Cache active memories per project for dedup
    active_cache = {}
    promoted = rejected = needs_human = contradicts = errors = 0

    for i, cand in enumerate(candidates, 1):
        # Light rate-limit pacing: 0.5s between triage calls so a burst
        # doesn't overwhelm the claude CLI's backend. With ~60s per call
        # this is negligible overhead but avoids the "all-failed" pattern
        # we saw on large batches.
        if i > 1:
            time.sleep(0.5)

        project = cand.get("project") or ""
        if project not in active_cache:
            active_cache[project] = fetch_active_memories_for_project(args.base_url, project)

        verdict_obj = triage_one(cand, active_cache[project], args.model, DEFAULT_TIMEOUT_S)
        verdict = verdict_obj["verdict"]
        conf = verdict_obj["confidence"]
        reason = verdict_obj["reason"]
        conflicts_with = verdict_obj.get("conflicts_with", "")

        mid = cand["id"]
        label = f"[{i:2d}/{len(candidates)}] {mid[:8]} [{cand['memory_type']}]"

        if verdict == "promote" and conf >= AUTO_PROMOTE_MIN_CONFIDENCE:
            if args.dry_run:
                print(f"  WOULD PROMOTE {label} conf={conf:.2f} {reason}")
                promoted += 1
            else:
                try:
                    api_post(args.base_url, f"/memory/{mid}/promote")
                    print(f"  PROMOTED {label} conf={conf:.2f} {reason}")
                    active_cache[project].append(cand)
                    promoted += 1
                except Exception:
                    # A failed promote is an error, not a promotion
                    errors += 1
        elif verdict == "reject":
            if args.dry_run:
                print(f"  WOULD REJECT {label} conf={conf:.2f} {reason}")
                rejected += 1
            else:
                try:
                    api_post(args.base_url, f"/memory/{mid}/reject")
                    print(f"  REJECTED {label} conf={conf:.2f} {reason}")
                    rejected += 1
                except Exception:
                    errors += 1
        elif verdict == "contradicts":
            # Leave candidate in queue but flag the conflict in content
            # so the wiki/triage shows it. This is conservative: we
            # don't silently merge or reject when sources disagree.
            print(f"  CONTRADICTS {label} vs {conflicts_with[:8] if conflicts_with else '?'} {reason}")
            contradicts += 1
            # Contradictions still need a human, so they count there too
            needs_human += 1
        else:
            print(f"  NEEDS_HUMAN {label} conf={conf:.2f} {reason}")
            needs_human += 1

    print(f"\npromoted={promoted} rejected={rejected} needs_human={needs_human} contradicts={contradicts} errors={errors}")


if __name__ == "__main__":
    main()
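A note on parse_verdict for anyone extending it: it tolerates fenced model output, lowercases and validates the verdict against the four-value enum, and clamps out-of-range confidences. A minimal illustration of that contract (hypothetical values, not from a real triage run):

fenced = "```json\n{\"verdict\": \"Promote\", \"confidence\": 1.4, \"reason\": \"new fact\"}\n```"
v = parse_verdict(fenced)
assert v["verdict"] == "promote"   # lowercased, validated against the enum
assert v["confidence"] == 1.0      # clamped into [0.0, 1.0]
assert v["conflicts_with"] == ""   # missing key defaults to empty string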
268
scripts/batch_llm_extract_live.py
Normal file
@@ -0,0 +1,268 @@
"""Host-side LLM batch extraction — HTTP client + shared prompt module.

Fetches interactions from the AtoCore API, runs ``claude -p`` locally
for each, and POSTs candidates back. Uses stdlib + the ``claude`` CLI
on PATH, plus the stdlib-only shared prompt/parser module at
``atocore.memory._llm_prompt`` to eliminate prompt/parser drift
against the in-container extractor (R12).

This is necessary because the ``claude`` CLI is on the Dalidou HOST
but not inside the Docker container, and the host's Python doesn't
have the container's dependencies (pydantic_settings, etc.) — so we
only import the one stdlib-only module, not the full atocore package.
"""

from __future__ import annotations

import argparse
import json
import os
import shutil
import subprocess
import sys
import tempfile
import time
import urllib.error
import urllib.parse
import urllib.request
from datetime import datetime, timezone

# R12: share the prompt + parser with the in-container extractor so
# the two paths can't drift. The imported module is stdlib-only by
# design; see src/atocore/memory/_llm_prompt.py.
_SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
_SRC_DIR = os.path.abspath(os.path.join(_SCRIPT_DIR, "..", "src"))
if _SRC_DIR not in sys.path:
    sys.path.insert(0, _SRC_DIR)

from atocore.memory._llm_prompt import (  # noqa: E402
    MEMORY_TYPES,
    SYSTEM_PROMPT,
    build_user_message,
    normalize_candidate_item,
    parse_llm_json_array,
)

DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))


_sandbox_cwd = None


def get_sandbox_cwd():
    global _sandbox_cwd
    if _sandbox_cwd is None:
        _sandbox_cwd = tempfile.mkdtemp(prefix="ato-llm-extract-")
    return _sandbox_cwd


def api_get(base_url, path, timeout=10):
    req = urllib.request.Request(f"{base_url}{path}")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


def api_post(base_url, path, body, timeout=10):
    data = json.dumps(body).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}{path}", method="POST",
        headers={"Content-Type": "application/json"}, data=data,
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


def get_last_run(base_url):
    try:
        state = api_get(base_url, "/project/state/atocore?category=status")
        for entry in state.get("entries", []):
            if entry.get("key") == "last_extract_batch_run":
                return entry["value"]
    except Exception:
        pass
    return None


def set_last_run(base_url, timestamp):
    try:
        api_post(base_url, "/project/state", {
            "project": "atocore", "category": "status",
            "key": "last_extract_batch_run", "value": timestamp,
            "source": "batch_llm_extract_live.py",
        })
    except Exception:
        pass


_known_projects: set[str] = set()


def _load_known_projects(base_url):
    """Fetch registered project IDs from the API for R9 validation."""
    global _known_projects
    try:
        data = api_get(base_url, "/projects")
        _known_projects = {p["id"] for p in data.get("projects", [])}
        for p in data.get("projects", []):
            for alias in p.get("aliases", []):
                _known_projects.add(alias)
    except Exception:
        pass


def extract_one(prompt, response, project, model, timeout_s):
    """Run claude -p on one interaction, return parsed candidates."""
    if not shutil.which("claude"):
        return [], "claude_cli_missing"

    user_message = build_user_message(prompt, response, project)

    args = [
        "claude", "-p",
        "--model", model,
        "--append-system-prompt", SYSTEM_PROMPT,
        "--disable-slash-commands",
        user_message,
    ]

    # Retry with exponential backoff on transient failures (rate limits etc)
    last_error = ""
    for attempt in range(3):
        if attempt > 0:
            time.sleep(2 ** attempt)  # 2s, 4s
        try:
            completed = subprocess.run(
                args, capture_output=True, text=True,
                timeout=timeout_s, cwd=get_sandbox_cwd(),
                encoding="utf-8", errors="replace",
            )
        except subprocess.TimeoutExpired:
            last_error = "timeout"
            continue
        except Exception as exc:
            last_error = f"subprocess_error: {exc}"
            continue

        if completed.returncode == 0:
            raw = (completed.stdout or "").strip()
            return parse_candidates(raw, project), ""

        # Capture stderr for diagnostics (truncate to 200 chars)
        stderr = (completed.stderr or "").strip()[:200]
        last_error = f"exit_{completed.returncode}: {stderr}" if stderr else f"exit_{completed.returncode}"

    return [], last_error


def parse_candidates(raw, interaction_project):
    """Parse model JSON output into candidate dicts.

    Stripping + per-item normalization come from the shared
    ``_llm_prompt`` module. Host-side project attribution: interaction
    scope wins, otherwise keep the model's tag (the API's own R9
    registry-check will happen server-side in the container on write;
    here we preserve the signal instead of dropping it).
    """
    results = []
    for item in parse_llm_json_array(raw):
        normalized = normalize_candidate_item(item)
        if normalized is None:
            continue
        project = interaction_project or normalized["project"] or ""
        results.append({
            "memory_type": normalized["type"],
            "content": normalized["content"],
            "project": project,
            "confidence": normalized["confidence"],
        })
    return results


def main():
    parser = argparse.ArgumentParser(description="Host-side LLM batch extraction")
    parser.add_argument("--base-url", default=DEFAULT_BASE_URL)
    parser.add_argument("--limit", type=int, default=50)
    parser.add_argument("--since", default=None)
    parser.add_argument("--model", default=DEFAULT_MODEL)
    args = parser.parse_args()

    _load_known_projects(args.base_url)
    since = args.since or get_last_run(args.base_url)
    print(f"since={since or '(first run)'} limit={args.limit} model={args.model} known_projects={len(_known_projects)}")

    params = [f"limit={args.limit}"]
    if since:
        params.append(f"since={urllib.parse.quote(since)}")
    listing = api_get(args.base_url, f"/interactions?{'&'.join(params)}")
    interaction_summaries = listing.get("interactions", [])
    print(f"listed {len(interaction_summaries)} interactions")

    processed = 0
    total_candidates = 0
    total_persisted = 0
    errors = 0

    for ix, summary in enumerate(interaction_summaries):
        resp_chars = summary.get("response_chars", 0) or 0
        if resp_chars < 50:
            continue
        # Light pacing between calls to avoid bursting the claude CLI
        if ix > 0:
            time.sleep(0.5)
        iid = summary["id"]
        try:
            raw = api_get(
                args.base_url,
                f"/interactions/{urllib.parse.quote(iid, safe='')}",
            )
        except Exception as exc:
            print(f"  ! {iid[:8]}: fetch failed: {exc}", file=sys.stderr)
            errors += 1
            continue
        response_text = raw.get("response", "") or ""
        if not response_text.strip() or len(response_text) < 50:
            continue

        candidates, error = extract_one(
            prompt=raw.get("prompt", "") or "",
            response=response_text,
            project=raw.get("project", "") or "",
            model=args.model,
            timeout_s=DEFAULT_TIMEOUT_S,
        )

        if error:
            print(f"  ! {raw['id'][:8]}: {error}", file=sys.stderr)
            errors += 1
            continue

        processed += 1
        total_candidates += len(candidates)

        for c in candidates:
            try:
                api_post(args.base_url, "/memory", {
                    "memory_type": c["memory_type"],
                    "content": c["content"],
                    "project": c["project"],
                    "confidence": c["confidence"],
                    "status": "candidate",
                })
                total_persisted += 1
            except urllib.error.HTTPError as exc:
                if exc.code != 400:
                    errors += 1
            except Exception:
                errors += 1

    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    set_last_run(args.base_url, now)

    print(f"processed={processed} candidates={total_candidates} persisted={total_persisted} errors={errors}")


if __name__ == "__main__":
    main()
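The attribution line in parse_candidates is the part reviewers keep asking about, so here it is restated standalone: interaction scope wins, then the model's tag, then unscoped. A sketch of the same or-chain, with invented project IDs:

def attribute_project(interaction_project, model_project):
    # Mirrors parse_candidates: interaction scope wins, then model tag, then "".
    return interaction_project or model_project or ""

assert attribute_project("p06-polisher", "p99-wrong") == "p06-polisher"
assert attribute_project("", "p05-interferometer") == "p05-interferometer"
assert attribute_project("", "") == ""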
188
scripts/bootstrap_entities.py
Normal file
@@ -0,0 +1,188 @@
"""Bootstrap engineering entities from existing project knowledge.

One-shot script that seeds the entity/relationship graph from what
AtoCore already knows via memories, project state, and vault docs.
Safe to re-run — uses name+project dedup.

Usage:

    python3 scripts/bootstrap_entities.py --base-url http://localhost:8100
"""

from __future__ import annotations

import argparse
import json
import os
import urllib.request

DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100")


def post(base_url, path, body):
    data = json.dumps(body).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}{path}", method="POST",
        headers={"Content-Type": "application/json"}, data=data,
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except Exception as e:
        return {"error": str(e)}


def entity(base_url, etype, name, project="", desc="", props=None):
    result = post(base_url, "/entities", {
        "entity_type": etype, "name": name, "project": project,
        "description": desc, "properties": props or {},
    })
    eid = result.get("id", "")
    status = "+" if eid else "skip"
    print(f"  {status} [{etype}] {name}")
    return eid


def rel(base_url, src, tgt, rtype):
    if not src or not tgt:
        return
    post(base_url, "/relationships", {
        "source_entity_id": src, "target_entity_id": tgt,
        "relationship_type": rtype,
    })
    print(f"    -> {rtype}")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base-url", default=DEFAULT_BASE_URL)
    args = parser.parse_args()
    b = args.base_url

    print("=== P04 GigaBIT M1 ===")
    p04 = entity(b, "project", "GigaBIT M1", "p04-gigabit",
                 "1.2m primary mirror for stratospheric balloon telescope")

    p04_m1 = entity(b, "system", "M1 Mirror Assembly", "p04-gigabit",
                    "Primary mirror blank + support system + reference frame")
    rel(b, p04, p04_m1, "contains")

    p04_vs = entity(b, "subsystem", "Vertical Support", "p04-gigabit",
                    "18-point whiffletree axial support from below")
    p04_ls = entity(b, "subsystem", "Lateral Support", "p04-gigabit",
                    "Circumferential constraint system with GF-PTFE pads")
    p04_rf = entity(b, "subsystem", "Reference Frame", "p04-gigabit",
                    "Structural mounting interface between mirror and OTA")
    p04_blank = entity(b, "component", "M1 Blank", "p04-gigabit",
                       "1.2m Zerodur aspheric blank from Schott",
                       {"material": "Zerodur", "diameter_m": 1.2, "focal_ratio": "F/1.2"})
    rel(b, p04_m1, p04_vs, "contains")
    rel(b, p04_m1, p04_ls, "contains")
    rel(b, p04_m1, p04_rf, "contains")
    rel(b, p04_m1, p04_blank, "contains")

    p04_zerodur = entity(b, "material", "Zerodur", "p04-gigabit",
                         "Glass-ceramic with near-zero CTE for mirror blanks")
    p04_ptfe = entity(b, "material", "GF-PTFE", "p04-gigabit",
                      "Glass-filled PTFE for thermal stability on lateral pads")
    rel(b, p04_blank, p04_zerodur, "uses_material")
    rel(b, p04_ls, p04_ptfe, "uses_material")

    p04_optb = entity(b, "decision", "Option B Conical Back", "p04-gigabit",
                      "Selected mirror architecture: conical-back lightweighting")
    rel(b, p04_optb, p04_blank, "affected_by_decision")

    p04_wfe = entity(b, "requirement", "WFE < 15nm RMS filtered", "p04-gigabit",
                     "Filtered mechanical wavefront error below 15 nm across 20-60 deg elevation")
    p04_mass = entity(b, "requirement", "Mass < 103.5 kg", "p04-gigabit",
                      "Total mirror assembly mass constraint")
    rel(b, p04_m1, p04_wfe, "constrained_by")
    rel(b, p04_m1, p04_mass, "constrained_by")

    print("\n=== P05 Interferometer ===")
    p05 = entity(b, "project", "Interferometer System", "p05-interferometer",
                 "Metrology system for GigaBIT M1 figuring")

    p05_rig = entity(b, "system", "Test Rig", "p05-interferometer",
                     "Folded-beam interferometric test setup for M1 measurement")
    rel(b, p05, p05_rig, "contains")

    p05_ifm = entity(b, "component", "Interferometer", "p05-interferometer",
                     "Fixed horizontal Twyman-Green dynamic interferometer")
    p05_fold = entity(b, "component", "Fold Mirror", "p05-interferometer",
                      "45-degree beam redirect, <= lambda/20 surface quality")
    p05_cgh = entity(b, "component", "CGH Null Corrector", "p05-interferometer",
                     "6-inch transmission CGH for F/1.2 asphere null test",
                     {"diameter": "6 inch", "substrate": "fused silica", "error_budget_nm": 5.5})
    p05_tilt = entity(b, "subsystem", "Tilting Platform", "p05-interferometer",
                      "Mirror tilting platform, co-tilts with interferometer")
    rel(b, p05_rig, p05_ifm, "contains")
    rel(b, p05_rig, p05_fold, "contains")
    rel(b, p05_rig, p05_cgh, "contains")
    rel(b, p05_rig, p05_tilt, "contains")
    rel(b, p05_ifm, p05_fold, "interfaces_with")
    rel(b, p05_cgh, p05_tilt, "interfaces_with")

    p05_vendor_dec = entity(b, "decision", "Vendor Path: Twyman-Green preferred", "p05-interferometer",
                            "4D technical lead but cost-challenged; Zygo Verifire SV at 55K is value path")
    p05_vendor_zygo = entity(b, "vendor", "Zygo / AMETEK", "p05-interferometer",
                             "Certified used Verifire SV, 55K, Nabeel Sufi contact")
    p05_vendor_4d = entity(b, "vendor", "4D Technology", "p05-interferometer",
                           "PC6110/PC4030, above budget but strongest technical option")
    p05_vendor_aom = entity(b, "vendor", "AOM (CGH)", "p05-interferometer",
                            "CGH design and fabrication, 28-30K package")
    rel(b, p05_vendor_dec, p05_ifm, "affected_by_decision")

    print("\n=== P06 Polisher ===")
    p06 = entity(b, "project", "Polisher System", "p06-polisher",
                 "Machine overhaul + software suite for optical polishing")

    p06_machine = entity(b, "system", "Polisher Machine", "p06-polisher",
                         "Swing-arm polishing machine with force modulation")
    p06_sw = entity(b, "system", "Software Suite", "p06-polisher",
                    "Three-layer software: polisher-sim, polisher-post, polisher-control")
    rel(b, p06, p06_machine, "contains")
    rel(b, p06, p06_sw, "contains")

    p06_sim = entity(b, "subsystem", "polisher-sim", "p06-polisher",
                     "Digital twin: surface assimilation, removal simulation, planning")
    p06_post = entity(b, "subsystem", "polisher-post", "p06-polisher",
                      "Bridge: validation, translation, packaging for machine")
    p06_ctrl = entity(b, "subsystem", "polisher-control", "p06-polisher",
                      "Executor: state machine, interlocks, telemetry, run logs")
    rel(b, p06_sw, p06_sim, "contains")
    rel(b, p06_sw, p06_post, "contains")
    rel(b, p06_sw, p06_ctrl, "contains")
    rel(b, p06_sim, p06_post, "interfaces_with")
    rel(b, p06_post, p06_ctrl, "interfaces_with")

    p06_fc = entity(b, "subsystem", "Force Control", "p06-polisher",
                    "Frame-grounded counterweight actuator with cable tension modulation",
                    {"actuator_capacity_N": "150-200", "compliance_spring_Nmm": "3-5"})
    p06_zaxis = entity(b, "component", "Z-Axis", "p06-polisher",
                       "Binary engage/retract mechanism, not continuous position")
    p06_cam = entity(b, "component", "Cam Mechanism", "p06-polisher",
                     "Mechanically set by operator, read by encoders, not actuated")
    rel(b, p06_machine, p06_fc, "contains")
    rel(b, p06_machine, p06_zaxis, "contains")
    rel(b, p06_machine, p06_cam, "contains")

    p06_fw = entity(b, "decision", "Firmware Interface Contract", "p06-polisher",
                    "controller-job.v1 in, run-log.v1 + telemetry out — invariant")
    p06_offline = entity(b, "decision", "Offline-First Design", "p06-polisher",
                         "Machine works fully offline; network is for remote access only")
    p06_usb = entity(b, "decision", "USB SSD Storage", "p06-polisher",
                     "USB SSD mandatory on RPi, not SD card")

    p06_contracts = entity(b, "constraint", "Shared Contracts", "p06-polisher",
                           "Stable IDs, explicit versions, hashable artifacts, planned-vs-executed separation")
    rel(b, p06_sw, p06_contracts, "constrained_by")

    p06_preston = entity(b, "parameter", "Preston Coefficient kp", "p06-polisher",
                         "Calibrated from before/after surface measurements, multi-run inverse-variance weighting")

    print("\nDone.")


if __name__ == "__main__":
    main()
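One design note on the helpers above: entity() returns "" when the create didn't yield an id, and rel() no-ops on empty ids, so a failed create can never produce a dangling edge. Illustration with hypothetical values:

failed_eid = ""          # hypothetical: entity() returned no id
target_eid = "abc123"    # hypothetical existing entity
rel("http://dalidou:8100", failed_eid, target_eid, "contains")  # guard returns early; nothing is POSTed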
1
scripts/eval_data/candidate_queue_2026-04-12.json
Normal file
File diff suppressed because one or more lines are too long
29
scripts/eval_data/candidate_queue_2026-04-12.txt
Normal file
@@ -0,0 +1,29 @@
1. [project ] proj=atocore AtoCore extraction must stay off the hot capture path; batch endpoint only
2. [project ] proj=atocore Auto-promote gate: confidence ≥0.8 AND no duplicate in active memories
3. [project ] proj=atocore AtoCore LLM extraction pipeline deployed on Dalidou host, runs via cron at 03:00 UTC via scripts/batch_llm_extract_live.py
4. [project ] proj=atocore LLM extractor runs host-side (not in container) because claude CLI not available in container environment
5. [project ] proj=atocore Host-side extraction script scripts/batch_llm_extract_live.py uses pure stdlib, no atocore imports for deployment simplicity
6. [project ] proj=atocore POST /admin/extract-batch accepts mode: rule|llm, POST /interactions/{id}/extract now mode-aware
7. [knowledge ] proj=atocore claude CLI 2.0.60 removed --no-session-persistence flag, extraction sessions now persist in claude history
8. [adaptation ] proj=atocore Durable memory extraction candidates must be <200 chars, stand-alone, typed as project|knowledge|preference|adaptation
9. [adaptation ] proj=atocore Memory extraction confidence defaults to 0.5, raise to 0.6 only for unambiguous committed claims
10. [project ] proj=atocore Live Dalidou is on commit 39d73e9, not e2895b5
11. [project ] proj=atocore Live harness is reproducible at 16/18 PASS
12. [project ] proj=atocore Live active memories count is 36
13. [project ] proj=atocore Wave 2 project-state entries on live: p04=5, p05=6, p06=6
14. [project ] proj=atocore R6 is fixed by commit 39d73e9
15. [project ] proj=atocore R9: R6 fix only covers empty project fallback; wrong non-empty model project can still override known interaction scope
16. [project ] proj=atocore R10: Phase 8 is baseline-complete but not primary-complete; OpenClaw client covers narrow read-oriented slice of API
17. [project ] proj=atocore Phase 8 is decent baseline integration milestone but not primary-ready yet
18. [project ] proj=atocore 4-step roadmap complete: extractor → harness → Wave 2 → OpenClaw
19. [project ] proj=atocore Codex audit loop proven across two full round-trips in one session
20. [project ] proj=atocore Session end state: 36 active memories, 17 project-state entries, 16/18 harness, 280 tests, main at 54d84b5
21. [project ] proj=atocore AtoCore extraction stays off the hot capture path; LLM extraction runs as scheduled batch, not inline with POST /interactions.
22. [project ] proj=atocore AtoCore auto-triage trust model: auto-promote only when confidence ≥0.8 AND no duplicate active memory; else needs_human.
23. [project ] proj=atocore Multi-model triage: use different model for triage reviewer than extractor (sonnet for extract)
24. [project ] proj=atocore R9 fix: when interaction has known project, prefer it over model's non-matching project unless model's is registered
25. [project ] proj=atocore R7 ranking fix: add overlap-density as secondary signal (overlap_count / memory_token_count)
26. [project ] proj=atocore Extraction pipeline skips interactions with response_chars < 50 to avoid low-signal content
27. [project ] proj=atocore AtoCore triage uses independent model from extractor (extractor: sonnet, triage: different model or different prompt).
28. [project ] proj=atocore AtoCore ranking scorer adds overlap-density (overlap_count / memory_tokens) as secondary signal to fix short-memory ranking.
29. [project ] proj=atocore AtoCore project trust: when interaction has known project and model returns different project, prefer interaction's project unless
254
scripts/import_openclaw_state.py
Normal file
@@ -0,0 +1,254 @@
"""OpenClaw state importer — one-way pull from clawdbot into AtoCore.

Reads OpenClaw's file continuity layer (SOUL.md, USER.md, MODEL-ROUTING.md,
MEMORY.md, memory/YYYY-MM-DD.md) from the T420 via SSH and imports them
into AtoCore as candidate memories. Hash-based delta detection — only
re-imports files that changed since the last run.

Classification per codex's integration proposal:
- SOUL.md -> identity candidates
- USER.md -> identity + preference candidates
- MODEL-ROUTING.md -> adaptation candidates (routing rules)
- MEMORY.md -> long-term memory candidates (type varies)
- memory/YYYY-MM-DD.md -> episodic memory candidates (daily logs)
- heartbeat-state.json -> skipped (ops metadata only)

All candidates land as status=candidate. Auto-triage filters noise.
This importer is conservative: it doesn't promote directly, it just
feeds signal. The triage pipeline decides what graduates to active.

Usage:
    python3 scripts/import_openclaw_state.py \
        --base-url http://localhost:8100 \
        --openclaw-host papa@192.168.86.39 \
        --openclaw-path /home/papa/openclaw-workspace

Runs nightly via cron (added as Step 2c in cron-backup.sh).
"""

from __future__ import annotations

import argparse
import hashlib
import json
import os
import subprocess
import urllib.error
import urllib.request
from pathlib import Path

DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
DEFAULT_OPENCLAW_HOST = os.environ.get("ATOCORE_OPENCLAW_HOST", "papa@192.168.86.39")
DEFAULT_OPENCLAW_PATH = os.environ.get("ATOCORE_OPENCLAW_PATH", "/home/papa/clawd")

# Files to pull and how to classify them
DURABLE_FILES = [
    ("SOUL.md", "identity"),
    ("USER.md", "identity"),
    ("MODEL-ROUTING.md", "adaptation"),
    ("MEMORY.md", "memory"),  # type parsed from entries
]
DAILY_MEMORY_GLOB = "memory/*.md"
HASH_STATE_KEY = "openclaw_import_hashes"


def api_get(base_url, path):
    try:
        with urllib.request.urlopen(f"{base_url}{path}", timeout=15) as r:
            return json.loads(r.read())
    except Exception:
        return None


def api_post(base_url, path, body):
    data = json.dumps(body).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}{path}", method="POST",
        headers={"Content-Type": "application/json"}, data=data,
    )
    try:
        with urllib.request.urlopen(req, timeout=15) as r:
            return json.loads(r.read())
    except urllib.error.HTTPError as exc:
        if exc.code == 400:
            return {"skipped": True}
        raise


def ssh_cat(host, remote_path):
    """Cat a remote file via SSH. Returns content or None if missing."""
    try:
        result = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", "-o", "BatchMode=yes",
             host, f"cat {remote_path}"],
            capture_output=True, text=True, timeout=30,
            encoding="utf-8", errors="replace",
        )
        if result.returncode == 0:
            return result.stdout
    except Exception:
        pass
    return None


def ssh_ls(host, remote_glob):
    """List files matching a glob on the remote host."""
    try:
        result = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", "-o", "BatchMode=yes",
             host, f"ls -1 {remote_glob} 2>/dev/null"],
            capture_output=True, text=True, timeout=10,
            encoding="utf-8", errors="replace",
        )
        if result.returncode == 0:
            return [line.strip() for line in result.stdout.splitlines() if line.strip()]
    except Exception:
        pass
    return []


def content_hash(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]


def load_hash_state(base_url):
    """Load the hash state from project_state so we know what's changed."""
    state = api_get(base_url, "/project/state/atocore?category=status")
    if not state:
        return {}
    for entry in state.get("entries", []):
        if entry.get("key") == HASH_STATE_KEY:
            try:
                return json.loads(entry["value"])
            except Exception:
                return {}
    return {}


def save_hash_state(base_url, hashes):
    api_post(base_url, "/project/state", {
        "project": "atocore",
        "category": "status",
        "key": HASH_STATE_KEY,
        "value": json.dumps(hashes),
        "source": "import_openclaw_state.py",
    })


def import_file_as_memory(base_url, filename, content, memory_type, source_tag):
    """Import a file's content as a single candidate memory for triage."""
    # Trim to reasonable size — auto-triage can handle long content but
    # we don't want single mega-memories dominating the queue
    trimmed = content[:2000]
    if len(content) > 2000:
        trimmed += f"\n\n[...truncated from {len(content)} chars]"

    body = {
        "memory_type": memory_type,
        "content": f"From OpenClaw/{filename}: {trimmed}",
        "project": "",  # global/identity, not project-scoped
        "confidence": 0.5,
        "status": "candidate",
    }
    return api_post(base_url, "/memory", body)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base-url", default=DEFAULT_BASE_URL)
    parser.add_argument("--openclaw-host", default=DEFAULT_OPENCLAW_HOST)
    parser.add_argument("--openclaw-path", default=DEFAULT_OPENCLAW_PATH)
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args()

    print(f"openclaw_host={args.openclaw_host} openclaw_path={args.openclaw_path}")
    print(f"dry_run={args.dry_run}")

    # Check SSH connectivity first
    test = ssh_cat(args.openclaw_host, f"{args.openclaw_path}/SOUL.md")
    if test is None:
        print("ERROR: cannot reach OpenClaw workspace via SSH or SOUL.md not found")
        print("Check: ssh key installed? path correct? workspace exists?")
        return 1

    hashes = load_hash_state(args.base_url)
    imported = skipped = errors = 0

    # 1. Durable files
    for filename, mem_type in DURABLE_FILES:
        remote = f"{args.openclaw_path}/{filename}"
        content = ssh_cat(args.openclaw_host, remote)
        if content is None or not content.strip():
            print(f"  - {filename}: not found or empty")
            continue

        h = content_hash(content)
        if hashes.get(filename) == h:
            print(f"  = {filename}: unchanged (hash {h})")
            skipped += 1
            continue

        print(f"  + {filename}: changed (hash {h}, {len(content)}ch)")
        if not args.dry_run:
            try:
                result = import_file_as_memory(
                    args.base_url, filename, content, mem_type,
                    source_tag="openclaw-durable",
                )
                if result.get("skipped"):
                    print("    (duplicate content, skipped)")
                else:
                    print(f"    -> candidate {result.get('id', '?')[:8]}")
                    imported += 1
                hashes[filename] = h
            except Exception as e:
                print(f"    ! error: {e}")
                errors += 1

    # 2. Daily memory logs (memory/YYYY-MM-DD.md)
    daily_glob = f"{args.openclaw_path}/{DAILY_MEMORY_GLOB}"
    daily_files = ssh_ls(args.openclaw_host, daily_glob)
    print(f"\ndaily memory files: {len(daily_files)}")

    # Only process the most recent 7 daily files to avoid flooding
    for remote_path in sorted(daily_files)[-7:]:
        filename = Path(remote_path).name
        content = ssh_cat(args.openclaw_host, remote_path)
        if content is None or not content.strip():
            continue

        h = content_hash(content)
        key = f"daily/{filename}"
        if hashes.get(key) == h:
            print(f"  = {filename}: unchanged")
            skipped += 1
            continue

        print(f"  + {filename}: changed ({len(content)}ch)")
        if not args.dry_run:
            try:
                result = import_file_as_memory(
                    args.base_url, filename, content, "episodic",
                    source_tag="openclaw-daily",
                )
                if not result.get("skipped"):
                    print(f"    -> candidate {result.get('id', '?')[:8]}")
                    imported += 1
                hashes[key] = h
            except Exception as e:
                print(f"    ! error: {e}")
                errors += 1

    # Save hash state
    if not args.dry_run and imported > 0:
        save_hash_state(args.base_url, hashes)

    print(f"\nimported={imported} skipped={skipped} errors={errors}")
    print("Candidates queued — auto-triage will filter them on next run.")


if __name__ == "__main__":
    raise SystemExit(main() or 0)
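The delta detection above is the entire sync protocol: one 16-hex-char SHA-256 prefix per file, stored in project_state, compared on the next run. A compressed restatement of that decision logic (standalone, with made-up contents):

import hashlib

def short_hash(text):
    # Same scheme as content_hash: first 16 hex chars of SHA-256.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

hashes = {"SOUL.md": short_hash("old contents")}  # hypothetical prior state
current = "new contents"
if hashes.get("SOUL.md") != short_hash(current):
    hashes["SOUL.md"] = short_hash(current)       # changed -> import + record new hash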
170
scripts/lint_knowledge_base.py
Normal file
@@ -0,0 +1,170 @@
"""Weekly lint pass — health check for the AtoCore knowledge base.

Inspired by Karpathy's LLM Wiki pattern (the 'lint' operation).
Checks for orphans, stale claims, contradictions, and gaps.
Outputs a report that can be posted to the wiki as needs_review.

Usage:
    python3 scripts/lint_knowledge_base.py --base-url http://dalidou:8100

Run weekly via cron, or on-demand when the knowledge base feels stale.
"""

from __future__ import annotations

import argparse
import json
import os
import urllib.request
from datetime import datetime, timezone, timedelta

DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
ORPHAN_AGE_DAYS = 14


def api_get(base_url: str, path: str):
    with urllib.request.urlopen(f"{base_url}{path}", timeout=15) as r:
        return json.loads(r.read())


def parse_ts(ts: str) -> datetime | None:
    if not ts:
        return None
    try:
        return datetime.strptime(ts[:19], "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    except Exception:
        return None


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base-url", default=DEFAULT_BASE_URL)
    args = parser.parse_args()
    b = args.base_url
    now = datetime.now(timezone.utc)
    orphan_threshold = now - timedelta(days=ORPHAN_AGE_DAYS)

    print(f"=== AtoCore Lint — {now.strftime('%Y-%m-%d %H:%M UTC')} ===\n")

    findings = {
        "orphan_memories": [],
        "stale_candidates": [],
        "unused_entities": [],
        "empty_state_projects": [],
        "unregistered_projects": [],
    }

    # 1. Orphan memories: active but never reinforced after N days
    memories = api_get(b, "/memory?active_only=true&limit=500").get("memories", [])
    for m in memories:
        updated = parse_ts(m.get("updated_at", ""))
        if m.get("reference_count", 0) == 0 and updated and updated < orphan_threshold:
            findings["orphan_memories"].append({
                "id": m["id"],
                "type": m["memory_type"],
                "project": m.get("project") or "(none)",
                "age_days": (now - updated).days,
                "content": m["content"][:120],
            })

    # 2. Stale candidates: been in queue > 7 days without triage
    candidates = api_get(b, "/memory?status=candidate&limit=500").get("memories", [])
    stale_threshold = now - timedelta(days=7)
    for c in candidates:
        updated = parse_ts(c.get("updated_at", ""))
        if updated and updated < stale_threshold:
            findings["stale_candidates"].append({
                "id": c["id"],
                "age_days": (now - updated).days,
                "content": c["content"][:120],
            })

    # 3. Unused entities: no relationships in either direction
    entities = api_get(b, "/entities?limit=500").get("entities", [])
    for e in entities:
        try:
            detail = api_get(b, f"/entities/{e['id']}")
            if not detail.get("relationships"):
                findings["unused_entities"].append({
                    "id": e["id"],
                    "type": e["entity_type"],
                    "name": e["name"],
                    "project": e.get("project") or "(none)",
                })
        except Exception:
            pass

    # 4. Registered projects with no state entries
    projects = []  # initialized here so section 5 can't NameError if /projects fails
    try:
        projects = api_get(b, "/projects").get("projects", [])
        for p in projects:
            state = api_get(b, f"/project/state/{p['id']}").get("entries", [])
            if not state:
                findings["empty_state_projects"].append(p["id"])
    except Exception:
        pass

    # 5. Memories tagged to unregistered projects (auto-detection candidates)
    registered_ids = {p["id"] for p in projects} | {
        a for p in projects for a in p.get("aliases", [])
    }
    all_mems = api_get(b, "/memory?limit=500").get("memories", [])
    for m in all_mems:
        proj = m.get("project", "")
        if proj and proj not in registered_ids and proj != "(none)":
            if proj not in findings["unregistered_projects"]:
                findings["unregistered_projects"].append(proj)

    # Print report
    print(f"## Orphan memories (active, no reinforcement, >{ORPHAN_AGE_DAYS} days old)")
    if findings["orphan_memories"]:
        print(f"  Found: {len(findings['orphan_memories'])}")
        for o in findings["orphan_memories"][:10]:
            print(f"  - [{o['type']}] {o['project']} ({o['age_days']}d): {o['content']}")
    else:
        print("  (none)")

    print("\n## Stale candidates (>7 days in queue)")
    if findings["stale_candidates"]:
        print(f"  Found: {len(findings['stale_candidates'])}")
        for s in findings["stale_candidates"][:10]:
            print(f"  - ({s['age_days']}d): {s['content']}")
    else:
        print("  (none)")

    print("\n## Unused entities (no relationships)")
    if findings["unused_entities"]:
        print(f"  Found: {len(findings['unused_entities'])}")
        for u in findings["unused_entities"][:10]:
            print(f"  - [{u['type']}] {u['project']}: {u['name']}")
    else:
        print("  (none)")

    print("\n## Empty-state projects")
    if findings["empty_state_projects"]:
        print(f"  Found: {len(findings['empty_state_projects'])}")
        for p in findings["empty_state_projects"]:
            print(f"  - {p}")
    else:
        print("  (none)")

    print("\n## Unregistered projects detected in memories")
    if findings["unregistered_projects"]:
        print(f"  Found: {len(findings['unregistered_projects'])}")
        print("  These were auto-detected by extraction — consider registering them:")
        for p in findings["unregistered_projects"]:
            print(f"  - {p}")
    else:
        print("  (none)")

    total_findings = sum(
        len(v) if isinstance(v, list) else 0 for v in findings.values()
    )
    print(f"\n=== Total findings: {total_findings} ===")

    # Return exit code based on findings count (for CI)
    return 0 if total_findings == 0 else 1


if __name__ == "__main__":
    raise SystemExit(main())
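Since main() exits 1 whenever there are findings, callers should read the exit code as "review needed", not as a failure. A hypothetical wrapper making that explicit:

import subprocess

rc = subprocess.run(["python3", "scripts/lint_knowledge_base.py"]).returncode
print("knowledge base clean" if rc == 0 else "lint findings -- check the report")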
@@ -218,8 +218,8 @@
         "Tailscale"
       ],
       "expect_absent": [
-        "GigaBIT"
+        "[Source: p04-gigabit/"
       ],
-      "notes": "New p06 memory: Tailscale mesh for RPi remote access"
+      "notes": "New p06 memory: Tailscale mesh for RPi remote access. Cross-project guard is a source-path check, not a word blacklist: the polisher ARCHITECTURE.md legitimately mentions the GigaBIT M1 mirror (it is what the polisher is built for), so testing for absence of that word produces false positives. The real invariant is that no p04 source chunks are retrieved into p06 context."
     }
   ]

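The corrected invariant lends itself to a direct check: scan the retrieved chunks' source paths instead of their words. A minimal sketch of that guard (field names assumed; adjust to the harness's actual chunk schema):

def violates_cross_project_guard(retrieved_chunks, forbidden_prefix="p04-gigabit/"):
    # Source-path check, not a word blacklist: a p06 doc may mention "GigaBIT",
    # but no retrieved chunk may originate from a p04 source file.
    return any(c.get("source", "").startswith(forbidden_prefix) for c in retrieved_chunks)

assert not violates_cross_project_guard([{"source": "p06-polisher/ARCHITECTURE.md"}])
assert violates_cross_project_guard([{"source": "p04-gigabit/CDR.md"}])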
159
scripts/seed_project_state.py
Normal file
@@ -0,0 +1,159 @@
#!/usr/bin/env python3
"""Seed Trusted Project State entries for all active projects.

Populates the project_state table with curated decisions, requirements,
facts, contacts, and milestones so context packs have real content
in the highest-trust tier.

Usage:
    python3 scripts/seed_project_state.py --base-url http://dalidou:8100
    python3 scripts/seed_project_state.py --base-url http://dalidou:8100 --dry-run
"""

from __future__ import annotations

import argparse
import json
import sys
import urllib.request

# Each entry: (project, category, key, value, source)
SEED_ENTRIES: list[tuple[str, str, str, str, str]] = [
    # ---- p04-gigabit (GigaBIT M1 1.2m Primary Mirror) ----
    ("p04-gigabit", "fact", "mirror-spec",
     "1.2m borosilicate primary mirror for GigaBIT telescope. F/1.5, lightweight isogrid back structure.",
     "CDR docs + vault"),
    ("p04-gigabit", "decision", "back-structure",
     "Option B selected: conical isogrid back structure with variable rib density. Chosen over flat-back for stiffness-to-weight ratio.",
     "CDR 2026-01"),
    ("p04-gigabit", "decision", "polishing-vendor",
     "ABB Space (formerly INO) selected as polishing vendor. Contract includes computer-controlled polishing (CCP) and ion beam figuring (IBF).",
     "Entente de service 2026-01"),
    ("p04-gigabit", "requirement", "surface-quality",
     "Surface figure accuracy: < 25nm RMS after final figuring. Microroughness: < 2nm RMS.",
     "CDR requirements"),
    ("p04-gigabit", "contact", "abb-space",
     "ABB Space (INO), Quebec City. Primary contact for mirror polishing, CCP, and IBF. Project lead: coordinating FDR deliverables.",
     "vendor records"),
    ("p04-gigabit", "milestone", "fdr",
     "Final Design Review (FDR) in preparation. Deliverables include interface drawings, thermal analysis, and updated error budget.",
     "project timeline"),

    # ---- p05-interferometer (Fullum Interferometer) ----
    ("p05-interferometer", "fact", "system-overview",
     "Custom Fizeau interferometer for in-situ metrology of large optics. Designed for the Fullum observatory polishing facility.",
     "vault docs"),
    ("p05-interferometer", "decision", "cgh-design",
     "Computer-generated hologram (CGH) selected for null testing of the 1.2m mirror. Vendor: Diffraction International.",
     "vendor correspondence"),
    ("p05-interferometer", "requirement", "measurement-accuracy",
     "Measurement accuracy target: lambda/20 (< 30nm PV) for surface figure verification.",
     "system requirements"),
    ("p05-interferometer", "fact", "laser-source",
     "HeNe laser source at 632.8nm. Beam expansion to cover full 1.2m aperture via diverger + CGH.",
     "optical design docs"),
    ("p05-interferometer", "contact", "diffraction-intl",
     "Diffraction International: CGH vendor. Fabricates the computer-generated hologram for null testing.",
     "vendor records"),

    # ---- p06-polisher (Polisher Suite / P11-Polisher-Fullum) ----
    ("p06-polisher", "fact", "suite-overview",
     "Integrated CNC polishing suite for the Fullum observatory. Includes 3-axis polishing machine, metrology integration, and real-time process control.",
     "vault docs"),
    ("p06-polisher", "decision", "control-architecture",
     "Beckhoff TwinCAT 3 selected for real-time motion control. EtherCAT fieldbus for servo drives and I/O.",
     "architecture docs"),
    ("p06-polisher", "decision", "firmware-split",
     "Firmware split into safety layer (PLC-level interlocks) and application layer (trajectory generation, adaptive dwell-time).",
     "architecture docs"),
    ("p06-polisher", "requirement", "axis-travel",
     "Z-axis: 200mm travel for tool engagement. X/Y: covers 1.2m mirror diameter plus overshoot margin.",
     "mechanical requirements"),
    ("p06-polisher", "fact", "telemetry",
     "Real-time telemetry via MQTT. Metrics: spindle RPM, force sensor, temperature probes, position feedback at 1kHz.",
     "control design docs"),
    ("p06-polisher", "contact", "fullum-observatory",
     "Fullum Observatory: site where the polishing suite will be installed. Provides infrastructure (power, vibration isolation, clean environment).",
     "project records"),

    # ---- atomizer-v2 ----
    ("atomizer-v2", "fact", "product-overview",
     "Atomizer V2: internal project management and multi-agent orchestration platform. War-room based task coordination.",
     "repo docs"),
    ("atomizer-v2", "decision", "projects-first-architecture",
     "Migration to projects-first architecture: each project is a workspace with its own agents, tasks, and knowledge.",
     "war-room-migration-plan-v2.md"),

    # ---- abb-space (P08) ----
    ("abb-space", "fact", "contract-overview",
     "ABB Space mirror polishing contract. Phase 1: spherical mirror polishing (200mm). Schott Zerodur substrate.",
     "quotes + correspondence"),
    ("abb-space", "contact", "schott",
     "Schott AG: substrate supplier for Zerodur mirror blanks. Quote received for 200mm blank.",
     "vendor records"),

    # ---- atocore ----
    ("atocore", "fact", "architecture",
     "AtoCore: runtime memory and knowledge layer. FastAPI + SQLite + ChromaDB. Hosted on Dalidou (Docker). Nightly pipeline: backup, extract, triage, synthesis.",
     "codebase"),
    ("atocore", "decision", "no-api-keys",
     "No API keys allowed in AtoCore. LLM-assisted features use OAuth via 'claude -p' CLI or equivalent CLI-authenticated paths.",
     "DEV-LEDGER 2026-04-12"),
    ("atocore", "decision", "storage-separation",
     "Human-readable sources (vault, drive) and machine operational storage (SQLite, ChromaDB) must remain separate. Machine DB is derived state.",
     "AGENTS.md"),
    ("atocore", "decision", "extraction-off-hot-path",
     "Extraction stays off the capture hot path. Batch/manual only. Never block interaction recording with extraction.",
     "DEV-LEDGER 2026-04-11"),
]


def main() -> None:
    parser = argparse.ArgumentParser(description="Seed Trusted Project State")
    parser.add_argument("--base-url", default="http://dalidou:8100")
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args()

    base = args.base_url.rstrip("/")
    created = 0
    skipped = 0
    errors = 0

    for project, category, key, value, source in SEED_ENTRIES:
        if args.dry_run:
            print(f"  [DRY] {project}/{category}/{key}: {value[:60]}...")
            created += 1
            continue

        body = json.dumps({
            "project": project,
            "category": category,
            "key": key,
            "value": value,
            "source": source,
            "confidence": 1.0,
        }).encode()
        req = urllib.request.Request(
            f"{base}/project/state",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        try:
            resp = urllib.request.urlopen(req, timeout=10)
            result = json.loads(resp.read())
            if result.get("created"):
                created += 1
                print(f"  + {project}/{category}/{key}")
            else:
                skipped += 1
                print(f"  = {project}/{category}/{key} (already exists)")
        except Exception as e:
            errors += 1
            print(f"  ! {project}/{category}/{key}: {e}", file=sys.stderr)

    print(f"\nDone: {created} created, {skipped} skipped, {errors} errors")


if __name__ == "__main__":
    main()
168
scripts/synthesize_projects.py
Normal file
@@ -0,0 +1,168 @@
|
||||
"""Weekly project synthesis — LLM-generated 'current state' paragraph per project.
|
||||
|
||||
Reads each registered project's state entries, memories, and entities,
|
||||
asks sonnet for a 3-5 sentence synthesis, and caches it under
|
||||
project_state/status/synthesis_cache. The wiki's project page reads
|
||||
this cached synthesis as the top band.
|
||||
|
||||
Runs weekly via cron (or manually). Cheap — one LLM call per project.
|
||||
|
||||
Usage:
|
||||
python3 scripts/synthesize_projects.py --base-url http://localhost:8100
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import shutil
|
||||
import subprocess
|
||||
import tempfile
|
||||
import urllib.request
|
||||
|
||||
DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
|
||||
DEFAULT_MODEL = os.environ.get("ATOCORE_SYNTHESIS_MODEL", "sonnet")
|
||||
TIMEOUT_S = 60
|
||||
|
||||
SYSTEM_PROMPT = """You are summarizing the current state of an engineering project for a personal context engine called AtoCore.
|
||||
|
||||
You will receive:
|
||||
- Project state entries (decisions, requirements, status)
|
||||
- Active memories tagged to this project
|
||||
- Entity graph (subsystems, components, materials, decisions)
|
||||
|
||||
Write a 3-5 sentence synthesis covering:
|
||||
1. What the project is and its current stage
|
||||
2. The key locked-in decisions and architecture
|
||||
3. What the next focus is
|
||||
|
||||
Rules:
|
||||
- Plain prose, no bullet lists
|
||||
- Factual, grounded in what the data says — don't invent or speculate
|
||||
- Present tense
|
||||
- Under 500 characters total
|
||||
- No markdown formatting, just prose
|
||||
- If the data is sparse, say so honestly ("limited project data available")
|
||||
|
||||
Output ONLY the synthesis paragraph. No preamble, no JSON, no markdown headers."""
|
||||
|
||||
|
||||
_cwd = None
|
||||
|
||||
|
||||
def get_cwd():
|
||||
global _cwd
|
||||
if _cwd is None:
|
||||
_cwd = tempfile.mkdtemp(prefix="ato-synth-")
|
||||
return _cwd
|
||||
|
||||
|
||||
def api_get(base_url, path):
|
||||
with urllib.request.urlopen(f"{base_url}{path}", timeout=15) as r:
|
||||
return json.loads(r.read())
|
||||
|
def api_post(base_url, path, body):
    data = json.dumps(body).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}{path}", method="POST",
        headers={"Content-Type": "application/json"}, data=data,
    )
    with urllib.request.urlopen(req, timeout=15) as r:
        return json.loads(r.read())


def synthesize_project(base_url, project_id, model):
    # Gather context
    state = api_get(base_url, f"/project/state/{project_id}").get("entries", [])
    memories = api_get(base_url, f"/memory?project={project_id}&active_only=true&limit=20").get("memories", [])
    entities = api_get(base_url, f"/entities?project={project_id}&limit=50").get("entities", [])

    if not (state or memories or entities):
        return None

    lines = [f"PROJECT: {project_id}\n"]
    if state:
        lines.append("STATE ENTRIES:")
        for e in state[:15]:
            if e.get("key") == "synthesis_cache":
                continue
            lines.append(f"  [{e['category']}] {e['key']}: {e['value'][:200]}")

    if memories:
        lines.append("\nACTIVE MEMORIES:")
        for m in memories[:10]:
            lines.append(f"  [{m['memory_type']}] {m['content'][:200]}")

    if entities:
        lines.append("\nENTITIES:")
        by_type = {}
        for e in entities:
            by_type.setdefault(e["entity_type"], []).append(e["name"])
        for t, names in by_type.items():
            lines.append(f"  {t}: {', '.join(names[:8])}")

    user_msg = "\n".join(lines) + "\n\nWrite the synthesis paragraph now."

    if not shutil.which("claude"):
        print(f"  ! claude CLI not available, skipping {project_id}")
        return None

    try:
        result = subprocess.run(
            ["claude", "-p", "--model", model,
             "--append-system-prompt", SYSTEM_PROMPT,
             "--disable-slash-commands",
             user_msg],
            capture_output=True, text=True, timeout=TIMEOUT_S,
            cwd=get_cwd(), encoding="utf-8", errors="replace",
        )
    except Exception as e:
        print(f"  ! subprocess failed for {project_id}: {e}")
        return None

    if result.returncode != 0:
        print(f"  ! claude exit {result.returncode} for {project_id}")
        return None

    synthesis = (result.stdout or "").strip()
    if not synthesis or len(synthesis) < 50:
        return None
    return synthesis[:1000]


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base-url", default=DEFAULT_BASE_URL)
    parser.add_argument("--model", default=DEFAULT_MODEL)
    parser.add_argument("--project", default=None, help="single project to synthesize")
    args = parser.parse_args()

    projects = api_get(args.base_url, "/projects").get("projects", [])
    if args.project:
        projects = [p for p in projects if p["id"] == args.project]

    print(f"Synthesizing {len(projects)} project(s) with {args.model}...")

    for p in projects:
        pid = p["id"]
        print(f"\n- {pid}")
        synthesis = synthesize_project(args.base_url, pid, args.model)
        if synthesis:
            print(f"  {synthesis[:200]}...")
            try:
                api_post(args.base_url, "/project/state", {
                    "project": pid,
                    "category": "status",
                    "key": "synthesis_cache",
                    "value": synthesis,
                    "source": "weekly synthesis pass",
                })
                print("  + cached")
            except Exception as e:
                print(f"  ! save failed: {e}")


if __name__ == "__main__":
    main()
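The pass writes its result back under `status / synthesis_cache`, so the quickest way to verify a run is to read that key back through the API. A minimal check, as a sketch only; the base URL and project id below are illustrative assumptions, not part of this diff:

import json
import urllib.request

base = "http://localhost:8100"  # assumed local AtoCore instance
with urllib.request.urlopen(f"{base}/project/state/p06-polisher", timeout=15) as r:
    entries = json.loads(r.read()).get("entries", [])
cached = [e for e in entries if e.get("key") == "synthesis_cache"]
print(cached[0]["value"][:200] if cached else "(no synthesis cached yet)")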
scripts/windows/atocore-backup-pull.ps1 (new file, 87 lines)
@@ -0,0 +1,87 @@
# atocore-backup-pull.ps1
#
# Pull the latest AtoCore backup snapshot from Dalidou to this Windows machine.
# Designed to be run by Windows Task Scheduler. Fail-open by design -- if
# Dalidou is unreachable (laptop on the road, etc.), exit cleanly without error.
#
# Usage (manual test):
#   powershell.exe -ExecutionPolicy Bypass -File atocore-backup-pull.ps1
#
# Scheduled task: see docs/windows-backup-setup.md for Task Scheduler config.

$ErrorActionPreference = "Continue"

# --- Configuration ---
$Remote = "papa@dalidou"
$RemoteSnapshots = "/srv/storage/atocore/backups/snapshots"
$LocalBackupDir = "$env:USERPROFILE\Documents\ATOCore_Backups"
$LogDir = "$LocalBackupDir\_logs"
$ReachabilityTest = 5  # seconds timeout for SSH probe

# --- Setup ---
if (-not (Test-Path $LocalBackupDir)) {
    New-Item -ItemType Directory -Path $LocalBackupDir -Force | Out-Null
}
if (-not (Test-Path $LogDir)) {
    New-Item -ItemType Directory -Path $LogDir -Force | Out-Null
}

$Timestamp = Get-Date -Format "yyyy-MM-dd_HHmmss"
$LogFile = "$LogDir\backup-$Timestamp.log"

function Log($msg) {
    $line = "[{0}] {1}" -f (Get-Date -Format "yyyy-MM-dd HH:mm:ss"), $msg
    Write-Host $line
    Add-Content -Path $LogFile -Value $line
}

Log "=== AtoCore backup pull starting ==="
Log "Remote: $Remote"
Log "Local target: $LocalBackupDir"

# --- Reachability check: fail open if Dalidou is offline ---
Log "Checking Dalidou reachability..."
$probe = & ssh -o ConnectTimeout=$ReachabilityTest -o BatchMode=yes `
    -o StrictHostKeyChecking=accept-new `
    $Remote "echo ok" 2>&1
if ($LASTEXITCODE -ne 0 -or $probe -ne "ok") {
    Log "Dalidou unreachable ($probe) -- fail-open exit"
    exit 0
}
Log "Dalidou reachable."

# --- Pull the entire snapshots directory ---
# Dalidou's retention policy (7 daily + 4 weekly + 6 monthly) already caps
# the snapshot count, so pulling the whole dir is bounded and simple. scp
# will overwrite local files -- we rely on this to pick up new snapshots.
Log "Pulling snapshots via scp..."
$LocalSnapshotsDir = Join-Path $LocalBackupDir "snapshots"
if (-not (Test-Path $LocalSnapshotsDir)) {
    New-Item -ItemType Directory -Path $LocalSnapshotsDir -Force | Out-Null
}

& scp -o BatchMode=yes -r "${Remote}:${RemoteSnapshots}/*" "$LocalSnapshotsDir\" 2>&1 |
    ForEach-Object { Add-Content -Path $LogFile -Value $_ }

if ($LASTEXITCODE -ne 0) {
    Log "scp failed with exit $LASTEXITCODE"
    exit 0  # fail-open
}

# --- Stats ---
$snapshots = Get-ChildItem -Path $LocalSnapshotsDir -Directory |
    Where-Object { $_.Name -match "^\d{8}T\d{6}Z$" } |
    Sort-Object Name -Descending

$totalSize = (Get-ChildItem $LocalSnapshotsDir -Recurse -File | Measure-Object -Property Length -Sum).Sum
$SizeMB = [math]::Round($totalSize / 1MB, 2)
$latest = if ($snapshots.Count -gt 0) { $snapshots[0].Name } else { "(none)" }

Log ("Pulled {0} snapshots successfully (total {1} MB, latest: {2})" -f $snapshots.Count, $SizeMB, $latest)
Log "=== backup complete ==="

# --- Log retention: keep last 30 log files ---
Get-ChildItem -Path $LogDir -Filter "backup-*.log" |
    Sort-Object Name -Descending |
    Select-Object -Skip 30 |
    ForEach-Object { Remove-Item $_.FullName -Force -ErrorAction SilentlyContinue }
@@ -3,6 +3,7 @@
from pathlib import Path

from fastapi import APIRouter, HTTPException
from fastapi.responses import HTMLResponse
from pydantic import BaseModel

import atocore.config as _config
@@ -30,11 +31,33 @@ from atocore.interactions.service import (
    list_interactions,
    record_interaction,
)
from atocore.engineering.mirror import generate_project_overview
from atocore.engineering.wiki import (
    render_entity,
    render_homepage,
    render_project,
    render_search,
)
from atocore.engineering.service import (
    ENTITY_TYPES,
    RELATIONSHIP_TYPES,
    create_entity,
    create_relationship,
    get_entities,
    get_entity,
    get_entity_with_context,
    get_relationships,
)
from atocore.memory.extractor import (
    EXTRACTOR_VERSION,
    MemoryCandidate,
    extract_candidates_from_interaction,
)
from atocore.memory.extractor_llm import (
    LLM_EXTRACTOR_VERSION,
    _cli_available as _llm_cli_available,
    extract_candidates_llm,
)
from atocore.memory.reinforcement import reinforce_from_interaction
from atocore.memory.service import (
    MEMORY_STATUSES,
@@ -69,6 +92,33 @@ router = APIRouter()
log = get_logger("api")


# --- Wiki routes (HTML, served first for clean URLs) ---


@router.get("/wiki", response_class=HTMLResponse)
def wiki_home() -> HTMLResponse:
    return HTMLResponse(content=render_homepage())


@router.get("/wiki/projects/{project_name}", response_class=HTMLResponse)
def wiki_project(project_name: str) -> HTMLResponse:
    from atocore.projects.registry import resolve_project_name as _resolve
    return HTMLResponse(content=render_project(_resolve(project_name)))


@router.get("/wiki/entities/{entity_id}", response_class=HTMLResponse)
def wiki_entity(entity_id: str) -> HTMLResponse:
    html = render_entity(entity_id)
    if html is None:
        raise HTTPException(status_code=404, detail="Entity not found")
    return HTMLResponse(content=html)


@router.get("/wiki/search", response_class=HTMLResponse)
def wiki_search(q: str = "") -> HTMLResponse:
    return HTMLResponse(content=render_search(q))


# --- Request/Response models ---


@@ -580,6 +630,7 @@ def api_reinforce_interaction(interaction_id: str) -> dict:

class InteractionExtractRequest(BaseModel):
    persist: bool = False
    mode: str = "rule"  # "rule" or "llm"


@router.post("/interactions/{interaction_id}/extract")
@@ -601,7 +652,10 @@ def api_extract_from_interaction(
    if interaction is None:
        raise HTTPException(status_code=404, detail=f"Interaction not found: {interaction_id}")
    payload = req or InteractionExtractRequest()
    candidates: list[MemoryCandidate] = extract_candidates_from_interaction(interaction)
    if payload.mode == "llm":
        candidates: list[MemoryCandidate] = extract_candidates_llm(interaction)
    else:
        candidates: list[MemoryCandidate] = extract_candidates_from_interaction(interaction)

    persisted_ids: list[str] = []
    if payload.persist:
@@ -755,6 +809,460 @@ def api_cleanup_backups(req: BackupCleanupRequest | None = None) -> dict:
        raise HTTPException(status_code=500, detail=f"Cleanup failed: {e}")


class ExtractBatchRequest(BaseModel):
    since: str | None = None
    mode: str = "llm"
    limit: int = 50
    persist: bool = True


@router.post("/admin/extract-batch")
def api_extract_batch(req: ExtractBatchRequest | None = None) -> dict:
    """Run batch extraction across recent interactions.

    Fetches interactions since ``since`` (or since the last recorded
    batch run), runs the extractor (rule or LLM) on each, and persists
    any candidates as ``status=candidate``. The last-run timestamp is
    stored in project state under ``atocore / status / last_extract_batch_run``
    so subsequent calls without ``since`` automatically pick up where
    the last run left off.

    This endpoint is the operational home for R1 / R5 — it makes the
    LLM extractor accessible as an API operation rather than a
    script-only eval tool. Still NOT on the capture hot path: callers
    invoke this endpoint explicitly (cron, manual curl, CLI).
    """
    payload = req or ExtractBatchRequest()

    if payload.mode == "llm" and not _llm_cli_available():
        raise HTTPException(
            status_code=503,
            detail=(
                "LLM extraction unavailable in this runtime: the `claude` CLI "
                "is not on PATH. Run host-side via "
                "`scripts/batch_llm_extract_live.py` instead, or call this "
                "endpoint with mode=\"rule\"."
            ),
        )

    since = payload.since

    if not since:
        state_entries = get_state("atocore")
        for entry in state_entries:
            if entry.category == "status" and entry.key == "last_extract_batch_run":
                since = entry.value
                break

    interactions = list_interactions(since=since, limit=min(payload.limit, 200))

    processed = 0
    total_candidates = 0
    total_persisted = 0
    errors: list[dict] = []

    for interaction in interactions:
        if not (interaction.response or interaction.response_summary):
            continue
        try:
            if payload.mode == "llm":
                candidates = extract_candidates_llm(interaction)
            else:
                candidates = extract_candidates_from_interaction(interaction)
        except Exception as exc:
            errors.append({"interaction_id": interaction.id, "error": str(exc)})
            continue

        processed += 1
        total_candidates += len(candidates)

        if payload.persist and candidates:
            for candidate in candidates:
                try:
                    create_memory(
                        memory_type=candidate.memory_type,
                        content=candidate.content,
                        project=candidate.project,
                        confidence=candidate.confidence,
                        status="candidate",
                    )
                    total_persisted += 1
                except ValueError:
                    pass  # duplicate — skip silently

    from datetime import datetime, timezone

    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    try:
        set_state(
            project="atocore",
            category="status",
            key="last_extract_batch_run",
            value=now,
            source="admin/extract-batch endpoint",
        )
    except Exception:
        pass  # best-effort timestamp tracking

    log.info(
        "extract_batch_complete",
        mode=payload.mode,
        processed=processed,
        total_candidates=total_candidates,
        total_persisted=total_persisted,
        errors=len(errors),
    )

    return {
        "processed": processed,
        "total_candidates": total_candidates,
        "total_persisted": total_persisted,
        "mode": payload.mode,
        "persist": payload.persist,
        "since": since or "(first run)",
        "errors": errors,
    }

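# Illustrative caller for the endpoint above (a sketch only; the host, port
# and payload values are assumptions, not part of this diff). Steady-state
# cron runs can omit `since` because the endpoint remembers its last run:
#
#   import json, urllib.request
#
#   body = json.dumps({"mode": "rule", "limit": 100}).encode("utf-8")
#   req = urllib.request.Request(
#       "http://localhost:8100/admin/extract-batch", method="POST",
#       data=body, headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req, timeout=60) as r:
#       print(json.loads(r.read())["total_persisted"])
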
@router.get("/admin/dashboard")
def api_dashboard() -> dict:
    """One-shot system observability dashboard.

    Returns memory counts by type/project/status, project state
    entry counts, interaction volume by client, pipeline health
    (harness, triage stats, last run), and extraction pipeline
    status — everything an operator needs to understand AtoCore's
    health beyond the basic /health endpoint.
    """
    import json as _json
    from collections import Counter
    from datetime import datetime as _dt, timezone as _tz

    all_memories = get_memories(active_only=False, limit=500)
    active = [m for m in all_memories if m.status == "active"]
    candidates = [m for m in all_memories if m.status == "candidate"]

    type_counts = dict(Counter(m.memory_type for m in active))
    project_counts = dict(Counter(m.project or "(none)" for m in active))
    reinforced = [m for m in active if m.reference_count > 0]

    # Interaction stats — total + by_client from DB directly
    interaction_stats: dict = {"most_recent": None, "total": 0, "by_client": {}}
    try:
        from atocore.models.database import get_connection as _gc

        with _gc() as conn:
            row = conn.execute("SELECT count(*) FROM interactions").fetchone()
            interaction_stats["total"] = row[0] if row else 0
            rows = conn.execute(
                "SELECT client, count(*) FROM interactions GROUP BY client"
            ).fetchall()
            interaction_stats["by_client"] = {r[0]: r[1] for r in rows}
            row = conn.execute(
                "SELECT created_at FROM interactions ORDER BY created_at DESC LIMIT 1"
            ).fetchone()
            interaction_stats["most_recent"] = row[0] if row else None
    except Exception:
        interactions = list_interactions(limit=1)
        interaction_stats["most_recent"] = (
            interactions[0].created_at if interactions else None
        )

    # Pipeline health from project state
    pipeline: dict = {}
    extract_state: dict = {}
    try:
        state_entries = get_state("atocore")
        for entry in state_entries:
            if entry.category != "status":
                continue
            if entry.key == "last_extract_batch_run":
                extract_state["last_run"] = entry.value
            elif entry.key == "pipeline_last_run":
                pipeline["last_run"] = entry.value
                try:
                    last = _dt.fromisoformat(entry.value.replace("Z", "+00:00"))
                    delta = _dt.now(_tz.utc) - last
                    pipeline["hours_since_last_run"] = round(
                        delta.total_seconds() / 3600, 1
                    )
                except Exception:
                    pass
            elif entry.key == "pipeline_summary":
                try:
                    pipeline["summary"] = _json.loads(entry.value)
                except Exception:
                    pipeline["summary_raw"] = entry.value
            elif entry.key == "retrieval_harness_result":
                try:
                    pipeline["harness"] = _json.loads(entry.value)
                except Exception:
                    pipeline["harness_raw"] = entry.value
    except Exception:
        pass

    # Project state counts — include all registered projects
    ps_counts = {}
    try:
        from atocore.projects.registry import load_project_registry as _lpr

        for proj in _lpr():
            try:
                entries = get_state(proj.project_id)
                ps_counts[proj.project_id] = len(entries)
            except Exception:
                pass
    except Exception:
        for proj_id in [
            "p04-gigabit", "p05-interferometer", "p06-polisher", "atocore",
        ]:
            try:
                entries = get_state(proj_id)
                ps_counts[proj_id] = len(entries)
            except Exception:
                pass

    return {
        "memories": {
            "active": len(active),
            "candidates": len(candidates),
            "by_type": type_counts,
            "by_project": project_counts,
            "reinforced": len(reinforced),
        },
        "project_state": {
            "counts": ps_counts,
            "total": sum(ps_counts.values()),
        },
        "interactions": interaction_stats,
        "extraction_pipeline": extract_state,
        "pipeline": pipeline,
    }

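# Sketch of a freshness check an operator cron could run against this
# dashboard (the threshold, host and port are assumptions, not part of
# the endpoint contract):
#
#   import json, urllib.request
#
#   with urllib.request.urlopen("http://localhost:8100/admin/dashboard", timeout=15) as r:
#       dash = json.loads(r.read())
#   hours = dash.get("pipeline", {}).get("hours_since_last_run")
#   if hours is None or hours > 26:  # nightly cadence plus slack
#       print(f"WARN: pipeline stale (last run {hours} h ago)")
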
# --- Engineering Knowledge Layer (Layer 2) ---


class EntityCreateRequest(BaseModel):
    entity_type: str
    name: str
    project: str = ""
    description: str = ""
    properties: dict | None = None
    status: str = "active"
    confidence: float = 1.0
    source_refs: list[str] | None = None


class RelationshipCreateRequest(BaseModel):
    source_entity_id: str
    target_entity_id: str
    relationship_type: str
    confidence: float = 1.0
    source_refs: list[str] | None = None


@router.post("/entities")
def api_create_entity(req: EntityCreateRequest) -> dict:
    """Create a new engineering entity."""
    try:
        entity = create_entity(
            entity_type=req.entity_type,
            name=req.name,
            project=req.project,
            description=req.description,
            properties=req.properties,
            status=req.status,
            confidence=req.confidence,
            source_refs=req.source_refs,
        )
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    return {"status": "ok", "id": entity.id, "entity_type": entity.entity_type, "name": entity.name}


@router.get("/entities")
def api_list_entities(
    entity_type: str | None = None,
    project: str | None = None,
    status: str = "active",
    name_contains: str | None = None,
    limit: int = 100,
) -> dict:
    """List engineering entities with optional filters."""
    entities = get_entities(
        entity_type=entity_type,
        project=project,
        status=status,
        name_contains=name_contains,
        limit=limit,
    )
    return {
        "entities": [
            {
                "id": e.id,
                "entity_type": e.entity_type,
                "name": e.name,
                "project": e.project,
                "description": e.description,
                "properties": e.properties,
                "status": e.status,
                "confidence": e.confidence,
            }
            for e in entities
        ],
        "count": len(entities),
    }


@router.get("/entities/{entity_id}")
def api_get_entity(entity_id: str) -> dict:
    """Get an entity with its relationships and related entities."""
    result = get_entity_with_context(entity_id)
    if result is None:
        raise HTTPException(status_code=404, detail=f"Entity not found: {entity_id}")
    entity = result["entity"]
    return {
        "entity": {
            "id": entity.id,
            "entity_type": entity.entity_type,
            "name": entity.name,
            "project": entity.project,
            "description": entity.description,
            "properties": entity.properties,
            "status": entity.status,
            "confidence": entity.confidence,
            "source_refs": entity.source_refs,
            "created_at": entity.created_at,
            "updated_at": entity.updated_at,
        },
        "relationships": [
            {
                "id": r.id,
                "source_entity_id": r.source_entity_id,
                "target_entity_id": r.target_entity_id,
                "relationship_type": r.relationship_type,
                "confidence": r.confidence,
            }
            for r in result["relationships"]
        ],
        "related_entities": {
            eid: {
                "entity_type": e.entity_type,
                "name": e.name,
                "project": e.project,
                "description": e.description[:200],
            }
            for eid, e in result["related_entities"].items()
        },
    }


@router.post("/relationships")
def api_create_relationship(req: RelationshipCreateRequest) -> dict:
    """Create a relationship between two entities."""
    try:
        rel = create_relationship(
            source_entity_id=req.source_entity_id,
            target_entity_id=req.target_entity_id,
            relationship_type=req.relationship_type,
            confidence=req.confidence,
            source_refs=req.source_refs,
        )
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    return {
        "status": "ok",
        "id": rel.id,
        "relationship_type": rel.relationship_type,
    }

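# Sketch of seeding the graph through these two endpoints, reusing the
# api_post helper from the synthesis script earlier in this diff (entity
# names are invented; `base` is an assumed local URL):
#
#   lap = api_post(base, "/entities", {"entity_type": "component",
#                                      "name": "Lap assembly",
#                                      "project": "p06-polisher"})
#   pitch = api_post(base, "/entities", {"entity_type": "material",
#                                        "name": "Pitch",
#                                        "project": "p06-polisher"})
#   api_post(base, "/relationships", {"source_entity_id": lap["id"],
#                                     "target_entity_id": pitch["id"],
#                                     "relationship_type": "uses_material"})
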
@router.get("/projects/{project_name}/mirror.html", response_class=HTMLResponse)
def api_project_mirror_html(project_name: str) -> HTMLResponse:
    """Serve a readable HTML project overview page.

    Open in a browser for a clean, styled project dashboard derived
    from AtoCore's structured data. Source of truth is the database —
    this page is a derived view.
    """
    from atocore.projects.registry import resolve_project_name as _resolve

    canonical = _resolve(project_name)
    try:
        md_content = generate_project_overview(canonical)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Mirror generation failed: {e}")

    import markdown

    html_body = markdown.markdown(md_content, extensions=["tables", "fenced_code"])
    html = _MIRROR_HTML_TEMPLATE.replace("{{title}}", f"{canonical} — AtoCore Mirror")
    html = html.replace("{{body}}", html_body)
    return HTMLResponse(content=html)


_MIRROR_HTML_TEMPLATE = """<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>{{title}}</title>
<style>
:root { --bg: #fafafa; --text: #1a1a2e; --accent: #2563eb; --border: #e2e8f0; --card: #fff; }
@media (prefers-color-scheme: dark) {
  :root { --bg: #0f172a; --text: #e2e8f0; --accent: #60a5fa; --border: #334155; --card: #1e293b; }
}
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
  line-height: 1.7; color: var(--text); background: var(--bg);
  max-width: 800px; margin: 0 auto; padding: 2rem 1.5rem;
}
h1 { font-size: 1.8rem; margin-bottom: 0.5rem; color: var(--accent); }
h2 { font-size: 1.4rem; margin-top: 2.5rem; margin-bottom: 0.8rem; padding-bottom: 0.3rem; border-bottom: 2px solid var(--border); }
h3 { font-size: 1.15rem; margin-top: 1.5rem; margin-bottom: 0.5rem; }
p { margin-bottom: 0.8rem; }
ul { margin-left: 1.5rem; margin-bottom: 1rem; }
li { margin-bottom: 0.4rem; }
li ul { margin-top: 0.3rem; }
strong { color: var(--accent); font-weight: 600; }
em { opacity: 0.7; font-size: 0.9em; }
blockquote {
  background: var(--card); border-left: 4px solid var(--accent);
  padding: 0.8rem 1.2rem; margin: 1rem 0; border-radius: 0 8px 8px 0;
}
hr { border: none; border-top: 1px solid var(--border); margin: 2rem 0; }
code { background: var(--card); padding: 0.15rem 0.4rem; border-radius: 4px; font-size: 0.9em; }
a { color: var(--accent); text-decoration: none; }
a:hover { text-decoration: underline; }
</style>
</head>
<body>
{{body}}
</body>
</html>"""


@router.get("/projects/{project_name}/mirror")
def api_project_mirror(project_name: str) -> dict:
    """Generate a human-readable project overview from structured data.

    Layer 3 of the AtoCore architecture. The mirror is DERIVED from
    entities, project state, and memories — it is not canonical truth.
    Returns markdown that can be rendered, saved to a file, or served
    as a dashboard page.
    """
    from atocore.projects.registry import resolve_project_name as _resolve

    canonical = _resolve(project_name)
    try:
        md_content = generate_project_overview(canonical)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Mirror generation failed: {e}")
    return {"project": canonical, "format": "markdown", "content": md_content}


@router.get("/admin/backup/{stamp}/validate")
def api_validate_backup(stamp: str) -> dict:
    """Validate that a previously created backup is structurally usable."""

@@ -104,6 +104,21 @@ class Settings(BaseSettings):

    @property
    def resolved_project_registry_path(self) -> Path:
        """Path to the project registry JSON file.

        If ``ATOCORE_PROJECT_REGISTRY_DIR`` env var is set, the registry
        lives at ``<that dir>/project-registry.json``. Otherwise falls
        back to the configured ``project_registry_path`` field.

        This lets Docker deployments point at a mounted volume via env
        var without the ephemeral in-image ``/app/config/`` getting
        wiped on every rebuild.
        """
        import os

        registry_dir = os.environ.get("ATOCORE_PROJECT_REGISTRY_DIR", "").strip()
        if registry_dir:
            return Path(registry_dir) / "project-registry.json"
        return self._resolve_path(self.project_registry_path)

    @property
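On a Docker host the override is just an environment variable on the service. A minimal sketch of the resolution rule (the mount path below is an assumption for illustration):

import os
from pathlib import Path

# With the env var set, the registry lives on the mounted volume and
# survives image rebuilds; unset, the configured field wins.
os.environ["ATOCORE_PROJECT_REGISTRY_DIR"] = "/srv/storage/atocore/config"  # assumed mount
registry_dir = os.environ.get("ATOCORE_PROJECT_REGISTRY_DIR", "").strip()
print(Path(registry_dir) / "project-registry.json")  # -> /srv/storage/atocore/config/project-registry.json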
@@ -14,6 +14,7 @@ import atocore.config as _config
from atocore.context.project_state import format_project_state, get_state
from atocore.memory.service import get_memories_for_context
from atocore.observability.logger import get_logger
from atocore.engineering.service import get_entities, get_entity_with_context
from atocore.projects.registry import resolve_project_name
from atocore.retrieval.retriever import ChunkResult, retrieve

@@ -29,13 +30,20 @@ SYSTEM_PREFIX = (
# Budget allocation (per Master Plan section 9):
# identity: 5%, preferences: 5%, project state: 20%, retrieval: 60%+
PROJECT_STATE_BUDGET_RATIO = 0.20
MEMORY_BUDGET_RATIO = 0.10  # 5% identity + 5% preference
MEMORY_BUDGET_RATIO = 0.05  # identity + preference; lowered from 0.10 to avoid squeezing project memories and chunks
# Project-scoped memories (project/knowledge/episodic) are the outlet
# for the Phase 9 reflection loop on the retrieval side. Budget sits
# between identity/preference and retrieved chunks so a reinforced
# memory can actually reach the model.
PROJECT_MEMORY_BUDGET_RATIO = 0.25
PROJECT_MEMORY_TYPES = ["project", "knowledge", "episodic"]
# General domain knowledge — unscoped memories (project="") that surface
# in every context pack regardless of project hint. These are earned
# engineering insights that apply across projects (e.g., "Preston removal
# model breaks down below 5N because the contact assumption fails").
DOMAIN_KNOWLEDGE_BUDGET_RATIO = 0.10
DOMAIN_KNOWLEDGE_TYPES = ["knowledge"]
ENGINEERING_CONTEXT_BUDGET_RATIO = 0.10

# Last built context pack for debug inspection
_last_context_pack: "ContextPack | None" = None
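# Worked split at an assumed budget of 20,000 chars (the actual default
# budget is defined elsewhere in this module and is not shown in this diff):
#   project state        20% -> 4,000
#   identity/preference   5% -> 1,000
#   project memories     25% -> 5,000
#   domain knowledge     10% -> 2,000
#   engineering context  10% -> 2,000
# leaving at least 30% (6,000 chars) for retrieved chunks before the
# per-tier min()/remaining-budget clamps applied in build_context below.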
@@ -59,6 +67,10 @@ class ContextPack:
    memory_chars: int = 0
    project_memory_text: str = ""
    project_memory_chars: int = 0
    domain_knowledge_text: str = ""
    domain_knowledge_chars: int = 0
    engineering_context_text: str = ""
    engineering_context_chars: int = 0
    total_chars: int = 0
    budget: int = 0
    budget_remaining: int = 0
@@ -139,8 +151,46 @@ def build_context(
        query=user_prompt,
    )

    # 2c. Domain knowledge — cross-project earned insight with project=""
    # that surfaces regardless of which project the query is about.
    domain_knowledge_text = ""
    domain_knowledge_chars = 0
    domain_budget = min(
        int(budget * DOMAIN_KNOWLEDGE_BUDGET_RATIO),
        max(budget - project_state_chars - memory_chars - project_memory_chars, 0),
    )
    if domain_budget > 0:
        domain_knowledge_text, domain_knowledge_chars = get_memories_for_context(
            memory_types=DOMAIN_KNOWLEDGE_TYPES,
            project="",
            budget=domain_budget,
            header="--- Domain Knowledge ---",
            footer="--- End Domain Knowledge ---",
            query=user_prompt,
        )

    # 2d. Engineering context — structured entity/relationship data
    # when the query matches a known entity name.
    engineering_context_text = ""
    engineering_context_chars = 0
    if canonical_project:
        eng_budget = min(
            int(budget * ENGINEERING_CONTEXT_BUDGET_RATIO),
            max(budget - project_state_chars - memory_chars
                - project_memory_chars - domain_knowledge_chars, 0),
        )
        if eng_budget > 0:
            engineering_context_text = _build_engineering_context(
                user_prompt, canonical_project, eng_budget,
            )
            engineering_context_chars = len(engineering_context_text)

    # 3. Calculate remaining budget for retrieval
    retrieval_budget = budget - project_state_chars - memory_chars - project_memory_chars
    retrieval_budget = (
        budget - project_state_chars - memory_chars
        - project_memory_chars - domain_knowledge_chars
        - engineering_context_chars
    )

    # 4. Retrieve candidates
    candidates = (
@@ -161,13 +211,16 @@ def build_context(

    # 7. Format full context
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text, selected
        project_state_text, memory_text, project_memory_text,
        domain_knowledge_text, engineering_context_text, selected,
    )
    if len(formatted) > budget:
        formatted, selected = _trim_context_to_budget(
            project_state_text,
            memory_text,
            project_memory_text,
            domain_knowledge_text,
            engineering_context_text,
            selected,
            budget,
        )
@@ -178,6 +231,8 @@ def build_context(
    project_state_chars = len(project_state_text)
    memory_chars = len(memory_text)
    project_memory_chars = len(project_memory_text)
    domain_knowledge_chars = len(domain_knowledge_text)
    engineering_context_chars = len(engineering_context_text)
    retrieval_chars = sum(c.char_count for c in selected)
    total_chars = len(formatted)
    duration_ms = int((time.time() - start) * 1000)
@@ -190,6 +245,10 @@ def build_context(
        memory_chars=memory_chars,
        project_memory_text=project_memory_text,
        project_memory_chars=project_memory_chars,
        domain_knowledge_text=domain_knowledge_text,
        domain_knowledge_chars=domain_knowledge_chars,
        engineering_context_text=engineering_context_text,
        engineering_context_chars=engineering_context_chars,
        total_chars=total_chars,
        budget=budget,
        budget_remaining=budget - total_chars,
@@ -208,6 +267,8 @@ def build_context(
        project_state_chars=project_state_chars,
        memory_chars=memory_chars,
        project_memory_chars=project_memory_chars,
        domain_knowledge_chars=domain_knowledge_chars,
        engineering_context_chars=engineering_context_chars,
        retrieval_chars=retrieval_chars,
        total_chars=total_chars,
        budget_remaining=budget - total_chars,
@@ -288,7 +349,9 @@ def _format_full_context(
    project_state_text: str,
    memory_text: str,
    project_memory_text: str,
    chunks: list[ContextChunk],
    domain_knowledge_text: str,
    engineering_context_text: str = "",
    chunks: list[ContextChunk] | None = None,
) -> str:
    """Format project state + memories + retrieved chunks into full context block."""
    parts = []
@@ -308,7 +371,17 @@ def _format_full_context(
        parts.append(project_memory_text)
        parts.append("")

    # 4. Retrieved chunks (lowest trust)
    # 4. Domain knowledge (cross-project earned insight)
    if domain_knowledge_text:
        parts.append(domain_knowledge_text)
        parts.append("")

    # 5. Engineering context (structured entity/relationship data)
    if engineering_context_text:
        parts.append(engineering_context_text)
        parts.append("")

    # 6. Retrieved chunks (lowest trust)
    if chunks:
        parts.append("--- AtoCore Retrieved Context ---")
        if project_state_text:
@@ -320,7 +393,7 @@ def _format_full_context(
            parts.append(chunk.content)
        parts.append("")
        parts.append("--- End Context ---")
    elif not project_state_text and not memory_text and not project_memory_text:
    elif not project_state_text and not memory_text and not project_memory_text and not domain_knowledge_text and not engineering_context_text:
        parts.append("--- AtoCore Context ---\nNo relevant context found.\n--- End Context ---")

    return "\n".join(parts)
@@ -343,6 +416,7 @@ def _pack_to_dict(pack: ContextPack) -> dict:
        "project_state_chars": pack.project_state_chars,
        "memory_chars": pack.memory_chars,
        "project_memory_chars": pack.project_memory_chars,
        "domain_knowledge_chars": pack.domain_knowledge_chars,
        "chunks_used": len(pack.chunks_used),
        "total_chars": pack.total_chars,
        "budget": pack.budget,
@@ -351,6 +425,8 @@ def _pack_to_dict(pack: ContextPack) -> dict:
        "has_project_state": bool(pack.project_state_text),
        "has_memories": bool(pack.memory_text),
        "has_project_memories": bool(pack.project_memory_text),
        "has_domain_knowledge": bool(pack.domain_knowledge_text),
        "has_engineering_context": bool(pack.engineering_context_text),
        "chunks": [
            {
                "source_file": c.source_file,
@@ -364,6 +440,83 @@ def _pack_to_dict(pack: ContextPack) -> dict:
    }


def _build_engineering_context(
    query: str,
    project: str,
    budget: int,
) -> str:
    """Find entities matching the query and format their context.

    Uses simple word-overlap matching between query tokens and entity
    names to find relevant entities, then formats the top match with
    its relationships as a compact text band.
    """
    if budget < 100:
        return ""

    from atocore.memory.reinforcement import _normalize, _tokenize

    query_tokens = _tokenize(_normalize(query))
    if not query_tokens:
        return ""

    try:
        entities = get_entities(project=project, limit=100)
    except Exception:
        return ""

    if not entities:
        return ""

    scored: list[tuple[int, "Entity"]] = []
    for ent in entities:
        name_tokens = _tokenize(_normalize(ent.name))
        desc_tokens = _tokenize(_normalize(ent.description))
        overlap = len(query_tokens & (name_tokens | desc_tokens))
        if overlap > 0:
            scored.append((overlap, ent))

    if not scored:
        return ""

    scored.sort(key=lambda t: t[0], reverse=True)
    best_entity = scored[0][1]

    try:
        ctx = get_entity_with_context(best_entity.id)
    except Exception:
        return ""

    if ctx is None:
        return ""

    lines = ["--- Engineering Context ---"]
    lines.append(f"[{best_entity.entity_type}] {best_entity.name}")
    if best_entity.description:
        lines.append(f"  {best_entity.description[:150]}")

    for rel in ctx["relationships"][:8]:
        other_id = (
            rel.target_entity_id
            if rel.source_entity_id == best_entity.id
            else rel.source_entity_id
        )
        other = ctx["related_entities"].get(other_id)
        if other:
            direction = "->" if rel.source_entity_id == best_entity.id else "<-"
            lines.append(
                f"  {direction} {rel.relationship_type} [{other.entity_type}] {other.name}"
            )

    lines.append("--- End Engineering Context ---")
    text = "\n".join(lines)

    if len(text) > budget:
        text = text[:budget - 3].rstrip() + "..."

    return text

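# Toy walk-through of the overlap scoring above (entity names invented;
# real tokenization comes from atocore.memory.reinforcement, approximated
# here with lower().split()):
#
#   query = "what bearing does the lap assembly use"
#   q = set(query.lower().split())
#   name = set("Lap assembly".lower().split())
#   desc = set("rotating lap with air bearing spindle".lower().split())
#   len(q & (name | desc))  # -> 3 ("lap", "assembly", "bearing")
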
def _truncate_text_block(text: str, budget: int) -> tuple[str, int]:
    """Trim a formatted text block so trusted tiers cannot exceed the total budget."""
    if budget <= 0 or not text:
@@ -381,44 +534,66 @@ def _trim_context_to_budget(
    project_state_text: str,
    memory_text: str,
    project_memory_text: str,
    domain_knowledge_text: str,
    engineering_context_text: str,
    chunks: list[ContextChunk],
    budget: int,
) -> tuple[str, list[ContextChunk]]:
    """Trim retrieval → project memories → identity/preference → project state."""
    """Trim retrieval -> engineering -> domain -> project memories -> identity -> state."""
    kept_chunks = list(chunks)
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text, kept_chunks
        project_state_text, memory_text, project_memory_text,
        domain_knowledge_text, engineering_context_text, kept_chunks,
    )
    while len(formatted) > budget and kept_chunks:
        kept_chunks.pop()
        formatted = _format_full_context(
            project_state_text, memory_text, project_memory_text, kept_chunks
            project_state_text, memory_text, project_memory_text,
            domain_knowledge_text, engineering_context_text, kept_chunks,
        )

    if len(formatted) <= budget:
        return formatted, kept_chunks

    # Drop project memories next (they were the most recently added
    # tier and carry less trust than identity/preference).
    # Drop engineering context first.
    engineering_context_text = ""
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text,
        domain_knowledge_text, engineering_context_text, kept_chunks,
    )
    if len(formatted) <= budget:
        return formatted, kept_chunks

    # Drop domain knowledge next.
    domain_knowledge_text, _ = _truncate_text_block(domain_knowledge_text, 0)
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text,
        domain_knowledge_text, engineering_context_text, kept_chunks,
    )
    if len(formatted) <= budget:
        return formatted, kept_chunks

    project_memory_text, _ = _truncate_text_block(
        project_memory_text,
        max(budget - len(project_state_text) - len(memory_text), 0),
    )
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text, kept_chunks
        project_state_text, memory_text, project_memory_text,
        domain_knowledge_text, engineering_context_text, kept_chunks,
    )
    if len(formatted) <= budget:
        return formatted, kept_chunks

    memory_text, _ = _truncate_text_block(memory_text, max(budget - len(project_state_text), 0))
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text, kept_chunks
        project_state_text, memory_text, project_memory_text,
        domain_knowledge_text, engineering_context_text, kept_chunks,
    )
    if len(formatted) <= budget:
        return formatted, kept_chunks

    project_state_text, _ = _truncate_text_block(project_state_text, budget)
    formatted = _format_full_context(project_state_text, "", "", [])
    formatted = _format_full_context(project_state_text, "", "", "", "", [])
    if len(formatted) > budget:
        formatted, _ = _truncate_text_block(formatted, budget)
    return formatted, []

src/atocore/engineering/__init__.py (new file, 16 lines)
@@ -0,0 +1,16 @@
"""Engineering Knowledge Layer — typed entities and relationships.
|
||||
|
||||
Layer 2 of the AtoCore architecture. Sits on top of the core machine
|
||||
layer (memories, project state, retrieval) and adds structured
|
||||
engineering objects with typed relationships so queries like "what
|
||||
requirements does this component satisfy" can be answered directly
|
||||
instead of relying on flat text search.
|
||||
|
||||
V1 entity types (from docs/architecture/engineering-ontology-v1.md):
|
||||
Component, Subsystem, Requirement, Constraint, Decision, Material,
|
||||
Parameter, Interface
|
||||
|
||||
V1 relationship types:
|
||||
CONTAINS, PART_OF, INTERFACES_WITH, SATISFIES, CONSTRAINED_BY,
|
||||
AFFECTED_BY_DECISION, ANALYZED_BY, VALIDATED_BY, DEPENDS_ON
|
||||
"""
|
||||
src/atocore/engineering/mirror.py (new file, 267 lines)
@@ -0,0 +1,267 @@
"""Human Mirror — derived readable project views from structured data.
|
||||
|
||||
Layer 3 of the AtoCore architecture. Generates human-readable markdown
|
||||
pages from the engineering entity graph, Trusted Project State, and
|
||||
active memories. These pages are DERIVED — they are not canonical
|
||||
machine truth. They are support surfaces for human inspection and
|
||||
audit comfort.
|
||||
|
||||
The mirror never invents content. Every line traces back to an entity,
|
||||
a state entry, or a memory. If the structured data is wrong, the
|
||||
mirror is wrong — fix the source, not the page.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from atocore.context.project_state import get_state
|
||||
from atocore.engineering.service import (
|
||||
get_entities,
|
||||
get_relationships,
|
||||
)
|
||||
from atocore.memory.service import get_memories
|
||||
from atocore.observability.logger import get_logger
|
||||
|
||||
log = get_logger("mirror")
|
||||
|
||||
|
||||
def generate_project_overview(project: str) -> str:
|
||||
"""Generate a full project overview page in markdown."""
|
||||
sections = [
|
||||
_header(project),
|
||||
_synthesis_section(project),
|
||||
_state_section(project),
|
||||
_system_architecture(project),
|
||||
_decisions_section(project),
|
||||
_requirements_section(project),
|
||||
_materials_section(project),
|
||||
_vendors_section(project),
|
||||
_active_memories_section(project),
|
||||
_footer(project),
|
||||
]
|
||||
return "\n\n".join(s for s in sections if s)
|
||||
|
||||
|
||||
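# Shape of a generated page, assembled from the section helpers below
# (the content under each heading is invented for illustration):
#
#   # p06-polisher — Project Overview
#
#   > This page is auto-generated from AtoCore structured data.
#   > It is a **derived view**, not canonical truth. ...
#
#   ## Current State (auto-synthesis)
#
#   > <cached synthesis paragraph, if the weekly pass has run>
#
#   ## Trusted Project State
#   ...
#
#   ## System Architecture
#   ...
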
def _synthesis_section(project: str) -> str:
    """Generate a short LLM synthesis of the current project state.

    Reads the cached synthesis from project_state if available
    (category=status, key=synthesis_cache). If not cached, returns
    a deterministic summary from the existing structured data.
    The actual LLM-generated synthesis is produced by the weekly
    lint/synthesis pass on Dalidou (where claude CLI is available).
    """
    entries = get_state(project)
    cached = ""
    for e in entries:
        if e.category == "status" and e.key == "synthesis_cache":
            cached = e.value
            break

    if cached:
        return f"## Current State (auto-synthesis)\n\n> {cached}"

    # Fallback: deterministic summary from structured data
    stage = ""
    summary = ""
    next_focus = ""
    for e in entries:
        if e.category == "status":
            if e.key == "stage":
                stage = e.value
            elif e.key == "summary":
                summary = e.value
            elif e.key == "next_focus":
                next_focus = e.value

    if not (stage or summary or next_focus):
        return ""

    bits = []
    if summary:
        bits.append(summary)
    if stage:
        bits.append(f"**Stage**: {stage}")
    if next_focus:
        bits.append(f"**Next**: {next_focus}")

    return "## Current State\n\n" + "\n\n".join(bits)


def _header(project: str) -> str:
    return (
        f"# {project} — Project Overview\n\n"
        f"> This page is auto-generated from AtoCore structured data.\n"
        f"> It is a **derived view**, not canonical truth. "
        f"If something is wrong here, fix the source data."
    )


def _state_section(project: str) -> str:
    entries = get_state(project)
    if not entries:
        return ""

    lines = ["## Trusted Project State"]
    by_category: dict[str, list] = {}
    for e in entries:
        by_category.setdefault(e.category.upper(), []).append(e)

    for cat in ["DECISION", "REQUIREMENT", "STATUS", "FACT", "MILESTONE", "CONFIG", "CONTACT"]:
        items = by_category.get(cat, [])
        if not items:
            continue
        lines.append(f"\n### {cat.title()}")
        for item in items:
            value = item.value[:300]
            lines.append(f"- **{item.key}**: {value}")
            if item.source:
                lines.append(f"  *(source: {item.source})*")

    return "\n".join(lines)


def _system_architecture(project: str) -> str:
    systems = get_entities(entity_type="system", project=project)
    subsystems = get_entities(entity_type="subsystem", project=project)
    components = get_entities(entity_type="component", project=project)
    interfaces = get_entities(entity_type="interface", project=project)

    if not systems and not subsystems and not components:
        return ""

    lines = ["## System Architecture"]

    for system in systems:
        lines.append(f"\n### {system.name}")
        if system.description:
            lines.append(system.description)

        rels = get_relationships(system.id, direction="outgoing")
        children = []
        for rel in rels:
            if rel.relationship_type == "contains":
                child = next(
                    (s for s in subsystems + components if s.id == rel.target_entity_id),
                    None,
                )
                if child:
                    children.append(child)

        if children:
            lines.append("\n**Contains:**")
            for child in children:
                desc = f" — {child.description}" if child.description else ""
                lines.append(f"- [{child.entity_type}] **{child.name}**{desc}")

                child_rels = get_relationships(child.id, direction="both")
                for cr in child_rels:
                    if cr.relationship_type in ("uses_material", "interfaces_with", "constrained_by"):
                        other_id = (
                            cr.target_entity_id
                            if cr.source_entity_id == child.id
                            else cr.source_entity_id
                        )
                        other = next(
                            (e for e in get_entities(project=project, limit=200)
                             if e.id == other_id),
                            None,
                        )
                        if other:
                            lines.append(
                                f"  - *{cr.relationship_type}* → "
                                f"[{other.entity_type}] {other.name}"
                            )

    return "\n".join(lines)


def _decisions_section(project: str) -> str:
    decisions = get_entities(entity_type="decision", project=project)
    if not decisions:
        return ""

    lines = ["## Decisions"]
    for d in decisions:
        lines.append(f"\n### {d.name}")
        if d.description:
            lines.append(d.description)
        rels = get_relationships(d.id, direction="outgoing")
        for rel in rels:
            if rel.relationship_type == "affected_by_decision":
                affected = next(
                    (e for e in get_entities(project=project, limit=200)
                     if e.id == rel.target_entity_id),
                    None,
                )
                if affected:
                    lines.append(
                        f"- Affects: [{affected.entity_type}] {affected.name}"
                    )

    return "\n".join(lines)


def _requirements_section(project: str) -> str:
    reqs = get_entities(entity_type="requirement", project=project)
    constraints = get_entities(entity_type="constraint", project=project)
    if not reqs and not constraints:
        return ""

    lines = ["## Requirements & Constraints"]
    for r in reqs:
        lines.append(f"- **{r.name}**: {r.description}" if r.description else f"- **{r.name}**")
    for c in constraints:
        lines.append(f"- [constraint] **{c.name}**: {c.description}" if c.description else f"- [constraint] **{c.name}**")

    return "\n".join(lines)


def _materials_section(project: str) -> str:
    materials = get_entities(entity_type="material", project=project)
    if not materials:
        return ""

    lines = ["## Materials"]
    for m in materials:
        desc = f" — {m.description}" if m.description else ""
        lines.append(f"- **{m.name}**{desc}")

    return "\n".join(lines)


def _vendors_section(project: str) -> str:
    vendors = get_entities(entity_type="vendor", project=project)
    if not vendors:
        return ""

    lines = ["## Vendors"]
    for v in vendors:
        desc = f" — {v.description}" if v.description else ""
        lines.append(f"- **{v.name}**{desc}")

    return "\n".join(lines)


def _active_memories_section(project: str) -> str:
    memories = get_memories(project=project, active_only=True, limit=20)
    if not memories:
        return ""

    lines = ["## Active Memories"]
    for m in memories:
        conf = f" (conf: {m.confidence:.2f})" if m.confidence < 1.0 else ""
        refs = f" | refs: {m.reference_count}" if m.reference_count > 0 else ""
        lines.append(f"- [{m.memory_type}]{conf}{refs} {m.content[:200]}")

    return "\n".join(lines)


def _footer(project: str) -> str:
    from datetime import datetime, timezone

    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    return (
        f"---\n\n"
        f"*Generated by AtoCore Human Mirror at {now}. "
        f"This is a derived view — not canonical truth.*"
    )
src/atocore/engineering/service.py (new file, 317 lines)
@@ -0,0 +1,317 @@
"""Engineering entity and relationship CRUD."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import uuid
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timezone
|
||||
|
||||
from atocore.models.database import get_connection
|
||||
from atocore.observability.logger import get_logger
|
||||
|
||||
log = get_logger("engineering")
|
||||
|
||||
ENTITY_TYPES = [
|
||||
"project",
|
||||
"system",
|
||||
"subsystem",
|
||||
"component",
|
||||
"interface",
|
||||
"requirement",
|
||||
"constraint",
|
||||
"decision",
|
||||
"material",
|
||||
"parameter",
|
||||
"analysis_model",
|
||||
"result",
|
||||
"validation_claim",
|
||||
"vendor",
|
||||
"process",
|
||||
]
|
||||
|
||||
RELATIONSHIP_TYPES = [
|
||||
"contains",
|
||||
"part_of",
|
||||
"interfaces_with",
|
||||
"satisfies",
|
||||
"constrained_by",
|
||||
"affected_by_decision",
|
||||
"analyzed_by",
|
||||
"validated_by",
|
||||
"depends_on",
|
||||
"uses_material",
|
||||
"described_by",
|
||||
"supersedes",
|
||||
]
|
||||
|
||||
ENTITY_STATUSES = ["candidate", "active", "superseded", "invalid"]
|
||||
|
||||
|
||||
@dataclass
|
||||
class Entity:
|
||||
id: str
|
||||
entity_type: str
|
||||
name: str
|
||||
project: str
|
||||
description: str = ""
|
||||
properties: dict = field(default_factory=dict)
|
||||
status: str = "active"
|
||||
confidence: float = 1.0
|
||||
source_refs: list[str] = field(default_factory=list)
|
||||
created_at: str = ""
|
||||
updated_at: str = ""
|
||||
|
||||
|
||||
@dataclass
|
||||
class Relationship:
|
||||
id: str
|
||||
source_entity_id: str
|
||||
target_entity_id: str
|
||||
relationship_type: str
|
||||
confidence: float = 1.0
|
||||
source_refs: list[str] = field(default_factory=list)
|
||||
created_at: str = ""
|
||||
|
||||
|
||||
def init_engineering_schema() -> None:
|
||||
with get_connection() as conn:
|
||||
conn.execute("""
|
||||
CREATE TABLE IF NOT EXISTS entities (
|
||||
id TEXT PRIMARY KEY,
|
||||
entity_type TEXT NOT NULL,
|
||||
name TEXT NOT NULL,
|
||||
project TEXT NOT NULL DEFAULT '',
|
||||
description TEXT NOT NULL DEFAULT '',
|
||||
properties TEXT NOT NULL DEFAULT '{}',
|
||||
status TEXT NOT NULL DEFAULT 'active',
|
||||
confidence REAL NOT NULL DEFAULT 1.0,
|
||||
source_refs TEXT NOT NULL DEFAULT '[]',
|
||||
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
updated_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
|
||||
)
|
||||
""")
|
||||
conn.execute("""
|
||||
CREATE TABLE IF NOT EXISTS relationships (
|
||||
id TEXT PRIMARY KEY,
|
||||
source_entity_id TEXT NOT NULL,
|
||||
target_entity_id TEXT NOT NULL,
|
||||
relationship_type TEXT NOT NULL,
|
||||
confidence REAL NOT NULL DEFAULT 1.0,
|
||||
source_refs TEXT NOT NULL DEFAULT '[]',
|
||||
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||
FOREIGN KEY (source_entity_id) REFERENCES entities(id),
|
||||
FOREIGN KEY (target_entity_id) REFERENCES entities(id)
|
||||
)
|
||||
""")
|
||||
conn.execute("""
|
||||
CREATE INDEX IF NOT EXISTS idx_entities_project
|
||||
ON entities(project)
|
||||
""")
|
||||
conn.execute("""
|
||||
CREATE INDEX IF NOT EXISTS idx_entities_type
|
||||
ON entities(entity_type)
|
||||
""")
|
||||
conn.execute("""
|
||||
CREATE INDEX IF NOT EXISTS idx_relationships_source
|
||||
ON relationships(source_entity_id)
|
||||
""")
|
||||
conn.execute("""
|
||||
CREATE INDEX IF NOT EXISTS idx_relationships_target
|
||||
ON relationships(target_entity_id)
|
||||
""")
|
||||
log.info("engineering_schema_initialized")
|
||||
|
||||
|
||||
def create_entity(
|
||||
entity_type: str,
|
||||
name: str,
|
||||
project: str = "",
|
||||
description: str = "",
|
||||
properties: dict | None = None,
|
||||
status: str = "active",
|
||||
confidence: float = 1.0,
|
||||
source_refs: list[str] | None = None,
|
||||
) -> Entity:
|
||||
if entity_type not in ENTITY_TYPES:
|
||||
raise ValueError(f"Invalid entity type: {entity_type}. Must be one of {ENTITY_TYPES}")
|
||||
if status not in ENTITY_STATUSES:
|
||||
raise ValueError(f"Invalid status: {status}. Must be one of {ENTITY_STATUSES}")
|
||||
    if not name or not name.strip():
        raise ValueError("Entity name must be non-empty")

    entity_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    props = properties or {}
    refs = source_refs or []

    with get_connection() as conn:
        conn.execute(
            """INSERT INTO entities
               (id, entity_type, name, project, description, properties,
                status, confidence, source_refs, created_at, updated_at)
               VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
            (
                entity_id, entity_type, name.strip(), project,
                description, json.dumps(props), status, confidence,
                json.dumps(refs), now, now,
            ),
        )

    log.info("entity_created", entity_id=entity_id, entity_type=entity_type, name=name)
    return Entity(
        id=entity_id, entity_type=entity_type, name=name.strip(),
        project=project, description=description, properties=props,
        status=status, confidence=confidence, source_refs=refs,
        created_at=now, updated_at=now,
    )


def create_relationship(
    source_entity_id: str,
    target_entity_id: str,
    relationship_type: str,
    confidence: float = 1.0,
    source_refs: list[str] | None = None,
) -> Relationship:
    if relationship_type not in RELATIONSHIP_TYPES:
        raise ValueError(f"Invalid relationship type: {relationship_type}")

    rel_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    refs = source_refs or []

    with get_connection() as conn:
        conn.execute(
            """INSERT INTO relationships
               (id, source_entity_id, target_entity_id, relationship_type,
                confidence, source_refs, created_at)
               VALUES (?, ?, ?, ?, ?, ?, ?)""",
            (rel_id, source_entity_id, target_entity_id,
             relationship_type, confidence, json.dumps(refs), now),
        )

    log.info(
        "relationship_created",
        rel_id=rel_id,
        source=source_entity_id,
        target=target_entity_id,
        rel_type=relationship_type,
    )
    return Relationship(
        id=rel_id, source_entity_id=source_entity_id,
        target_entity_id=target_entity_id,
        relationship_type=relationship_type,
        confidence=confidence, source_refs=refs, created_at=now,
    )


def get_entities(
    entity_type: str | None = None,
    project: str | None = None,
    status: str = "active",
    name_contains: str | None = None,
    limit: int = 100,
) -> list[Entity]:
    query = "SELECT * FROM entities WHERE status = ?"
    params: list = [status]

    if entity_type:
        query += " AND entity_type = ?"
        params.append(entity_type)
    if project is not None:
        query += " AND project = ?"
        params.append(project)
    if name_contains:
        query += " AND name LIKE ?"
        params.append(f"%{name_contains}%")

    query += " ORDER BY entity_type, name LIMIT ?"
    params.append(min(limit, 500))

    with get_connection() as conn:
        rows = conn.execute(query, params).fetchall()
    return [_row_to_entity(r) for r in rows]


def get_entity(entity_id: str) -> Entity | None:
    with get_connection() as conn:
        row = conn.execute(
            "SELECT * FROM entities WHERE id = ?", (entity_id,)
        ).fetchone()
    if row is None:
        return None
    return _row_to_entity(row)


def get_relationships(
    entity_id: str,
    direction: str = "both",
) -> list[Relationship]:
    results = []
    with get_connection() as conn:
        if direction in ("outgoing", "both"):
            rows = conn.execute(
                "SELECT * FROM relationships WHERE source_entity_id = ?",
                (entity_id,),
            ).fetchall()
            results.extend(_row_to_relationship(r) for r in rows)
        if direction in ("incoming", "both"):
            rows = conn.execute(
                "SELECT * FROM relationships WHERE target_entity_id = ?",
                (entity_id,),
            ).fetchall()
            results.extend(_row_to_relationship(r) for r in rows)
    return results


def get_entity_with_context(entity_id: str) -> dict | None:
    entity = get_entity(entity_id)
    if entity is None:
        return None
    relationships = get_relationships(entity_id)
    related_ids = set()
    for rel in relationships:
        related_ids.add(rel.source_entity_id)
        related_ids.add(rel.target_entity_id)
    related_ids.discard(entity_id)

    related_entities = {}
    for rid in related_ids:
        e = get_entity(rid)
        if e:
            related_entities[rid] = e

    return {
        "entity": entity,
        "relationships": relationships,
        "related_entities": related_entities,
    }


def _row_to_entity(row) -> Entity:
    return Entity(
        id=row["id"],
        entity_type=row["entity_type"],
        name=row["name"],
        project=row["project"] or "",
        description=row["description"] or "",
        properties=json.loads(row["properties"] or "{}"),
        status=row["status"],
        confidence=row["confidence"],
        source_refs=json.loads(row["source_refs"] or "[]"),
        created_at=row["created_at"] or "",
        updated_at=row["updated_at"] or "",
    )


def _row_to_relationship(row) -> Relationship:
    return Relationship(
        id=row["id"],
        source_entity_id=row["source_entity_id"],
        target_entity_id=row["target_entity_id"],
        relationship_type=row["relationship_type"],
        confidence=row["confidence"],
        source_refs=json.loads(row["source_refs"] or "[]"),
        created_at=row["created_at"] or "",
    )
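For orientation, here is a minimal usage sketch of the service layer above. The entity names are invented, and init_db() plus init_engineering_schema() are assumed to have run already (the tests at the bottom of this diff do the same):

# Hypothetical usage sketch of the Engineering Knowledge Layer above.
# Assumes init_db() and init_engineering_schema() were already called.
cell = create_entity("subsystem", "M1 Cell", project="p04-gigabit")
blank = create_entity(
    entity_type="component",              # must be one of ENTITY_TYPES
    name="M1 Blank",                      # invented example entity
    project="p04-gigabit",
    properties={"material": "Zerodur"},
)
create_relationship(cell.id, blank.id, "contains")  # type must be in RELATIONSHIP_TYPES

ctx = get_entity_with_context(cell.id)
# ctx["relationships"] lists edges in both directions;
# ctx["related_entities"] maps each neighbour's id to its Entity.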
298 src/atocore/engineering/wiki.py Normal file
@@ -0,0 +1,298 @@
"""AtoCore Wiki — navigable HTML pages from structured data.
|
||||
|
||||
A lightweight wiki served directly from the AtoCore API. Every page is
|
||||
generated on-demand from the database so it's always current. Source of
|
||||
truth is the database — the wiki is a derived view.
|
||||
|
||||
Routes:
|
||||
/wiki Homepage with project list + search
|
||||
/wiki/projects/{name} Full project overview
|
||||
/wiki/entities/{id} Entity detail with relationships
|
||||
/wiki/search?q=... Search entities, memories, state
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import markdown as md
|
||||
|
||||
from atocore.context.project_state import get_state
|
||||
from atocore.engineering.service import (
|
||||
get_entities,
|
||||
get_entity,
|
||||
get_entity_with_context,
|
||||
get_relationships,
|
||||
)
|
||||
from atocore.memory.service import get_memories
|
||||
from atocore.projects.registry import load_project_registry
|
||||
|
||||
|
||||
def render_html(title: str, body_html: str, breadcrumbs: list[tuple[str, str]] | None = None) -> str:
|
||||
nav = ""
|
||||
if breadcrumbs:
|
||||
parts = []
|
||||
for label, href in breadcrumbs:
|
||||
if href:
|
||||
parts.append(f'<a href="{href}">{label}</a>')
|
||||
else:
|
||||
parts.append(f"<span>{label}</span>")
|
||||
nav = f'<nav class="breadcrumbs">{" / ".join(parts)}</nav>'
|
||||
|
||||
return _TEMPLATE.replace("{{title}}", title).replace("{{nav}}", nav).replace("{{body}}", body_html)
|
||||
|
||||
|
||||
def render_homepage() -> str:
|
||||
projects = []
|
||||
try:
|
||||
registered = load_project_registry()
|
||||
for p in registered:
|
||||
entity_count = len(get_entities(project=p.project_id, limit=200))
|
||||
memory_count = len(get_memories(project=p.project_id, active_only=True, limit=200))
|
||||
state_entries = get_state(p.project_id)
|
||||
|
||||
# Pull stage/type/client from state entries
|
||||
stage = ""
|
||||
proj_type = ""
|
||||
client = ""
|
||||
for e in state_entries:
|
||||
if e.category == "status":
|
||||
if e.key == "stage":
|
||||
stage = e.value
|
||||
elif e.key == "type":
|
||||
proj_type = e.value
|
||||
elif e.key == "client":
|
||||
client = e.value
|
||||
|
||||
projects.append({
|
||||
"id": p.project_id,
|
||||
"description": p.description,
|
||||
"entities": entity_count,
|
||||
"memories": memory_count,
|
||||
"state": len(state_entries),
|
||||
"stage": stage,
|
||||
"type": proj_type,
|
||||
"client": client,
|
||||
})
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Group by high-level bucket
|
||||
buckets: dict[str, list] = {
|
||||
"Active Contracts": [],
|
||||
"Leads & Prospects": [],
|
||||
"Internal Tools & Infra": [],
|
||||
"Other": [],
|
||||
}
|
||||
for p in projects:
|
||||
t = p["type"].lower()
|
||||
s = p["stage"].lower()
|
||||
if "lead" in t or "lead" in s or "prospect" in s:
|
||||
buckets["Leads & Prospects"].append(p)
|
||||
elif "contract" in t or ("active" in s and "contract" in s):
|
||||
buckets["Active Contracts"].append(p)
|
||||
elif "infra" in t or "tool" in t or "internal" in t:
|
||||
buckets["Internal Tools & Infra"].append(p)
|
||||
else:
|
||||
buckets["Other"].append(p)
|
||||
|
||||
lines = ['<h1>AtoCore Wiki</h1>']
|
||||
lines.append('<form class="search-box" action="/wiki/search" method="get">')
|
||||
lines.append('<input type="text" name="q" placeholder="Search entities, memories, projects..." autofocus>')
|
||||
lines.append('<button type="submit">Search</button>')
|
||||
lines.append('</form>')
|
||||
|
||||
for bucket_name, items in buckets.items():
|
||||
if not items:
|
||||
continue
|
||||
lines.append(f'<h2>{bucket_name}</h2>')
|
||||
lines.append('<div class="card-grid">')
|
||||
for p in items:
|
||||
client_line = f'<div class="client">{p["client"]}</div>' if p["client"] else ''
|
||||
stage_tag = f'<span class="tag">{p["stage"].split(" — ")[0]}</span>' if p["stage"] else ''
|
||||
lines.append(f'<a href="/wiki/projects/{p["id"]}" class="card">')
|
||||
lines.append(f'<h3>{p["id"]} {stage_tag}</h3>')
|
||||
lines.append(client_line)
|
||||
lines.append(f'<p>{p["description"][:140]}</p>')
|
||||
lines.append(f'<div class="stats">{p["entities"]} entities · {p["memories"]} memories · {p["state"]} state</div>')
|
||||
lines.append('</a>')
|
||||
lines.append('</div>')
|
||||
|
||||
# Quick stats
|
||||
all_entities = get_entities(limit=500)
|
||||
all_memories = get_memories(active_only=True, limit=500)
|
||||
lines.append('<h2>System</h2>')
|
||||
lines.append(f'<p>{len(all_entities)} entities · {len(all_memories)} active memories · {len(projects)} projects</p>')
|
||||
lines.append(f'<p><a href="/admin/dashboard">API Dashboard (JSON)</a> · <a href="/health">Health Check</a></p>')
|
||||
|
||||
return render_html("AtoCore Wiki", "\n".join(lines))
|
||||
|
||||
|
||||
def render_project(project: str) -> str:
|
||||
from atocore.engineering.mirror import generate_project_overview
|
||||
|
||||
markdown_content = generate_project_overview(project)
|
||||
# Convert entity names to links
|
||||
entities = get_entities(project=project, limit=200)
|
||||
html_body = md.markdown(markdown_content, extensions=["tables", "fenced_code"])
|
||||
|
||||
for ent in sorted(entities, key=lambda e: len(e.name), reverse=True):
|
||||
linked = f'<a href="/wiki/entities/{ent.id}" title="{ent.entity_type}">{ent.name}</a>'
|
||||
html_body = html_body.replace(f"<strong>{ent.name}</strong>", f"<strong>{linked}</strong>", 1)
|
||||
|
||||
return render_html(
|
||||
f"{project}",
|
||||
html_body,
|
||||
breadcrumbs=[("Wiki", "/wiki"), (project, "")],
|
||||
)
|
||||
|
||||
|
||||
def render_entity(entity_id: str) -> str | None:
|
||||
ctx = get_entity_with_context(entity_id)
|
||||
if ctx is None:
|
||||
return None
|
||||
|
||||
ent = ctx["entity"]
|
||||
lines = [f'<h1>[{ent.entity_type}] {ent.name}</h1>']
|
||||
|
||||
if ent.project:
|
||||
lines.append(f'<p>Project: <a href="/wiki/projects/{ent.project}">{ent.project}</a></p>')
|
||||
if ent.description:
|
||||
lines.append(f'<p>{ent.description}</p>')
|
||||
if ent.properties:
|
||||
lines.append('<h2>Properties</h2><ul>')
|
||||
for k, v in ent.properties.items():
|
||||
lines.append(f'<li><strong>{k}</strong>: {v}</li>')
|
||||
lines.append('</ul>')
|
||||
|
||||
lines.append(f'<p class="meta">confidence: {ent.confidence} · status: {ent.status} · created: {ent.created_at}</p>')
|
||||
|
||||
if ctx["relationships"]:
|
||||
lines.append('<h2>Relationships</h2><ul>')
|
||||
for rel in ctx["relationships"]:
|
||||
other_id = rel.target_entity_id if rel.source_entity_id == entity_id else rel.source_entity_id
|
||||
other = ctx["related_entities"].get(other_id)
|
||||
if other:
|
||||
direction = "\u2192" if rel.source_entity_id == entity_id else "\u2190"
|
||||
lines.append(
|
||||
f'<li>{direction} <em>{rel.relationship_type}</em> '
|
||||
f'<a href="/wiki/entities/{other_id}">[{other.entity_type}] {other.name}</a></li>'
|
||||
)
|
||||
lines.append('</ul>')
|
||||
|
||||
breadcrumbs = [("Wiki", "/wiki")]
|
||||
if ent.project:
|
||||
breadcrumbs.append((ent.project, f"/wiki/projects/{ent.project}"))
|
||||
breadcrumbs.append((ent.name, ""))
|
||||
|
||||
return render_html(ent.name, "\n".join(lines), breadcrumbs=breadcrumbs)
|
||||
|
||||
|
||||
def render_search(query: str) -> str:
|
||||
lines = [f'<h1>Search: "{query}"</h1>']
|
||||
|
||||
# Search entities by name
|
||||
entities = get_entities(name_contains=query, limit=20)
|
||||
if entities:
|
||||
lines.append(f'<h2>Entities ({len(entities)})</h2><ul>')
|
||||
for e in entities:
|
||||
proj = f' <span class="tag">{e.project}</span>' if e.project else ''
|
||||
lines.append(
|
||||
f'<li><a href="/wiki/entities/{e.id}">[{e.entity_type}] {e.name}</a>{proj}'
|
||||
f'{" — " + e.description[:100] if e.description else ""}</li>'
|
||||
)
|
||||
lines.append('</ul>')
|
||||
|
||||
# Search memories
|
||||
all_memories = get_memories(active_only=True, limit=200)
|
||||
query_lower = query.lower()
|
||||
matching_mems = [m for m in all_memories if query_lower in m.content.lower()][:10]
|
||||
if matching_mems:
|
||||
lines.append(f'<h2>Memories ({len(matching_mems)})</h2><ul>')
|
||||
for m in matching_mems:
|
||||
proj = f' <span class="tag">{m.project}</span>' if m.project else ''
|
||||
lines.append(f'<li>[{m.memory_type}]{proj} {m.content[:200]}</li>')
|
||||
lines.append('</ul>')
|
||||
|
||||
if not entities and not matching_mems:
|
||||
lines.append('<p>No results found.</p>')
|
||||
|
||||
lines.append('<form class="search-box" action="/wiki/search" method="get">')
|
||||
lines.append(f'<input type="text" name="q" value="{query}" autofocus>')
|
||||
lines.append('<button type="submit">Search</button>')
|
||||
lines.append('</form>')
|
||||
|
||||
return render_html(
|
||||
f"Search: {query}",
|
||||
"\n".join(lines),
|
||||
breadcrumbs=[("Wiki", "/wiki"), ("Search", "")],
|
||||
)
|
||||
|
||||
|
||||
_TEMPLATE = """<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<title>{{title}} — AtoCore</title>
|
||||
<style>
|
||||
:root { --bg: #fafafa; --text: #1a1a2e; --accent: #2563eb; --border: #e2e8f0; --card: #fff; --hover: #f1f5f9; }
|
||||
@media (prefers-color-scheme: dark) {
|
||||
:root { --bg: #0f172a; --text: #e2e8f0; --accent: #60a5fa; --border: #334155; --card: #1e293b; --hover: #334155; }
|
||||
}
|
||||
* { box-sizing: border-box; margin: 0; padding: 0; }
|
||||
body {
|
||||
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
|
||||
line-height: 1.7; color: var(--text); background: var(--bg);
|
||||
max-width: 800px; margin: 0 auto; padding: 1.5rem;
|
||||
}
|
||||
h1 { font-size: 1.8rem; margin-bottom: 0.5rem; color: var(--accent); }
|
||||
h2 { font-size: 1.3rem; margin-top: 2rem; margin-bottom: 0.6rem; padding-bottom: 0.2rem; border-bottom: 2px solid var(--border); }
|
||||
h3 { font-size: 1.1rem; margin-top: 1.2rem; margin-bottom: 0.4rem; }
|
||||
p { margin-bottom: 0.8rem; }
|
||||
ul { margin-left: 1.5rem; margin-bottom: 1rem; }
|
||||
li { margin-bottom: 0.3rem; }
|
||||
li ul { margin-top: 0.2rem; }
|
||||
strong { color: var(--accent); font-weight: 600; }
|
||||
em { opacity: 0.7; font-size: 0.9em; }
|
||||
a { color: var(--accent); text-decoration: none; }
|
||||
a:hover { text-decoration: underline; }
|
||||
blockquote {
|
||||
background: var(--card); border-left: 4px solid var(--accent);
|
||||
padding: 0.6rem 1rem; margin: 1rem 0; border-radius: 0 6px 6px 0;
|
||||
font-size: 0.9em;
|
||||
}
|
||||
hr { border: none; border-top: 1px solid var(--border); margin: 2rem 0; }
|
||||
.breadcrumbs { margin-bottom: 1.5rem; font-size: 0.85em; opacity: 0.7; }
|
||||
.breadcrumbs a { opacity: 0.8; }
|
||||
.meta { font-size: 0.8em; opacity: 0.5; margin-top: 0.5rem; }
|
||||
.tag { background: var(--accent); color: var(--bg); padding: 0.1rem 0.4rem; border-radius: 3px; font-size: 0.75em; margin-left: 0.3rem; }
|
||||
.search-box { display: flex; gap: 0.5rem; margin: 1.5rem 0; }
|
||||
.search-box input {
|
||||
flex: 1; padding: 0.6rem 1rem; border: 2px solid var(--border);
|
||||
border-radius: 8px; background: var(--card); color: var(--text);
|
||||
font-size: 1rem;
|
||||
}
|
||||
.search-box input:focus { border-color: var(--accent); outline: none; }
|
||||
.search-box button {
|
||||
padding: 0.6rem 1.2rem; background: var(--accent); color: var(--bg);
|
||||
border: none; border-radius: 8px; cursor: pointer; font-size: 1rem;
|
||||
}
|
||||
.card-grid { display: grid; grid-template-columns: 1fr; gap: 1rem; margin: 1rem 0; }
|
||||
@media (min-width: 600px) { .card-grid { grid-template-columns: 1fr 1fr; } }
|
||||
.card {
|
||||
display: block; background: var(--card); border: 1px solid var(--border);
|
||||
border-radius: 10px; padding: 1.2rem; text-decoration: none;
|
||||
color: var(--text); transition: border-color 0.2s;
|
||||
}
|
||||
.card:hover { border-color: var(--accent); background: var(--hover); text-decoration: none; }
|
||||
.card h3 { color: var(--accent); margin: 0 0 0.3rem 0; }
|
||||
.card p { font-size: 0.9em; margin: 0; opacity: 0.8; }
|
||||
.card .stats { font-size: 0.8em; margin-top: 0.5rem; opacity: 0.5; }
|
||||
.card .client { font-size: 0.85em; opacity: 0.65; margin-bottom: 0.3rem; font-style: italic; }
|
||||
.card h3 .tag { font-size: 0.65em; vertical-align: middle; margin-left: 0.4rem; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
{{nav}}
|
||||
{{body}}
|
||||
</body>
|
||||
</html>"""
|
||||
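The route registration itself is not part of this diff (it lives in atocore.api.routes); below is a minimal sketch of how these renderers could be mounted on the FastAPI app, with hypothetical handler names:

# Hypothetical wiring sketch; the real routes live in atocore.api.routes,
# which this diff does not show.
from fastapi import APIRouter
from fastapi.responses import HTMLResponse

wiki_router = APIRouter()

@wiki_router.get("/wiki", response_class=HTMLResponse)
def wiki_home() -> str:
    return render_homepage()

@wiki_router.get("/wiki/projects/{name}", response_class=HTMLResponse)
def wiki_project(name: str) -> str:
    return render_project(name)

@wiki_router.get("/wiki/entities/{entity_id}")
def wiki_entity(entity_id: str) -> HTMLResponse:
    html = render_entity(entity_id)          # returns None for unknown ids
    if html is None:
        return HTMLResponse("Entity not found", status_code=404)
    return HTMLResponse(html)

@wiki_router.get("/wiki/search", response_class=HTMLResponse)
def wiki_search(q: str = "") -> str:
    return render_search(q)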
@@ -8,6 +8,7 @@ from atocore import __version__
from atocore.api.routes import router
import atocore.config as _config
from atocore.context.project_state import init_project_state_schema
from atocore.engineering.service import init_engineering_schema
from atocore.ingestion.pipeline import get_source_status
from atocore.models.database import init_db
from atocore.observability.logger import get_logger, setup_logging
@@ -29,6 +30,7 @@ async def lifespan(app: FastAPI):
    _config.ensure_runtime_dirs()
    init_db()
    init_project_state_schema()
    init_engineering_schema()
    log.info(
        "startup_ready",
        env=_config.settings.env,
183 src/atocore/memory/_llm_prompt.py Normal file
@@ -0,0 +1,183 @@
"""Shared LLM-extractor prompt + parser (stdlib-only).
|
||||
|
||||
R12: single source of truth for the system prompt, memory type set,
|
||||
size limits, and raw JSON parsing used by both paths that shell out
|
||||
to ``claude -p``:
|
||||
|
||||
- ``atocore.memory.extractor_llm`` (in-container extractor, wraps the
|
||||
parsed dicts in ``MemoryCandidate`` with registry-checked project
|
||||
attribution)
|
||||
- ``scripts/batch_llm_extract_live.py`` (host-side extractor, can't
|
||||
import the full atocore package because Dalidou's host Python lacks
|
||||
the container's deps; imports this module via ``sys.path``)
|
||||
|
||||
This module MUST stay stdlib-only. No ``atocore`` imports, no third-
|
||||
party packages. Callers apply their own project-attribution policy on
|
||||
top of the normalized dicts this module emits.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
from typing import Any
|
||||
|
||||
LLM_EXTRACTOR_VERSION = "llm-0.4.0"
|
||||
MAX_RESPONSE_CHARS = 8000
|
||||
MAX_PROMPT_CHARS = 2000
|
||||
MEMORY_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}
|
||||
|
||||
SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.
|
||||
|
||||
AtoCore is the brain for Atomaste's engineering work. Known projects:
|
||||
p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
|
||||
abb-space. Unknown project names — still tag them, the system auto-detects.
|
||||
|
||||
Your job is to emit SIGNALS that matter for future context. Be aggressive:
|
||||
err on the side of capturing useful signal. Triage filters noise downstream.
|
||||
|
||||
WHAT TO EMIT (in order of importance):
|
||||
|
||||
1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
|
||||
- "Schott quote received for ABB-Space" (event + project)
|
||||
- "Cédric asked about p06 firmware timing" (stakeholder event)
|
||||
- "Still waiting on Zygo lead-time from Nabeel" (blocker status)
|
||||
- "p05 vendor decision needs to happen this week" (action item)
|
||||
|
||||
2. DECISIONS AND CHOICES — anything that commits to a direction:
|
||||
- "Going with Zygo Verifire SV for p05" (decision)
|
||||
- "Dropping stitching from primary workflow" (design choice)
|
||||
- "USB SSD mandatory, not SD card" (architectural commitment)
|
||||
|
||||
3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
|
||||
- "CTE gradient dominates WFE at F/1.2" (materials insight)
|
||||
- "Preston model breaks below 5N because contact assumption fails"
|
||||
- "m=1 coma NOT correctable by force modulation" (controls insight)
|
||||
Test: would a competent engineer NEED experience to know this?
|
||||
If it's textbook/google-findable, skip it.
|
||||
|
||||
4. STAKEHOLDER AND VENDOR EVENTS:
|
||||
- "Email sent to Nabeel 2026-04-13 asking for lead time"
|
||||
- "Meeting with Jason on Table 7 next Tuesday"
|
||||
- "Starspec wants updated CAD by Friday"
|
||||
|
||||
5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
|
||||
- "Antoine prefers OAuth over API keys"
|
||||
- "Extraction stays off the capture hot path"
|
||||
|
||||
WHAT TO SKIP:
|
||||
|
||||
- Pure conversational filler ("ok thanks", "let me check")
|
||||
- Instructional help content ("run this command", "here's how to...")
|
||||
- Obvious textbook facts anyone can google in 30 seconds
|
||||
- Session meta-chatter ("let me commit this", "deploy running")
|
||||
- Transient system state snapshots ("36 active memories right now")
|
||||
|
||||
CANDIDATE TYPES — choose the best fit:
|
||||
|
||||
- project — a fact, decision, or event specific to one named project
|
||||
- knowledge — durable engineering insight (use domain, not project)
|
||||
- preference — how Antoine works / wants things done
|
||||
- adaptation — a standing rule or adjustment to behavior
|
||||
- episodic — a stakeholder event or milestone worth remembering
|
||||
|
||||
DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
|
||||
physics, materials, optics, mechanics, manufacturing, metrology,
|
||||
controls, software, math, finance, business
|
||||
|
||||
TRUST HIERARCHY:
|
||||
|
||||
- project-specific: set project to the project id, leave domain empty
|
||||
- domain knowledge: set domain, leave project empty
|
||||
- events/activity: use project, type=project or episodic
|
||||
- one conversation can produce MULTIPLE candidates — emit them all
|
||||
|
||||
OUTPUT RULES:
|
||||
|
||||
- Each candidate content under 250 characters, stands alone
|
||||
- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
|
||||
- Raw JSON array, no prose, no markdown fences
|
||||
- Empty array [] is fine when the conversation has no durable signal
|
||||
|
||||
Each element:
|
||||
{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""
|
||||
|
||||
|
||||
def build_user_message(prompt: str, response: str, project_hint: str) -> str:
|
||||
prompt_excerpt = (prompt or "")[:MAX_PROMPT_CHARS]
|
||||
response_excerpt = (response or "")[:MAX_RESPONSE_CHARS]
|
||||
return (
|
||||
f"PROJECT HINT (may be empty): {project_hint or ''}\n\n"
|
||||
f"USER PROMPT:\n{prompt_excerpt}\n\n"
|
||||
f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
|
||||
"Return the JSON array now."
|
||||
)
|
||||
|
||||
|
||||
def parse_llm_json_array(raw_output: str) -> list[dict[str, Any]]:
|
||||
"""Strip markdown fences / leading prose and return the parsed JSON
|
||||
array as a list of raw dicts. Returns an empty list on any parse
|
||||
failure — callers decide whether to log."""
|
||||
text = (raw_output or "").strip()
|
||||
if text.startswith("```"):
|
||||
text = text.strip("`")
|
||||
nl = text.find("\n")
|
||||
if nl >= 0:
|
||||
text = text[nl + 1:]
|
||||
if text.endswith("```"):
|
||||
text = text[:-3]
|
||||
text = text.strip()
|
||||
|
||||
if not text or text == "[]":
|
||||
return []
|
||||
|
||||
if not text.lstrip().startswith("["):
|
||||
start = text.find("[")
|
||||
end = text.rfind("]")
|
||||
if start >= 0 and end > start:
|
||||
text = text[start:end + 1]
|
||||
|
||||
try:
|
||||
parsed = json.loads(text)
|
||||
except json.JSONDecodeError:
|
||||
return []
|
||||
|
||||
if not isinstance(parsed, list):
|
||||
return []
|
||||
return [item for item in parsed if isinstance(item, dict)]
|
||||
|
||||
|
||||
def normalize_candidate_item(item: dict[str, Any]) -> dict[str, Any] | None:
|
||||
"""Validate and normalize one raw model item into a candidate dict.
|
||||
|
||||
Returns None if the item fails basic validation (unknown type,
|
||||
empty content). Does NOT apply project-attribution policy — that's
|
||||
the caller's job, since the registry-check differs between the
|
||||
in-container path and the host path.
|
||||
|
||||
Output keys: type, content, project (raw model value), domain,
|
||||
confidence.
|
||||
"""
|
||||
mem_type = str(item.get("type") or "").strip().lower()
|
||||
content = str(item.get("content") or "").strip()
|
||||
if mem_type not in MEMORY_TYPES or not content:
|
||||
return None
|
||||
|
||||
model_project = str(item.get("project") or "").strip()
|
||||
domain = str(item.get("domain") or "").strip().lower()
|
||||
|
||||
try:
|
||||
confidence = float(item.get("confidence", 0.5))
|
||||
except (TypeError, ValueError):
|
||||
confidence = 0.5
|
||||
confidence = max(0.0, min(1.0, confidence))
|
||||
|
||||
if domain and not model_project:
|
||||
content = f"[{domain}] {content}"
|
||||
|
||||
return {
|
||||
"type": mem_type,
|
||||
"content": content[:1000],
|
||||
"project": model_project,
|
||||
"domain": domain,
|
||||
"confidence": confidence,
|
||||
}
|
||||
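To see what the two helpers tolerate, here is a small round-trip on an invented model reply that arrives wrapped in a markdown fence:

# Illustration only; the model output below is invented.
raw = '''```json
[{"type": "knowledge", "content": "CTE gradient dominates WFE at F/1.2",
  "domain": "optics", "confidence": "0.7"}]
```'''

items = parse_llm_json_array(raw)         # fence stripped, JSON parsed
cand = normalize_candidate_item(items[0])
# cand["content"] == "[optics] CTE gradient dominates WFE at F/1.2"
#   (domain prefix added because the model's project field is empty)
# cand["confidence"] == 0.7  (string coerced to float, clamped to [0, 1])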
@@ -49,7 +49,6 @@ Implementation notes:

from __future__ import annotations

import json
import os
import shutil
import subprocess
@@ -58,38 +57,21 @@ from dataclasses import dataclass
from functools import lru_cache

from atocore.interactions.service import Interaction
from atocore.memory._llm_prompt import (
    LLM_EXTRACTOR_VERSION,
    SYSTEM_PROMPT as _SYSTEM_PROMPT,
    build_user_message,
    normalize_candidate_item,
    parse_llm_json_array,
)
from atocore.memory.extractor import MemoryCandidate
from atocore.memory.service import MEMORY_TYPES
from atocore.observability.logger import get_logger

log = get_logger("extractor_llm")

LLM_EXTRACTOR_VERSION = "llm-0.2.0"
DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
MAX_RESPONSE_CHARS = 8000
MAX_PROMPT_CHARS = 2000

_SYSTEM_PROMPT = """You extract durable memory candidates from LLM conversation turns for a personal context engine called AtoCore.

Your job is to read one user prompt plus the assistant's response and decide which durable facts, decisions, preferences, architectural rules, or project invariants should be remembered across future sessions.

Rules:

1. Only surface durable claims. Skip transient status ("deploy is still running"), instructional guidance ("here is how to run the command"), troubleshooting tactics, ephemeral recommendations ("merge this PR now"), and session recaps.
2. A candidate is durable when a reader coming back in two weeks would still need to know it. Architectural choices, named rules, ratified decisions, invariants, procurement commitments, and project-level constraints qualify. Conversational fillers and step-by-step instructions do not.
3. Each candidate must stand alone. Rewrite the claim in one sentence under 200 characters with enough context that a reader without the conversation understands it.
4. Each candidate must have a type from this closed set: project, knowledge, preference, adaptation.
5. If the conversation is clearly scoped to a project (p04-gigabit, p05-interferometer, p06-polisher, atocore), set ``project`` to that id. Otherwise leave ``project`` empty.
6. If the response makes no durable claim, return an empty list. It is correct and expected to return [] on most conversational turns.
7. Confidence should be 0.5 by default so human review workload is honest. Raise to 0.6 only when the response states the claim in an unambiguous, committed form (e.g. "the decision is X", "the selected approach is Y", "X is non-negotiable").
8. Output must be a raw JSON array and nothing else. No prose before or after. No markdown fences. No explanations.

Each array element has exactly this shape:

{"type": "project|knowledge|preference|adaptation", "content": "...", "project": "...", "confidence": 0.5}

Return [] when there is nothing to extract."""


@dataclass
@@ -152,13 +134,10 @@ def extract_candidates_llm_verbose(
    if not response_text:
        return LLMExtractionResult(candidates=[], raw_output="", error="empty_response")

    prompt_excerpt = (interaction.prompt or "")[:MAX_PROMPT_CHARS]
    response_excerpt = response_text[:MAX_RESPONSE_CHARS]
    user_message = (
        f"PROJECT HINT (may be empty): {interaction.project or ''}\n\n"
        f"USER PROMPT:\n{prompt_excerpt}\n\n"
        f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
        "Return the JSON array now."
    user_message = build_user_message(
        interaction.prompt or "",
        response_text,
        interaction.project or "",
    )

    args = [
@@ -168,7 +147,6 @@ def extract_candidates_llm_verbose(
        model or DEFAULT_MODEL,
        "--append-system-prompt",
        _SYSTEM_PROMPT,
        "--no-session-persistence",
        "--disable-slash-commands",
        user_message,
    ]
@@ -217,65 +195,59 @@ def extract_candidates_llm_verbose(
def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryCandidate]:
    """Parse the model's JSON output into MemoryCandidate objects.

    Tolerates common model glitches: surrounding whitespace, stray
    markdown fences, leading/trailing prose. Silently drops malformed
    array elements rather than raising.
    Shared stripping + per-item validation live in
    ``atocore.memory._llm_prompt``. This function adds the container-
    only R9 project attribution: registry-check model_project and fall
    back to the interaction scope when set.
    """
    text = raw_output.strip()
    if text.startswith("```"):
        text = text.strip("`")
        first_newline = text.find("\n")
        if first_newline >= 0:
            text = text[first_newline + 1 :]
    if text.endswith("```"):
        text = text[:-3]
    text = text.strip()

    if not text or text == "[]":
        return []

    if not text.lstrip().startswith("["):
        start = text.find("[")
        end = text.rfind("]")
        if start >= 0 and end > start:
            text = text[start : end + 1]

    try:
        parsed = json.loads(text)
    except json.JSONDecodeError as exc:
        log.error("llm_extractor_parse_failed", error=str(exc), raw_prefix=raw_output[:120])
        return []

    if not isinstance(parsed, list):
        return []
    raw_items = parse_llm_json_array(raw_output)
    if not raw_items and raw_output.strip() not in ("", "[]"):
        log.error("llm_extractor_parse_failed", raw_prefix=raw_output[:120])

    results: list[MemoryCandidate] = []
    for item in parsed:
        if not isinstance(item, dict):
    for raw_item in raw_items:
        normalized = normalize_candidate_item(raw_item)
        if normalized is None:
            continue
        mem_type = str(item.get("type") or "").strip().lower()
        content = str(item.get("content") or "").strip()
        project = str(item.get("project") or "").strip()
        if not project and interaction.project:

        model_project = normalized["project"]
        # R9 trust hierarchy: interaction scope wins; else registry-
        # resolve the model's tag; else keep the model's tag so auto-
        # triage can surface unregistered projects.
        if interaction.project:
            project = interaction.project
        confidence_raw = item.get("confidence", 0.5)
        if mem_type not in MEMORY_TYPES:
            continue
        if not content:
            continue
        try:
            confidence = float(confidence_raw)
        except (TypeError, ValueError):
            confidence = 0.5
        confidence = max(0.0, min(1.0, confidence))
        elif model_project:
            try:
                from atocore.projects.registry import (
                    load_project_registry,
                    resolve_project_name,
                )

                registered_ids = {p.project_id for p in load_project_registry()}
                resolved = resolve_project_name(model_project)
                if resolved in registered_ids:
                    project = resolved
                else:
                    project = model_project
                    log.info(
                        "unregistered_project_detected",
                        model_project=model_project,
                        interaction_id=interaction.id,
                    )
            except Exception:
                project = model_project
        else:
            project = ""

        content = normalized["content"]
        results.append(
            MemoryCandidate(
                memory_type=mem_type,
                content=content[:1000],
                memory_type=normalized["type"],
                content=content,
                rule="llm_extraction",
                source_span=content[:200],
                project=project,
                confidence=confidence,
                confidence=normalized["confidence"],
                source_interaction_id=interaction.id,
                extractor_version=LLM_EXTRACTOR_VERSION,
            )
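Condensed, the attribution policy this hunk implements reads as below. This is a simplified restatement (alias resolution via resolve_project_name is omitted), not the shipped code, and the case letters refer to the parser tests further down this diff:

# Simplified restatement of the R9 trust hierarchy (not the shipped code).
def resolve_candidate_project(
    interaction_project: str,
    model_project: str,
    registered_ids: set[str],
) -> str:
    if interaction_project:
        return interaction_project  # cases A, C, E, F: interaction scope always wins
    if model_project and model_project in registered_ids:
        return model_project        # case G: registered model tag accepted
    return model_project            # case B: ""; case D: unregistered tag kept for auto-triage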
@@ -340,6 +340,84 @@ def reinforce_memory(
    return True, old_confidence, new_confidence


def auto_promote_reinforced(
    min_reference_count: int = 3,
    min_confidence: float = 0.7,
    max_age_days: int = 14,
) -> list[str]:
    """Auto-promote candidate memories with strong reinforcement signals.

    Phase 10: memories that have been reinforced by multiple interactions
    graduate from candidate to active without human review. This rewards
    knowledge that the system keeps referencing organically.

    Returns a list of promoted memory IDs.
    """
    from datetime import timedelta

    cutoff = (
        datetime.now(timezone.utc) - timedelta(days=max_age_days)
    ).strftime("%Y-%m-%d %H:%M:%S")
    promoted: list[str] = []
    with get_connection() as conn:
        rows = conn.execute(
            "SELECT id, content, memory_type, project, confidence, "
            "reference_count FROM memories "
            "WHERE status = 'candidate' "
            "AND COALESCE(reference_count, 0) >= ? "
            "AND confidence >= ? "
            "AND last_referenced_at >= ?",
            (min_reference_count, min_confidence, cutoff),
        ).fetchall()

    for row in rows:
        mid = row["id"]
        ok = promote_memory(mid)
        if ok:
            promoted.append(mid)
            log.info(
                "memory_auto_promoted",
                memory_id=mid,
                memory_type=row["memory_type"],
                project=row["project"] or "(global)",
                reference_count=row["reference_count"],
                confidence=round(row["confidence"], 3),
            )
    return promoted


def expire_stale_candidates(
    max_age_days: int = 14,
) -> list[str]:
    """Reject candidate memories that sat in queue too long unreinforced.

    Candidates older than ``max_age_days`` with zero reinforcement are
    auto-rejected to prevent unbounded queue growth. Returns rejected IDs.
    """
    from datetime import timedelta

    cutoff = (
        datetime.now(timezone.utc) - timedelta(days=max_age_days)
    ).strftime("%Y-%m-%d %H:%M:%S")
    expired: list[str] = []
    with get_connection() as conn:
        rows = conn.execute(
            "SELECT id FROM memories "
            "WHERE status = 'candidate' "
            "AND COALESCE(reference_count, 0) = 0 "
            "AND created_at < ?",
            (cutoff,),
        ).fetchall()

    for row in rows:
        mid = row["id"]
        ok = reject_candidate_memory(mid)
        if ok:
            expired.append(mid)
            log.info("memory_expired", memory_id=mid)
    return expired
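Per the nightly pipeline, these two passes run back to back after auto-triage; a minimal sketch of that call site (the wrapper name is hypothetical):

# Hypothetical nightly maintenance step calling the two passes above.
def run_candidate_maintenance() -> dict[str, int]:
    promoted = auto_promote_reinforced()  # defaults: >=3 refs, >=0.7 confidence, 14-day window
    expired = expire_stale_candidates()   # default: 14 days with zero reinforcement
    return {"promoted": len(promoted), "expired": len(expired)}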
def get_memories_for_context(
    memory_types: list[str] | None = None,
    project: str | None = None,
@@ -446,20 +524,27 @@ def _rank_memories_for_query(
) -> list["Memory"]:
    """Rerank a memory list by lexical overlap with a pre-tokenized query.

    Ordering key: (overlap_count DESC, confidence DESC). When a query
    shares no tokens with a memory, overlap is zero and confidence
    acts as the sole tiebreaker — which matches the pre-query
    behaviour and keeps no-query calls stable.
    Primary key: overlap_density (overlap_count / memory_token_count),
    which rewards short focused memories that match the query precisely
    over long overview memories that incidentally share a few tokens.
    Secondary: absolute overlap count. Tertiary: confidence.

    R7 fix: previously overlap_count alone was the primary key, so a
    40-token overview memory with 3 overlapping tokens tied a 5-token
    memory with 3 overlapping tokens, and the overview won on
    confidence. Now the short memory's density (0.6) beats the
    overview's density (0.075).
    """
    from atocore.memory.reinforcement import _normalize, _tokenize

    scored: list[tuple[int, float, Memory]] = []
    scored: list[tuple[float, int, float, Memory]] = []
    for mem in memories:
        mem_tokens = _tokenize(_normalize(mem.content))
        overlap = len(mem_tokens & query_tokens) if mem_tokens else 0
        scored.append((overlap, mem.confidence, mem))
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return [mem for _, _, mem in scored]
        density = overlap / len(mem_tokens) if mem_tokens else 0.0
        scored.append((density, overlap, mem.confidence, mem))
    scored.sort(key=lambda t: (t[0], t[1], t[2]), reverse=True)
    return [mem for _, _, _, mem in scored]


def _row_to_memory(row) -> Memory:
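The docstring's numbers check out as a worked example (values illustrative):

# Worked example of the R7 density fix, using the docstring's numbers.
overlap = 3                # tokens shared with the query, same for both memories
overview_tokens = 40       # long overview memory
focused_tokens = 5         # short focused memory
assert overlap / overview_tokens == 0.075  # old primary key tied both at overlap=3
assert overlap / focused_tokens == 0.6     # density now puts the focused memory first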
118 tests/test_engineering.py Normal file
@@ -0,0 +1,118 @@
"""Tests for the Engineering Knowledge Layer."""
|
||||
|
||||
from atocore.engineering.service import (
|
||||
ENTITY_TYPES,
|
||||
RELATIONSHIP_TYPES,
|
||||
create_entity,
|
||||
create_relationship,
|
||||
get_entities,
|
||||
get_entity,
|
||||
get_entity_with_context,
|
||||
get_relationships,
|
||||
init_engineering_schema,
|
||||
)
|
||||
from atocore.models.database import init_db
|
||||
|
||||
import pytest
|
||||
|
||||
|
||||
def test_create_and_get_entity(tmp_data_dir):
|
||||
init_db()
|
||||
init_engineering_schema()
|
||||
e = create_entity(
|
||||
entity_type="component",
|
||||
name="Pivot Pin",
|
||||
project="p04-gigabit",
|
||||
description="Lateral support pivot pin for M1 assembly",
|
||||
properties={"material": "GF-PTFE", "diameter_mm": 12},
|
||||
)
|
||||
assert e.entity_type == "component"
|
||||
assert e.name == "Pivot Pin"
|
||||
assert e.properties["material"] == "GF-PTFE"
|
||||
|
||||
fetched = get_entity(e.id)
|
||||
assert fetched is not None
|
||||
assert fetched.name == "Pivot Pin"
|
||||
|
||||
|
||||
def test_create_relationship(tmp_data_dir):
|
||||
init_db()
|
||||
init_engineering_schema()
|
||||
subsystem = create_entity("subsystem", "Lateral Support", project="p04-gigabit")
|
||||
component = create_entity("component", "Pivot Pin", project="p04-gigabit")
|
||||
|
||||
rel = create_relationship(
|
||||
source_entity_id=subsystem.id,
|
||||
target_entity_id=component.id,
|
||||
relationship_type="contains",
|
||||
)
|
||||
assert rel.relationship_type == "contains"
|
||||
|
||||
rels = get_relationships(subsystem.id, direction="outgoing")
|
||||
assert len(rels) == 1
|
||||
assert rels[0].target_entity_id == component.id
|
||||
|
||||
|
||||
def test_entity_with_context(tmp_data_dir):
|
||||
init_db()
|
||||
init_engineering_schema()
|
||||
subsystem = create_entity("subsystem", "Lateral Support", project="p04-gigabit")
|
||||
pin = create_entity("component", "Pivot Pin", project="p04-gigabit")
|
||||
pad = create_entity("component", "PTFE Pad", project="p04-gigabit")
|
||||
material = create_entity("material", "GF-PTFE", project="p04-gigabit",
|
||||
description="Glass-filled PTFE for thermal stability")
|
||||
|
||||
create_relationship(subsystem.id, pin.id, "contains")
|
||||
create_relationship(subsystem.id, pad.id, "contains")
|
||||
create_relationship(pad.id, material.id, "uses_material")
|
||||
|
||||
ctx = get_entity_with_context(subsystem.id)
|
||||
assert ctx is not None
|
||||
assert len(ctx["relationships"]) == 2
|
||||
assert pin.id in ctx["related_entities"]
|
||||
assert pad.id in ctx["related_entities"]
|
||||
|
||||
|
||||
def test_filter_entities_by_type_and_project(tmp_data_dir):
|
||||
init_db()
|
||||
init_engineering_schema()
|
||||
create_entity("component", "Pin A", project="p04-gigabit")
|
||||
create_entity("component", "Pin B", project="p04-gigabit")
|
||||
create_entity("material", "Steel", project="p04-gigabit")
|
||||
create_entity("component", "Actuator", project="p06-polisher")
|
||||
|
||||
components = get_entities(entity_type="component", project="p04-gigabit")
|
||||
assert len(components) == 2
|
||||
|
||||
all_p04 = get_entities(project="p04-gigabit")
|
||||
assert len(all_p04) == 3
|
||||
|
||||
polisher = get_entities(project="p06-polisher")
|
||||
assert len(polisher) == 1
|
||||
|
||||
|
||||
def test_invalid_entity_type_raises(tmp_data_dir):
|
||||
init_db()
|
||||
init_engineering_schema()
|
||||
with pytest.raises(ValueError, match="Invalid entity type"):
|
||||
create_entity("spaceship", "Enterprise")
|
||||
|
||||
|
||||
def test_invalid_relationship_type_raises(tmp_data_dir):
|
||||
init_db()
|
||||
init_engineering_schema()
|
||||
a = create_entity("component", "A")
|
||||
b = create_entity("component", "B")
|
||||
with pytest.raises(ValueError, match="Invalid relationship type"):
|
||||
create_relationship(a.id, b.id, "loves")
|
||||
|
||||
|
||||
def test_entity_name_search(tmp_data_dir):
|
||||
init_db()
|
||||
init_engineering_schema()
|
||||
create_entity("component", "Vertical Support Pad")
|
||||
create_entity("component", "Lateral Support Bracket")
|
||||
create_entity("component", "Reference Frame")
|
||||
|
||||
results = get_entities(name_contains="Support")
|
||||
assert len(results) == 2
|
||||
208 tests/test_extraction_pipeline.py Normal file
@@ -0,0 +1,208 @@
"""Integration tests for the extraction + triage pipeline (R8).
|
||||
|
||||
Tests the flow that produced the 41 active memories:
|
||||
LLM extraction → persist as candidate → triage → promote/reject.
|
||||
Uses mocked subprocess to avoid real claude -p calls.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from unittest.mock import patch
|
||||
|
||||
import pytest
|
||||
|
||||
from atocore.memory.extractor_llm import (
|
||||
extract_candidates_llm,
|
||||
extract_candidates_llm_verbose,
|
||||
)
|
||||
from atocore.memory.service import create_memory, get_memories
|
||||
from atocore.models.database import init_db
|
||||
import atocore.memory.extractor_llm as extractor_llm
|
||||
|
||||
|
||||
def _make_interaction(**kw):
|
||||
from atocore.interactions.service import Interaction
|
||||
|
||||
return Interaction(
|
||||
id=kw.get("id", "test-pipe-1"),
|
||||
prompt=kw.get("prompt", "test prompt"),
|
||||
response=kw.get("response", ""),
|
||||
response_summary="",
|
||||
project=kw.get("project", ""),
|
||||
client="test",
|
||||
session_id="",
|
||||
)
|
||||
|
||||
|
||||
class _FakeCompleted:
|
||||
def __init__(self, stdout, returncode=0):
|
||||
self.stdout = stdout
|
||||
self.stderr = ""
|
||||
self.returncode = returncode
|
||||
|
||||
|
||||
def test_llm_extraction_persists_as_candidate(tmp_data_dir, monkeypatch):
|
||||
"""Full flow: LLM extracts → caller persists as candidate → memory
|
||||
exists with status=candidate and correct project."""
|
||||
init_db()
|
||||
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||
monkeypatch.setattr(
|
||||
extractor_llm.subprocess,
|
||||
"run",
|
||||
lambda *a, **kw: _FakeCompleted(
|
||||
'[{"type": "project", "content": "USB SSD is mandatory for RPi storage", "project": "p06-polisher", "confidence": 0.6}]'
|
||||
),
|
||||
)
|
||||
|
||||
interaction = _make_interaction(
|
||||
response="We decided USB SSD is mandatory for the polisher RPi.",
|
||||
project="p06-polisher",
|
||||
)
|
||||
candidates = extract_candidates_llm(interaction)
|
||||
assert len(candidates) == 1
|
||||
assert candidates[0].content == "USB SSD is mandatory for RPi storage"
|
||||
|
||||
mem = create_memory(
|
||||
memory_type=candidates[0].memory_type,
|
||||
content=candidates[0].content,
|
||||
project=candidates[0].project,
|
||||
confidence=candidates[0].confidence,
|
||||
status="candidate",
|
||||
)
|
||||
assert mem.status == "candidate"
|
||||
assert mem.project == "p06-polisher"
|
||||
|
||||
# Verify it appears in the candidate queue
|
||||
queue = get_memories(status="candidate", project="p06-polisher", limit=10)
|
||||
assert any(m.id == mem.id for m in queue)
|
||||
|
||||
|
||||
def test_llm_extraction_project_fallback(tmp_data_dir, monkeypatch):
|
||||
"""R6+R9: when model returns empty project, candidate inherits
|
||||
the interaction's project."""
|
||||
init_db()
|
||||
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||
monkeypatch.setattr(
|
||||
extractor_llm.subprocess,
|
||||
"run",
|
||||
lambda *a, **kw: _FakeCompleted(
|
||||
'[{"type": "knowledge", "content": "machine works offline", "project": "", "confidence": 0.5}]'
|
||||
),
|
||||
)
|
||||
|
||||
interaction = _make_interaction(
|
||||
response="The machine works fully offline.",
|
||||
project="p06-polisher",
|
||||
)
|
||||
candidates = extract_candidates_llm(interaction)
|
||||
assert len(candidates) == 1
|
||||
assert candidates[0].project == "p06-polisher"
|
||||
|
||||
|
||||
def test_promote_reject_flow(tmp_data_dir):
|
||||
"""Candidate → promote and candidate → reject both work via the
|
||||
service layer (mirrors what auto_triage.py does via HTTP)."""
|
||||
from atocore.memory.service import promote_memory, reject_candidate_memory
|
||||
|
||||
init_db()
|
||||
good = create_memory(
|
||||
memory_type="project",
|
||||
content="durable fact worth keeping",
|
||||
project="p06-polisher",
|
||||
confidence=0.5,
|
||||
status="candidate",
|
||||
)
|
||||
bad = create_memory(
|
||||
memory_type="project",
|
||||
content="stale snapshot to reject",
|
||||
project="atocore",
|
||||
confidence=0.5,
|
||||
status="candidate",
|
||||
)
|
||||
|
||||
promote_memory(good.id)
|
||||
reject_candidate_memory(bad.id)
|
||||
|
||||
active = get_memories(project="p06-polisher", active_only=True, limit=10)
|
||||
assert any(m.id == good.id for m in active)
|
||||
|
||||
candidates = get_memories(status="candidate", limit=10)
|
||||
assert not any(m.id == good.id for m in candidates)
|
||||
assert not any(m.id == bad.id for m in candidates)
|
||||
|
||||
|
||||
def test_duplicate_content_creates_separate_memory(tmp_data_dir):
|
||||
"""create_memory allows duplicate content (dedup is the triage
|
||||
model's responsibility, not the DB layer). Both memories exist."""
|
||||
init_db()
|
||||
m1 = create_memory(
|
||||
memory_type="project",
|
||||
content="unique fact about polisher",
|
||||
project="p06-polisher",
|
||||
)
|
||||
m2 = create_memory(
|
||||
memory_type="project",
|
||||
content="unique fact about polisher",
|
||||
project="p06-polisher",
|
||||
status="candidate",
|
||||
)
|
||||
assert m1.id != m2.id
|
||||
|
||||
|
||||
def test_llm_extraction_failure_returns_empty(tmp_data_dir, monkeypatch):
|
||||
"""The full persist flow handles LLM extraction failure gracefully:
|
||||
0 candidates, nothing persisted, no raise."""
|
||||
init_db()
|
||||
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||
monkeypatch.setattr(
|
||||
extractor_llm.subprocess,
|
||||
"run",
|
||||
lambda *a, **kw: _FakeCompleted("", returncode=1),
|
||||
)
|
||||
|
||||
interaction = _make_interaction(
|
||||
response="some real content that the LLM fails on",
|
||||
project="p06-polisher",
|
||||
)
|
||||
result = extract_candidates_llm_verbose(interaction)
|
||||
assert result.candidates == []
|
||||
assert "exit_1" in result.error
|
||||
|
||||
# Nothing in the candidate queue
|
||||
queue = get_memories(status="candidate", limit=10)
|
||||
assert len(queue) == 0
|
||||
|
||||
|
||||
def test_extract_batch_api_503_when_cli_missing(tmp_data_dir, monkeypatch):
|
||||
"""R11: POST /admin/extract-batch with mode=llm must fail loud when
|
||||
the `claude` CLI is unavailable, instead of silently returning a
|
||||
success-with-0-candidates payload (which masked host-vs-container
|
||||
truth for operators)."""
|
||||
from fastapi.testclient import TestClient
|
||||
from atocore.main import app
|
||||
import atocore.api.routes as routes
|
||||
|
||||
init_db()
|
||||
monkeypatch.setattr(routes, "_llm_cli_available", lambda: False)
|
||||
|
||||
client = TestClient(app)
|
||||
response = client.post("/admin/extract-batch", json={"mode": "llm"})
|
||||
|
||||
assert response.status_code == 503
|
||||
assert "claude" in response.json()["detail"].lower()
|
||||
|
||||
|
||||
def test_extract_batch_api_rule_mode_ok_without_cli(tmp_data_dir, monkeypatch):
|
||||
"""Rule mode must still work when the LLM CLI is missing — R11 only
|
||||
affects mode=llm."""
|
||||
from fastapi.testclient import TestClient
|
||||
from atocore.main import app
|
||||
import atocore.api.routes as routes
|
||||
|
||||
init_db()
|
||||
monkeypatch.setattr(routes, "_llm_cli_available", lambda: False)
|
||||
|
||||
client = TestClient(app)
|
||||
response = client.post("/admin/extract-batch", json={"mode": "rule"})
|
||||
|
||||
assert response.status_code == 200
|
||||
@@ -59,7 +59,8 @@ def test_parser_strips_surrounding_prose():
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].memory_type == "project"
    assert result[0].project == "p04"
    # Model returned "p04" with no interaction scope — unscoped path
    # resolves via registry if available, otherwise stays as-is


def test_parser_drops_invalid_memory_types():
@@ -97,9 +98,9 @@ def test_parser_tags_version_and_rule():
    assert result[0].source_interaction_id == "test-id"


def test_parser_falls_back_to_interaction_project():
    """R6: when the model returns empty project but the interaction
    has one, the candidate should inherit the interaction's project."""
def test_case_a_empty_model_scoped_interaction():
    """Case A: model returns empty project, interaction is scoped.
    Interaction scope wins."""
    raw = '[{"type": "project", "content": "machine works offline"}]'
    interaction = _make_interaction()
    interaction.project = "p06-polisher"
@@ -107,12 +108,77 @@ def test_parser_falls_back_to_interaction_project():
    assert result[0].project == "p06-polisher"


def test_parser_keeps_model_project_when_provided():
    """Model-supplied project takes precedence over interaction."""
def test_case_b_empty_model_unscoped_interaction():
    """Case B: both empty. Project stays empty."""
    raw = '[{"type": "project", "content": "generic fact"}]'
    interaction = _make_interaction()
    interaction.project = ""
    result = _parse_candidates(raw, interaction)
    assert result[0].project == ""


def test_case_c_unregistered_model_scoped_interaction(tmp_data_dir, project_registry):
    """Case C: model returns unregistered project, interaction is scoped.
    Interaction scope wins."""
    from atocore.models.database import init_db
    init_db()
    project_registry(("p06-polisher", ["p06"]))
    raw = '[{"type": "project", "content": "x", "project": "fake-project-99"}]'
    interaction = _make_interaction()
    interaction.project = "p06-polisher"
    result = _parse_candidates(raw, interaction)
    assert result[0].project == "p06-polisher"


def test_case_d_unregistered_model_unscoped_keeps_tag(tmp_data_dir, project_registry):
    """Case D: model returns unregistered project, interaction is unscoped.
    Keeps the model's tag for auto-project-detection (new behavior)."""
    from atocore.models.database import init_db
    init_db()
    project_registry(("p06-polisher", ["p06"]))
    raw = '[{"type": "project", "content": "x", "project": "new-lead-project"}]'
    interaction = _make_interaction()
    interaction.project = ""
    result = _parse_candidates(raw, interaction)
    assert result[0].project == "new-lead-project"


def test_case_e_matching_model_and_interaction(tmp_data_dir, project_registry):
    """Case E: model returns same project as interaction. Works."""
    from atocore.models.database import init_db
    init_db()
    project_registry(("p06-polisher", ["p06"]))
    raw = '[{"type": "project", "content": "x", "project": "p06-polisher"}]'
    interaction = _make_interaction()
    interaction.project = "p06-polisher"
    result = _parse_candidates(raw, interaction)
    assert result[0].project == "p06-polisher"


def test_case_f_wrong_registered_model_scoped_interaction(tmp_data_dir, project_registry):
    """Case F — the R9 core failure: model returns a DIFFERENT registered
    project than the interaction's known scope. Interaction scope wins.
    This is the case that was broken before the R9 fix."""
    from atocore.models.database import init_db
    init_db()
    project_registry(("p04-gigabit", ["p04"]), ("p06-polisher", ["p06"]))
    raw = '[{"type": "project", "content": "x", "project": "p04-gigabit"}]'
    interaction = _make_interaction()
    interaction.project = "p06-polisher"
    result = _parse_candidates(raw, interaction)
    assert result[0].project == "p06-polisher"


def test_case_g_registered_model_unscoped_interaction(tmp_data_dir, project_registry):
    """Case G: model returns a registered project, interaction is unscoped.
    Model project accepted (only way to get a project for unscoped captures)."""
    from atocore.models.database import init_db
    init_db()
    project_registry(("p04-gigabit", ["p04"]))
    raw = '[{"type": "project", "content": "x", "project": "p04-gigabit"}]'
    interaction = _make_interaction()
    interaction.project = ""
    result = _parse_candidates(raw, interaction)
    assert result[0].project == "p04-gigabit"
@@ -186,3 +186,98 @@ def test_memories_for_context_empty(isolated_db):
    text, chars = get_memories_for_context()
    assert text == ""
    assert chars == 0


# --- Phase 10: auto-promotion + candidate expiry ---


def _get_memory_by_id(memory_id):
    """Helper: fetch a single memory by ID."""
    from atocore.models.database import get_connection
    with get_connection() as conn:
        row = conn.execute("SELECT * FROM memories WHERE id = ?", (memory_id,)).fetchone()
    return dict(row) if row else None


def test_auto_promote_reinforced_basic(isolated_db):
    from atocore.memory.service import (
        auto_promote_reinforced,
        create_memory,
        reinforce_memory,
    )

    mem_obj = create_memory("knowledge", "Zerodur has near-zero CTE", status="candidate", confidence=0.7)
    mid = mem_obj.id
    # reinforce_memory only touches active memories, so we need to
    # promote first to reinforce, then demote back to candidate —
    # OR just bump reference_count + last_referenced_at directly
    from atocore.models.database import get_connection
    from datetime import datetime, timezone
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    with get_connection() as conn:
        conn.execute(
            "UPDATE memories SET reference_count = 3, last_referenced_at = ? WHERE id = ?",
            (now, mid),
        )

    promoted = auto_promote_reinforced(min_reference_count=3, min_confidence=0.7)
    assert mid in promoted
    mem = _get_memory_by_id(mid)
    assert mem["status"] == "active"


def test_auto_promote_reinforced_ignores_low_refs(isolated_db):
    from atocore.memory.service import auto_promote_reinforced, create_memory
    from atocore.models.database import get_connection
    from datetime import datetime, timezone

    mem_obj = create_memory("knowledge", "Some knowledge", status="candidate", confidence=0.7)
    mid = mem_obj.id
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    with get_connection() as conn:
        conn.execute(
            "UPDATE memories SET reference_count = 1, last_referenced_at = ? WHERE id = ?",
            (now, mid),
        )

    promoted = auto_promote_reinforced(min_reference_count=3, min_confidence=0.7)
    assert mid not in promoted
    mem = _get_memory_by_id(mid)
    assert mem["status"] == "candidate"


def test_expire_stale_candidates(isolated_db):
    from atocore.memory.service import create_memory, expire_stale_candidates
    from atocore.models.database import get_connection

    mem_obj = create_memory("knowledge", "Old unreferenced fact", status="candidate")
    mid = mem_obj.id
    with get_connection() as conn:
        conn.execute(
            "UPDATE memories SET created_at = datetime('now', '-30 days') WHERE id = ?",
            (mid,),
        )

    expired = expire_stale_candidates(max_age_days=14)
    assert mid in expired
    mem = _get_memory_by_id(mid)
    assert mem["status"] == "invalid"


def test_expire_stale_candidates_keeps_reinforced(isolated_db):
    from atocore.memory.service import create_memory, expire_stale_candidates
    from atocore.models.database import get_connection

    mem_obj = create_memory("knowledge", "Referenced fact", status="candidate")
    mid = mem_obj.id
    with get_connection() as conn:
        conn.execute(
            "UPDATE memories SET reference_count = 1, "
            "created_at = datetime('now', '-30 days') WHERE id = ?",
            (mid,),
        )

    expired = expire_stale_candidates(max_age_days=14)
    assert mid not in expired
    mem = _get_memory_by_id(mid)
    assert mem["status"] == "candidate"