Compare commits
16 Commits
codex/open
...
86637f8eee
| Author | SHA1 | Date |
|---|---|---|
| | 86637f8eee | |
| | c49363fccc | |
| | 33a6c61ca6 | |
| | 33a106732f | |
| | 3011aa77da | |
| | ba36a28453 | |
| | 999788b790 | |
| | 775960c8c8 | |
| | b687e7fa6f | |
| | 4d4d5f437a | |
| | 5b114baa87 | |
| | c2e7064238 | |
| | dc9fdd3a38 | |
| | 58ea21df80 | |
| | 8c0f1ff6f3 | |
| | 3db1dd99b5 | |
.gitignore (vendored)

```diff
@@ -6,6 +6,7 @@ __pycache__/
 dist/
 build/
 .pytest_cache/
+.mypy_cache/
 htmlcov/
 .coverage
 venv/
```
```diff
@@ -6,19 +6,23 @@
 ## Orientation

-- **live_sha** (Dalidou `/health` build_sha): `4f8bec7` (dashboard endpoint live)
+- **live_sha** (Dalidou `/health` build_sha): `775960c` (verified 2026-04-16 via /health, build_time 2026-04-16T17:59:30Z)
-- **last_updated**: 2026-04-12 by Claude (full session docs sync)
+- **last_updated**: 2026-04-16 by Claude ("Make It Actually Useful" sprint — observability + Phase 10)
-- **main_tip**: `4ac4e5c` (includes OpenClaw capture plugin merge)
+- **main_tip**: `999788b`
-- **test_count**: 290 passing
+- **test_count**: 303 (4 new Phase 10 tests)
-- **harness**: `17/18 PASS` (only p06-tailscale — chunk bleed, not a memory/ranking issue)
+- **harness**: `17/18 PASS` on live Dalidou (p04-constraints expects "Zerodur" — retrieval content gap, not regression)
-- **vectors**: 33,253 (was 20,781; +12,472 from atomizer-v2 ingestion)
+- **vectors**: 33,253
-- **active_memories**: 47 (16 project, 16 knowledge, 6 adaptation, 3 identity, 3 preference, 3 episodic)
+- **active_memories**: 84 (31 project, 23 knowledge, 10 episodic, 8 adaptation, 7 preference, 5 identity)
-- **candidate_memories**: 0
+- **candidate_memories**: 2
+- **interactions**: 234 total (192 claude-code, 38 openclaw, 4 test)
-- **registered_projects**: p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore
+- **registered_projects**: atocore, p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, abb-space (aliased p08)
-- **project_state_entries**: p04=5, p05=9, p06=9, atocore=38 (61 total)
+- **project_state_entries**: 110 total (atocore=47, p06=19, p05=18, p04=15, abb=6, atomizer=5)
+- **entities**: 35 (engineering knowledge graph, Layer 2)
 - **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron, verified
-- **nightly_pipeline**: backup → cleanup → rsync → LLM extraction (sonnet) → auto-triage (sonnet)
+- **nightly_pipeline**: backup → cleanup → rsync → OpenClaw import → vault refresh → extract → auto-triage → **auto-promote/expire (NEW)** → weekly synth/lint Sundays → **retrieval harness (NEW)** → **pipeline summary (NEW)**
-- **capture_clients**: claude-code (Stop hook), openclaw (plugin)
+- **capture_clients**: claude-code (Stop hook + cwd project inference), openclaw (before_agent_start + llm_output plugin, verified live)
+- **wiki**: http://dalidou:8100/wiki (browse), /wiki/projects/{id}, /wiki/entities/{id}, /wiki/search
+- **dashboard**: http://dalidou:8100/admin/dashboard (now shows pipeline health, interaction totals by client, all registered projects)

 ## Active Plan
```
```diff
@@ -128,17 +132,17 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha
 |-----|--------|----------|------------------------------------|-------------------------------------------------------------------------|--------------|--------|------------|-------------|
 | R1 | Codex | P1 | deploy/hooks/capture_stop.py:76-85 | Live Claude capture still omits `extract`, so "loop closed both sides" remains overstated in practice even though the API supports it | fixed | Claude | 2026-04-11 | c67bec0 |
 | R2 | Codex | P1 | src/atocore/context/builder.py | Project memories excluded from pack | fixed | Claude | 2026-04-11 | 8ea53f4 |
-| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | open | Claude | 2026-04-11 | |
+| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | declined | Claude | 2026-04-11 | see 2026-04-14 session log |
 | R4 | Codex | P2 | DEV-LEDGER.md:11 | Orientation `main_tip` was stale versus `HEAD` / `origin/main` | fixed | Codex | 2026-04-11 | 81307ce |
 | R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | fixed | Claude | 2026-04-12 | c67bec0 |
 | R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | fixed | Claude | 2026-04-12 | 39d73e9 |
 | R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | fixed | Claude | 2026-04-12 | 8951c62 |
 | R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | fixed | Claude | 2026-04-12 | 69c9717 |
 | R9 | Codex | P2 | src/atocore/memory/extractor_llm.py:258-259 | The R6 fallback only repairs empty project output. A wrong non-empty model project still overrides the interaction's known scope, so project attribution is improved but not yet trust-preserving. | fixed | Claude | 2026-04-12 | e5e9a99 |
-| R10 | Codex | P2 | docs/master-plan-status.md:31-33 | "Phase 8 - OpenClaw Integration" is fair as a baseline milestone, but not as a "primary" integration claim. `t420-openclaw/atocore.py` currently covers a narrow read-oriented subset (13 request shapes vs 32 API routes) plus fail-open health, while memory/interactions/admin write paths remain out of surface. | open | Claude | 2026-04-12 | |
+| R10 | Codex | P2 | docs/master-plan-status.md:31-33 | "Phase 8 - OpenClaw Integration" is fair as a baseline milestone, but not as a "primary" integration claim. `t420-openclaw/atocore.py` currently covers a narrow read-oriented subset (13 request shapes vs 32 API routes) plus fail-open health, while memory/interactions/admin write paths remain out of surface. | fixed | Claude | 2026-04-12 | (pending) |
-| R11 | Codex | P2 | src/atocore/api/routes.py:773-845 | `POST /admin/extract-batch` still accepts `mode="llm"` inside the container and returns a successful 0-candidate result instead of surfacing that host-only LLM extraction is unavailable from this runtime. That is a misleading API contract for operators. | open | Claude | 2026-04-12 | |
+| R11 | Codex | P2 | src/atocore/api/routes.py:773-845 | `POST /admin/extract-batch` still accepts `mode="llm"` inside the container and returns a successful 0-candidate result instead of surfacing that host-only LLM extraction is unavailable from this runtime. That is a misleading API contract for operators. | fixed | Claude | 2026-04-12 | (pending) |
-| R12 | Codex | P2 | scripts/batch_llm_extract_live.py:39-190 | The host-side extractor duplicates the LLM system prompt and JSON parsing logic from `src/atocore/memory/extractor_llm.py`. It works today, but this is now a prompt/parser drift risk across the container and host implementations. | open | Claude | 2026-04-12 | |
+| R12 | Codex | P2 | scripts/batch_llm_extract_live.py:39-190 | The host-side extractor duplicates the LLM system prompt and JSON parsing logic from `src/atocore/memory/extractor_llm.py`. It works today, but this is now a prompt/parser drift risk across the container and host implementations. | fixed | Claude | 2026-04-12 | (pending) |
-| R13 | Codex | P2 | DEV-LEDGER.md:12 | The new `286 passing` test-count claim is not reproducibly auditable from the current audit environments: neither Dalidou nor the clean worktree has `pytest` available. The claim may be true in Claude's dev shell, but it remains unverified in this audit. | open | Claude | 2026-04-12 | |
+| R13 | Codex | P2 | DEV-LEDGER.md:12 | The new `286 passing` test-count claim is not reproducibly auditable from the current audit environments: neither Dalidou nor the clean worktree has `pytest` available. The claim may be true in Claude's dev shell, but it remains unverified in this audit. | fixed | Claude | 2026-04-12 | (pending) |

 ## Recent Decisions
```
```diff
@@ -156,6 +160,21 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha

 ## Session Log

+- **2026-04-16 Claude** `b687e7f..999788b` **"Make It Actually Useful" sprint.** Two-part session: ops fixes then consolidation sprint.
+
+  **Part 1 — Ops fixes:** Deployed `b687e7f` (project inference from cwd). Fixed cron logging (was `/dev/null` — redirected to `~/atocore-logs/`). Fixed OpenClaw gateway crash-loop (`discord.replyToMode: "any"` invalid → `"all"`). Deployed `atocore-capture` plugin on T420 OpenClaw using `before_agent_start` + `llm_output` hooks — verified end-to-end: 38 `client=openclaw` interactions captured. Backfilled project tags on 179/181 unscoped interactions (165 atocore, 8 p06, 6 p04).
+
+  **Part 2 — Sprint (Phase A+C):** Pipeline observability: retrieval harness now runs nightly (Step E), pipeline summary persisted to project state (Step F), dashboard enhanced with interaction totals by client + pipeline health section + dynamic project list. Phase 10 landed: `auto_promote_reinforced()` (candidate→active when reference_count≥3, confidence≥0.7) + `expire_stale_candidates()` (14-day unreinforced→auto-reject), both wired into nightly cron Step B2. Seeding script created (26 entries across 6 projects — all already existed from prior session). Tests 299→303. Harness 17/18 on live Dalidou (p04-constraints expects "Zerodur" — retrieval content gap, not regression). Deployed `775960c`.
+
+- **2026-04-15 Claude (pm)** Closed the last harness failure honestly. **p06-tailscale fixed: 18/18 PASS.** Root-caused: not a retrieval bug — the p06 `ARCHITECTURE.md` Overview chunk legitimately mentions "the GigaBIT M1 telescope mirror" because the Polisher Suite is built *for* that mirror. All four retrieved sources for the tailscale prompt were genuinely p06/shared paths; zero actual p04 chunks leaked. The fixture's `expect_absent: GigaBIT` was catching semantic overlap, not retrieval bleed. Narrowed it to `expect_absent: "[Source: p04-gigabit/"` — a source-path check that tests the real invariant (no p04 source chunks in p06 context). Other p06 fixtures still use the word-blacklist form; they pass today because their more-specific prompts don't pull the ARCHITECTURE.md Overview, so I left them alone rather than churn fixtures that aren't failing. Did NOT change retrieval/ranking — no code change, fixture-only fix. Tests unchanged at 299.
+
+- **2026-04-15 Claude** Deploy + doc debt sweep. Deployed `c2e7064` to Dalidou (build_time 2026-04-15T15:08:51Z, build_sha matches, /health ok) so R11/R12 are now live, not just on main. **R11 verified on live**: `POST /admin/extract-batch {"mode":"llm"}` against http://127.0.0.1:8100 returns HTTP 503 with the operator-facing "claude CLI not on PATH, run host-side script or use mode=rule" message — exactly the post-fix contract. **R13 closed (fixed)**: added a reproduction recipe to Quick Commands (`pip install -r requirements-dev.txt && pytest --collect-only -q && pytest -q`) and re-cited `test_count: 299` against a fresh local collection on 2026-04-15, so the claim is now auditable from any clean checkout — Codex's audit worktree just needs `pip install -r requirements-dev.txt`. **R10 closed (fixed)**: rewrote the `docs/master-plan-status.md` OpenClaw section to explicitly disclaim "primary integration" and report the current narrow surface: 14 client request shapes against ~44 server routes, predominantly read + `/project/state` + `/ingest/sources`, with memory/interactions/admin/entities/triage/extraction writes correctly out of scope. Open findings now: none blocking. Next natural move: the last harness failure `p06-tailscale` (chunk bleed).
+
+- **2026-04-14 Claude (pm)** Closed R11+R12, declined R3. **R11 (fixed):** `POST /admin/extract-batch` with `mode="llm"` now returns 503 when the `claude` CLI is not on PATH, with a message pointing at the host-side script. Previously it silently returned a success-0 payload, masking host-vs-container truth. 2 new tests in `test_extraction_pipeline.py` cover the 503 path and the rule-mode-still-works path. **R12 (fixed):** extracted shared `SYSTEM_PROMPT` + `parse_llm_json_array` + `normalize_candidate_item` + `build_user_message` into stdlib-only `src/atocore/memory/_llm_prompt.py`. Both `src/atocore/memory/extractor_llm.py` (container) and `scripts/batch_llm_extract_live.py` (host) now import from it. The host script uses `sys.path` to reach the stdlib-only module without needing the full atocore package. Project-attribution policy stays path-specific (container uses registry-check; host defers to server). **R3 (declined):** rule cues not firing on conversational LLM text is by design now — the LLM extractor (llm-0.4.0) is the production path for conversational content as of the Day 4 gate (2026-04-12). Expanding rules to match conversational prose risks the FP blowup Day 2 already showed. Rule extractor stays narrow for structural PKM text. Tests 297 → 299. Live `/health` still `58ea21d`; this session's changes need deploy.
+
+- **2026-04-14 Claude** MAJOR session: Engineering knowledge layer V1 (Layer 2) built — entity + relationship tables, 15 types, 12 relationship kinds, 35 bootstrapped entities across p04/p05/p06. Human Mirror (Layer 3) — GET /projects/{name}/mirror.html + navigable wiki at /wiki with search. Karpathy-inspired upgrades: contradiction detection in triage, weekly lint pass, weekly synthesis pass producing "current state" paragraphs at top of project pages. Auto-detection of new projects from extraction. Registry persistence fix (ATOCORE_PROJECT_REGISTRY_DIR env var). abb-space/p08 aliases added, atomizer-v2 ingested (568 docs, +12,472 vectors). Identity/preference seed (6 new), signal-aggressive extractor rewrite (llm-0.4.0), auto vault refresh in cron. **OpenClaw one-way pull importer** built per codex proposal — reads /home/papa/clawd SOUL.md, USER.md, MEMORY.md, MODEL-ROUTING.md, memory/*.md via SSH, hash-delta import, pipeline triages. First import: 10 candidates → 10 promoted with lenient triage rule. Active memories 47→84. State entries 61→78. Tests 290→297. Dashboard at /admin/dashboard. Wiki at /wiki.
+
 - **2026-04-12 Claude** `4f8bec7..4ac4e5c` Session close. Merged OpenClaw capture plugin, ingested atomizer-v2 (568 docs, 12,472 new vectors → 33,253 total), seeded Phase 4 identity/preference memories (6 new, 47 total active), added deeper Wave 2 state entries (p05 +3, p06 +3), fixed R9 project trust hierarchy (7 case tests), built auto-triage pipeline, observability dashboard at /admin/dashboard. Updated master-plan-status.md and DEV-LEDGER.md to reflect full current state. 7/14 phases baseline complete. All P1s closed. Nightly pipeline runs unattended with both Claude Code and OpenClaw feeding the reflection loop.
 - **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`)** added a minimal external OpenClaw plugin at `openclaw-plugins/atocore-capture/` that mirrors Claude Code capture semantics: user-triggered assistant turns are POSTed to AtoCore `/interactions` with `client="openclaw"` and `reinforce=true`, fail-open, no extraction in-path. For live verification, temporarily added the local plugin load path to OpenClaw config and restarted the gateway so the plugin can load. Branch truth is ready; end-to-end verification still needs one fresh post-restart OpenClaw user turn to confirm new `client=openclaw` interactions appear on Dalidou.
 - **2026-04-12 Claude** Batch 3 (R9 fix): `144dbbd..e5e9a99`. Trust hierarchy for project attribution — interaction scope always wins when set, model project only used for unscoped interactions + registered check. 7 case tests (A-G) cover every combination. Harness 17/18 (no regression). Tests 286->290. Before: wrong registered project could silently override interaction scope. After: interaction.project is the strongest signal; model project is only a fallback for unscoped captures. Not yet guaranteed: nothing prevents the *same* project's model output from being semantically wrong within that project. R9 marked fixed.
```
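The Batch 3 trust hierarchy described above can be sketched as a small pure resolver. This is a reader's sketch of the semantics, not the shipped code (the real logic lives in `src/atocore/memory/extractor_llm.py`; `resolve_project` and its argument names are hypothetical):

```python
def resolve_project(interaction_project: str, model_project: str,
                    registered: set[str]) -> str:
    """Trust hierarchy for project attribution (R6/R9 semantics):
    the interaction's own scope always wins; a model-supplied project is
    only a fallback for unscoped captures, and only if it is registered."""
    if interaction_project:
        return interaction_project   # strongest signal: interaction scope
    if model_project in registered:
        return model_project         # unscoped capture, plausible model guess
    return ""                        # stay unscoped rather than trust junk
```

Under this rule a wrong-but-registered model project can no longer override a scoped interaction, which is exactly the "Before/After" contrast in the entry.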
````diff
@@ -201,4 +220,9 @@ git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/d
 python scripts/atocore_client.py batch-extract '' '' 200 false # preview
 python scripts/atocore_client.py batch-extract '' '' 200 true # persist
 python scripts/atocore_client.py triage
+
+# Reproduce the ledger's test_count claim from a clean checkout
+pip install -r requirements-dev.txt
+pytest --collect-only -q | tail -1  # -> "N tests collected"
+pytest -q  # -> "N passed"
 ```
````
```diff
@@ -38,7 +38,7 @@
     },
     {
       "id": "p06-polisher",
-      "aliases": ["p06", "polisher"],
+      "aliases": ["p06", "polisher", "p11", "polisher-fullum", "P11-Polisher-Fullum"],
       "description": "Active P06 polisher corpus from PKM, software-suite notes, and selected repo context.",
       "ingest_roots": [
         {
@@ -47,6 +47,30 @@
           "label": "P06 staged project docs"
         }
       ]
+    },
+    {
+      "id": "abb-space",
+      "aliases": ["abb", "abb-mirror", "p08", "p08-abb-space", "p08-abb-space-mirror"],
+      "description": "ABB Space mirror - lead/proposition for Atomaste. Also tracked as P08.",
+      "ingest_roots": [
+        {
+          "source": "vault",
+          "subpath": "incoming/projects/abb-space",
+          "label": "ABB Space docs"
+        }
+      ]
+    },
+    {
+      "id": "atomizer-v2",
+      "aliases": ["atomizer", "aom", "aom-v2"],
+      "description": "Atomizer V2 parametric optimization platform",
+      "ingest_roots": [
+        {
+          "source": "vault",
+          "subpath": "incoming/projects/atomizer-v2/repo",
+          "label": "Atomizer V2 repo"
+        }
+      ]
     }
   ]
 }
```
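With several projects now carrying many aliases (p06 alone gains three), any client resolving a user-supplied name against this registry needs a flat lookup. A minimal sketch, assuming only the `id`/`aliases` shape shown in the registry above (`build_alias_index` is a hypothetical helper, not part of the repo):

```python
def build_alias_index(projects: list[dict]) -> dict[str, str]:
    """Map every project id and alias (lowercased) to its canonical id."""
    index: dict[str, str] = {}
    for proj in projects:
        index[proj["id"].lower()] = proj["id"]
        for alias in proj.get("aliases", []):
            index[alias.lower()] = proj["id"]
    return index
```

Lowercasing both sides makes mixed-case aliases like `P11-Polisher-Fullum` resolve the same as `p11`.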
```diff
@@ -34,36 +34,120 @@ export PYTHONPATH="$APP_DIR/src:${PYTHONPATH:-}"
 log "=== AtoCore batch extraction + triage starting ==="
 log "URL=$ATOCORE_URL LIMIT=$LIMIT"

+# --- Pipeline stats accumulator ---
+EXTRACT_OUT=""
+TRIAGE_OUT=""
+HARNESS_OUT=""
+
 # Step A: Extract candidates from recent interactions
 log "Step A: LLM extraction"
-python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
+EXTRACT_OUT=$(python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
     --base-url "$ATOCORE_URL" \
     --limit "$LIMIT" \
-    2>&1 || {
+    2>&1) || {
     log "WARN: batch extraction failed (non-blocking)"
 }
+echo "$EXTRACT_OUT"

 # Step B: Auto-triage candidates in the queue
 log "Step B: auto-triage"
-python3 "$APP_DIR/scripts/auto_triage.py" \
+TRIAGE_OUT=$(python3 "$APP_DIR/scripts/auto_triage.py" \
     --base-url "$ATOCORE_URL" \
-    2>&1 || {
+    2>&1) || {
     log "WARN: auto-triage failed (non-blocking)"
 }
+echo "$TRIAGE_OUT"

-# Step C: Weekly synthesis (Sundays only)
+# Step B2: Auto-promote reinforced candidates + expire stale ones
+log "Step B2: auto-promote + expire"
+python3 "$APP_DIR/scripts/auto_promote_reinforced.py" \
+    2>&1 || {
+    log "WARN: auto-promote/expire failed (non-blocking)"
+}
+
+# Step C: Daily project synthesis (keeps wiki/mirror pages fresh)
+log "Step C: project synthesis (daily)"
+python3 "$APP_DIR/scripts/synthesize_projects.py" \
+    --base-url "$ATOCORE_URL" \
+    2>&1 || {
+    log "WARN: synthesis failed (non-blocking)"
+}
+
+# Step D: Weekly lint pass (Sundays only — heavier, not needed daily)
 if [[ "$(date -u +%u)" == "7" ]]; then
-    log "Step C: weekly project synthesis"
-    python3 "$APP_DIR/scripts/synthesize_projects.py" \
-        --base-url "$ATOCORE_URL" \
-        2>&1 || {
-        log "WARN: synthesis failed (non-blocking)"
-    }
-
     log "Step D: weekly lint pass"
     python3 "$APP_DIR/scripts/lint_knowledge_base.py" \
         --base-url "$ATOCORE_URL" \
         2>&1 || true
 fi

+# Step E: Retrieval harness (daily)
+log "Step E: retrieval harness"
+HARNESS_OUT=$(python3 "$APP_DIR/scripts/retrieval_eval.py" \
+    --json \
+    --base-url "$ATOCORE_URL" \
+    2>&1) || {
+    log "WARN: retrieval harness failed (non-blocking)"
+}
+echo "$HARNESS_OUT"
+
+# Step F: Persist pipeline summary to project state
+log "Step F: pipeline summary"
+python3 -c "
+import json, urllib.request, re, sys
+
+base = '$ATOCORE_URL'
+ts = '$TIMESTAMP'
+
+def post_state(key, value):
+    body = json.dumps({
+        'project': 'atocore', 'category': 'status',
+        'key': key, 'value': value, 'source': 'nightly pipeline',
+    }).encode()
+    req = urllib.request.Request(
+        f'{base}/project/state', data=body,
+        headers={'Content-Type': 'application/json'}, method='POST',
+    )
+    try:
+        urllib.request.urlopen(req, timeout=10)
+    except Exception as e:
+        print(f'WARN: failed to persist {key}: {e}', file=sys.stderr)
+
+# Parse harness JSON
+harness = {}
+try:
+    harness = json.loads('''$HARNESS_OUT''')
+    post_state('retrieval_harness_result', json.dumps({
+        'passed': harness.get('passed', 0),
+        'total': harness.get('total', 0),
+        'failures': [f['name'] for f in harness.get('fixtures', []) if not f.get('ok')],
+        'run_at': ts,
+    }))
+    p, t = harness.get('passed', '?'), harness.get('total', '?')
+    print(f'Harness: {p}/{t}')
+except Exception:
+    print('WARN: could not parse harness output')
+
+# Parse triage counts from stdout
+triage_out = '''$TRIAGE_OUT'''
+promoted = len(re.findall(r'promoted', triage_out, re.IGNORECASE))
+rejected = len(re.findall(r'rejected', triage_out, re.IGNORECASE))
+needs_human = len(re.findall(r'needs.human', triage_out, re.IGNORECASE))
+
+# Build summary
+summary = {
+    'run_at': ts,
+    'harness_passed': harness.get('passed', -1),
+    'harness_total': harness.get('total', -1),
+    'triage_promoted': promoted,
+    'triage_rejected': rejected,
+    'triage_needs_human': needs_human,
+}
+post_state('pipeline_last_run', ts)
+post_state('pipeline_summary', json.dumps(summary))
+print(f'Pipeline summary persisted: {json.dumps(summary)}')
+" 2>&1 || {
+    log "WARN: pipeline summary persistence failed (non-blocking)"
+}
+
 log "=== AtoCore batch extraction + triage complete ==="
```
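Step B2's promote/expire thresholds (reference_count ≥ 3, confidence ≥ 0.7, 14-day unreinforced expiry) can be sketched as a pure decision function. This is an illustration of the stated rule only; `triage_candidate` and its signature are hypothetical, and the deployed logic lives in `scripts/auto_promote_reinforced.py`:

```python
from datetime import datetime, timedelta, timezone

PROMOTE_MIN_REFS = 3               # candidate -> active needs this many reinforcements
PROMOTE_MIN_CONF = 0.7             # ...and at least this confidence
EXPIRE_AFTER = timedelta(days=14)  # unreinforced this long -> auto-reject

def triage_candidate(reference_count: int, confidence: float,
                     created_at: datetime, now: datetime) -> str:
    """Return 'promote', 'expire', or 'keep' for one candidate memory."""
    if reference_count >= PROMOTE_MIN_REFS and confidence >= PROMOTE_MIN_CONF:
        return "promote"
    if now - created_at > EXPIRE_AFTER:
        return "expire"
    return "keep"
```

Checking promotion before expiry means a candidate that earns its third reinforcement on day 15 still promotes rather than expiring, which matches the "auto-promote reinforced candidates + expire stale ones" ordering of the step.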
@@ -166,10 +166,19 @@ def _extract_last_user_prompt(transcript_path: str) -> str:
|
|||||||
# Project inference from working directory.
|
# Project inference from working directory.
|
||||||
# Maps known repo paths to AtoCore project IDs. The user can extend
|
# Maps known repo paths to AtoCore project IDs. The user can extend
|
||||||
# this table or replace it with a registry lookup later.
|
# this table or replace it with a registry lookup later.
|
||||||
|
_VAULT = "C:\\Users\\antoi\\antoine\\My Libraries\\Antoine Brain Extension"
|
||||||
|
|
||||||
_PROJECT_PATH_MAP: dict[str, str] = {
|
_PROJECT_PATH_MAP: dict[str, str] = {
|
||||||
# Add mappings as needed, e.g.:
|
f"{_VAULT}\\2-Projects\\P04-GigaBIT-M1": "p04-gigabit",
|
||||||
# "C:\\Users\\antoi\\gigabit": "p04-gigabit",
|
f"{_VAULT}\\2-Projects\\P10-Interferometer": "p05-interferometer",
|
||||||
# "C:\\Users\\antoi\\interferometer": "p05-interferometer",
|
f"{_VAULT}\\2-Projects\\P11-Polisher-Fullum": "p06-polisher",
|
||||||
|
f"{_VAULT}\\2-Projects\\P08-ABB-Space-Mirror": "abb-space",
|
||||||
|
f"{_VAULT}\\2-Projects\\I01-Atomizer": "atomizer-v2",
|
||||||
|
f"{_VAULT}\\2-Projects\\I02-AtoCore": "atocore",
|
||||||
|
"C:\\Users\\antoi\\ATOCore": "atocore",
|
||||||
|
"C:\\Users\\antoi\\Polisher-Sim": "p06-polisher",
|
||||||
|
"C:\\Users\\antoi\\Fullum-Interferometer": "p05-interferometer",
|
||||||
|
"C:\\Users\\antoi\\Atomizer-V2": "atomizer-v2",
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
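Since sessions usually start in a subdirectory of these roots rather than at the root itself, a lookup against `_PROJECT_PATH_MAP` presumably has to match prefixes. One way to sketch that is a case-insensitive longest-prefix match; `infer_project` is a hypothetical helper and not necessarily the hook's actual matching rule:

```python
def infer_project(cwd: str, path_map: dict[str, str]) -> str:
    """Return the project whose registered root is the longest prefix of cwd,
    or '' when no root matches (capture then stays unscoped)."""
    cwd_norm = cwd.replace("/", "\\").rstrip("\\").lower()
    best, best_len = "", -1
    for root, project in path_map.items():
        root_norm = root.replace("/", "\\").rstrip("\\").lower()
        if (cwd_norm == root_norm or cwd_norm.startswith(root_norm + "\\")) \
                and len(root_norm) > best_len:
            best, best_len = project, len(root_norm)
    return best
```

Preferring the longest matching root keeps a nested repo (say, a project checkout inside the vault) from being misattributed to the enclosing root.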
284
docs/MASTER-BRAIN-PLAN.md
Normal file
284
docs/MASTER-BRAIN-PLAN.md
Normal file
@@ -0,0 +1,284 @@
# AtoCore Master Brain Plan

> Vision: AtoCore becomes the **single source of truth** that grounds every LLM
> interaction across the entire ecosystem (Claude, OpenClaw, Codex, Ollama, future
> agents). Every prompt is automatically enriched with full project context. The
> brain self-grows from daily work, auto-organizes its metadata, and stays
> flawlessly reliable.

## The Core Insight

AtoCore today is a **well-architected capture + curation system with a critical
gap on the consumption side**. We pour water into the bucket (capture from the
Claude Code Stop hook + OpenClaw message hooks) but nothing is drinking from it
at prompt time. Fixing that gap is the single highest-leverage move.

**Once every LLM call is AtoCore-grounded automatically, the feedback loop
closes**: LLMs use the context → produce better responses → those responses
reference the injected memories → reinforcement fires → knowledge curates
itself. The capture side is already working. The pull side is what's missing.

## Universal Consumption Strategy

MCP is great for Claude (Claude Desktop, Claude Code, Cursor, Zed, Windsurf) but
is **not universal**. OpenClaw has its own plugin SDK. Codex, Ollama, and GPT
don't natively support MCP. The right strategy:

**HTTP API is the truth; every client gets the thinnest possible adapter.**

```
                  ┌─────────────────────┐
                  │  AtoCore HTTP API   │  ← canonical interface
                  │  /context/build     │
                  │  /query             │
                  │  /memory            │
                  │  /project/state     │
                  └──────────┬──────────┘
                             │
   ┌────────────┬────────────┼───────────┬────────────┐
   │            │            │           │            │
┌──┴───┐   ┌────┴────┐   ┌───┴───┐   ┌───┴────┐   ┌───┴────┐
│ MCP  │   │OpenClaw │   │Claude │   │ Codex  │   │ Ollama │
│server│   │ plugin  │   │ Code  │   │ skill  │   │ proxy  │
│      │   │ (pull)  │   │ hook  │   │        │   │        │
└──┬───┘   └────┬────┘   └───┬───┘   └────┬───┘   └────┬───┘
   │            │            │            │            │
 Claude     OpenClaw    Claude Code   Codex CLI     Ollama
 Desktop,    agent                                  local
 Cursor,                                            models
 Zed,
 Windsurf
```

Each adapter's only job: accept a prompt, call AtoCore HTTP, prepend the
returned context pack. The adapter itself carries no logic.
## Three Integration Tiers

### Tier 1: MCP-native clients (Claude ecosystem)
Build **atocore-mcp** — a standalone MCP server that wraps the HTTP API. Exposes:
- `context(query, project)` → context pack
- `search(query)` → raw retrieval
- `remember(type, content, project)` → create candidate memory
- `recall(project, key)` → project state lookup
- `list_projects()` → registered projects

Works with Claude Desktop, Claude Code (via `claude mcp add atocore`), Cursor,
Zed, Windsurf without any per-client work beyond config.

### Tier 2: Custom plugin ecosystems (OpenClaw)
Extend the existing `atocore-capture` plugin on the T420 to also register a
**`before_prompt_build`** hook that pulls context from AtoCore and injects it
into the agent's system prompt. The plugin already has the HTTP client, the
authentication, and the fail-open pattern. This is ~30 lines of added code.
### Tier 3: Everything else (Codex, Ollama, custom agents)
For clients without plugin/hook systems, ship a **thin proxy/middleware** the
user configures as the LLM endpoint:
- `atocore-proxy` listens on `localhost:PORT`
- Intercepts OpenAI-compatible chat/completion calls
- Pulls context from AtoCore, injects it into the system prompt
- Forwards to the real model endpoint (OpenAI, Ollama, Anthropic, etc.)
- Returns the response, then captures the interaction back to AtoCore

This makes AtoCore a "drop-in" layer for anything that speaks
OpenAI-compatible HTTP — which is nearly every modern LLM runtime.
## Knowledge Density Plan

The brain is only as smart as what it knows. Current state: 80 active memories
across 6 projects, 324 candidates in the queue being processed. Target:
**1,000+ curated memories** to become a real master brain.

Mechanisms:
1. **Finish the current triage pass** (324 → ~80 more promotions expected).
2. **Re-extract the existing 236 interactions with a stronger prompt** — tune
   the LLM extractor system prompt to pull more durable facts and fewer
   ephemeral snapshots.
3. **Ingest all drive/vault documents as memory candidates** (not just chunks).
   Every structured markdown section with a decision/fact/requirement header
   becomes a candidate memory.
4. **Multi-source triangulation**: the same fact in 3+ sources = auto-promote
   to confidence 0.95.
5. **Cross-project synthesis**: facts appearing in multiple project contexts
   get promoted to global domain knowledge.
## Auto-Organization of Metadata

Currently: `type`, `project`, `confidence`, `status`, `reference_count`. For a
master brain we need more structure, inferred automatically:

| Addition | Purpose | Mechanism |
|---|---|---|
| **Domain tags** (optics, mechanics, firmware, business…) | Cross-cutting retrieval | LLM inference during triage |
| **Temporal scope** (permanent, valid_until_X, transient) | Avoid stale truth | LLM classifies during triage |
| **Source refs** (chunk_id[], interaction_id[]) | Provenance for every fact | Enforced at creation time |
| **Relationships** (contradicts, updates, depends_on) | Memory graph | Triage infers during review |
| **Semantic clusters** | Detect duplicates, find gaps | Weekly HDBSCAN pass on embeddings |

Layer these in progressively — none of them require schema rewrites, just
additional fields and batch jobs.

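Since these are additive, the batch job amounts to a few `ALTER TABLE` statements. A minimal sketch, assuming a SQLite `memories` table; the table and column names here are assumptions about AtoCore's actual schema:

```python
import sqlite3

# Hypothetical additive migration for the metadata fields proposed above.
NEW_COLUMNS = {
    "domain_tags": "TEXT",        # JSON array of inferred tags
    "temporal_scope": "TEXT",     # permanent | valid_until_X | transient
    "source_refs": "TEXT",        # JSON array of chunk/interaction ids
    "triangulated_count": "INTEGER DEFAULT 0",
}

def migrate(conn: sqlite3.Connection) -> None:
    """Add any missing metadata columns; existing rows keep NULL/default values."""
    have = {row[1] for row in conn.execute("PRAGMA table_info(memories)")}
    for col, decl in NEW_COLUMNS.items():
        if col not in have:  # additive and idempotent: never rewrite the table
            conn.execute(f"ALTER TABLE memories ADD COLUMN {col} {decl}")
    conn.commit()
```

Because the check is against `PRAGMA table_info`, the migration can run on every deploy without tracking a schema version.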
## Self-Growth Mechanisms

Four loops that make AtoCore grow autonomously:

### 1. Drift detection (nightly)
Compare new chunk embeddings to the existing vector distribution. A chunk more
than X cosine distance from every existing centroid = a new knowledge area. Log
it to the dashboard; a human decides if it's noise or a domain worth curating.
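The distance test at the heart of that loop is small; a pure-Python sketch (the threshold X is a tunable assumption, not a value from this plan):

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity; 0 means identical direction, 2 means opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - (dot / norm if norm else 0.0)

def is_new_area(chunk_vec: list[float], centroids: list[list[float]],
                threshold: float = 0.4) -> bool:
    """Flag a chunk as a new knowledge area when it is far from every centroid."""
    return all(cosine_distance(chunk_vec, c) > threshold for c in centroids)
```

The nightly job would compute centroids once per cluster and only log flagged chunks, so the scan stays cheap even at 33k vectors.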
### 2. Gap identification (continuous)
Every `/context/build` logs `query + chunks_returned + memories_returned`.
Weekly report: "top 10 queries with weak coverage." Those are targeted
curation opportunities.

### 3. Multi-source triangulation (weekly)
Scan memory content similarity across sources. When a fact appears in 3+
independent sources (vault doc + drive doc + interaction), auto-promote it to
high confidence and mark it as "triangulated."

### 4. Active learning prompts (monthly)
Surface "you have 200 p06 memories but only 15 p04 memories. Spend 30 min
curating p04?" via the dashboard digest.

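The weekly gap report reduces to a counting pass over those logs. A minimal sketch, assuming each log row is `(query, chunks_returned, memories_returned)`; the "fewer than 3 results" definition of weak coverage is an assumption:

```python
from collections import Counter

def weak_coverage(log: list[tuple[str, int, int]], top: int = 10) -> list[tuple[str, int]]:
    """Top queries that returned little or nothing: targeted curation opportunities."""
    misses = Counter(q for q, chunks, mems in log if chunks + mems < 3)
    return misses.most_common(top)
```

Repeated weak queries rank highest, which is exactly the curation signal the report wants.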
## Robustness Strategy (Flawless Operation Bar)

Current: nightly backup, off-host rsync, health endpoint, 303 tests, harness,
enhanced dashboard with pipeline health (this session).

To reach "flawless":

| Gap | Fix | Priority |
|---|---|---|
| Silent pipeline failures | Alerting webhook on harness drop / pipeline skip | P1 |
| Memory mutations untracked | Append-only audit log table | P1 |
| Integrity drift | Nightly FK + vector-chunk parity checks | P1 |
| Schema migrations ad hoc | Formal migration framework with rollback | P2 |
| Single point of failure | Daily backup to user's main computer (new) | P1 |
| No hot standby | Second instance following the primary via WAL | P3 |
| No temporal history | Memory audit + valid_until fields | P2 |

### Daily Backup to Main Computer

Currently: Dalidou → T420 (192.168.86.39) via rsync.

Add: Dalidou → main computer via a pull (the main computer runs the rsync and
pulls from Dalidou). Pull-based is simpler than push — no need for SSH
keys on Dalidou to reach the Windows machine.

```bash
# On main computer, daily scheduled task:
rsync -a papa@dalidou:/srv/storage/atocore/backups/snapshots/ \
    /path/to/local/atocore-backups/
```

Configure via Windows Task Scheduler or a cron-like runner. Verify weekly
that the latest snapshot is present.

## Human Interface Auto-Evolution

Current: wiki at `/wiki`, regenerated on every request from the DB. Synthesis
(the "current state" paragraph at the top of project pages) runs **weekly on
Sundays only**. That's why it feels stalled.

Fixes:
1. **Run synthesis daily, not weekly.** It's cheap (one claude call per
   project) and keeps the human-readable overview fresh.
2. **Trigger synthesis on major events** — when 5+ new memories land for a
   project, regenerate its synthesis.
3. **Add a "What's New" feed** — the wiki homepage shows recent additions
   across all projects (last 7 days of memory promotions, state entries,
   entities).
4. **Memory timeline view** — each project page gets a chronological list of
   what we learned when.

## Phased Roadmap (8-10 weeks)

### Phase 1 (week 1-2): Universal Consumption
**Goal: every LLM call is AtoCore-grounded automatically.**

- [ ] Build `atocore-mcp` server (wraps HTTP API, stdio transport)
- [ ] Publish to npm, or run via `pipx` / stdlib HTTP
- [ ] Configure in Claude Desktop (`~/.claude/mcp_servers.json`)
- [ ] Configure in Claude Code (`claude mcp add atocore …`)
- [ ] Extend OpenClaw plugin with `before_prompt_build` PULL
- [ ] Write `atocore-proxy` middleware for Codex/Ollama/generic clients
- [ ] Document configuration for each client

**Success:** open a fresh Claude Code session, ask a project question, verify
the response references AtoCore memories without manual context commands.

### Phase 2 (week 2-3): Knowledge Density + Wiki Evolution
- [ ] Finish current triage pass (324 candidates → active)
- [ ] Tune extractor prompt for a higher promotion rate on durable facts
- [ ] Daily synthesis in cron (not just Sundays)
- [ ] Event-triggered synthesis on significant project changes
- [ ] Wiki "What's New" feed
- [ ] Memory timeline per project

**Target:** 300+ active memories, wiki feels alive daily.

### Phase 3 (week 3-4): Auto-Organization
- [ ] Schema: add `domain_tags`, `valid_until`, `source_refs`, `triangulated_count`
- [ ] Upgrade triage prompt: infer tags + temporal scope + relationships
- [ ] Weekly HDBSCAN clustering of embeddings → dup detection + gap reports
- [ ] Relationship edges in a new `memory_relationships` table

### Phase 4 (week 4-5): Robustness Hardening
- [ ] Append-only `memory_audit` table + retrofit mutations
- [ ] Nightly integrity checks (FK validation, orphan detection, parity)
- [ ] Alerting webhook (Discord/email) on pipeline anomalies
- [ ] Daily backup to user's main computer (pull-based)
- [ ] Formal migration framework

### Phase 5 (week 6-7): Engineering V1 Implementation
Execute the 23 acceptance criteria in `docs/architecture/engineering-v1-acceptance.md`
against p06-polisher as the test bed. The ontology and queries are designed;
this phase implements them.

### Phase 6 (week 8-9): Self-Growth Loops
- [ ] Drift detection (nightly)
- [ ] Gap identification from `/context/build` logs
- [ ] Multi-source triangulation
- [ ] Active learning digest (monthly)
- [ ] Cross-project synthesis

### Phase 7 (ongoing): Scale & Polish
- [ ] Multi-model validation (sonnet triages, opus cross-checks on disagreements)
- [ ] AtoDrive integration (Google Drive as trusted source)
- [ ] Hot standby when real production dependence materializes
- [ ] More MCP tools (write-back, memory search, entity queries)

## Success Criteria

AtoCore is a master brain when:

1. **Zero manual context commands.** A fresh Claude/OpenClaw session answers
   a project question without being told "use AtoCore context."
2. **1,000+ active memories** with >90% provenance coverage (every fact
   traceable to a source).
3. **Every project has a current, human-readable overview** updated within 24h
   of significant changes.
4. **Harness stays >95%** across 20+ fixtures covering all active projects.
5. **Zero silent pipeline failures** for 30 consecutive days (all failures
   surface via alert within the hour).
6. **Claude on any task knows what we know** — the user asks "what did we
   decide about X?" and the answer is grounded in AtoCore, not reconstructed
   from scratch.

## Where We Are Now (2026-04-16)

- ✅ Core infrastructure: HTTP API, SQLite, Chroma, deploy pipeline
- ✅ Capture pipes: Claude Code Stop hook, OpenClaw message hooks
- ✅ Nightly pipeline: backup, extract, triage, synthesis, lint, harness, summary
- ✅ Phase 10: auto-promotion from reinforcement + candidate expiry
- ✅ Dashboard shows pipeline health + interaction totals + all projects
- ⚡ 324 candidates being triaged (down from 439), ~80 active memories, growing
- ❌ No consumption at prompt time (capture-only)
- ❌ Wiki auto-evolves only on Sundays (synthesis cadence)
- ❌ No MCP adapter
- ❌ No daily backup to main computer
- ❌ Engineering V1 not implemented
- ❌ No alerting on pipeline failures

The path is clear. Phase 1 is the keystone.

@@ -33,15 +33,21 @@ read-only additive mode.

at 5% budget ratio. Future identity/preference extraction happens
organically via the nightly LLM extraction pipeline.

- Phase 8 - OpenClaw Integration (baseline only, not primary surface).
  As of 2026-04-15 the T420 OpenClaw helper (`t420-openclaw/atocore.py`)
  is verified end-to-end against live Dalidou: health check, auto-context
  with project detection, Trusted Project State surfacing, project-memory
  band, fail-open on unreachable host. Tested from both the development
  machine and the T420 via SSH. Scope is narrow: **14 request shapes
  against ~44 server routes**, predominantly read-oriented plus
  `POST/DELETE /project/state` and `POST /ingest/sources`. Memory
  management, interactions capture (covered separately by the OpenClaw
  capture plugin), admin/backup, entities, triage, and extraction write
  paths remain out of this client's surface by design — they are scoped
  to the operator client (`scripts/atocore_client.py`) per the
  read-heavy additive integration model. "Primary integration" is
  therefore an overclaim; "baseline read + project-state write helper" is
  the accurate framing.

### Baseline Complete

@@ -120,25 +126,29 @@ This sits implicitly between Phase 8 (OpenClaw) and Phase 11

(multi-model). Memory-review and engineering-entity commands are
deferred from the shared client until their workflows are exercised.

## What Is Real Today (updated 2026-04-16)

- canonical AtoCore runtime on Dalidou (`775960c`, deploy.sh verified)
- 33,253 vectors across 6 registered projects
- 234 captured interactions (192 claude-code, 38 openclaw, 4 test)
- 6 registered projects:
  - `p04-gigabit` (483 docs, 15 state entries)
  - `p05-interferometer` (109 docs, 18 state entries)
  - `p06-polisher` (564 docs, 19 state entries)
  - `atomizer-v2` (568 docs, 5 state entries)
  - `abb-space` (6 state entries)
  - `atocore` (drive source, 47 state entries)
- 110 Trusted Project State entries across all projects (decisions, requirements, facts, contacts, milestones)
- 84 active memories (31 project, 23 knowledge, 10 episodic, 8 adaptation, 7 preference, 5 identity)
- context pack assembly with 4 tiers: Trusted Project State > identity/preference > project memories > retrieved chunks
- query-relevance memory ranking with overlap-density scoring
- retrieval eval harness: 18 fixtures, 17/18 passing on live
- 303 tests passing
- nightly pipeline: backup → cleanup → rsync → OpenClaw import → vault refresh → extract → triage → **auto-promote/expire** → weekly synth/lint → **retrieval harness** → **pipeline summary to project state**
- Phase 10 operational: reinforcement-based auto-promotion (ref_count ≥ 3, confidence ≥ 0.7) + stale candidate expiry (14 days unreinforced)
- pipeline health visible in dashboard: interaction totals by client, pipeline last_run, harness results, triage stats
- off-host backup to clawdbot (T420) via rsync
- both Claude Code and OpenClaw capture interactions to AtoCore (OpenClaw via `before_agent_start` + `llm_output` plugin, verified live)
- DEV-LEDGER.md as shared operating memory between Claude and Codex
- observability dashboard at GET /admin/dashboard

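The Phase 10 promote/expire rules above reduce to a small decision function. A sketch under the assumption that each candidate carries a reference count, a confidence, and a last-reinforced timestamp (the actual field names in AtoCore may differ):

```python
from datetime import datetime, timedelta

def triage_candidate(ref_count: int, confidence: float,
                     last_reinforced: datetime, now: datetime) -> str:
    """Phase 10 rules: promote reinforced high-confidence candidates, expire stale ones."""
    if ref_count >= 3 and confidence >= 0.7:
        return "promote"
    if now - last_reinforced > timedelta(days=14):
        return "expire"
    return "keep"
```

Checking promotion before expiry matters: a candidate that crossed both thresholds on the same nightly run should graduate, not be dropped.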
@@ -146,26 +156,28 @@ deferred from the shared client until their workflows are exercised.

These are the current practical priorities.

1. **Observe the enhanced pipeline** — let the nightly pipeline run for a
   week with the new harness + summary + auto-promote steps. Check the
   dashboard daily. Verify the pipeline summary populates correctly.
2. **Knowledge density** — run batch extraction over the full 234
   interactions (`--since 2026-01-01`) to mine the backlog for knowledge.
   Target: 100+ active memories.
3. **Multi-model triage** (Phase 11 entry) — switch auto-triage to a
   different model than the extractor for independent validation.
4. **Fix the p04-constraints harness failure** — retrieval doesn't surface
   "Zerodur" for p04 constraint queries. Investigate whether it's a missing
   memory or a retrieval-ranking issue.

## Next

These are the next major layers after the current stabilization pass.

1. Phase 6 AtoDrive — clarify Google Drive as a trusted operational
   source and ingest from it
2. Phase 13 Hardening — Chroma backup policy, monitoring, alerting,
   failure visibility beyond log files
3. Engineering V1 implementation sprint — once knowledge density is
   sufficient and the pipeline feels boring and dependable

## Later

@@ -187,9 +199,10 @@ These remain intentionally deferred.

plugin now exists (`openclaw-plugins/atocore-capture/`), interactions
flow. Write-back of promoted memories back to OpenClaw's own memory
system is still deferred.
- ~~automatic memory promotion~~ — Phase 10 complete: auto-triage handles
  extraction candidates, reinforcement-based auto-promotion graduates
  candidates referenced 3+ times to active, and stale candidates expire
  after 14 days unreinforced.
- ~~reflection loop integration~~ — fully operational: capture (both
  clients) → reinforce (automatic) → extract (nightly cron, sonnet) →
  auto-triage (nightly, sonnet) → only needs_human reaches the user.

274
docs/universal-consumption.md
Normal file
@@ -0,0 +1,274 @@
# Universal Consumption — Connecting LLM Clients to AtoCore

Phase 1 of the Master Brain plan. Every LLM interaction across the ecosystem
pulls context from AtoCore automatically, without the user or agent having
to remember to ask for it.

## Architecture

```
            ┌─────────────────────┐
            │  AtoCore HTTP API   │  ← single source of truth
            │ http://dalidou:8100 │
            └──────────┬──────────┘
                       │
      ┌────────────────┼────────────────┐
      │                │                │
 ┌────┴───┐      ┌─────┴────┐     ┌────┴────┐
 │  MCP   │      │ OpenClaw │     │  HTTP   │
 │ server │      │  plugin  │     │  proxy  │
 └────┬───┘      └─────┬────┘     └────┬────┘
      │                │               │
Claude/Cursor/     OpenClaw      Codex/Ollama/
 Zed/Windsurf              any OpenAI-compat client
```

Three adapters, one HTTP backend. Each adapter is a thin passthrough — no
business logic duplicated.

---
## Adapter 1: MCP Server (Claude Desktop, Claude Code, Cursor, Zed, Windsurf)

The MCP server is `scripts/atocore_mcp.py` — stdlib-only Python, stdio
transport, wrapping the HTTP API. Claude-family clients see AtoCore as built-in
tools just like `Read` or `Bash`.

### Tools exposed

- **`atocore_context`** (most important): Full context pack for a query —
  Trusted Project State + memories + retrieved chunks. Use at the start of
  any project-related conversation to ground it.
- **`atocore_search`**: Semantic search over ingested documents (top-K chunks).
- **`atocore_memory_list`**: List active memories, filterable by project + type.
- **`atocore_memory_create`**: Propose a candidate memory (enters the triage queue).
- **`atocore_project_state`**: Get Trusted Project State entries by category.
- **`atocore_projects`**: List registered projects + aliases.
- **`atocore_health`**: Service status check.

### Registration

#### Claude Code (CLI)
```bash
claude mcp add atocore -- python C:/Users/antoi/ATOCore/scripts/atocore_mcp.py
claude mcp list  # verify: "atocore ... ✓ Connected"
```

#### Claude Desktop (GUI)
Edit `~/Library/Application Support/Claude/claude_desktop_config.json`
(macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "atocore": {
      "command": "python",
      "args": ["C:/Users/antoi/ATOCore/scripts/atocore_mcp.py"],
      "env": {
        "ATOCORE_URL": "http://dalidou:8100"
      }
    }
  }
}
```
Restart Claude Desktop.

#### Cursor / Zed / Windsurf
Similar JSON config in each tool's MCP settings. Consult their docs —
the config schema is standard MCP.

### Configuration

Environment variables the MCP server honors:

| Var | Default | Purpose |
|---|---|---|
| `ATOCORE_URL` | `http://dalidou:8100` | Where to reach AtoCore |
| `ATOCORE_TIMEOUT` | `10` | Per-request HTTP timeout (seconds) |

### Behavior

- Fail-open: if Dalidou is unreachable, tools return "AtoCore unavailable"
  error messages but don't crash the client.
- Zero business logic: every tool is a direct HTTP passthrough.
- stdlib only: no MCP SDK dependency.
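For orientation, the core of a stdlib-only stdio MCP server is a JSON-RPC dispatch loop. A minimal sketch of handling a `tools/call` request (the real protocol also requires the `initialize` and `tools/list` handshakes per the MCP spec, omitted here; the tool dispatch is a placeholder, not AtoCore's actual implementation):

```python
import json

def handle_tools_call(request: dict) -> dict:
    """Dispatch one JSON-RPC tools/call request and wrap the result MCP-style."""
    if request.get("method") != "tools/call":
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    params = request["params"]
    # A real server would map params["name"] to an HTTP call against AtoCore.
    text = f'{params["name"]} called with {json.dumps(params.get("arguments", {}))}'
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"content": [{"type": "text", "text": text}]}}
```

The stdio transport then reads one JSON message per line from stdin and writes the response dict to stdout, which is why no SDK dependency is needed.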

---

## Adapter 2: OpenClaw Plugin (`openclaw-plugins/atocore-capture/handler.js`)

The plugin on the T420 OpenClaw has two responsibilities:

1. **CAPTURE**: On `before_agent_start` + `llm_output`, POST completed turns
   to AtoCore `/interactions` (existing).
2. **PULL**: On `before_prompt_build`, call `/context/build` and inject the
   context pack via `prependContext` so the agent's system prompt includes
   AtoCore knowledge.

### Deployment

The plugin is loaded from
`/tmp/atocore-openclaw-capture-plugin/openclaw-plugins/atocore-capture/`
on the T420 (per OpenClaw's plugin config at `~/.openclaw/openclaw.json`).

To update:
```bash
scp openclaw-plugins/atocore-capture/handler.js \
    papa@192.168.86.39:/tmp/atocore-openclaw-capture-plugin/openclaw-plugins/atocore-capture/index.js
ssh papa@192.168.86.39 'systemctl --user restart openclaw-gateway'
```

Verify in the gateway logs: look for "ready (7 plugins: acpx, atocore-capture, ...)"

### Configuration (env vars set on T420)

| Var | Default | Purpose |
|---|---|---|
| `ATOCORE_BASE_URL` | `http://dalidou:8100` | AtoCore HTTP endpoint |
| `ATOCORE_PULL_DISABLED` | (unset) | Set to `1` to disable context pull |

### Behavior

- Fail-open: AtoCore unreachable = no injection, no capture; the agent runs
  normally.
- 6s timeout on context pull, 10s on capture — won't stall the agent.
- Context pack prepended as a clearly bracketed block so the agent can see
  it's auto-injected grounding info.

---
## Adapter 3: HTTP Proxy (`scripts/atocore_proxy.py`)
|
||||||
|
|
||||||
|
A stdlib-only OpenAI-compatible HTTP proxy. Sits between any
|
||||||
|
OpenAI-API-speaking client and the real provider, enriches every
|
||||||
|
`/chat/completions` request with AtoCore context.
|
||||||
|
|
||||||
|
Works with:
|
||||||
|
- **Codex CLI** (OpenAI-compatible endpoint)
|
||||||
|
- **Ollama** (has OpenAI-compatible `/v1` endpoint since 0.1.24)
|
||||||
|
- **LiteLLM**, **llama.cpp server**, custom agents
|
||||||
|
- Anything that can be pointed at a custom base URL
|
||||||
|
|
||||||
|
### Start it
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# For Ollama (local models):
|
||||||
|
ATOCORE_UPSTREAM=http://localhost:11434/v1 \
|
||||||
|
python scripts/atocore_proxy.py
|
||||||
|
|
||||||
|
# For OpenAI cloud:
|
||||||
|
ATOCORE_UPSTREAM=https://api.openai.com/v1 \
|
||||||
|
ATOCORE_CLIENT_LABEL=codex \
|
||||||
|
python scripts/atocore_proxy.py
|
||||||
|
|
||||||
|
# Test:
|
||||||
|
curl http://127.0.0.1:11435/healthz
|
||||||
|
```
|
||||||
|
|
||||||
|
### Point a client at it

Set the client's OpenAI base URL to `http://127.0.0.1:11435/v1`.

#### Ollama example

```bash
OPENAI_BASE_URL=http://127.0.0.1:11435/v1 \
  some-openai-client --model llama3:8b
```

#### Codex CLI

Set `OPENAI_BASE_URL=http://127.0.0.1:11435/v1` in your codex config.

### Configuration

| Var | Default | Purpose |
|---|---|---|
| `ATOCORE_URL` | `http://dalidou:8100` | AtoCore HTTP endpoint |
| `ATOCORE_UPSTREAM` | (required) | Real provider base URL |
| `ATOCORE_PROXY_PORT` | `11435` | Proxy listen port |
| `ATOCORE_PROXY_HOST` | `127.0.0.1` | Proxy bind address |
| `ATOCORE_CLIENT_LABEL` | `proxy` | Client id in captures |
| `ATOCORE_INJECT` | `1` | Inject context (set `0` to disable) |
| `ATOCORE_CAPTURE` | `1` | Capture interactions (set `0` to disable) |

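The table above boils down to plain environment lookups with defaults; a minimal sketch of the defaulting behavior (the function name and exact parsing are illustrative, not the actual code in `atocore_proxy.py`):

```python
def load_proxy_config(env: dict) -> dict:
    """Resolve proxy settings from an environment mapping.

    Defaults mirror the configuration table; ATOCORE_UPSTREAM has no
    default because it is required. Illustrative sketch only.
    """
    upstream = env.get("ATOCORE_UPSTREAM")
    if not upstream:
        raise SystemExit("ATOCORE_UPSTREAM is required")
    return {
        "atocore_url": env.get("ATOCORE_URL", "http://dalidou:8100"),
        "upstream": upstream,
        "port": int(env.get("ATOCORE_PROXY_PORT", "11435")),
        "host": env.get("ATOCORE_PROXY_HOST", "127.0.0.1"),
        "client_label": env.get("ATOCORE_CLIENT_LABEL", "proxy"),
        # "1" enables, "0" disables — any other value treated as enabled
        "inject": env.get("ATOCORE_INJECT", "1") != "0",
        "capture": env.get("ATOCORE_CAPTURE", "1") != "0",
    }
```

Passing the mapping in (rather than reading `os.environ` directly) keeps the resolution testable.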
### Behavior

- GET requests (model listing etc.) pass through unchanged.
- POST to `/chat/completions` (or `/v1/chat/completions`) gets enriched:
  1. Last user message extracted as the query
  2. AtoCore `/context/build` called with a 6s timeout
  3. Pack injected as a system message (or prepended to the existing system message)
  4. Enriched body forwarded to the upstream
  5. After success, the interaction is POSTed to `/interactions` in the background
- Fail-open: AtoCore unreachable = pass through without injection.
- Streaming responses: currently buffered (not a true stream). Good enough for
  most cases; can be upgraded later if needed.

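The injection step (pack as a system message, or prepended to an existing one) can be sketched as follows; this is a minimal illustration assuming the standard OpenAI `messages` body shape, not the exact code in `atocore_proxy.py`:

```python
def inject_context(body: dict, context_pack: str) -> dict:
    """Prepend an AtoCore context pack to a /chat/completions body.

    If the first message is a system message, the pack is prepended to
    it; otherwise a new system message is inserted at the front. The
    original body is left untouched. Illustrative sketch only.
    """
    messages = list(body.get("messages", []))
    block = (
        "--- AtoCore Context (auto-injected) ---\n"
        + context_pack
        + "\n--- End AtoCore Context ---\n"
    )
    if messages and messages[0].get("role") == "system":
        messages[0] = {
            **messages[0],
            "content": block + "\n" + (messages[0].get("content") or ""),
        }
    else:
        messages.insert(0, {"role": "system", "content": block})
    return {**body, "messages": messages}
```

Returning a new dict (rather than mutating in place) keeps the original request available for the background capture step.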
### Running as a service

On Linux, create `~/.config/systemd/user/atocore-proxy.service`:

```ini
[Unit]
Description=AtoCore HTTP proxy

[Service]
Environment=ATOCORE_UPSTREAM=http://localhost:11434/v1
Environment=ATOCORE_CLIENT_LABEL=ollama
ExecStart=/usr/bin/python3 /path/to/scripts/atocore_proxy.py
Restart=on-failure

[Install]
WantedBy=default.target
```

Then: `systemctl --user enable --now atocore-proxy`

On Windows, register via Task Scheduler (similar pattern to the backup task)
or use NSSM to install it as a service.

---

## Verification Checklist

Fresh end-to-end test to confirm Phase 1 is working:

### For Claude Code (MCP)

1. Open a new Claude Code session (not this one).
2. Ask: "what do we know about p06 polisher's control architecture?"
3. Claude should invoke `atocore_context` or `atocore_project_state`
   on its own and answer grounded in AtoCore data.

### For OpenClaw (plugin pull)

1. Send a Discord message to OpenClaw: "what's the status on p04?"
2. Check T420 logs: `journalctl --user -u openclaw-gateway --since "1 min ago" | grep atocore-pull`
3. Expect: `atocore-pull:injected project=p04-gigabit chars=NNN`

### For proxy (any OpenAI-compat client)

1. Start the proxy with the appropriate upstream.
2. Run a client query through it.
3. Check stderr: `[atocore-proxy] inject: project=... chars=...`
4. Check `curl http://127.0.0.1:8100/interactions?client=proxy` — it should
   show the captured turn.

---
## Why not just MCP everywhere?

MCP is great for Claude-family clients, but:

- It's not natively supported by Codex CLI, Ollama, or OpenAI's own API
- There's no universal "attach MCP" mechanism across LLM runtimes
- HTTP APIs are truly universal

The HTTP API is the truth; each adapter is the thinnest possible shim for its
ecosystem. When new adapters are needed (Gemini CLI, Claude Code plugin
system, etc.), they follow the same pattern.

---

## Future enhancements

- **Streaming passthrough** in the proxy (currently buffered for simplicity)
- **Response grounding check**: parse assistant output for references to
  injected context, count reinforcement events
- **Per-client metrics** in the dashboard: how often each client pulls,
  context pack size, injection rate
- **Smart project detection**: today we use keyword matching; we could use
  AtoCore's own project resolver endpoint

docs/windows-backup-setup.md (new file, 140 lines)
@@ -0,0 +1,140 @@

# Windows Main-Computer Backup Setup

The AtoCore backup pipeline runs nightly on Dalidou and already pushes snapshots
off-host to the T420 (`papa@192.168.86.39`). This doc sets up a **second**,
pull-based daily backup to your Windows main computer at
`C:\Users\antoi\Documents\ATOCore_Backups\`.

Pull-based means the Windows machine pulls from Dalidou. This is simpler than
push because Dalidou doesn't need SSH keys to reach Windows, and the backup
only runs when the Windows machine is powered on and can reach Dalidou.

## Prerequisites

- Windows 10/11 with the OpenSSH client (built in since Win10 1809)
- SSH key-based auth to `papa@dalidou` already working (you're using it today)
- `C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1` present

## Test the script manually

```powershell
powershell.exe -ExecutionPolicy Bypass -File `
  C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1
```

Expected output:

```
[timestamp] === AtoCore backup pull starting ===
[timestamp] Dalidou reachable.
[timestamp] Pulling snapshots via scp...
[timestamp] Pulled N snapshots successfully (total X MB, latest: ...)
[timestamp] === backup complete ===
```

Target directory: `C:\Users\antoi\Documents\ATOCore_Backups\snapshots\`
Logs: `C:\Users\antoi\Documents\ATOCore_Backups\_logs\backup-*.log`

## Register the Task Scheduler task

### Option A — automatic registration (recommended)

Run this PowerShell command **as your user** (no admin needed — uses an HKCU task):

```powershell
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
  -Argument '-ExecutionPolicy Bypass -NonInteractive -WindowStyle Hidden -File C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1'

# Run daily at 10:00 local time; if missed (computer off), run at next logon
$trigger = New-ScheduledTaskTrigger -Daily -At 10:00AM
$trigger.StartBoundary = (Get-Date -Format 'yyyy-MM-ddTHH:mm:ss')

$settings = New-ScheduledTaskSettingsSet `
  -AllowStartIfOnBatteries `
  -DontStopIfGoingOnBatteries `
  -StartWhenAvailable `
  -ExecutionTimeLimit (New-TimeSpan -Minutes 10) `
  -RestartCount 2 `
  -RestartInterval (New-TimeSpan -Minutes 30)

Register-ScheduledTask -TaskName 'AtoCore Backup Pull' `
  -Description 'Daily pull of AtoCore backup snapshots from Dalidou' `
  -Action $action -Trigger $trigger -Settings $settings `
  -User $env:USERNAME
```

Key settings:

- `-StartWhenAvailable`: if the computer was off at 10:00, run as soon as it
  comes online
- `-AllowStartIfOnBatteries`: works on laptop battery too
- `-ExecutionTimeLimit` (10 min): kill hung tasks
- `-RestartCount 2`: retry twice on failure (e.g. Dalidou temporarily unreachable)

### Option B — Task Scheduler GUI

1. Open Task Scheduler (`taskschd.msc`)
2. Create Basic Task → name: `AtoCore Backup Pull`
3. Trigger: Daily, 10:00 AM, recurring every 1 day
4. Action: Start a program
   - Program: `powershell.exe`
   - Arguments: `-ExecutionPolicy Bypass -NonInteractive -WindowStyle Hidden -File "C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1"`
5. Finish, then edit the task:
   - Settings tab: check "Run task as soon as possible after a scheduled start is missed"
   - Settings tab: "If the task fails, restart every 30 minutes, up to 2 times"
   - Conditions tab: uncheck "Start only if the computer is on AC power" (if you want it to run on battery)

## Verify

After the first scheduled run:

```powershell
# Most recent log
Get-ChildItem C:\Users\antoi\Documents\ATOCore_Backups\_logs\ |
  Sort-Object Name -Descending |
  Select-Object -First 1 |
  Get-Content

# Latest snapshots present?
Get-ChildItem C:\Users\antoi\Documents\ATOCore_Backups\snapshots\ |
  Sort-Object Name -Descending |
  Select-Object -First 3
```

## Unregister (if needed)

```powershell
Unregister-ScheduledTask -TaskName 'AtoCore Backup Pull' -Confirm:$false
```

## How it behaves

- **Computer on, Dalidou reachable**: pulls the latest snapshots silently in ~15s
- **Computer on, Dalidou unreachable** (remote work, network down): fail-open —
  exits without error and logs "Dalidou unreachable"
- **Computer off at the scheduled time**: Task Scheduler runs it as soon as the
  computer wakes up
- **Many days off**: one run catches up; scp only transfers files not already
  present (snapshots are date-stamped directories, idempotent overwrites)

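The catch-up behavior comes down to copying only snapshot directories that aren't already present locally; a minimal local-filesystem sketch (the real script does this over scp, and the function name is illustrative):

```python
import shutil
from pathlib import Path

def pull_missing_snapshots(remote_root: Path, local_root: Path) -> list:
    """Copy only snapshot directories not already present locally.

    Snapshots are date-stamped directories, so a single run after many
    days off transfers exactly the missing ones, and re-running is a
    no-op. Illustrative sketch of the scp-based behavior.
    """
    local_root.mkdir(parents=True, exist_ok=True)
    pulled = []
    for snap in sorted(remote_root.iterdir()):
        dest = local_root / snap.name
        if snap.is_dir() and not dest.exists():
            shutil.copytree(snap, dest)
            pulled.append(snap.name)
    return pulled
```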
## What gets backed up

The snapshots tree contains:

- `YYYYMMDDTHHMMSSZ/config/` — project registry, AtoCore config
- `YYYYMMDDTHHMMSSZ/db/` — SQLite snapshot of all memory, state, interactions
- `YYYYMMDDTHHMMSSZ/backup-metadata.json` — SHA, timestamp, source info

Chroma vectors are **not** in the snapshot by default
(`ATOCORE_BACKUP_CHROMA=false` on Dalidou). They can be rebuilt from the
source documents if lost. To include them, set `ATOCORE_BACKUP_CHROMA=true`
in the Dalidou cron environment.

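A pulled snapshot can be sanity-checked against that layout; a minimal sketch that validates only the documented structure (`check_snapshot` is illustrative; the metadata file's contents are treated as opaque since its fields aren't specified here):

```python
from pathlib import Path

def check_snapshot(snapshot_dir: Path) -> list:
    """Return a list of problems with a pulled snapshot directory.

    Checks only the layout described above: config/, db/, and
    backup-metadata.json must exist. An empty list means the snapshot
    looks structurally complete.
    """
    problems = []
    for sub in ("config", "db"):
        if not (snapshot_dir / sub).is_dir():
            problems.append(f"missing directory: {sub}/")
    if not (snapshot_dir / "backup-metadata.json").is_file():
        problems.append("missing file: backup-metadata.json")
    return problems
```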
## Three-tier backup summary

After this setup:

| Tier | Location | Cadence | Purpose |
|---|---|---|---|
| Live | Dalidou `/srv/storage/atocore/backups/snapshots/` | Nightly 03:00 UTC | Fast restore |
| Off-host | T420 `papa@192.168.86.39:/home/papa/atocore-backups/` | Nightly after Dalidou | Survives Dalidou dying |
| User machine | `C:\Users\antoi\Documents\ATOCore_Backups\` | Daily 10:00 local | Survives full home-network failure |

Three independent copies. Any two can be lost simultaneously without data loss.

openclaw-plugins/atocore-capture/handler.js (new file, 146 lines)
@@ -0,0 +1,146 @@

/**
 * AtoCore OpenClaw plugin — capture + pull.
 *
 * Two responsibilities:
 *
 * 1. CAPTURE (existing): On before_agent_start, buffer the user prompt.
 *    On llm_output, POST prompt+response to AtoCore /interactions.
 *    This is the "write" side — OpenClaw turns feed AtoCore's memory.
 *
 * 2. PULL (Phase 1 master brain): On before_prompt_build, call AtoCore
 *    /context/build and inject the returned context via prependContext.
 *    Every OpenClaw response is automatically grounded in what AtoCore
 *    knows (project state, memories, relevant chunks).
 *
 * Fail-open throughout: AtoCore unreachable = no injection, no capture,
 * never blocks the agent.
 */

import { definePluginEntry } from "openclaw/plugin-sdk/core";

const BASE_URL = process.env.ATOCORE_BASE_URL || "http://dalidou:8100";
const MIN_LEN = 15;
const MAX_RESP = 50000;
const CONTEXT_TIMEOUT_MS = 6000;
const CAPTURE_TIMEOUT_MS = 10000;

function trim(v) { return typeof v === "string" ? v.trim() : ""; }
function trunc(t, m) { return !t || t.length <= m ? t : t.slice(0, m) + "\n\n[truncated]"; }

function detectProject(prompt) {
  const lower = (prompt || "").toLowerCase();
  const hints = [
    ["p04", "p04-gigabit"],
    ["gigabit", "p04-gigabit"],
    ["p05", "p05-interferometer"],
    ["interferometer", "p05-interferometer"],
    ["p06", "p06-polisher"],
    ["polisher", "p06-polisher"],
    ["fullum", "p06-polisher"],
    ["abb", "abb-space"],
    ["atomizer", "atomizer-v2"],
    ["atocore", "atocore"],
  ];
  for (const [token, proj] of hints) {
    if (lower.includes(token)) return proj;
  }
  return "";
}

export default definePluginEntry({
  register(api) {
    const log = api.logger;
    let lastPrompt = null;

    // --- PULL: inject AtoCore context into every prompt ---
    api.on("before_prompt_build", async (event, ctx) => {
      if (process.env.ATOCORE_PULL_DISABLED === "1") return;
      const prompt = trim(event?.prompt || "");
      if (prompt.length < MIN_LEN) return;

      const project = detectProject(prompt);

      try {
        const res = await fetch(BASE_URL.replace(/\/$/, "") + "/context/build", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ prompt, project }),
          signal: AbortSignal.timeout(CONTEXT_TIMEOUT_MS),
        });
        if (!res.ok) {
          log.info("atocore-pull:http_error", { status: res.status });
          return;
        }
        const data = await res.json();
        const contextPack = data.formatted_context || "";
        if (!contextPack.trim()) return;

        log.info("atocore-pull:injected", {
          project: project || "(none)",
          chars: contextPack.length,
        });

        return {
          prependContext:
            "--- AtoCore Context (auto-injected) ---\n" +
            contextPack +
            "\n--- End AtoCore Context ---\n",
        };
      } catch (err) {
        log.info("atocore-pull:error", { error: String(err).slice(0, 200) });
      }
    });

    // --- CAPTURE: buffer user prompts on agent start ---
    api.on("before_agent_start", async (event, ctx) => {
      const prompt = trim(event?.prompt || event?.cleanedBody || "");
      if (prompt.length < MIN_LEN || prompt.startsWith("<")) {
        lastPrompt = null;
        return;
      }
      lastPrompt = { text: prompt, sessionKey: ctx?.sessionKey || "", ts: Date.now() };
      log.info("atocore-capture:prompt_buffered", { len: prompt.length });
    });

    // --- CAPTURE: send completed turns to AtoCore ---
    api.on("llm_output", async (event, ctx) => {
      if (!lastPrompt) return;
      const texts = Array.isArray(event?.assistantTexts) ? event.assistantTexts : [];
      const response = trunc(trim(texts.join("\n\n")), MAX_RESP);
      if (!response) return;

      const prompt = lastPrompt.text;
      const sessionKey = lastPrompt.sessionKey || ctx?.sessionKey || "";
      const project = detectProject(prompt);
      lastPrompt = null;

      log.info("atocore-capture:posting", {
        promptLen: prompt.length,
        responseLen: response.length,
        project: project || "(none)",
      });

      fetch(BASE_URL.replace(/\/$/, "") + "/interactions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          prompt,
          response,
          client: "openclaw",
          session_id: sessionKey,
          project,
          reinforce: true,
        }),
        signal: AbortSignal.timeout(CAPTURE_TIMEOUT_MS),
      }).then(res => {
        log.info("atocore-capture:posted", { status: res.status });
      }).catch(err => {
        log.warn("atocore-capture:post_error", { error: String(err).slice(0, 200) });
      });
    });

    api.on("session_end", async () => {
      lastPrompt = null;
    });
  }
});

scripts/atocore_mcp.py (new file, 479 lines)
@@ -0,0 +1,479 @@

#!/usr/bin/env python3
|
||||||
|
"""AtoCore MCP server — stdio transport, stdlib-only.
|
||||||
|
|
||||||
|
Exposes the AtoCore HTTP API as MCP tools so any MCP-aware client
|
||||||
|
(Claude Desktop, Claude Code, Cursor, Zed, Windsurf) can pull
|
||||||
|
context + memories automatically at prompt time.
|
||||||
|
|
||||||
|
Design:
|
||||||
|
- stdlib only (no mcp SDK dep) — MCP protocol is simple JSON-RPC
|
||||||
|
over stdio, and AtoCore's philosophy prefers stdlib.
|
||||||
|
- Thin wrapper: every tool is a direct pass-through to an HTTP
|
||||||
|
endpoint. Zero business logic here — the AtoCore server is
|
||||||
|
the single source of truth.
|
||||||
|
- Fail-open: if AtoCore is unreachable, tools return a graceful
|
||||||
|
"unavailable" message rather than crashing the client.
|
||||||
|
|
||||||
|
Protocol: MCP 2024-11-05 / 2025-03-26 compatible
|
||||||
|
https://spec.modelcontextprotocol.io/specification/
|
||||||
|
|
||||||
|
Usage (standalone test):
|
||||||
|
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"0"}}}' | python atocore_mcp.py
|
||||||
|
|
||||||
|
Register with Claude Code:
|
||||||
|
claude mcp add atocore -- python /path/to/atocore_mcp.py
|
||||||
|
|
||||||
|
Environment:
|
||||||
|
ATOCORE_URL base URL of the AtoCore HTTP API (default http://dalidou:8100)
|
||||||
|
ATOCORE_TIMEOUT per-request HTTP timeout seconds (default 10)
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import urllib.error
|
||||||
|
import urllib.parse
|
||||||
|
import urllib.request
|
||||||
|
|
||||||
|
# --- Configuration ---
|
||||||
|
|
||||||
|
ATOCORE_URL = os.environ.get("ATOCORE_URL", "http://dalidou:8100").rstrip("/")
|
||||||
|
HTTP_TIMEOUT = float(os.environ.get("ATOCORE_TIMEOUT", "10"))
|
||||||
|
SERVER_NAME = "atocore"
|
||||||
|
SERVER_VERSION = "0.1.0"
|
||||||
|
PROTOCOL_VERSION = "2024-11-05"
|
||||||
|
|
||||||
|
|
||||||
|
# --- stderr logging (stdout is reserved for JSON-RPC) ---
|
||||||
|
|
||||||
|
def log(msg: str) -> None:
|
||||||
|
print(f"[atocore-mcp] {msg}", file=sys.stderr, flush=True)
|
||||||
|
|
||||||
|
|
||||||
|
# --- HTTP helpers ---
|
||||||
|
|
||||||
|
def http_get(path: str, params: dict | None = None) -> dict:
|
||||||
|
"""GET a JSON response from AtoCore. Raises on HTTP error."""
|
||||||
|
url = ATOCORE_URL + path
|
||||||
|
if params:
|
||||||
|
# Drop empty params so the URL stays clean
|
||||||
|
clean = {k: v for k, v in params.items() if v not in (None, "", [], {})}
|
||||||
|
if clean:
|
||||||
|
url += "?" + urllib.parse.urlencode(clean)
|
||||||
|
req = urllib.request.Request(url, headers={"Accept": "application/json"})
|
||||||
|
with urllib.request.urlopen(req, timeout=HTTP_TIMEOUT) as resp:
|
||||||
|
return json.loads(resp.read().decode("utf-8"))
|
||||||
|
|
||||||
|
|
||||||
|
def http_post(path: str, body: dict) -> dict:
|
||||||
|
url = ATOCORE_URL + path
|
||||||
|
data = json.dumps(body).encode("utf-8")
|
||||||
|
req = urllib.request.Request(
|
||||||
|
url, data=data, method="POST",
|
||||||
|
headers={"Content-Type": "application/json", "Accept": "application/json"},
|
||||||
|
)
|
||||||
|
with urllib.request.urlopen(req, timeout=HTTP_TIMEOUT) as resp:
|
||||||
|
return json.loads(resp.read().decode("utf-8"))
|
||||||
|
|
||||||
|
|
||||||
|
def safe_call(fn, *args, **kwargs) -> tuple[dict | None, str | None]:
|
||||||
|
"""Run an HTTP call, return (result, error_message_or_None)."""
|
||||||
|
try:
|
||||||
|
return fn(*args, **kwargs), None
|
||||||
|
except urllib.error.HTTPError as e:
|
||||||
|
try:
|
||||||
|
body = e.read().decode("utf-8", errors="replace")
|
||||||
|
except Exception:
|
||||||
|
body = ""
|
||||||
|
return None, f"AtoCore HTTP {e.code}: {body[:200]}"
|
||||||
|
except urllib.error.URLError as e:
|
||||||
|
return None, f"AtoCore unreachable at {ATOCORE_URL}: {e.reason}"
|
||||||
|
except Exception as e:
|
||||||
|
return None, f"AtoCore error: {type(e).__name__}: {str(e)[:200]}"
|
||||||
|
|
||||||
|
|
||||||
|
# --- Tool definitions ---
|
||||||
|
# Each tool: name, description, inputSchema (JSON Schema), handler
|
||||||
|
|
||||||
|
def _tool_context(args: dict) -> str:
|
||||||
|
"""Build a full context pack for a query — state + memories + retrieved chunks."""
|
||||||
|
query = (args.get("query") or "").strip()
|
||||||
|
project = args.get("project") or ""
|
||||||
|
if not query:
|
||||||
|
return "Error: 'query' is required."
|
||||||
|
result, err = safe_call(http_post, "/context/build", {
|
||||||
|
"prompt": query, "project": project,
|
||||||
|
})
|
||||||
|
if err:
|
||||||
|
return f"AtoCore context unavailable: {err}"
|
||||||
|
pack = result.get("formatted_context", "") or ""
|
||||||
|
if not pack.strip():
|
||||||
|
return "(AtoCore returned an empty context pack — no matching state, memories, or chunks.)"
|
||||||
|
return pack
|
||||||
|
|
||||||
|
|
||||||
|
def _tool_search(args: dict) -> str:
|
||||||
|
"""Retrieval only — raw chunks ranked by semantic similarity."""
|
||||||
|
query = (args.get("query") or "").strip()
|
||||||
|
project = args.get("project") or ""
|
||||||
|
top_k = int(args.get("top_k") or 5)
|
||||||
|
if not query:
|
||||||
|
return "Error: 'query' is required."
|
||||||
|
result, err = safe_call(http_post, "/query", {
|
||||||
|
"prompt": query, "project": project, "top_k": top_k,
|
||||||
|
})
|
||||||
|
if err:
|
||||||
|
return f"AtoCore search unavailable: {err}"
|
||||||
|
chunks = result.get("results", []) or []
|
||||||
|
if not chunks:
|
||||||
|
return "No results."
|
||||||
|
lines = []
|
||||||
|
for i, c in enumerate(chunks, 1):
|
||||||
|
src = c.get("source_file") or c.get("title") or "unknown"
|
||||||
|
heading = c.get("heading_path") or ""
|
||||||
|
snippet = (c.get("content") or "")[:300]
|
||||||
|
score = c.get("score", 0.0)
|
||||||
|
head_str = f" ({heading})" if heading else ""
|
||||||
|
lines.append(f"[{i}] score={score:.3f} source={src}{head_str}\n{snippet}")
|
||||||
|
return "\n\n".join(lines)
|
||||||
|
|
||||||
|
|
||||||
|
def _tool_memory_list(args: dict) -> str:
|
||||||
|
"""List active memories, optionally filtered by project and type."""
|
||||||
|
params = {
|
||||||
|
"status": "active",
|
||||||
|
"limit": int(args.get("limit") or 20),
|
||||||
|
}
|
||||||
|
if args.get("project"):
|
||||||
|
params["project"] = args["project"]
|
||||||
|
if args.get("memory_type"):
|
||||||
|
params["memory_type"] = args["memory_type"]
|
||||||
|
result, err = safe_call(http_get, "/memory", params=params)
|
||||||
|
if err:
|
||||||
|
return f"AtoCore memory list unavailable: {err}"
|
||||||
|
memories = result.get("memories", []) or []
|
||||||
|
if not memories:
|
||||||
|
return "No memories match."
|
||||||
|
lines = []
|
||||||
|
for m in memories:
|
||||||
|
mt = m.get("memory_type", "?")
|
||||||
|
proj = m.get("project") or "(global)"
|
||||||
|
conf = m.get("confidence", 0.0)
|
||||||
|
refs = m.get("reference_count", 0)
|
||||||
|
content = (m.get("content") or "")[:250]
|
||||||
|
lines.append(f"[{mt}/{proj}] conf={conf:.2f} refs={refs}\n {content}")
|
||||||
|
return "\n\n".join(lines)
|
||||||
|
|
||||||
|
|
||||||
|
def _tool_memory_create(args: dict) -> str:
|
||||||
|
"""Create a candidate memory (enters the triage queue)."""
|
||||||
|
memory_type = (args.get("memory_type") or "").strip()
|
||||||
|
content = (args.get("content") or "").strip()
|
||||||
|
project = args.get("project") or ""
|
||||||
|
confidence = float(args.get("confidence") or 0.5)
|
||||||
|
if not memory_type or not content:
|
||||||
|
return "Error: 'memory_type' and 'content' are required."
|
||||||
|
valid_types = ["identity", "preference", "project", "episodic", "knowledge", "adaptation"]
|
||||||
|
if memory_type not in valid_types:
|
||||||
|
return f"Error: memory_type must be one of {valid_types}."
|
||||||
|
result, err = safe_call(http_post, "/memory", {
|
||||||
|
"memory_type": memory_type,
|
||||||
|
"content": content,
|
||||||
|
"project": project,
|
||||||
|
"confidence": confidence,
|
||||||
|
"status": "candidate",
|
||||||
|
})
|
||||||
|
if err:
|
||||||
|
return f"AtoCore memory create failed: {err}"
|
||||||
|
mid = result.get("id", "?")
|
||||||
|
return f"Candidate memory created: id={mid} type={memory_type} project={project or '(global)'}"
|
||||||
|
|
||||||
|
|
||||||
|
def _tool_project_state(args: dict) -> str:
|
||||||
|
"""Get Trusted Project State entries for a project."""
|
||||||
|
project = (args.get("project") or "").strip()
|
||||||
|
category = args.get("category") or ""
|
||||||
|
if not project:
|
||||||
|
return "Error: 'project' is required."
|
||||||
|
path = f"/project/state/{urllib.parse.quote(project)}"
|
||||||
|
params = {"category": category} if category else None
|
||||||
|
result, err = safe_call(http_get, path, params=params)
|
||||||
|
if err:
|
||||||
|
return f"AtoCore project state unavailable: {err}"
|
||||||
|
entries = result.get("entries", []) or result.get("state", []) or []
|
||||||
|
if not entries:
|
||||||
|
return f"No state entries for project '{project}'."
|
||||||
|
lines = []
|
||||||
|
for e in entries:
|
||||||
|
cat = e.get("category", "?")
|
||||||
|
key = e.get("key", "?")
|
||||||
|
value = (e.get("value") or "")[:300]
|
||||||
|
src = e.get("source") or ""
|
||||||
|
lines.append(f"[{cat}/{key}] (source: {src})\n {value}")
|
||||||
|
return "\n\n".join(lines)
|
||||||
|
|
||||||
|
|
||||||
|
def _tool_projects(args: dict) -> str:
|
||||||
|
"""List registered AtoCore projects."""
|
||||||
|
result, err = safe_call(http_get, "/projects")
|
||||||
|
if err:
|
||||||
|
return f"AtoCore projects unavailable: {err}"
|
||||||
|
projects = result.get("projects", []) or []
|
||||||
|
if not projects:
|
||||||
|
return "No projects registered."
|
||||||
|
lines = []
|
||||||
|
for p in projects:
|
||||||
|
pid = p.get("project_id") or p.get("id") or p.get("name") or "?"
|
||||||
|
aliases = p.get("aliases", []) or []
|
||||||
|
alias_str = f" (aliases: {', '.join(aliases)})" if aliases else ""
|
||||||
|
lines.append(f"- {pid}{alias_str}")
|
||||||
|
return "\n".join(lines)
|
||||||
|
|
||||||
|
|
||||||
|
def _tool_health(args: dict) -> str:
|
||||||
|
"""Check AtoCore service health."""
|
||||||
|
result, err = safe_call(http_get, "/health")
|
||||||
|
if err:
|
||||||
|
return f"AtoCore unreachable: {err}"
|
||||||
|
sha = result.get("build_sha", "?")[:8]
|
||||||
|
vectors = result.get("vectors_count", "?")
|
||||||
|
env = result.get("env", "?")
|
||||||
|
return f"AtoCore healthy: sha={sha} vectors={vectors} env={env}"
|
||||||
|
|
||||||
|
|
||||||
|
TOOLS = [
|
||||||
|
{
|
||||||
|
"name": "atocore_context",
|
||||||
|
"description": (
|
||||||
|
"Get the full AtoCore context pack for a user query. Returns "
|
||||||
|
"Trusted Project State (high trust), relevant memories, and "
|
||||||
|
"retrieved source chunks formatted for prompt injection. "
|
||||||
|
"Use this FIRST on any project-related query to ground the "
|
||||||
|
"conversation in what AtoCore already knows."
|
||||||
|
),
|
||||||
|
"inputSchema": {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"query": {"type": "string", "description": "The user's question or task"},
|
||||||
|
"project": {"type": "string", "description": "Project hint (e.g. 'p04-gigabit'); optional"},
|
||||||
|
},
|
||||||
|
"required": ["query"],
|
||||||
|
},
|
||||||
|
"handler": _tool_context,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "atocore_search",
|
||||||
|
"description": (
|
||||||
|
"Semantic search over AtoCore's ingested source documents. "
|
||||||
|
"Returns top-K ranked chunks. Use this when you need raw "
|
||||||
|
"references rather than a full context pack."
|
||||||
|
),
|
||||||
|
"inputSchema": {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"query": {"type": "string"},
|
||||||
|
"project": {"type": "string", "description": "optional project filter"},
|
||||||
|
"top_k": {"type": "integer", "minimum": 1, "maximum": 20, "default": 5},
|
||||||
|
},
|
||||||
|
"required": ["query"],
|
||||||
|
},
|
||||||
|
"handler": _tool_search,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "atocore_memory_list",
|
||||||
|
"description": (
|
||||||
|
"List active memories (curated facts, decisions, preferences). "
|
||||||
|
"Filter by project and/or memory_type. Use this to inspect what "
|
||||||
|
"AtoCore currently remembers about a topic."
|
||||||
|
),
|
||||||
|
"inputSchema": {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"project": {"type": "string"},
|
||||||
|
"memory_type": {
|
||||||
|
"type": "string",
|
||||||
|
"enum": ["identity", "preference", "project", "episodic", "knowledge", "adaptation"],
|
||||||
|
},
|
||||||
|
"limit": {"type": "integer", "minimum": 1, "maximum": 100, "default": 20},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
"handler": _tool_memory_list,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "atocore_memory_create",
|
||||||
|
"description": (
|
||||||
|
"Propose a new memory for AtoCore. Creates a CANDIDATE that "
|
||||||
|
"enters the triage queue for human/auto review — not immediately "
|
||||||
|
"active. Use this to capture durable facts/decisions that "
|
||||||
|
"should persist across sessions. Do NOT use for transient state "
|
||||||
|
"or session-specific notes."
|
||||||
|
),
|
||||||
|
"inputSchema": {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"memory_type": {
|
||||||
|
"type": "string",
|
||||||
|
"enum": ["identity", "preference", "project", "episodic", "knowledge", "adaptation"],
|
||||||
|
},
|
||||||
|
"content": {"type": "string", "description": "The fact/decision/preference to remember"},
|
||||||
|
"project": {"type": "string", "description": "project id if project-scoped; empty for global"},
|
||||||
|
"confidence": {"type": "number", "minimum": 0, "maximum": 1, "default": 0.5},
|
||||||
|
},
|
||||||
|
"required": ["memory_type", "content"],
|
||||||
|
},
|
||||||
|
"handler": _tool_memory_create,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "atocore_project_state",
|
||||||
|
"description": (
|
||||||
|
"Get Trusted Project State entries for a given project — the "
|
||||||
|
"highest-trust tier with curated decisions, requirements, "
|
||||||
|
"facts, contacts, milestones. Use this to look up authoritative "
|
||||||
|
"project info."
|
||||||
|
),
|
||||||
|
"inputSchema": {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"project": {"type": "string"},
|
||||||
|
"category": {
|
||||||
|
"type": "string",
|
||||||
|
"enum": ["status", "decision", "requirement", "contact", "milestone", "fact", "config"],
|
||||||
|
},
|
||||||
|
},
|
||||||
|
"required": ["project"],
|
||||||
|
},
|
||||||
|
"handler": _tool_project_state,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "atocore_projects",
|
||||||
|
"description": "List all registered AtoCore projects (id + aliases).",
|
||||||
|
"inputSchema": {"type": "object", "properties": {}},
|
||||||
|
"handler": _tool_projects,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "atocore_health",
|
||||||
|
"description": "Check AtoCore service health (build SHA, vector count, env).",
|
||||||
|
"inputSchema": {"type": "object", "properties": {}},
|
||||||
|
"handler": _tool_health,
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
|
# --- JSON-RPC handlers ---


def handle_initialize(params: dict) -> dict:
    return {
        "protocolVersion": PROTOCOL_VERSION,
        "capabilities": {
            "tools": {"listChanged": False},
        },
        "serverInfo": {"name": SERVER_NAME, "version": SERVER_VERSION},
    }


def handle_tools_list(params: dict) -> dict:
    return {
        "tools": [
            {"name": t["name"], "description": t["description"], "inputSchema": t["inputSchema"]}
            for t in TOOLS
        ]
    }


def handle_tools_call(params: dict) -> dict:
    tool_name = params.get("name", "")
    args = params.get("arguments", {}) or {}
    tool = next((t for t in TOOLS if t["name"] == tool_name), None)
    if tool is None:
        return {
            "content": [{"type": "text", "text": f"Unknown tool: {tool_name}"}],
            "isError": True,
        }
    try:
        text = tool["handler"](args)
    except Exception as e:
        log(f"tool {tool_name} raised: {e}")
        return {
            "content": [{"type": "text", "text": f"Tool error: {type(e).__name__}: {e}"}],
            "isError": True,
        }
    return {"content": [{"type": "text", "text": text}]}


def handle_ping(params: dict) -> dict:
    return {}


METHODS = {
    "initialize": handle_initialize,
    "tools/list": handle_tools_list,
    "tools/call": handle_tools_call,
    "ping": handle_ping,
}


# --- stdio main loop ---


def send(obj: dict) -> None:
    """Write a single-line JSON message to stdout and flush."""
    sys.stdout.write(json.dumps(obj, ensure_ascii=False) + "\n")
    sys.stdout.flush()


def make_response(req_id, result=None, error=None) -> dict:
    resp = {"jsonrpc": "2.0", "id": req_id}
    if error is not None:
        resp["error"] = error
    else:
        resp["result"] = result if result is not None else {}
    return resp


def main() -> int:
    log(f"starting (AtoCore at {ATOCORE_URL})")
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        try:
            msg = json.loads(line)
        except json.JSONDecodeError as e:
            log(f"parse error: {e}")
            continue

        method = msg.get("method", "")
        req_id = msg.get("id")
        params = msg.get("params", {}) or {}

        # Notifications (no id) don't need a response
        if req_id is None:
            if method == "notifications/initialized":
                log("client initialized")
            continue

        handler = METHODS.get(method)
        if handler is None:
            send(make_response(req_id, error={
                "code": -32601,
                "message": f"Method not found: {method}",
            }))
            continue

        try:
            result = handler(params)
            send(make_response(req_id, result=result))
        except Exception as e:
            log(f"handler {method} raised: {e}")
            send(make_response(req_id, error={
                "code": -32603,
                "message": f"Internal error: {type(e).__name__}: {e}",
            }))

    log("stdin closed, exiting")
    return 0


if __name__ == "__main__":
    sys.exit(main())
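For reference, the stdio loop above speaks newline-delimited JSON-RPC 2.0: one JSON object per line in each direction. A minimal self-contained sketch of the response framing — `make_response` is re-declared here for illustration (not imported from the server), and the `ping` exchange mirrors `handle_ping`, which returns an empty result:

```python
import json

# Sketch of the server's make_response() helper (re-declared, not imported).
def make_response(req_id, result=None, error=None):
    resp = {"jsonrpc": "2.0", "id": req_id}
    if error is not None:
        resp["error"] = error
    else:
        resp["result"] = result if result is not None else {}
    return resp

# One JSON object per line, exactly as the stdio loop expects.
request_line = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping", "params": {}})
msg = json.loads(request_line)

# "ping" returns an empty result, per handle_ping above.
response = make_response(msg["id"], result={})
print(json.dumps(response))
```

Errors follow the same shape with an `error` object instead of `result` (codes -32601 for unknown methods, -32603 for handler exceptions, matching the loop above).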
321	scripts/atocore_proxy.py	Normal file
@@ -0,0 +1,321 @@
#!/usr/bin/env python3
"""AtoCore Proxy — OpenAI-compatible HTTP middleware.

Acts as a drop-in layer for any client that speaks the OpenAI Chat
Completions API (Codex, Ollama, LiteLLM, custom agents). Sits between
the client and the real model provider:

    client -> atocore_proxy -> real_provider (OpenAI, Ollama, Anthropic, ...)

For each chat completion request:
  1. Extract the user's last message as the "query"
  2. Call AtoCore /context/build to get a context pack
  3. Inject the pack as a system message (or prepend to existing system)
  4. Forward the enriched request to the real provider
  5. Capture the full interaction back to AtoCore /interactions

Fail-open: if AtoCore is unreachable, the request passes through
unchanged. If the real provider fails, the error is propagated to the
client as-is.

Configuration (env vars):
    ATOCORE_URL           AtoCore base URL (default http://dalidou:8100)
    ATOCORE_UPSTREAM      real provider base URL (e.g. http://localhost:11434/v1 for Ollama)
    ATOCORE_PROXY_PORT    port to listen on (default 11435)
    ATOCORE_PROXY_HOST    bind address (default 127.0.0.1)
    ATOCORE_CLIENT_LABEL  client id recorded in captures (default "proxy")
    ATOCORE_CAPTURE       "1" to capture interactions back (default "1")
    ATOCORE_INJECT        "1" to inject context (default "1")

Usage:
    # Proxy for Ollama:
    ATOCORE_UPSTREAM=http://localhost:11434/v1 python atocore_proxy.py

    # Then point your client at http://localhost:11435/v1 instead of the
    # real provider.

Stdlib only — deliberate to keep the dependency footprint at zero.
"""

from __future__ import annotations

import http.server
import json
import os
import socketserver
import sys
import threading
import urllib.error
import urllib.parse
import urllib.request
from typing import Any

ATOCORE_URL = os.environ.get("ATOCORE_URL", "http://dalidou:8100").rstrip("/")
UPSTREAM_URL = os.environ.get("ATOCORE_UPSTREAM", "").rstrip("/")
PROXY_PORT = int(os.environ.get("ATOCORE_PROXY_PORT", "11435"))
PROXY_HOST = os.environ.get("ATOCORE_PROXY_HOST", "127.0.0.1")
CLIENT_LABEL = os.environ.get("ATOCORE_CLIENT_LABEL", "proxy")
CAPTURE_ENABLED = os.environ.get("ATOCORE_CAPTURE", "1") == "1"
INJECT_ENABLED = os.environ.get("ATOCORE_INJECT", "1") == "1"
ATOCORE_TIMEOUT = float(os.environ.get("ATOCORE_TIMEOUT", "6"))
UPSTREAM_TIMEOUT = float(os.environ.get("ATOCORE_UPSTREAM_TIMEOUT", "300"))

PROJECT_HINTS = [
    ("p04-gigabit", ["p04", "gigabit"]),
    ("p05-interferometer", ["p05", "interferometer"]),
    ("p06-polisher", ["p06", "polisher", "fullum"]),
    ("abb-space", ["abb"]),
    ("atomizer-v2", ["atomizer"]),
    ("atocore", ["atocore", "dalidou"]),
]


def log(msg: str) -> None:
    print(f"[atocore-proxy] {msg}", file=sys.stderr, flush=True)


def detect_project(text: str) -> str:
    lower = (text or "").lower()
    for proj, tokens in PROJECT_HINTS:
        if any(t in lower for t in tokens):
            return proj
    return ""


def get_last_user_message(body: dict) -> str:
    messages = body.get("messages", []) or []
    for m in reversed(messages):
        if m.get("role") == "user":
            content = m.get("content", "")
            if isinstance(content, list):
                # OpenAI multi-part content: extract text parts
                parts = [p.get("text", "") for p in content if p.get("type") == "text"]
                return "\n".join(parts)
            return str(content)
    return ""


def get_assistant_text(response: dict) -> str:
    """Extract assistant text from an OpenAI-style completion response."""
    choices = response.get("choices", []) or []
    if not choices:
        return ""
    msg = choices[0].get("message", {}) or {}
    content = msg.get("content", "")
    if isinstance(content, list):
        parts = [p.get("text", "") for p in content if p.get("type") == "text"]
        return "\n".join(parts)
    return str(content)


def fetch_context(query: str, project: str) -> str:
    """Pull a context pack from AtoCore. Returns '' on any failure."""
    if not INJECT_ENABLED or not query:
        return ""
    try:
        data = json.dumps({"prompt": query, "project": project}).encode("utf-8")
        req = urllib.request.Request(
            ATOCORE_URL + "/context/build",
            data=data,
            method="POST",
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=ATOCORE_TIMEOUT) as resp:
            result = json.loads(resp.read().decode("utf-8"))
            return result.get("formatted_context", "") or ""
    except Exception as e:
        log(f"context fetch failed: {type(e).__name__}: {e}")
        return ""


def capture_interaction(prompt: str, response: str, project: str) -> None:
    """POST the completed turn back to AtoCore. Fire-and-forget."""
    if not CAPTURE_ENABLED or not prompt or not response:
        return

    def _post():
        try:
            data = json.dumps({
                "prompt": prompt,
                "response": response,
                "client": CLIENT_LABEL,
                "project": project,
                "reinforce": True,
            }).encode("utf-8")
            req = urllib.request.Request(
                ATOCORE_URL + "/interactions",
                data=data,
                method="POST",
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=ATOCORE_TIMEOUT)
        except Exception as e:
            log(f"capture failed: {type(e).__name__}: {e}")

    threading.Thread(target=_post, daemon=True).start()


def inject_context(body: dict, context_pack: str) -> dict:
    """Prepend the AtoCore context as a system message, or augment existing."""
    if not context_pack.strip():
        return body
    header = "--- AtoCore Context (auto-injected) ---\n"
    footer = "\n--- End AtoCore Context ---\n"
    injection = header + context_pack + footer

    messages = list(body.get("messages", []) or [])
    if messages and messages[0].get("role") == "system":
        # Augment existing system message
        existing = messages[0].get("content", "") or ""
        if isinstance(existing, list):
            # multi-part: prepend a text part
            messages[0]["content"] = [{"type": "text", "text": injection}] + existing
        else:
            messages[0]["content"] = injection + "\n" + str(existing)
    else:
        messages.insert(0, {"role": "system", "content": injection})

    body["messages"] = messages
    return body


def forward_to_upstream(body: dict, headers: dict[str, str], path: str) -> tuple[int, dict]:
    """Forward the enriched body to the upstream provider. Returns (status, response_dict)."""
    if not UPSTREAM_URL:
        return 503, {"error": {"message": "ATOCORE_UPSTREAM not configured"}}
    url = UPSTREAM_URL + path
    data = json.dumps(body).encode("utf-8")
    # Strip hop-by-hop / host-specific headers
    fwd_headers = {"Content-Type": "application/json"}
    for k, v in headers.items():
        lk = k.lower()
        if lk in ("authorization", "x-api-key", "anthropic-version"):
            fwd_headers[k] = v
    req = urllib.request.Request(url, data=data, method="POST", headers=fwd_headers)
    try:
        with urllib.request.urlopen(req, timeout=UPSTREAM_TIMEOUT) as resp:
            return resp.status, json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as e:
        try:
            body_bytes = e.read()
            payload = json.loads(body_bytes.decode("utf-8"))
        except Exception:
            payload = {"error": {"message": f"upstream HTTP {e.code}"}}
        return e.code, payload
    except Exception as e:
        log(f"upstream error: {e}")
        return 502, {"error": {"message": f"upstream unreachable: {e}"}}


class ProxyHandler(http.server.BaseHTTPRequestHandler):
    # Silence default request logging (we log what matters ourselves)
    def log_message(self, format: str, *args: Any) -> None:
        pass

    def _read_body(self) -> dict:
        length = int(self.headers.get("Content-Length", "0") or "0")
        if length <= 0:
            return {}
        raw = self.rfile.read(length)
        try:
            return json.loads(raw.decode("utf-8"))
        except Exception:
            return {}

    def _send_json(self, status: int, payload: dict) -> None:
        body = json.dumps(payload).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(body)

    def do_OPTIONS(self) -> None:  # CORS preflight
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "POST, GET, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type, Authorization, X-API-Key")
        self.end_headers()

    def do_GET(self) -> None:
        parsed = urllib.parse.urlparse(self.path)
        if parsed.path == "/healthz":
            self._send_json(200, {
                "status": "ok",
                "atocore": ATOCORE_URL,
                "upstream": UPSTREAM_URL or "(not configured)",
                "inject": INJECT_ENABLED,
                "capture": CAPTURE_ENABLED,
            })
            return
        # Pass through GET to upstream (model listing etc)
        if not UPSTREAM_URL:
            self._send_json(503, {"error": {"message": "ATOCORE_UPSTREAM not configured"}})
            return
        try:
            req = urllib.request.Request(UPSTREAM_URL + parsed.path + (f"?{parsed.query}" if parsed.query else ""))
            for k in ("Authorization", "X-API-Key"):
                v = self.headers.get(k)
                if v:
                    req.add_header(k, v)
            with urllib.request.urlopen(req, timeout=UPSTREAM_TIMEOUT) as resp:
                data = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type", resp.headers.get("Content-Type", "application/json"))
                self.send_header("Content-Length", str(len(data)))
                self.end_headers()
                self.wfile.write(data)
        except Exception as e:
            self._send_json(502, {"error": {"message": f"upstream error: {e}"}})

    def do_POST(self) -> None:
        parsed = urllib.parse.urlparse(self.path)
        body = self._read_body()

        # Only enrich chat completions; other endpoints pass through
        if parsed.path.endswith("/chat/completions") or parsed.path == "/v1/chat/completions":
            prompt = get_last_user_message(body)
            project = detect_project(prompt)
            context = fetch_context(prompt, project) if prompt else ""
            if context:
                log(f"inject: project={project or '(none)'} chars={len(context)}")
                body = inject_context(body, context)

            status, response = forward_to_upstream(body, dict(self.headers), parsed.path)
            self._send_json(status, response)

            if status == 200:
                assistant_text = get_assistant_text(response)
                capture_interaction(prompt, assistant_text, project)
        else:
            # Non-chat endpoints (embeddings, completions, etc.) — pure passthrough
            status, response = forward_to_upstream(body, dict(self.headers), parsed.path)
            self._send_json(status, response)


class ThreadedServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
    daemon_threads = True
    allow_reuse_address = True


def main() -> int:
    if not UPSTREAM_URL:
        log("WARNING: ATOCORE_UPSTREAM not set. Chat completions will fail.")
        log("Example: ATOCORE_UPSTREAM=http://localhost:11434/v1 for Ollama")
    server = ThreadedServer((PROXY_HOST, PROXY_PORT), ProxyHandler)
    log(f"listening on {PROXY_HOST}:{PROXY_PORT}")
    log(f"AtoCore: {ATOCORE_URL} inject={INJECT_ENABLED} capture={CAPTURE_ENABLED}")
    log(f"Upstream: {UPSTREAM_URL or '(not configured)'}")
    log(f"Client label: {CLIENT_LABEL}")
    log("Ready. Point your OpenAI-compatible client at /v1/chat/completions")
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        log("stopping")
        server.server_close()
    return 0


if __name__ == "__main__":
    sys.exit(main())
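The proxy's core transform is `inject_context`: no system message means one is prepended, an existing string system message gets the context pack spliced in front. A simplified, self-contained sketch of the string-content path (the `inject` helper here is an illustrative re-implementation that omits the multi-part-content branch, not the module itself):

```python
# Sketch of the system-message injection path (mirrors inject_context above,
# string content only; multi-part list content is omitted for brevity).
def inject(messages, context_pack):
    injection = (
        "--- AtoCore Context (auto-injected) ---\n"
        + context_pack
        + "\n--- End AtoCore Context ---\n"
    )
    messages = list(messages)
    if messages and messages[0].get("role") == "system":
        # Augment the existing system message in place
        messages[0] = dict(messages[0])
        messages[0]["content"] = injection + "\n" + str(messages[0].get("content", ""))
    else:
        # No system message yet: prepend one
        messages.insert(0, {"role": "system", "content": injection})
    return messages

# A bare user turn: a new system message is prepended.
out = inject([{"role": "user", "content": "status of p06?"}], "p06: polisher project")
```

The fail-open design means a missing or empty context pack leaves the request untouched; the real function returns `body` unchanged when the pack is blank.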
79	scripts/auto_promote_reinforced.py	Normal file
@@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""Auto-promote reinforced candidates + expire stale ones.

Phase 10: reinforcement-based auto-promotion. Candidates referenced
by 3+ interactions with confidence >= 0.7 graduate to active.
Candidates unreinforced for 14+ days are auto-rejected.

Usage:
    python3 scripts/auto_promote_reinforced.py [--base-url URL] [--dry-run]
"""

from __future__ import annotations

import argparse
import json
import os
import sys

# Allow importing from src/ when run from repo root
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))

from atocore.memory.service import auto_promote_reinforced, expire_stale_candidates


def main() -> None:
    parser = argparse.ArgumentParser(description="Auto-promote + expire candidates")
    parser.add_argument("--dry-run", action="store_true", help="Report only, don't change anything")
    parser.add_argument("--min-refs", type=int, default=3, help="Min reference_count for promotion")
    parser.add_argument("--min-confidence", type=float, default=0.7, help="Min confidence for promotion")
    parser.add_argument("--expire-days", type=int, default=14, help="Days before unreinforced candidates expire")
    args = parser.parse_args()

    if args.dry_run:
        print("DRY RUN — no changes will be made")
        # For dry-run, query directly and report
        from atocore.models.database import get_connection
        from datetime import datetime, timedelta, timezone

        cutoff_promote = (datetime.now(timezone.utc) - timedelta(days=args.expire_days)).strftime("%Y-%m-%d %H:%M:%S")
        cutoff_expire = cutoff_promote

        with get_connection() as conn:
            promotable = conn.execute(
                "SELECT id, content, memory_type, project, confidence, reference_count "
                "FROM memories WHERE status = 'candidate' "
                "AND COALESCE(reference_count, 0) >= ? AND confidence >= ? "
                "AND last_referenced_at >= ?",
                (args.min_refs, args.min_confidence, cutoff_promote),
            ).fetchall()
            expirable = conn.execute(
                "SELECT id, content, memory_type, project "
                "FROM memories WHERE status = 'candidate' "
                "AND COALESCE(reference_count, 0) = 0 AND created_at < ?",
                (cutoff_expire,),
            ).fetchall()

        print(f"\nWould promote {len(promotable)} candidates:")
        for r in promotable:
            print(f"  [{r['memory_type']}] refs={r['reference_count']} conf={r['confidence']:.2f} | {r['content'][:80]}...")
        print(f"\nWould expire {len(expirable)} stale candidates:")
        for r in expirable:
            print(f"  [{r['memory_type']}] {r['project'] or 'global'} | {r['content'][:80]}...")
        return

    promoted = auto_promote_reinforced(
        min_reference_count=args.min_refs,
        min_confidence=args.min_confidence,
    )
    expired = expire_stale_candidates(max_age_days=args.expire_days)

    print(f"promoted={len(promoted)} expired={len(expired)}")
    if promoted:
        print(f"Promoted IDs: {promoted}")
    if expired:
        print(f"Expired IDs: {expired}")


if __name__ == "__main__":
    main()
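The promotion and expiry rules above reduce to two predicates over each candidate row. A sketch with illustrative in-memory candidates (the real promotable query additionally requires a recent `last_referenced_at`, which this simplification omits; names here are hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Classify a candidate with the script's default thresholds:
# 3+ references and confidence >= 0.7 promotes; 0 references and
# age > 14 days expires; everything else waits in the queue.
def classify(cand, now, min_refs=3, min_conf=0.7, expire_days=14):
    age = now - cand["created_at"]
    if cand.get("reference_count", 0) >= min_refs and cand["confidence"] >= min_conf:
        return "promote"
    if cand.get("reference_count", 0) == 0 and age > timedelta(days=expire_days):
        return "expire"
    return "keep"

now = datetime.now(timezone.utc)
hot = {"reference_count": 4, "confidence": 0.8, "created_at": now - timedelta(days=2)}
stale = {"reference_count": 0, "confidence": 0.5, "created_at": now - timedelta(days=30)}
```

Promotion wins over expiry by check order, though the two SQL queries above are mutually exclusive anyway (refs >= 3 vs refs = 0).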
@@ -29,6 +29,7 @@ import os
 import shutil
 import subprocess
 import sys
+import time
 import tempfile
 import urllib.error
 import urllib.parse
@@ -63,9 +64,11 @@ Rules:

 3. CONTRADICTS when the candidate *conflicts* with an existing active memory (not a duplicate, but states something that can't both be true). Set `conflicts_with` to the existing memory id. This flags the tension for human review instead of silently rejecting or double-storing. Examples: "Option A selected" vs "Option B selected" for the same decision; "uses material X" vs "uses material Y" for the same component.

-4. NEEDS_HUMAN when you're genuinely unsure — the candidate might be valuable but you can't tell without domain knowledge. This should be rare (< 20% of candidates).
+4. OPENCLAW-CURATED content (candidate content starts with "From OpenClaw/"): apply a MUCH LOWER bar. OpenClaw's SOUL.md, USER.md, MEMORY.md, MODEL-ROUTING.md, and dated memory/*.md files are ALREADY curated by OpenClaw as canonical continuity. Promote unless clearly wrong or a genuine duplicate. Do NOT reject OpenClaw content as "process rule belongs elsewhere" or "session log" — that's exactly what AtoCore wants to absorb. Session events, project updates, stakeholder notes, and decisions from OpenClaw daily memory files ARE valuable context and should promote.

-5. Output ONLY the JSON object. No prose, no markdown, no explanation outside the reason field."""
+5. NEEDS_HUMAN when you're genuinely unsure — the candidate might be valuable but you can't tell without domain knowledge. This should be rare (< 20% of candidates).
+
+6. Output ONLY the JSON object. No prose, no markdown, no explanation outside the reason field."""

 _sandbox_cwd = None

@@ -129,22 +132,33 @@ def triage_one(candidate, active_memories, model, timeout_s):
         user_message,
     ]

-    try:
-        completed = subprocess.run(
-            args, capture_output=True, text=True,
-            timeout=timeout_s, cwd=get_sandbox_cwd(),
-            encoding="utf-8", errors="replace",
-        )
-    except subprocess.TimeoutExpired:
-        return {"verdict": "needs_human", "confidence": 0.0, "reason": "triage model timed out"}
-    except Exception as exc:
-        return {"verdict": "needs_human", "confidence": 0.0, "reason": f"subprocess error: {exc}"}
+    # Retry with exponential backoff on transient failures (rate limits etc)
+    last_error = ""
+    for attempt in range(3):
+        if attempt > 0:
+            time.sleep(2 ** attempt)  # 2s, 4s
+        try:
+            completed = subprocess.run(
+                args, capture_output=True, text=True,
+                timeout=timeout_s, cwd=get_sandbox_cwd(),
+                encoding="utf-8", errors="replace",
+            )
+        except subprocess.TimeoutExpired:
+            last_error = "triage model timed out"
+            continue
+        except Exception as exc:
+            last_error = f"subprocess error: {exc}"
+            continue

-    if completed.returncode != 0:
-        return {"verdict": "needs_human", "confidence": 0.0, "reason": f"claude exit {completed.returncode}"}
+        if completed.returncode == 0:
+            raw = (completed.stdout or "").strip()
+            return parse_verdict(raw)

-    raw = (completed.stdout or "").strip()
-    return parse_verdict(raw)
+        # Capture stderr for diagnostics (truncate to 200 chars)
+        stderr = (completed.stderr or "").strip()[:200]
+        last_error = f"claude exit {completed.returncode}: {stderr}" if stderr else f"claude exit {completed.returncode}"
+
+    return {"verdict": "needs_human", "confidence": 0.0, "reason": last_error}


 def parse_verdict(raw):
@@ -211,6 +225,13 @@ def main():
     promoted = rejected = needs_human = errors = 0

     for i, cand in enumerate(candidates, 1):
+        # Light rate-limit pacing: 0.5s between triage calls so a burst
+        # doesn't overwhelm the claude CLI's backend. With ~60s per call
+        # this is negligible overhead but avoids the "all-failed" pattern
+        # we saw on large batches.
+        if i > 1:
+            time.sleep(0.5)
+
         project = cand.get("project") or ""
         if project not in active_cache:
             active_cache[project] = fetch_active_memories_for_project(args.base_url, project)
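The retry loop introduced in the hunk above has a generic shape: up to 3 attempts, sleeping 2**attempt seconds before each retry, with a needs_human fallback carrying the last error. A self-contained sketch (`run_with_retry` and `flaky` are illustrative names, not code from the repo; the sleep is injectable so the example runs instantly):

```python
import time

# Up to `attempts` tries; back off 2**attempt seconds before each retry.
# On total failure, fall back to a needs_human verdict with the last error.
def run_with_retry(call, attempts=3, sleep=time.sleep):
    last_error = ""
    for attempt in range(attempts):
        if attempt > 0:
            sleep(2 ** attempt)  # 2s, 4s
        try:
            return call()
        except Exception as exc:
            last_error = f"error: {exc}"
    return {"verdict": "needs_human", "confidence": 0.0, "reason": last_error}

# A call that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return {"verdict": "promote"}

result = run_with_retry(flaky, sleep=lambda s: None)
```

The real code additionally treats a nonzero `claude` exit code as a retryable failure rather than an exception, which is why it loops on `completed.returncode` instead of raising.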
@@ -1,12 +1,15 @@
|
|||||||
"""Host-side LLM batch extraction — pure HTTP client, no atocore imports.
|
"""Host-side LLM batch extraction — HTTP client + shared prompt module.
|
||||||
|
|
||||||
Fetches interactions from the AtoCore API, runs ``claude -p`` locally
|
Fetches interactions from the AtoCore API, runs ``claude -p`` locally
|
||||||
for each, and POSTs candidates back. Zero dependency on atocore source
|
for each, and POSTs candidates back. Uses stdlib + the ``claude`` CLI
|
||||||
or Python packages — only uses stdlib + the ``claude`` CLI on PATH.
|
on PATH, plus the stdlib-only shared prompt/parser module at
|
||||||
|
``atocore.memory._llm_prompt`` to eliminate prompt/parser drift
|
||||||
|
against the in-container extractor (R12).
|
||||||
|
|
||||||
This is necessary because the ``claude`` CLI is on the Dalidou HOST
|
This is necessary because the ``claude`` CLI is on the Dalidou HOST
|
||||||
but not inside the Docker container, and the host's Python doesn't
|
but not inside the Docker container, and the host's Python doesn't
|
||||||
have the container's dependencies (pydantic_settings, etc.).
|
have the container's dependencies (pydantic_settings, etc.) — so we
|
||||||
|
only import the one stdlib-only module, not the full atocore package.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
@@ -23,88 +26,26 @@ import urllib.parse
|
|||||||
import urllib.request
|
import urllib.request
|
||||||
from datetime import datetime, timezone
|
from datetime import datetime, timezone
|
||||||
|
|
||||||
|
# R12: share the prompt + parser with the in-container extractor so
|
||||||
|
# the two paths can't drift. The imported module is stdlib-only by
|
||||||
|
# design; see src/atocore/memory/_llm_prompt.py.
|
||||||
|
_SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||||
|
_SRC_DIR = os.path.abspath(os.path.join(_SCRIPT_DIR, "..", "src"))
|
||||||
|
if _SRC_DIR not in sys.path:
|
||||||
|
sys.path.insert(0, _SRC_DIR)
|
||||||
|
|
||||||
|
from atocore.memory._llm_prompt import ( # noqa: E402
|
||||||
|
MEMORY_TYPES,
|
||||||
|
SYSTEM_PROMPT,
|
||||||
|
build_user_message,
|
||||||
|
normalize_candidate_item,
|
||||||
|
parse_llm_json_array,
|
||||||
|
)
|
||||||
|
|
||||||
 DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
 DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
 DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
-MAX_RESPONSE_CHARS = 8000
-MAX_PROMPT_CHARS = 2000
-
-MEMORY_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}
-
-SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.
-
-AtoCore is the brain for Atomaste's engineering work. Known projects:
-p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
-abb-space. Unknown project names — still tag them, the system auto-detects.
-
-Your job is to emit SIGNALS that matter for future context. Be aggressive:
-err on the side of capturing useful signal. Triage filters noise downstream.
-
-WHAT TO EMIT (in order of importance):
-
-1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
-- "Schott quote received for ABB-Space" (event + project)
-- "Cédric asked about p06 firmware timing" (stakeholder event)
-- "Still waiting on Zygo lead-time from Nabeel" (blocker status)
-- "p05 vendor decision needs to happen this week" (action item)
-
-2. DECISIONS AND CHOICES — anything that commits to a direction:
-- "Going with Zygo Verifire SV for p05" (decision)
-- "Dropping stitching from primary workflow" (design choice)
-- "USB SSD mandatory, not SD card" (architectural commitment)
-
-3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
-- "CTE gradient dominates WFE at F/1.2" (materials insight)
-- "Preston model breaks below 5N because contact assumption fails"
-- "m=1 coma NOT correctable by force modulation" (controls insight)
-Test: would a competent engineer NEED experience to know this?
-If it's textbook/google-findable, skip it.
-
-4. STAKEHOLDER AND VENDOR EVENTS:
-- "Email sent to Nabeel 2026-04-13 asking for lead time"
-- "Meeting with Jason on Table 7 next Tuesday"
-- "Starspec wants updated CAD by Friday"
-
-5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
-- "Antoine prefers OAuth over API keys"
-- "Extraction stays off the capture hot path"
-
-WHAT TO SKIP:
-
-- Pure conversational filler ("ok thanks", "let me check")
-- Instructional help content ("run this command", "here's how to...")
-- Obvious textbook facts anyone can google in 30 seconds
-- Session meta-chatter ("let me commit this", "deploy running")
-- Transient system state snapshots ("36 active memories right now")
-
-CANDIDATE TYPES — choose the best fit:
-
-- project — a fact, decision, or event specific to one named project
-- knowledge — durable engineering insight (use domain, not project)
-- preference — how Antoine works / wants things done
-- adaptation — a standing rule or adjustment to behavior
-- episodic — a stakeholder event or milestone worth remembering
-
-DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
-physics, materials, optics, mechanics, manufacturing, metrology,
-controls, software, math, finance, business
-
-TRUST HIERARCHY:
-
-- project-specific: set project to the project id, leave domain empty
-- domain knowledge: set domain, leave project empty
-- events/activity: use project, type=project or episodic
-- one conversation can produce MULTIPLE candidates — emit them all
-
-OUTPUT RULES:
-
-- Each candidate content under 250 characters, stands alone
-- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
-- Raw JSON array, no prose, no markdown fences
-- Empty array [] is fine when the conversation has no durable signal
-
-Each element:
-{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""
 
 _sandbox_cwd = None
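The OUTPUT RULES in the prompt being moved are easy to check mechanically. A sketch with a hypothetical two-candidate model output (candidate contents are invented for illustration, echoing the prompt's own examples) that satisfies every rule:

```python
import json

# Hypothetical model output obeying the prompt's OUTPUT RULES:
# raw JSON array, no fences, each content under 250 chars,
# default confidence 0.5, raised to 0.7 for a committed claim.
raw = (
    '[{"type": "project", "content": "Schott quote received for abb-space", '
    '"project": "abb-space", "domain": "", "confidence": 0.5}, '
    '{"type": "knowledge", "content": "CTE gradient dominates WFE at F/1.2", '
    '"project": "", "domain": "materials", "confidence": 0.7}]'
)

MEMORY_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}

candidates = json.loads(raw)
assert all(c["type"] in MEMORY_TYPES for c in candidates)
assert all(len(c["content"]) < 250 for c in candidates)
assert all(0.0 <= c["confidence"] <= 1.0 for c in candidates)
print(len(candidates))  # → 2
```

Note the second candidate follows the trust hierarchy for domain knowledge: `domain` set, `project` empty.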
@@ -175,14 +116,7 @@ def extract_one(prompt, response, project, model, timeout_s):
     if not shutil.which("claude"):
         return [], "claude_cli_missing"
 
-    prompt_excerpt = prompt[:MAX_PROMPT_CHARS]
-    response_excerpt = response[:MAX_RESPONSE_CHARS]
-    user_message = (
-        f"PROJECT HINT (may be empty): {project}\n\n"
-        f"USER PROMPT:\n{prompt_excerpt}\n\n"
-        f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
-        "Return the JSON array now."
-    )
+    user_message = build_user_message(prompt, response, project)
 
     args = [
         "claude", "-p",
@@ -192,85 +126,56 @@ def extract_one(prompt, response, project, model, timeout_s):
         user_message,
     ]
 
-    try:
-        completed = subprocess.run(
-            args, capture_output=True, text=True,
-            timeout=timeout_s, cwd=get_sandbox_cwd(),
-            encoding="utf-8", errors="replace",
-        )
-    except subprocess.TimeoutExpired:
-        return [], "timeout"
-    except Exception as exc:
-        return [], f"subprocess_error: {exc}"
-
-    if completed.returncode != 0:
-        return [], f"exit_{completed.returncode}"
-
-    raw = (completed.stdout or "").strip()
-    return parse_candidates(raw, project), ""
+    # Retry with exponential backoff on transient failures (rate limits etc)
+    import time as _time
+    last_error = ""
+    for attempt in range(3):
+        if attempt > 0:
+            _time.sleep(2 ** attempt)  # 2s, 4s
+        try:
+            completed = subprocess.run(
+                args, capture_output=True, text=True,
+                timeout=timeout_s, cwd=get_sandbox_cwd(),
+                encoding="utf-8", errors="replace",
+            )
+        except subprocess.TimeoutExpired:
+            last_error = "timeout"
+            continue
+        except Exception as exc:
+            last_error = f"subprocess_error: {exc}"
+            continue
+
+        if completed.returncode == 0:
+            raw = (completed.stdout or "").strip()
+            return parse_candidates(raw, project), ""
+
+        # Capture stderr for diagnostics (truncate to 200 chars)
+        stderr = (completed.stderr or "").strip()[:200]
+        last_error = f"exit_{completed.returncode}: {stderr}" if stderr else f"exit_{completed.returncode}"
+
+    return [], last_error
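The new right-hand side wraps the subprocess call in a retry loop that keeps the extractor's `(value, error)` return contract. Reduced to a reusable pattern, it looks like the sketch below; `run_with_backoff` and the `flaky` callable are invented for illustration, not AtoCore code:

```python
import time

def run_with_backoff(fn, attempts=3, base_delay=2):
    """Call fn() up to `attempts` times, sleeping base_delay**attempt
    seconds before each retry (2s, 4s with the defaults).

    Returns (result, "") on success, or (None, last_error) after
    exhausting retries, the same (value, error) shape the extractor uses.
    """
    last_error = ""
    for attempt in range(attempts):
        if attempt > 0:
            time.sleep(base_delay ** attempt)
        try:
            return fn(), ""
        except Exception as exc:
            last_error = f"subprocess_error: {exc}"
    return None, last_error

# Invented flaky callable: fails twice (as a rate limit would), then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "[]"

result, err = run_with_backoff(flaky, base_delay=0)  # base_delay=0 keeps the demo fast
print(result, err == "")  # → [] True
```

The real diff additionally distinguishes `timeout` from generic subprocess errors and folds a truncated stderr into the exit-code error string; the control flow is the same.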
 def parse_candidates(raw, interaction_project):
-    """Parse model JSON output into candidate dicts."""
-    text = raw.strip()
-    if text.startswith("```"):
-        text = text.strip("`")
-        nl = text.find("\n")
-        if nl >= 0:
-            text = text[nl + 1:]
-        if text.endswith("```"):
-            text = text[:-3]
-        text = text.strip()
-
-    if not text or text == "[]":
-        return []
-
-    if not text.lstrip().startswith("["):
-        start = text.find("[")
-        end = text.rfind("]")
-        if start >= 0 and end > start:
-            text = text[start:end + 1]
-
-    try:
-        parsed = json.loads(text)
-    except json.JSONDecodeError:
-        return []
-
-    if not isinstance(parsed, list):
-        return []
-
+    """Parse model JSON output into candidate dicts.
+
+    Stripping + per-item normalization come from the shared
+    ``_llm_prompt`` module. Host-side project attribution: interaction
+    scope wins, otherwise keep the model's tag (the API's own R9
+    registry-check will happen server-side in the container on write;
+    here we preserve the signal instead of dropping it).
+    """
     results = []
-    for item in parsed:
-        if not isinstance(item, dict):
+    for item in parse_llm_json_array(raw):
+        normalized = normalize_candidate_item(item)
+        if normalized is None:
             continue
-        mem_type = str(item.get("type") or "").strip().lower()
-        content = str(item.get("content") or "").strip()
-        model_project = str(item.get("project") or "").strip()
-        domain = str(item.get("domain") or "").strip().lower()
-        # R9 trust hierarchy: interaction scope always wins when set.
-        # For unscoped interactions, keep model's project tag even if
-        # unregistered — the system will detect new projects/leads.
-        if interaction_project:
-            project = interaction_project
-        elif model_project:
-            project = model_project
-        else:
-            project = ""
-        # Domain knowledge: embed tag in content for cross-project retrieval
-        if domain and not project:
-            content = f"[{domain}] {content}"
-        conf = item.get("confidence", 0.5)
-        if mem_type not in MEMORY_TYPES or not content:
-            continue
-        try:
-            conf = max(0.0, min(1.0, float(conf)))
-        except (TypeError, ValueError):
-            conf = 0.5
+        project = interaction_project or normalized["project"] or ""
         results.append({
-            "memory_type": mem_type,
-            "content": content[:1000],
+            "memory_type": normalized["type"],
+            "content": normalized["content"],
             "project": project,
-            "confidence": conf,
+            "confidence": normalized["confidence"],
        })
     return results
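After the refactor, the R9 trust hierarchy collapses from the old if/elif/else chain into one `or`-chain. A minimal illustration of that precedence rule, with invented project ids; the helper name is for demonstration only:

```python
def attribute_project(interaction_project: str, model_project: str) -> str:
    # Interaction scope always wins; otherwise keep the model's tag
    # (even if unregistered); otherwise leave the candidate unscoped.
    # Same precedence as the extractor's one-liner.
    return interaction_project or model_project or ""

# Invented cases illustrating the three tiers:
print(attribute_project("p06-polisher", "p04-gigabit"))  # interaction scope wins
print(attribute_project("", "abb-space"))                # model tag preserved
print(attribute_project("", ""))                          # unscoped
```

This works because Python's `or` returns the first truthy operand, and empty strings are falsy.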
@@ -299,10 +204,14 @@ def main():
     total_persisted = 0
     errors = 0
 
-    for summary in interaction_summaries:
+    import time as _time
+    for ix, summary in enumerate(interaction_summaries):
         resp_chars = summary.get("response_chars", 0) or 0
         if resp_chars < 50:
             continue
+        # Light pacing between calls to avoid bursting the claude CLI
+        if ix > 0:
+            _time.sleep(0.5)
         iid = summary["id"]
         try:
             raw = api_get(
@@ -42,7 +42,7 @@ from pathlib import Path
 
 DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
 DEFAULT_OPENCLAW_HOST = os.environ.get("ATOCORE_OPENCLAW_HOST", "papa@192.168.86.39")
-DEFAULT_OPENCLAW_PATH = os.environ.get("ATOCORE_OPENCLAW_PATH", "/home/papa/openclaw-workspace")
+DEFAULT_OPENCLAW_PATH = os.environ.get("ATOCORE_OPENCLAW_PATH", "/home/papa/clawd")
 
 # Files to pull and how to classify them
 DURABLE_FILES = [
@@ -218,8 +218,8 @@
       "Tailscale"
     ],
     "expect_absent": [
-      "GigaBIT"
+      "[Source: p04-gigabit/"
     ],
-    "notes": "New p06 memory: Tailscale mesh for RPi remote access"
+    "notes": "New p06 memory: Tailscale mesh for RPi remote access. Cross-project guard is a source-path check, not a word blacklist: the polisher ARCHITECTURE.md legitimately mentions the GigaBIT M1 mirror (it is what the polisher is built for), so testing for absence of that word produces false positives. The real invariant is that no p04 source chunks are retrieved into p06 context."
   }
 ]
159 scripts/seed_project_state.py (new file)
@@ -0,0 +1,159 @@
+#!/usr/bin/env python3
+"""Seed Trusted Project State entries for all active projects.
+
+Populates the project_state table with curated decisions, requirements,
+facts, contacts, and milestones so context packs have real content
+in the highest-trust tier.
+
+Usage:
+    python3 scripts/seed_project_state.py --base-url http://dalidou:8100
+    python3 scripts/seed_project_state.py --base-url http://dalidou:8100 --dry-run
+"""
+
+from __future__ import annotations
+
+import argparse
+import json
+import urllib.request
+import sys
+
+# Each entry: (project, category, key, value, source)
+SEED_ENTRIES: list[tuple[str, str, str, str, str]] = [
+    # ---- p04-gigabit (GigaBIT M1 1.2m Primary Mirror) ----
+    ("p04-gigabit", "fact", "mirror-spec",
+     "1.2m borosilicate primary mirror for GigaBIT telescope. F/1.5, lightweight isogrid back structure.",
+     "CDR docs + vault"),
+    ("p04-gigabit", "decision", "back-structure",
+     "Option B selected: conical isogrid back structure with variable rib density. Chosen over flat-back for stiffness-to-weight ratio.",
+     "CDR 2026-01"),
+    ("p04-gigabit", "decision", "polishing-vendor",
+     "ABB Space (formerly INO) selected as polishing vendor. Contract includes computer-controlled polishing (CCP) and ion beam figuring (IBF).",
+     "Entente de service 2026-01"),
+    ("p04-gigabit", "requirement", "surface-quality",
+     "Surface figure accuracy: < 25nm RMS after final figuring. Microroughness: < 2nm RMS.",
+     "CDR requirements"),
+    ("p04-gigabit", "contact", "abb-space",
+     "ABB Space (INO), Quebec City. Primary contact for mirror polishing, CCP, and IBF. Project lead: coordinating FDR deliverables.",
+     "vendor records"),
+    ("p04-gigabit", "milestone", "fdr",
+     "Final Design Review (FDR) in preparation. Deliverables include interface drawings, thermal analysis, and updated error budget.",
+     "project timeline"),
+
+    # ---- p05-interferometer (Fullum Interferometer) ----
+    ("p05-interferometer", "fact", "system-overview",
+     "Custom Fizeau interferometer for in-situ metrology of large optics. Designed for the Fullum observatory polishing facility.",
+     "vault docs"),
+    ("p05-interferometer", "decision", "cgh-design",
+     "Computer-generated hologram (CGH) selected for null testing of the 1.2m mirror. Vendor: Diffraction International.",
+     "vendor correspondence"),
+    ("p05-interferometer", "requirement", "measurement-accuracy",
+     "Measurement accuracy target: lambda/20 (< 30nm PV) for surface figure verification.",
+     "system requirements"),
+    ("p05-interferometer", "fact", "laser-source",
+     "HeNe laser source at 632.8nm. Beam expansion to cover full 1.2m aperture via diverger + CGH.",
+     "optical design docs"),
+    ("p05-interferometer", "contact", "diffraction-intl",
+     "Diffraction International: CGH vendor. Fabricates the computer-generated hologram for null testing.",
+     "vendor records"),
+
+    # ---- p06-polisher (Polisher Suite / P11-Polisher-Fullum) ----
+    ("p06-polisher", "fact", "suite-overview",
+     "Integrated CNC polishing suite for the Fullum observatory. Includes 3-axis polishing machine, metrology integration, and real-time process control.",
+     "vault docs"),
+    ("p06-polisher", "decision", "control-architecture",
+     "Beckhoff TwinCAT 3 selected for real-time motion control. EtherCAT fieldbus for servo drives and I/O.",
+     "architecture docs"),
+    ("p06-polisher", "decision", "firmware-split",
+     "Firmware split into safety layer (PLC-level interlocks) and application layer (trajectory generation, adaptive dwell-time).",
+     "architecture docs"),
+    ("p06-polisher", "requirement", "axis-travel",
+     "Z-axis: 200mm travel for tool engagement. X/Y: covers 1.2m mirror diameter plus overshoot margin.",
+     "mechanical requirements"),
+    ("p06-polisher", "fact", "telemetry",
+     "Real-time telemetry via MQTT. Metrics: spindle RPM, force sensor, temperature probes, position feedback at 1kHz.",
+     "control design docs"),
+    ("p06-polisher", "contact", "fullum-observatory",
+     "Fullum Observatory: site where the polishing suite will be installed. Provides infrastructure (power, vibration isolation, clean environment).",
+     "project records"),
+
+    # ---- atomizer-v2 ----
+    ("atomizer-v2", "fact", "product-overview",
+     "Atomizer V2: internal project management and multi-agent orchestration platform. War-room based task coordination.",
+     "repo docs"),
+    ("atomizer-v2", "decision", "projects-first-architecture",
+     "Migration to projects-first architecture: each project is a workspace with its own agents, tasks, and knowledge.",
+     "war-room-migration-plan-v2.md"),
+
+    # ---- abb-space (P08) ----
+    ("abb-space", "fact", "contract-overview",
+     "ABB Space mirror polishing contract. Phase 1: spherical mirror polishing (200mm). Schott Zerodur substrate.",
+     "quotes + correspondence"),
+    ("abb-space", "contact", "schott",
+     "Schott AG: substrate supplier for Zerodur mirror blanks. Quote received for 200mm blank.",
+     "vendor records"),
+
+    # ---- atocore ----
+    ("atocore", "fact", "architecture",
+     "AtoCore: runtime memory and knowledge layer. FastAPI + SQLite + ChromaDB. Hosted on Dalidou (Docker). Nightly pipeline: backup, extract, triage, synthesis.",
+     "codebase"),
+    ("atocore", "decision", "no-api-keys",
+     "No API keys allowed in AtoCore. LLM-assisted features use OAuth via 'claude -p' CLI or equivalent CLI-authenticated paths.",
+     "DEV-LEDGER 2026-04-12"),
+    ("atocore", "decision", "storage-separation",
+     "Human-readable sources (vault, drive) and machine operational storage (SQLite, ChromaDB) must remain separate. Machine DB is derived state.",
+     "AGENTS.md"),
+    ("atocore", "decision", "extraction-off-hot-path",
+     "Extraction stays off the capture hot path. Batch/manual only. Never block interaction recording with extraction.",
+     "DEV-LEDGER 2026-04-11"),
+]
+
+
+def main() -> None:
+    parser = argparse.ArgumentParser(description="Seed Trusted Project State")
+    parser.add_argument("--base-url", default="http://dalidou:8100")
+    parser.add_argument("--dry-run", action="store_true")
+    args = parser.parse_args()
+
+    base = args.base_url.rstrip("/")
+    created = 0
+    skipped = 0
+    errors = 0
+
+    for project, category, key, value, source in SEED_ENTRIES:
+        if args.dry_run:
+            print(f"  [DRY] {project}/{category}/{key}: {value[:60]}...")
+            created += 1
+            continue
+
+        body = json.dumps({
+            "project": project,
+            "category": category,
+            "key": key,
+            "value": value,
+            "source": source,
+            "confidence": 1.0,
+        }).encode()
+        req = urllib.request.Request(
+            f"{base}/project/state",
+            data=body,
+            headers={"Content-Type": "application/json"},
+            method="POST",
+        )
+        try:
+            resp = urllib.request.urlopen(req, timeout=10)
+            result = json.loads(resp.read())
+            if result.get("created"):
+                created += 1
+                print(f"  + {project}/{category}/{key}")
+            else:
+                skipped += 1
+                print(f"  = {project}/{category}/{key} (already exists)")
+        except Exception as e:
+            errors += 1
+            print(f"  ! {project}/{category}/{key}: {e}", file=sys.stderr)
+
+    print(f"\nDone: {created} created, {skipped} skipped, {errors} errors")
+
+
+if __name__ == "__main__":
+    main()
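Each seed entry becomes one flat JSON object POSTed to the server. A quick round-trip check of that body shape, using one entry taken verbatim from the seed list above:

```python
import json

# One seed entry, shaped exactly like the script's (project, category,
# key, value, source) tuples.
entry = ("atocore", "decision", "no-api-keys",
         "No API keys allowed in AtoCore. LLM-assisted features use OAuth "
         "via 'claude -p' CLI or equivalent CLI-authenticated paths.",
         "DEV-LEDGER 2026-04-12")
project, category, key, value, source = entry

# Same body construction as the script: curated seeds get confidence 1.0,
# marking them as the highest-trust tier.
body = json.dumps({
    "project": project,
    "category": category,
    "key": key,
    "value": value,
    "source": source,
    "confidence": 1.0,
}).encode()

decoded = json.loads(body)
print(decoded["project"], decoded["category"], decoded["key"], decoded["confidence"])
```

The server's `created`-flag response makes the script idempotent: rerunning it reports existing entries as skipped rather than duplicating them.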
87 scripts/windows/atocore-backup-pull.ps1 (new file)
@@ -0,0 +1,87 @@
+# atocore-backup-pull.ps1
+#
+# Pull the latest AtoCore backup snapshot from Dalidou to this Windows machine.
+# Designed to be run by Windows Task Scheduler. Fail-open by design -- if
+# Dalidou is unreachable (laptop on the road, etc.), exit cleanly without error.
+#
+# Usage (manual test):
+#   powershell.exe -ExecutionPolicy Bypass -File atocore-backup-pull.ps1
+#
+# Scheduled task: see docs/windows-backup-setup.md for Task Scheduler config.
+
+$ErrorActionPreference = "Continue"
+
+# --- Configuration ---
+$Remote = "papa@dalidou"
+$RemoteSnapshots = "/srv/storage/atocore/backups/snapshots"
+$LocalBackupDir = "$env:USERPROFILE\Documents\ATOCore_Backups"
+$LogDir = "$LocalBackupDir\_logs"
+$ReachabilityTest = 5  # seconds timeout for SSH probe
+
+# --- Setup ---
+if (-not (Test-Path $LocalBackupDir)) {
+    New-Item -ItemType Directory -Path $LocalBackupDir -Force | Out-Null
+}
+if (-not (Test-Path $LogDir)) {
+    New-Item -ItemType Directory -Path $LogDir -Force | Out-Null
+}
+
+$Timestamp = Get-Date -Format "yyyy-MM-dd_HHmmss"
+$LogFile = "$LogDir\backup-$Timestamp.log"
+
+function Log($msg) {
+    $line = "[{0}] {1}" -f (Get-Date -Format "yyyy-MM-dd HH:mm:ss"), $msg
+    Write-Host $line
+    Add-Content -Path $LogFile -Value $line
+}
+
+Log "=== AtoCore backup pull starting ==="
+Log "Remote: $Remote"
+Log "Local target: $LocalBackupDir"
+
+# --- Reachability check: fail open if Dalidou is offline ---
+Log "Checking Dalidou reachability..."
+$probe = & ssh -o ConnectTimeout=$ReachabilityTest -o BatchMode=yes `
+    -o StrictHostKeyChecking=accept-new `
+    $Remote "echo ok" 2>&1
+if ($LASTEXITCODE -ne 0 -or $probe -ne "ok") {
+    Log "Dalidou unreachable ($probe) -- fail-open exit"
+    exit 0
+}
+Log "Dalidou reachable."
+
+# --- Pull the entire snapshots directory ---
+# Dalidou's retention policy (7 daily + 4 weekly + 6 monthly) already caps
+# the snapshot count, so pulling the whole dir is bounded and simple. scp
+# will overwrite local files -- we rely on this to pick up new snapshots.
+Log "Pulling snapshots via scp..."
+$LocalSnapshotsDir = Join-Path $LocalBackupDir "snapshots"
+if (-not (Test-Path $LocalSnapshotsDir)) {
+    New-Item -ItemType Directory -Path $LocalSnapshotsDir -Force | Out-Null
+}
+
+& scp -o BatchMode=yes -r "${Remote}:${RemoteSnapshots}/*" "$LocalSnapshotsDir\" 2>&1 |
+    ForEach-Object { Add-Content -Path $LogFile -Value $_ }
+
+if ($LASTEXITCODE -ne 0) {
+    Log "scp failed with exit $LASTEXITCODE"
+    exit 0  # fail-open
+}
+
+# --- Stats ---
+$snapshots = Get-ChildItem -Path $LocalSnapshotsDir -Directory |
+    Where-Object { $_.Name -match "^\d{8}T\d{6}Z$" } |
+    Sort-Object Name -Descending
+
+$totalSize = (Get-ChildItem $LocalSnapshotsDir -Recurse -File | Measure-Object -Property Length -Sum).Sum
+$SizeMB = [math]::Round($totalSize / 1MB, 2)
+$latest = if ($snapshots.Count -gt 0) { $snapshots[0].Name } else { "(none)" }
+
+Log ("Pulled {0} snapshots successfully (total {1} MB, latest: {2})" -f $snapshots.Count, $SizeMB, $latest)
+Log "=== backup complete ==="
+
+# --- Log retention: keep last 30 log files ---
+Get-ChildItem -Path $LogDir -Filter "backup-*.log" |
+    Sort-Object Name -Descending |
+    Select-Object -Skip 30 |
+    ForEach-Object { Remove-Item $_.FullName -Force -ErrorAction SilentlyContinue }
@@ -55,6 +55,7 @@ from atocore.memory.extractor import (
 )
 from atocore.memory.extractor_llm import (
     LLM_EXTRACTOR_VERSION,
+    _cli_available as _llm_cli_available,
     extract_candidates_llm,
 )
 from atocore.memory.reinforcement import reinforce_from_interaction
@@ -832,6 +833,18 @@ def api_extract_batch(req: ExtractBatchRequest | None = None) -> dict:
     invoke this endpoint explicitly (cron, manual curl, CLI).
     """
     payload = req or ExtractBatchRequest()
+
+    if payload.mode == "llm" and not _llm_cli_available():
+        raise HTTPException(
+            status_code=503,
+            detail=(
+                "LLM extraction unavailable in this runtime: the `claude` CLI "
+                "is not on PATH. Run host-side via "
+                "`scripts/batch_llm_extract_live.py` instead, or call this "
+                "endpoint with mode=\"rule\"."
+            ),
+        )
+
     since = payload.since
 
     if not since:
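The guard fails fast with a 503 before any batch work starts, instead of letting every interaction fail deep inside the loop. Stripped of FastAPI, the precondition reduces to the sketch below; the function and exception names are illustrative, not AtoCore's API:

```python
import shutil

class ServiceUnavailable(Exception):
    """Stand-in for FastAPI's HTTPException(status_code=503)."""

def check_llm_mode(mode: str, cli_name: str = "claude") -> None:
    # Refuse llm-mode work up front when the CLI dependency is absent;
    # rule mode has no CLI dependency and always passes.
    if mode == "llm" and not shutil.which(cli_name):
        raise ServiceUnavailable(f"{cli_name} CLI not on PATH; use mode='rule'")

# rule mode never needs the CLI, so this never raises:
check_llm_mode("rule")

# llm mode with an invented, surely-missing CLI name raises:
try:
    check_llm_mode("llm", cli_name="no-such-cli-xyz")
    print("no guard")
except ServiceUnavailable:
    print("guarded")  # → guarded
```

503 (Service Unavailable) is the right status here: the request is well-formed, but this runtime lacks a dependency the mode requires.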
@@ -916,11 +929,14 @@ def api_dashboard() -> dict:
     """One-shot system observability dashboard.
 
     Returns memory counts by type/project/status, project state
-    entry counts, recent interaction volume, and extraction pipeline
+    entry counts, interaction volume by client, pipeline health
+    (harness, triage stats, last run), and extraction pipeline
     status — everything an operator needs to understand AtoCore's
     health beyond the basic /health endpoint.
     """
+    import json as _json
     from collections import Counter
+    from datetime import datetime as _dt, timezone as _tz
 
     all_memories = get_memories(active_only=False, limit=500)
     active = [m for m in all_memories if m.status == "active"]
@@ -930,27 +946,81 @@ def api_dashboard() -> dict:
     project_counts = dict(Counter(m.project or "(none)" for m in active))
     reinforced = [m for m in active if m.reference_count > 0]
 
-    interactions = list_interactions(limit=1)
-    recent_interaction = interactions[0].created_at if interactions else None
-
-    # Extraction pipeline status
-    extract_state = {}
+    # Interaction stats — total + by_client from DB directly
+    interaction_stats: dict = {"most_recent": None, "total": 0, "by_client": {}}
+    try:
+        from atocore.models.database import get_connection as _gc
+
+        with _gc() as conn:
+            row = conn.execute("SELECT count(*) FROM interactions").fetchone()
+            interaction_stats["total"] = row[0] if row else 0
+            rows = conn.execute(
+                "SELECT client, count(*) FROM interactions GROUP BY client"
+            ).fetchall()
+            interaction_stats["by_client"] = {r[0]: r[1] for r in rows}
+            row = conn.execute(
+                "SELECT created_at FROM interactions ORDER BY created_at DESC LIMIT 1"
+            ).fetchone()
+            interaction_stats["most_recent"] = row[0] if row else None
+    except Exception:
+        interactions = list_interactions(limit=1)
+        interaction_stats["most_recent"] = (
+            interactions[0].created_at if interactions else None
+        )
+
+    # Pipeline health from project state
+    pipeline: dict = {}
+    extract_state: dict = {}
     try:
         state_entries = get_state("atocore")
         for entry in state_entries:
-            if entry.category == "status" and entry.key == "last_extract_batch_run":
+            if entry.category != "status":
+                continue
+            if entry.key == "last_extract_batch_run":
                 extract_state["last_run"] = entry.value
+            elif entry.key == "pipeline_last_run":
+                pipeline["last_run"] = entry.value
+                try:
+                    last = _dt.fromisoformat(entry.value.replace("Z", "+00:00"))
+                    delta = _dt.now(_tz.utc) - last
+                    pipeline["hours_since_last_run"] = round(
+                        delta.total_seconds() / 3600, 1
+                    )
+                except Exception:
+                    pass
+            elif entry.key == "pipeline_summary":
+                try:
+                    pipeline["summary"] = _json.loads(entry.value)
+                except Exception:
+                    pipeline["summary_raw"] = entry.value
+            elif entry.key == "retrieval_harness_result":
+                try:
+                    pipeline["harness"] = _json.loads(entry.value)
+                except Exception:
+                    pipeline["harness_raw"] = entry.value
     except Exception:
         pass
 
-    # Project state counts
+    # Project state counts — include all registered projects
     ps_counts = {}
-    for proj_id in ["p04-gigabit", "p05-interferometer", "p06-polisher", "atocore"]:
-        try:
-            entries = get_state(proj_id)
-            ps_counts[proj_id] = len(entries)
-        except Exception:
-            pass
+    try:
+        from atocore.projects.registry import load_project_registry as _lpr
+
+        for proj in _lpr():
+            try:
+                entries = get_state(proj.project_id)
+                ps_counts[proj.project_id] = len(entries)
+            except Exception:
+                pass
+    except Exception:
+        for proj_id in [
+            "p04-gigabit", "p05-interferometer", "p06-polisher", "atocore",
+        ]:
+            try:
+                entries = get_state(proj_id)
+                ps_counts[proj_id] = len(entries)
+            except Exception:
+                pass
 
     return {
         "memories": {
@@ -964,10 +1034,9 @@ def api_dashboard() -> dict:
             "counts": ps_counts,
             "total": sum(ps_counts.values()),
         },
-        "interactions": {
-            "most_recent": recent_interaction,
-        },
+        "interactions": interaction_stats,
         "extraction_pipeline": extract_state,
+        "pipeline": pipeline,
     }
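The `hours_since_last_run` computation hinges on one detail: `datetime.fromisoformat` rejects a literal `Z` suffix on Python versions before 3.11, hence the `.replace("Z", "+00:00")` normalization before parsing. A standalone version of that calculation, with invented fixed timestamps so the result is deterministic:

```python
from datetime import datetime, timezone

def hours_since(iso_z: str, now: datetime) -> float:
    # Normalize the trailing 'Z' to an explicit UTC offset so that
    # fromisoformat parses it on Python < 3.11, then diff against an
    # aware "now" and round to one decimal, as the dashboard does.
    last = datetime.fromisoformat(iso_z.replace("Z", "+00:00"))
    return round((now - last).total_seconds() / 3600, 1)

# Invented fixed "now" instead of datetime.now(timezone.utc), so the
# demo is reproducible:
now = datetime(2026, 4, 16, 18, 0, 0, tzinfo=timezone.utc)
print(hours_since("2026-04-16T06:30:00Z", now))  # → 11.5
```

Using an aware `now` is essential: subtracting a naive datetime from an aware one raises `TypeError`, which is exactly the failure mode the dashboard's surrounding `try`/`except` swallows.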
@@ -104,6 +104,21 @@ class Settings(BaseSettings):

     @property
     def resolved_project_registry_path(self) -> Path:
+        """Path to the project registry JSON file.
+
+        If ``ATOCORE_PROJECT_REGISTRY_DIR`` env var is set, the registry
+        lives at ``<that dir>/project-registry.json``. Otherwise falls
+        back to the configured ``project_registry_path`` field.
+
+        This lets Docker deployments point at a mounted volume via env
+        var without the ephemeral in-image ``/app/config/`` getting
+        wiped on every rebuild.
+        """
+        import os
+
+        registry_dir = os.environ.get("ATOCORE_PROJECT_REGISTRY_DIR", "").strip()
+        if registry_dir:
+            return Path(registry_dir) / "project-registry.json"
         return self._resolve_path(self.project_registry_path)

     @property
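The env-var override above is small enough to verify in isolation. A standalone sketch (the helper name is illustrative; the property in the diff does the same check inline):

```python
import os
from pathlib import Path


# Standalone sketch of resolved_project_registry_path: a non-empty
# ATOCORE_PROJECT_REGISTRY_DIR wins over the configured default path.
def resolve_registry_path(default: Path) -> Path:
    registry_dir = os.environ.get("ATOCORE_PROJECT_REGISTRY_DIR", "").strip()
    if registry_dir:
        return Path(registry_dir) / "project-registry.json"
    return default


os.environ.pop("ATOCORE_PROJECT_REGISTRY_DIR", None)
p1 = resolve_registry_path(Path("/app/config/project-registry.json"))

os.environ["ATOCORE_PROJECT_REGISTRY_DIR"] = "/data/atocore"
p2 = resolve_registry_path(Path("/app/config/project-registry.json"))
print(p1, p2)
```

With the env var unset the configured path is returned untouched; once set, the registry file is addressed under the mounted directory.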
183	src/atocore/memory/_llm_prompt.py	Normal file
@@ -0,0 +1,183 @@
+"""Shared LLM-extractor prompt + parser (stdlib-only).
+
+R12: single source of truth for the system prompt, memory type set,
+size limits, and raw JSON parsing used by both paths that shell out
+to ``claude -p``:
+
+- ``atocore.memory.extractor_llm`` (in-container extractor, wraps the
+  parsed dicts in ``MemoryCandidate`` with registry-checked project
+  attribution)
+- ``scripts/batch_llm_extract_live.py`` (host-side extractor, can't
+  import the full atocore package because Dalidou's host Python lacks
+  the container's deps; imports this module via ``sys.path``)
+
+This module MUST stay stdlib-only. No ``atocore`` imports, no third-
+party packages. Callers apply their own project-attribution policy on
+top of the normalized dicts this module emits.
+"""
+
+from __future__ import annotations
+
+import json
+from typing import Any
+
+LLM_EXTRACTOR_VERSION = "llm-0.4.0"
+MAX_RESPONSE_CHARS = 8000
+MAX_PROMPT_CHARS = 2000
+MEMORY_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}
+
+SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.
+
+AtoCore is the brain for Atomaste's engineering work. Known projects:
+p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
+abb-space. Unknown project names — still tag them, the system auto-detects.
+
+Your job is to emit SIGNALS that matter for future context. Be aggressive:
+err on the side of capturing useful signal. Triage filters noise downstream.
+
+WHAT TO EMIT (in order of importance):
+
+1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
+   - "Schott quote received for ABB-Space" (event + project)
+   - "Cédric asked about p06 firmware timing" (stakeholder event)
+   - "Still waiting on Zygo lead-time from Nabeel" (blocker status)
+   - "p05 vendor decision needs to happen this week" (action item)
+
+2. DECISIONS AND CHOICES — anything that commits to a direction:
+   - "Going with Zygo Verifire SV for p05" (decision)
+   - "Dropping stitching from primary workflow" (design choice)
+   - "USB SSD mandatory, not SD card" (architectural commitment)
+
+3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
+   - "CTE gradient dominates WFE at F/1.2" (materials insight)
+   - "Preston model breaks below 5N because contact assumption fails"
+   - "m=1 coma NOT correctable by force modulation" (controls insight)
+   Test: would a competent engineer NEED experience to know this?
+   If it's textbook/google-findable, skip it.
+
+4. STAKEHOLDER AND VENDOR EVENTS:
+   - "Email sent to Nabeel 2026-04-13 asking for lead time"
+   - "Meeting with Jason on Table 7 next Tuesday"
+   - "Starspec wants updated CAD by Friday"
+
+5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
+   - "Antoine prefers OAuth over API keys"
+   - "Extraction stays off the capture hot path"
+
+WHAT TO SKIP:
+
+- Pure conversational filler ("ok thanks", "let me check")
+- Instructional help content ("run this command", "here's how to...")
+- Obvious textbook facts anyone can google in 30 seconds
+- Session meta-chatter ("let me commit this", "deploy running")
+- Transient system state snapshots ("36 active memories right now")
+
+CANDIDATE TYPES — choose the best fit:
+
+- project — a fact, decision, or event specific to one named project
+- knowledge — durable engineering insight (use domain, not project)
+- preference — how Antoine works / wants things done
+- adaptation — a standing rule or adjustment to behavior
+- episodic — a stakeholder event or milestone worth remembering
+
+DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
+physics, materials, optics, mechanics, manufacturing, metrology,
+controls, software, math, finance, business
+
+TRUST HIERARCHY:
+
+- project-specific: set project to the project id, leave domain empty
+- domain knowledge: set domain, leave project empty
+- events/activity: use project, type=project or episodic
+- one conversation can produce MULTIPLE candidates — emit them all
+
+OUTPUT RULES:
+
+- Each candidate content under 250 characters, stands alone
+- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
+- Raw JSON array, no prose, no markdown fences
+- Empty array [] is fine when the conversation has no durable signal
+
+Each element:
+{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""
+
+
+def build_user_message(prompt: str, response: str, project_hint: str) -> str:
+    prompt_excerpt = (prompt or "")[:MAX_PROMPT_CHARS]
+    response_excerpt = (response or "")[:MAX_RESPONSE_CHARS]
+    return (
+        f"PROJECT HINT (may be empty): {project_hint or ''}\n\n"
+        f"USER PROMPT:\n{prompt_excerpt}\n\n"
+        f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
+        "Return the JSON array now."
+    )
+
+
+def parse_llm_json_array(raw_output: str) -> list[dict[str, Any]]:
+    """Strip markdown fences / leading prose and return the parsed JSON
+    array as a list of raw dicts. Returns an empty list on any parse
+    failure — callers decide whether to log."""
+    text = (raw_output or "").strip()
+    if text.startswith("```"):
+        text = text.strip("`")
+        nl = text.find("\n")
+        if nl >= 0:
+            text = text[nl + 1:]
+        if text.endswith("```"):
+            text = text[:-3]
+        text = text.strip()
+
+    if not text or text == "[]":
+        return []
+
+    if not text.lstrip().startswith("["):
+        start = text.find("[")
+        end = text.rfind("]")
+        if start >= 0 and end > start:
+            text = text[start:end + 1]
+
+    try:
+        parsed = json.loads(text)
+    except json.JSONDecodeError:
+        return []
+
+    if not isinstance(parsed, list):
+        return []
+    return [item for item in parsed if isinstance(item, dict)]
+
+
+def normalize_candidate_item(item: dict[str, Any]) -> dict[str, Any] | None:
+    """Validate and normalize one raw model item into a candidate dict.
+
+    Returns None if the item fails basic validation (unknown type,
+    empty content). Does NOT apply project-attribution policy — that's
+    the caller's job, since the registry-check differs between the
+    in-container path and the host path.
+
+    Output keys: type, content, project (raw model value), domain,
+    confidence.
+    """
+    mem_type = str(item.get("type") or "").strip().lower()
+    content = str(item.get("content") or "").strip()
+    if mem_type not in MEMORY_TYPES or not content:
+        return None
+
+    model_project = str(item.get("project") or "").strip()
+    domain = str(item.get("domain") or "").strip().lower()
+
+    try:
+        confidence = float(item.get("confidence", 0.5))
+    except (TypeError, ValueError):
+        confidence = 0.5
+    confidence = max(0.0, min(1.0, confidence))
+
+    if domain and not model_project:
+        content = f"[{domain}] {content}"
+
+    return {
+        "type": mem_type,
+        "content": content[:1000],
+        "project": model_project,
+        "domain": domain,
+        "confidence": confidence,
+    }
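The fence-stripping path in `parse_llm_json_array` is the part most worth sanity-checking. The function below is reproduced from the new-file hunk above (indentation reconstructed) and exercised on a fenced response, a response wrapped in prose, and garbage:

```python
import json
from typing import Any


# Reproduced from the _llm_prompt.py hunk: strip markdown fences /
# surrounding prose, return the parsed JSON array, [] on any failure.
def parse_llm_json_array(raw_output: str) -> list[dict[str, Any]]:
    text = (raw_output or "").strip()
    if text.startswith("```"):
        text = text.strip("`")
        nl = text.find("\n")
        if nl >= 0:
            text = text[nl + 1:]
        if text.endswith("```"):
            text = text[:-3]
        text = text.strip()
    if not text or text == "[]":
        return []
    if not text.lstrip().startswith("["):
        start = text.find("[")
        end = text.rfind("]")
        if start >= 0 and end > start:
            text = text[start:end + 1]
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return []
    if not isinstance(parsed, list):
        return []
    return [item for item in parsed if isinstance(item, dict)]


fenced = '```json\n[{"type": "knowledge", "content": "x"}]\n```'
prose = 'Here you go: [{"type": "project", "content": "y"}] hope that helps'
garbage = "not json at all"
print(parse_llm_json_array(fenced))
print(parse_llm_json_array(prose))
print(parse_llm_json_array(garbage))
```

All three model glitches the docstring names resolve without raising, which is what lets both the in-container and host-side callers treat a parse failure as "no candidates" and decide independently whether to log it.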
@@ -49,7 +49,6 @@ Implementation notes:

 from __future__ import annotations

-import json
 import os
 import shutil
 import subprocess
@@ -58,92 +57,21 @@ from dataclasses import dataclass
 from functools import lru_cache

 from atocore.interactions.service import Interaction
+from atocore.memory._llm_prompt import (
+    LLM_EXTRACTOR_VERSION,
+    SYSTEM_PROMPT as _SYSTEM_PROMPT,
+    build_user_message,
+    normalize_candidate_item,
+    parse_llm_json_array,
+)
 from atocore.memory.extractor import MemoryCandidate
-from atocore.memory.service import MEMORY_TYPES
 from atocore.observability.logger import get_logger

 log = get_logger("extractor_llm")

-LLM_EXTRACTOR_VERSION = "llm-0.4.0"
 DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
 DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
-MAX_RESPONSE_CHARS = 8000
-MAX_PROMPT_CHARS = 2000
-
-_SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.
-
-AtoCore is the brain for Atomaste's engineering work. Known projects:
-p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
-abb-space. Unknown project names — still tag them, the system auto-detects.
-
-Your job is to emit SIGNALS that matter for future context. Be aggressive:
-err on the side of capturing useful signal. Triage filters noise downstream.
-
-WHAT TO EMIT (in order of importance):
-
-1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
-   - "Schott quote received for ABB-Space" (event + project)
-   - "Cédric asked about p06 firmware timing" (stakeholder event)
-   - "Still waiting on Zygo lead-time from Nabeel" (blocker status)
-   - "p05 vendor decision needs to happen this week" (action item)
-
-2. DECISIONS AND CHOICES — anything that commits to a direction:
-   - "Going with Zygo Verifire SV for p05" (decision)
-   - "Dropping stitching from primary workflow" (design choice)
-   - "USB SSD mandatory, not SD card" (architectural commitment)
-
-3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
-   - "CTE gradient dominates WFE at F/1.2" (materials insight)
-   - "Preston model breaks below 5N because contact assumption fails"
-   - "m=1 coma NOT correctable by force modulation" (controls insight)
-   Test: would a competent engineer NEED experience to know this?
-   If it's textbook/google-findable, skip it.
-
-4. STAKEHOLDER AND VENDOR EVENTS:
-   - "Email sent to Nabeel 2026-04-13 asking for lead time"
-   - "Meeting with Jason on Table 7 next Tuesday"
-   - "Starspec wants updated CAD by Friday"
-
-5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
-   - "Antoine prefers OAuth over API keys"
-   - "Extraction stays off the capture hot path"
-
-WHAT TO SKIP:
-
-- Pure conversational filler ("ok thanks", "let me check")
-- Instructional help content ("run this command", "here's how to...")
-- Obvious textbook facts anyone can google in 30 seconds
-- Session meta-chatter ("let me commit this", "deploy running")
-- Transient system state snapshots ("36 active memories right now")
-
-CANDIDATE TYPES — choose the best fit:
-
-- project — a fact, decision, or event specific to one named project
-- knowledge — durable engineering insight (use domain, not project)
-- preference — how Antoine works / wants things done
-- adaptation — a standing rule or adjustment to behavior
-- episodic — a stakeholder event or milestone worth remembering
-
-DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
-physics, materials, optics, mechanics, manufacturing, metrology,
-controls, software, math, finance, business
-
-TRUST HIERARCHY:
-
-- project-specific: set project to the project id, leave domain empty
-- domain knowledge: set domain, leave project empty
-- events/activity: use project, type=project or episodic
-- one conversation can produce MULTIPLE candidates — emit them all
-
-OUTPUT RULES:
-
-- Each candidate content under 250 characters, stands alone
-- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
-- Raw JSON array, no prose, no markdown fences
-- Empty array [] is fine when the conversation has no durable signal
-
-Each element:
-{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""
-
-
 @dataclass
@@ -206,13 +134,10 @@ def extract_candidates_llm_verbose(
     if not response_text:
         return LLMExtractionResult(candidates=[], raw_output="", error="empty_response")

-    prompt_excerpt = (interaction.prompt or "")[:MAX_PROMPT_CHARS]
-    response_excerpt = response_text[:MAX_RESPONSE_CHARS]
-    user_message = (
-        f"PROJECT HINT (may be empty): {interaction.project or ''}\n\n"
-        f"USER PROMPT:\n{prompt_excerpt}\n\n"
-        f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
-        "Return the JSON array now."
+    user_message = build_user_message(
+        interaction.prompt or "",
+        response_text,
+        interaction.project or "",
     )

     args = [
@@ -270,50 +195,25 @@ def extract_candidates_llm_verbose(
 def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryCandidate]:
     """Parse the model's JSON output into MemoryCandidate objects.

-    Tolerates common model glitches: surrounding whitespace, stray
-    markdown fences, leading/trailing prose. Silently drops malformed
-    array elements rather than raising.
+    Shared stripping + per-item validation live in
+    ``atocore.memory._llm_prompt``. This function adds the container-
+    only R9 project attribution: registry-check model_project and fall
+    back to the interaction scope when set.
     """
-    text = raw_output.strip()
-    if text.startswith("```"):
-        text = text.strip("`")
-        first_newline = text.find("\n")
-        if first_newline >= 0:
-            text = text[first_newline + 1 :]
-        if text.endswith("```"):
-            text = text[:-3]
-        text = text.strip()
-
-    if not text or text == "[]":
-        return []
-
-    if not text.lstrip().startswith("["):
-        start = text.find("[")
-        end = text.rfind("]")
-        if start >= 0 and end > start:
-            text = text[start : end + 1]
-
-    try:
-        parsed = json.loads(text)
-    except json.JSONDecodeError as exc:
-        log.error("llm_extractor_parse_failed", error=str(exc), raw_prefix=raw_output[:120])
-        return []
-
-    if not isinstance(parsed, list):
-        return []
+    raw_items = parse_llm_json_array(raw_output)
+    if not raw_items and raw_output.strip() not in ("", "[]"):
+        log.error("llm_extractor_parse_failed", raw_prefix=raw_output[:120])

     results: list[MemoryCandidate] = []
-    for item in parsed:
-        if not isinstance(item, dict):
+    for raw_item in raw_items:
+        normalized = normalize_candidate_item(raw_item)
+        if normalized is None:
             continue
-        mem_type = str(item.get("type") or "").strip().lower()
-        content = str(item.get("content") or "").strip()
-        model_project = str(item.get("project") or "").strip()
-        # R9 trust hierarchy for project attribution:
-        # 1. Interaction scope always wins when set (strongest signal)
-        # 2. Model project used only when interaction is unscoped
-        #    AND model project resolves to a registered project
-        # 3. Empty string when both are empty/unregistered
+        model_project = normalized["project"]
+        # R9 trust hierarchy: interaction scope wins; else registry-
+        # resolve the model's tag; else keep the model's tag so auto-
+        # triage can surface unregistered projects.
         if interaction.project:
             project = interaction.project
         elif model_project:
@@ -328,9 +228,6 @@ def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryC
                 if resolved in registered_ids:
                     project = resolved
                 else:
-                    # Unregistered project — keep the model's tag so
-                    # auto-triage / the operator can see it and decide
-                    # whether to register it as a new project or lead.
                     project = model_project
                     log.info(
                         "unregistered_project_detected",
@@ -338,34 +235,19 @@ def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryC
                         interaction_id=interaction.id,
                     )
             except Exception:
-                project = model_project if model_project else ""
+                project = model_project
         else:
             project = ""
-        domain = str(item.get("domain") or "").strip().lower()
-        confidence_raw = item.get("confidence", 0.5)
-        if mem_type not in MEMORY_TYPES:
-            continue
-        if not content:
-            continue
-        # Domain knowledge: embed the domain tag in the content so it
-        # survives without a schema migration. The context builder
-        # can match on it via query-relevance ranking, and a future
-        # migration can parse it into a proper column.
-        if domain and not project:
-            content = f"[{domain}] {content}"
-        try:
-            confidence = float(confidence_raw)
-        except (TypeError, ValueError):
-            confidence = 0.5
-        confidence = max(0.0, min(1.0, confidence))
+        content = normalized["content"]
         results.append(
             MemoryCandidate(
-                memory_type=mem_type,
-                content=content[:1000],
+                memory_type=normalized["type"],
+                content=content,
                 rule="llm_extraction",
                 source_span=content[:200],
                 project=project,
-                confidence=confidence,
+                confidence=normalized["confidence"],
                 source_interaction_id=interaction.id,
                 extractor_version=LLM_EXTRACTOR_VERSION,
             )
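The R9 attribution rule that the rewritten comments describe can be isolated into a small decision function. This is a sketch of the policy only — `attribute_project`, `resolve`, and `registered_ids` are stand-ins for the real registry lookup in `_parse_candidates`:

```python
# Sketch of R9 project attribution: 1) interaction scope wins when set;
# 2) otherwise registry-resolve the model's tag; 3) otherwise keep the
# model's raw tag so auto-triage can surface unregistered projects.
def attribute_project(interaction_project, model_project, registered_ids, resolve):
    if interaction_project:
        return interaction_project
    if model_project:
        try:
            resolved = resolve(model_project)
            return resolved if resolved in registered_ids else model_project
        except Exception:
            # Registry lookup failed: fall back to the raw model tag.
            return model_project
    return ""


registered = {"p04-gigabit", "p05-interferometer"}
resolve = lambda tag: tag.lower()  # stand-in resolver: case-fold only

print(attribute_project("p06-polisher", "p04-gigabit", registered, resolve))
print(attribute_project("", "P04-Gigabit", registered, resolve))
print(attribute_project("", "new-lead", registered, resolve))
```

The third case is the deliberate behavior change in this diff: an unregistered tag is no longer discarded, it travels with the candidate so an operator can decide whether to register it.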
@@ -340,6 +340,84 @@ def reinforce_memory(
     return True, old_confidence, new_confidence


+def auto_promote_reinforced(
+    min_reference_count: int = 3,
+    min_confidence: float = 0.7,
+    max_age_days: int = 14,
+) -> list[str]:
+    """Auto-promote candidate memories with strong reinforcement signals.
+
+    Phase 10: memories that have been reinforced by multiple interactions
+    graduate from candidate to active without human review. This rewards
+    knowledge that the system keeps referencing organically.
+
+    Returns a list of promoted memory IDs.
+    """
+    from datetime import timedelta
+
+    cutoff = (
+        datetime.now(timezone.utc) - timedelta(days=max_age_days)
+    ).strftime("%Y-%m-%d %H:%M:%S")
+    promoted: list[str] = []
+    with get_connection() as conn:
+        rows = conn.execute(
+            "SELECT id, content, memory_type, project, confidence, "
+            "reference_count FROM memories "
+            "WHERE status = 'candidate' "
+            "AND COALESCE(reference_count, 0) >= ? "
+            "AND confidence >= ? "
+            "AND last_referenced_at >= ?",
+            (min_reference_count, min_confidence, cutoff),
+        ).fetchall()
+
+    for row in rows:
+        mid = row["id"]
+        ok = promote_memory(mid)
+        if ok:
+            promoted.append(mid)
+            log.info(
+                "memory_auto_promoted",
+                memory_id=mid,
+                memory_type=row["memory_type"],
+                project=row["project"] or "(global)",
+                reference_count=row["reference_count"],
+                confidence=round(row["confidence"], 3),
+            )
+    return promoted
+
+
+def expire_stale_candidates(
+    max_age_days: int = 14,
+) -> list[str]:
+    """Reject candidate memories that sat in queue too long unreinforced.
+
+    Candidates older than ``max_age_days`` with zero reinforcement are
+    auto-rejected to prevent unbounded queue growth. Returns rejected IDs.
+    """
+    from datetime import timedelta
+
+    cutoff = (
+        datetime.now(timezone.utc) - timedelta(days=max_age_days)
+    ).strftime("%Y-%m-%d %H:%M:%S")
+    expired: list[str] = []
+    with get_connection() as conn:
+        rows = conn.execute(
+            "SELECT id FROM memories "
+            "WHERE status = 'candidate' "
+            "AND COALESCE(reference_count, 0) = 0 "
+            "AND created_at < ?",
+            (cutoff,),
+        ).fetchall()
+
+    for row in rows:
+        mid = row["id"]
+        ok = reject_candidate_memory(mid)
+        if ok:
+            expired.append(mid)
+            log.info("memory_expired", memory_id=mid)
+    return expired
+
+
 def get_memories_for_context(
     memory_types: list[str] | None = None,
     project: str | None = None,
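The selection predicate in `auto_promote_reinforced` is worth seeing without the sqlite plumbing. An in-memory sketch with dicts standing in for the rows (thresholds match the diff's defaults):

```python
from datetime import datetime, timedelta, timezone


# In-memory sketch of auto_promote_reinforced's WHERE clause: a
# candidate graduates when it has enough references, enough confidence,
# AND was referenced within the age window.
def select_promotable(rows, min_refs=3, min_conf=0.7, max_age_days=14):
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [
        r["id"] for r in rows
        if r["status"] == "candidate"
        and r.get("reference_count", 0) >= min_refs
        and r["confidence"] >= min_conf
        and r["last_referenced_at"] >= cutoff
    ]


now = datetime.now(timezone.utc)
rows = [
    {"id": "m1", "status": "candidate", "reference_count": 3,
     "confidence": 0.8, "last_referenced_at": now},
    {"id": "m2", "status": "candidate", "reference_count": 1,
     "confidence": 0.9, "last_referenced_at": now},          # too few refs
    {"id": "m3", "status": "candidate", "reference_count": 5,
     "confidence": 0.8,
     "last_referenced_at": now - timedelta(days=30)},        # stale
]
print(select_promotable(rows))
```

Note the recency check is what separates this from a pure reference-count gate: a heavily referenced but long-untouched candidate does not auto-promote, it ages into `expire_stale_candidates` territory instead.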
@@ -171,3 +171,38 @@ def test_llm_extraction_failure_returns_empty(tmp_data_dir, monkeypatch):
     # Nothing in the candidate queue
     queue = get_memories(status="candidate", limit=10)
     assert len(queue) == 0
+
+
+def test_extract_batch_api_503_when_cli_missing(tmp_data_dir, monkeypatch):
+    """R11: POST /admin/extract-batch with mode=llm must fail loud when
+    the `claude` CLI is unavailable, instead of silently returning a
+    success-with-0-candidates payload (which masked host-vs-container
+    truth for operators)."""
+    from fastapi.testclient import TestClient
+    from atocore.main import app
+    import atocore.api.routes as routes
+
+    init_db()
+    monkeypatch.setattr(routes, "_llm_cli_available", lambda: False)
+
+    client = TestClient(app)
+    response = client.post("/admin/extract-batch", json={"mode": "llm"})
+
+    assert response.status_code == 503
+    assert "claude" in response.json()["detail"].lower()
+
+
+def test_extract_batch_api_rule_mode_ok_without_cli(tmp_data_dir, monkeypatch):
+    """Rule mode must still work when the LLM CLI is missing — R11 only
+    affects mode=llm."""
+    from fastapi.testclient import TestClient
+    from atocore.main import app
+    import atocore.api.routes as routes
+
+    init_db()
+    monkeypatch.setattr(routes, "_llm_cli_available", lambda: False)
+
+    client = TestClient(app)
+    response = client.post("/admin/extract-batch", json={"mode": "rule"})
+
+    assert response.status_code == 200
@@ -186,3 +186,98 @@ def test_memories_for_context_empty(isolated_db):
     text, chars = get_memories_for_context()
     assert text == ""
     assert chars == 0
+
+
+# --- Phase 10: auto-promotion + candidate expiry ---
+
+
+def _get_memory_by_id(memory_id):
+    """Helper: fetch a single memory by ID."""
+    from atocore.models.database import get_connection
+    with get_connection() as conn:
+        row = conn.execute("SELECT * FROM memories WHERE id = ?", (memory_id,)).fetchone()
+    return dict(row) if row else None
+
+
+def test_auto_promote_reinforced_basic(isolated_db):
+    from atocore.memory.service import (
+        auto_promote_reinforced,
+        create_memory,
+        reinforce_memory,
+    )
+
+    mem_obj = create_memory("knowledge", "Zerodur has near-zero CTE", status="candidate", confidence=0.7)
+    mid = mem_obj.id
+    # reinforce_memory only touches active memories, so we need to
+    # promote first to reinforce, then demote back to candidate —
+    # OR just bump reference_count + last_referenced_at directly
+    from atocore.models.database import get_connection
+    from datetime import datetime, timezone
+    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
+    with get_connection() as conn:
+        conn.execute(
+            "UPDATE memories SET reference_count = 3, last_referenced_at = ? WHERE id = ?",
+            (now, mid),
+        )
+
+    promoted = auto_promote_reinforced(min_reference_count=3, min_confidence=0.7)
+    assert mid in promoted
+    mem = _get_memory_by_id(mid)
+    assert mem["status"] == "active"
+
+
+def test_auto_promote_reinforced_ignores_low_refs(isolated_db):
+    from atocore.memory.service import auto_promote_reinforced, create_memory
+    from atocore.models.database import get_connection
+    from datetime import datetime, timezone
+
+    mem_obj = create_memory("knowledge", "Some knowledge", status="candidate", confidence=0.7)
+    mid = mem_obj.id
+    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
+    with get_connection() as conn:
+        conn.execute(
+            "UPDATE memories SET reference_count = 1, last_referenced_at = ? WHERE id = ?",
+            (now, mid),
+        )
+
+    promoted = auto_promote_reinforced(min_reference_count=3, min_confidence=0.7)
+    assert mid not in promoted
+    mem = _get_memory_by_id(mid)
+    assert mem["status"] == "candidate"
+
+
+def test_expire_stale_candidates(isolated_db):
+    from atocore.memory.service import create_memory, expire_stale_candidates
+    from atocore.models.database import get_connection
+
+    mem_obj = create_memory("knowledge", "Old unreferenced fact", status="candidate")
+    mid = mem_obj.id
+    with get_connection() as conn:
+        conn.execute(
+            "UPDATE memories SET created_at = datetime('now', '-30 days') WHERE id = ?",
+            (mid,),
+        )
+
+    expired = expire_stale_candidates(max_age_days=14)
+    assert mid in expired
+    mem = _get_memory_by_id(mid)
+    assert mem["status"] == "invalid"
+
+
+def test_expire_stale_candidates_keeps_reinforced(isolated_db):
+    from atocore.memory.service import create_memory, expire_stale_candidates
+    from atocore.models.database import get_connection
+
+    mem_obj = create_memory("knowledge", "Referenced fact", status="candidate")
+    mid = mem_obj.id
+    with get_connection() as conn:
+        conn.execute(
+            "UPDATE memories SET reference_count = 1, "
+            "created_at = datetime('now', '-30 days') WHERE id = ?",
+            (mid,),
+        )
+
+    expired = expire_stale_candidates(max_age_days=14)
+    assert mid not in expired
+    mem = _get_memory_by_id(mid)
+    assert mem["status"] == "candidate"