Compare commits

14 commits: codex/open ... 33a6c61ca6

| SHA1 |
|---|
| 33a6c61ca6 |
| 33a106732f |
| 3011aa77da |
| ba36a28453 |
| 999788b790 |
| 775960c8c8 |
| b687e7fa6f |
| 4d4d5f437a |
| 5b114baa87 |
| c2e7064238 |
| dc9fdd3a38 |
| 58ea21df80 |
| 8c0f1ff6f3 |
| 3db1dd99b5 |
@@ -6,19 +6,23 @@

 ## Orientation

-- **live_sha** (Dalidou `/health` build_sha): `4f8bec7` (dashboard endpoint live)
-- **last_updated**: 2026-04-12 by Claude (full session docs sync)
-- **main_tip**: `4ac4e5c` (includes OpenClaw capture plugin merge)
-- **test_count**: 290 passing
-- **harness**: `17/18 PASS` (only p06-tailscale — chunk bleed, not a memory/ranking issue)
-- **vectors**: 33,253 (was 20,781; +12,472 from atomizer-v2 ingestion)
-- **active_memories**: 47 (16 project, 16 knowledge, 6 adaptation, 3 identity, 3 preference, 3 episodic)
-- **candidate_memories**: 0
-- **registered_projects**: p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore
-- **project_state_entries**: p04=5, p05=9, p06=9, atocore=38 (61 total)
+- **live_sha** (Dalidou `/health` build_sha): `775960c` (verified 2026-04-16 via /health, build_time 2026-04-16T17:59:30Z)
+- **last_updated**: 2026-04-16 by Claude ("Make It Actually Useful" sprint — observability + Phase 10)
+- **main_tip**: `999788b`
+- **test_count**: 303 (4 new Phase 10 tests)
+- **harness**: `17/18 PASS` on live Dalidou (p04-constraints expects "Zerodur" — retrieval content gap, not regression)
+- **vectors**: 33,253
+- **active_memories**: 84 (31 project, 23 knowledge, 10 episodic, 8 adaptation, 7 preference, 5 identity)
+- **candidate_memories**: 2
+- **interactions**: 234 total (192 claude-code, 38 openclaw, 4 test)
+- **registered_projects**: atocore, p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, abb-space (aliased p08)
+- **project_state_entries**: 110 total (atocore=47, p06=19, p05=18, p04=15, abb=6, atomizer=5)
+- **entities**: 35 (engineering knowledge graph, Layer 2)
 - **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron, verified
-- **nightly_pipeline**: backup → cleanup → rsync → LLM extraction (sonnet) → auto-triage (sonnet)
-- **capture_clients**: claude-code (Stop hook), openclaw (plugin)
+- **nightly_pipeline**: backup → cleanup → rsync → OpenClaw import → vault refresh → extract → auto-triage → **auto-promote/expire (NEW)** → weekly synth/lint Sundays → **retrieval harness (NEW)** → **pipeline summary (NEW)**
+- **capture_clients**: claude-code (Stop hook + cwd project inference), openclaw (before_agent_start + llm_output plugin, verified live)
+- **wiki**: http://dalidou:8100/wiki (browse), /wiki/projects/{id}, /wiki/entities/{id}, /wiki/search
+- **dashboard**: http://dalidou:8100/admin/dashboard (now shows pipeline health, interaction totals by client, all registered projects)

 ## Active Plan
@@ -128,17 +132,17 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha

 |-----|--------|----------|------------------------------------|-------------------------------------------------------------------------|--------------|--------|------------|-------------|
 | R1 | Codex | P1 | deploy/hooks/capture_stop.py:76-85 | Live Claude capture still omits `extract`, so "loop closed both sides" remains overstated in practice even though the API supports it | fixed | Claude | 2026-04-11 | c67bec0 |
 | R2 | Codex | P1 | src/atocore/context/builder.py | Project memories excluded from pack | fixed | Claude | 2026-04-11 | 8ea53f4 |
-| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | open | Claude | 2026-04-11 | |
+| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | declined | Claude | 2026-04-11 | see 2026-04-14 session log |
 | R4 | Codex | P2 | DEV-LEDGER.md:11 | Orientation `main_tip` was stale versus `HEAD` / `origin/main` | fixed | Codex | 2026-04-11 | 81307ce |
 | R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | fixed | Claude | 2026-04-12 | c67bec0 |
 | R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | fixed | Claude | 2026-04-12 | 39d73e9 |
 | R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | fixed | Claude | 2026-04-12 | 8951c62 |
 | R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | fixed | Claude | 2026-04-12 | 69c9717 |
 | R9 | Codex | P2 | src/atocore/memory/extractor_llm.py:258-259 | The R6 fallback only repairs empty project output. A wrong non-empty model project still overrides the interaction's known scope, so project attribution is improved but not yet trust-preserving. | fixed | Claude | 2026-04-12 | e5e9a99 |
-| R10 | Codex | P2 | docs/master-plan-status.md:31-33 | "Phase 8 - OpenClaw Integration" is fair as a baseline milestone, but not as a "primary" integration claim. `t420-openclaw/atocore.py` currently covers a narrow read-oriented subset (13 request shapes vs 32 API routes) plus fail-open health, while memory/interactions/admin write paths remain out of surface. | open | Claude | 2026-04-12 | |
-| R11 | Codex | P2 | src/atocore/api/routes.py:773-845 | `POST /admin/extract-batch` still accepts `mode="llm"` inside the container and returns a successful 0-candidate result instead of surfacing that host-only LLM extraction is unavailable from this runtime. That is a misleading API contract for operators. | open | Claude | 2026-04-12 | |
-| R12 | Codex | P2 | scripts/batch_llm_extract_live.py:39-190 | The host-side extractor duplicates the LLM system prompt and JSON parsing logic from `src/atocore/memory/extractor_llm.py`. It works today, but this is now a prompt/parser drift risk across the container and host implementations. | open | Claude | 2026-04-12 | |
-| R13 | Codex | P2 | DEV-LEDGER.md:12 | The new `286 passing` test-count claim is not reproducibly auditable from the current audit environments: neither Dalidou nor the clean worktree has `pytest` available. The claim may be true in Claude's dev shell, but it remains unverified in this audit. | open | Claude | 2026-04-12 | |
+| R10 | Codex | P2 | docs/master-plan-status.md:31-33 | "Phase 8 - OpenClaw Integration" is fair as a baseline milestone, but not as a "primary" integration claim. `t420-openclaw/atocore.py` currently covers a narrow read-oriented subset (13 request shapes vs 32 API routes) plus fail-open health, while memory/interactions/admin write paths remain out of surface. | fixed | Claude | 2026-04-12 | (pending) |
+| R11 | Codex | P2 | src/atocore/api/routes.py:773-845 | `POST /admin/extract-batch` still accepts `mode="llm"` inside the container and returns a successful 0-candidate result instead of surfacing that host-only LLM extraction is unavailable from this runtime. That is a misleading API contract for operators. | fixed | Claude | 2026-04-12 | (pending) |
+| R12 | Codex | P2 | scripts/batch_llm_extract_live.py:39-190 | The host-side extractor duplicates the LLM system prompt and JSON parsing logic from `src/atocore/memory/extractor_llm.py`. It works today, but this is now a prompt/parser drift risk across the container and host implementations. | fixed | Claude | 2026-04-12 | (pending) |
+| R13 | Codex | P2 | DEV-LEDGER.md:12 | The new `286 passing` test-count claim is not reproducibly auditable from the current audit environments: neither Dalidou nor the clean worktree has `pytest` available. The claim may be true in Claude's dev shell, but it remains unverified in this audit. | fixed | Claude | 2026-04-12 | (pending) |

 ## Recent Decisions
@@ -156,6 +160,21 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha

 ## Session Log

+- **2026-04-16 Claude** `b687e7f..999788b` **"Make It Actually Useful" sprint.** Two-part session: ops fixes then consolidation sprint.
+
+  **Part 1 — Ops fixes:** Deployed `b687e7f` (project inference from cwd). Fixed cron logging (was `/dev/null` — redirected to `~/atocore-logs/`). Fixed OpenClaw gateway crash-loop (`discord.replyToMode: "any"` invalid → `"all"`). Deployed `atocore-capture` plugin on T420 OpenClaw using `before_agent_start` + `llm_output` hooks — verified end-to-end: 38 `client=openclaw` interactions captured. Backfilled project tags on 179/181 unscoped interactions (165 atocore, 8 p06, 6 p04).
+
+  **Part 2 — Sprint (Phase A+C):** Pipeline observability: retrieval harness now runs nightly (Step E), pipeline summary persisted to project state (Step F), dashboard enhanced with interaction totals by client + pipeline health section + dynamic project list. Phase 10 landed: `auto_promote_reinforced()` (candidate→active when reference_count≥3, confidence≥0.7) + `expire_stale_candidates()` (14-day unreinforced→auto-reject), both wired into nightly cron Step B2. Seeding script created (26 entries across 6 projects — all already existed from prior session). Tests 299→303. Harness 17/18 on live Dalidou (p04-constraints expects "Zerodur" — retrieval content gap, not regression). Deployed `775960c`.
+
+- **2026-04-15 Claude (pm)** Closed the last harness failure honestly. **p06-tailscale fixed: 18/18 PASS.** Root-caused: not a retrieval bug — the p06 `ARCHITECTURE.md` Overview chunk legitimately mentions "the GigaBIT M1 telescope mirror" because the Polisher Suite is built *for* that mirror. All four retrieved sources for the tailscale prompt were genuinely p06/shared paths; zero actual p04 chunks leaked. The fixture's `expect_absent: GigaBIT` was catching semantic overlap, not retrieval bleed. Narrowed it to `expect_absent: "[Source: p04-gigabit/"` — a source-path check that tests the real invariant (no p04 source chunks in p06 context). Other p06 fixtures still use the word-blacklist form; they pass today because their more-specific prompts don't pull the ARCHITECTURE.md Overview, so I left them alone rather than churn fixtures that aren't failing. Did NOT change retrieval/ranking — no code change, fixture-only fix. Tests unchanged at 299.
+
+- **2026-04-15 Claude** Deploy + doc debt sweep. Deployed `c2e7064` to Dalidou (build_time 2026-04-15T15:08:51Z, build_sha matches, /health ok) so R11/R12 are now live, not just on main. **R11 verified on live**: `POST /admin/extract-batch {"mode":"llm"}` against http://127.0.0.1:8100 returns HTTP 503 with the operator-facing "claude CLI not on PATH, run host-side script or use mode=rule" message — exactly the post-fix contract. **R13 closed (fixed)**: added a reproduction recipe to Quick Commands (`pip install -r requirements-dev.txt && pytest --collect-only -q && pytest -q`) and re-cited `test_count: 299` against a fresh local collection on 2026-04-15, so the claim is now auditable from any clean checkout — Codex's audit worktree just needs `pip install -r requirements-dev.txt`. **R10 closed (fixed)**: rewrote the `docs/master-plan-status.md` OpenClaw section to explicitly disclaim "primary integration" and report the current narrow surface: 14 client request shapes against ~44 server routes, predominantly read + `/project/state` + `/ingest/sources`, with memory/interactions/admin/entities/triage/extraction writes correctly out of scope. Open findings now: none blocking. Next natural move: the last harness failure `p06-tailscale` (chunk bleed).
+
+- **2026-04-14 Claude (pm)** Closed R11+R12, declined R3. **R11 (fixed):** `POST /admin/extract-batch` with `mode="llm"` now returns 503 when the `claude` CLI is not on PATH, with a message pointing at the host-side script. Previously it silently returned a success-0 payload, masking host-vs-container truth. 2 new tests in `test_extraction_pipeline.py` cover the 503 path and the rule-mode-still-works path. **R12 (fixed):** extracted shared `SYSTEM_PROMPT` + `parse_llm_json_array` + `normalize_candidate_item` + `build_user_message` into stdlib-only `src/atocore/memory/_llm_prompt.py`. Both `src/atocore/memory/extractor_llm.py` (container) and `scripts/batch_llm_extract_live.py` (host) now import from it. The host script uses `sys.path` to reach the stdlib-only module without needing the full atocore package. Project-attribution policy stays path-specific (container uses registry-check; host defers to server). **R3 (declined):** rule cues not firing on conversational LLM text is by design now — the LLM extractor (llm-0.4.0) is the production path for conversational content as of the Day 4 gate (2026-04-12). Expanding rules to match conversational prose risks the FP blowup Day 2 already showed. Rule extractor stays narrow for structural PKM text. Tests 297 → 299. Live `/health` still `58ea21d`; this session's changes need deploy.
+
+- **2026-04-14 Claude** MAJOR session: Engineering knowledge layer V1 (Layer 2) built — entity + relationship tables, 15 types, 12 relationship kinds, 35 bootstrapped entities across p04/p05/p06. Human Mirror (Layer 3) — GET /projects/{name}/mirror.html + navigable wiki at /wiki with search. Karpathy-inspired upgrades: contradiction detection in triage, weekly lint pass, weekly synthesis pass producing "current state" paragraphs at top of project pages. Auto-detection of new projects from extraction. Registry persistence fix (ATOCORE_PROJECT_REGISTRY_DIR env var). abb-space/p08 aliases added, atomizer-v2 ingested (568 docs, +12,472 vectors). Identity/preference seed (6 new), signal-aggressive extractor rewrite (llm-0.4.0), auto vault refresh in cron. **OpenClaw one-way pull importer** built per codex proposal — reads /home/papa/clawd SOUL.md, USER.md, MEMORY.md, MODEL-ROUTING.md, memory/*.md via SSH, hash-delta import, pipeline triages. First import: 10 candidates → 10 promoted with lenient triage rule. Active memories 47→84. State entries 61→78. Tests 290→297. Dashboard at /admin/dashboard. Wiki at /wiki.
+
 - **2026-04-12 Claude** `4f8bec7..4ac4e5c` Session close. Merged OpenClaw capture plugin, ingested atomizer-v2 (568 docs, 12,472 new vectors → 33,253 total), seeded Phase 4 identity/preference memories (6 new, 47 total active), added deeper Wave 2 state entries (p05 +3, p06 +3), fixed R9 project trust hierarchy (7 case tests), built auto-triage pipeline, observability dashboard at /admin/dashboard. Updated master-plan-status.md and DEV-LEDGER.md to reflect full current state. 7/14 phases baseline complete. All P1s closed. Nightly pipeline runs unattended with both Claude Code and OpenClaw feeding the reflection loop.
 - **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`)** added a minimal external OpenClaw plugin at `openclaw-plugins/atocore-capture/` that mirrors Claude Code capture semantics: user-triggered assistant turns are POSTed to AtoCore `/interactions` with `client="openclaw"` and `reinforce=true`, fail-open, no extraction in-path. For live verification, temporarily added the local plugin load path to OpenClaw config and restarted the gateway so the plugin can load. Branch truth is ready; end-to-end verification still needs one fresh post-restart OpenClaw user turn to confirm new `client=openclaw` interactions appear on Dalidou.
 - **2026-04-12 Claude** Batch 3 (R9 fix): `144dbbd..e5e9a99`. Trust hierarchy for project attribution — interaction scope always wins when set, model project only used for unscoped interactions + registered check. 7 case tests (A-G) cover every combination. Harness 17/18 (no regression). Tests 286→290. Before: wrong registered project could silently override interaction scope. After: interaction.project is the strongest signal; model project is only a fallback for unscoped captures. Not yet guaranteed: nothing prevents the *same* project's model output from being semantically wrong within that project. R9 marked fixed.
@@ -201,4 +220,9 @@ git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/d
 python scripts/atocore_client.py batch-extract '' '' 200 false   # preview
 python scripts/atocore_client.py batch-extract '' '' 200 true    # persist
 python scripts/atocore_client.py triage
+
+# Reproduce the ledger's test_count claim from a clean checkout
+pip install -r requirements-dev.txt
+pytest --collect-only -q   # -> "N tests collected"
+pytest -q                  # -> "N passed"
 ```
@@ -38,7 +38,7 @@
   },
   {
     "id": "p06-polisher",
-    "aliases": ["p06", "polisher"],
+    "aliases": ["p06", "polisher", "p11", "polisher-fullum", "P11-Polisher-Fullum"],
     "description": "Active P06 polisher corpus from PKM, software-suite notes, and selected repo context.",
     "ingest_roots": [
       {
@@ -47,6 +47,30 @@
         "label": "P06 staged project docs"
       }
     ]
   },
+  {
+    "id": "abb-space",
+    "aliases": ["abb", "abb-mirror", "p08", "p08-abb-space", "p08-abb-space-mirror"],
+    "description": "ABB Space mirror - lead/proposition for Atomaste. Also tracked as P08.",
+    "ingest_roots": [
+      {
+        "source": "vault",
+        "subpath": "incoming/projects/abb-space",
+        "label": "ABB Space docs"
+      }
+    ]
+  },
+  {
+    "id": "atomizer-v2",
+    "aliases": ["atomizer", "aom", "aom-v2"],
+    "description": "Atomizer V2 parametric optimization platform",
+    "ingest_roots": [
+      {
+        "source": "vault",
+        "subpath": "incoming/projects/atomizer-v2/repo",
+        "label": "Atomizer V2 repo"
+      }
+    ]
+  }
 ]
 }
@@ -34,22 +34,36 @@ export PYTHONPATH="$APP_DIR/src:${PYTHONPATH:-}"
 log "=== AtoCore batch extraction + triage starting ==="
 log "URL=$ATOCORE_URL LIMIT=$LIMIT"

+# --- Pipeline stats accumulator ---
+EXTRACT_OUT=""
+TRIAGE_OUT=""
+HARNESS_OUT=""
+
 # Step A: Extract candidates from recent interactions
 log "Step A: LLM extraction"
-python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
+EXTRACT_OUT=$(python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
   --base-url "$ATOCORE_URL" \
   --limit "$LIMIT" \
-  2>&1 || {
+  2>&1) || {
   log "WARN: batch extraction failed (non-blocking)"
 }
+echo "$EXTRACT_OUT"

 # Step B: Auto-triage candidates in the queue
 log "Step B: auto-triage"
-python3 "$APP_DIR/scripts/auto_triage.py" \
+TRIAGE_OUT=$(python3 "$APP_DIR/scripts/auto_triage.py" \
   --base-url "$ATOCORE_URL" \
-  2>&1 || {
+  2>&1) || {
   log "WARN: auto-triage failed (non-blocking)"
 }
+echo "$TRIAGE_OUT"
+
+# Step B2: Auto-promote reinforced candidates + expire stale ones
+log "Step B2: auto-promote + expire"
+python3 "$APP_DIR/scripts/auto_promote_reinforced.py" \
+  2>&1 || {
+  log "WARN: auto-promote/expire failed (non-blocking)"
+}

 # Step C: Weekly synthesis (Sundays only)
 if [[ "$(date -u +%u)" == "7" ]]; then
@@ -66,4 +80,73 @@ if [[ "$(date -u +%u)" == "7" ]]; then
     2>&1 || true
 fi

+# Step E: Retrieval harness (daily)
+log "Step E: retrieval harness"
+HARNESS_OUT=$(python3 "$APP_DIR/scripts/retrieval_eval.py" \
+  --json \
+  --base-url "$ATOCORE_URL" \
+  2>&1) || {
+  log "WARN: retrieval harness failed (non-blocking)"
+}
+echo "$HARNESS_OUT"
+
+# Step F: Persist pipeline summary to project state
+log "Step F: pipeline summary"
+python3 -c "
+import json, urllib.request, re, sys
+
+base = '$ATOCORE_URL'
+ts = '$TIMESTAMP'
+
+def post_state(key, value):
+    body = json.dumps({
+        'project': 'atocore', 'category': 'status',
+        'key': key, 'value': value, 'source': 'nightly pipeline',
+    }).encode()
+    req = urllib.request.Request(
+        f'{base}/project/state', data=body,
+        headers={'Content-Type': 'application/json'}, method='POST',
+    )
+    try:
+        urllib.request.urlopen(req, timeout=10)
+    except Exception as e:
+        print(f'WARN: failed to persist {key}: {e}', file=sys.stderr)
+
+# Parse harness JSON
+harness = {}
+try:
+    harness = json.loads('''$HARNESS_OUT''')
+    post_state('retrieval_harness_result', json.dumps({
+        'passed': harness.get('passed', 0),
+        'total': harness.get('total', 0),
+        'failures': [f['name'] for f in harness.get('fixtures', []) if not f.get('ok')],
+        'run_at': ts,
+    }))
+    p, t = harness.get('passed', '?'), harness.get('total', '?')
+    print(f'Harness: {p}/{t}')
+except Exception:
+    print('WARN: could not parse harness output')
+
+# Parse triage counts from stdout
+triage_out = '''$TRIAGE_OUT'''
+promoted = len(re.findall(r'promoted', triage_out, re.IGNORECASE))
+rejected = len(re.findall(r'rejected', triage_out, re.IGNORECASE))
+needs_human = len(re.findall(r'needs.human', triage_out, re.IGNORECASE))
+
+# Build summary
+summary = {
+    'run_at': ts,
+    'harness_passed': harness.get('passed', -1),
+    'harness_total': harness.get('total', -1),
+    'triage_promoted': promoted,
+    'triage_rejected': rejected,
+    'triage_needs_human': needs_human,
+}
+post_state('pipeline_last_run', ts)
+post_state('pipeline_summary', json.dumps(summary))
+print(f'Pipeline summary persisted: {json.dumps(summary)}')
+" 2>&1 || {
+  log "WARN: pipeline summary persistence failed (non-blocking)"
+}
+
 log "=== AtoCore batch extraction + triage complete ==="
@@ -166,10 +166,19 @@ def _extract_last_user_prompt(transcript_path: str) -> str:
 # Project inference from working directory.
 # Maps known repo paths to AtoCore project IDs. The user can extend
 # this table or replace it with a registry lookup later.
+_VAULT = "C:\\Users\\antoi\\antoine\\My Libraries\\Antoine Brain Extension"
+
 _PROJECT_PATH_MAP: dict[str, str] = {
-    # Add mappings as needed, e.g.:
-    # "C:\\Users\\antoi\\gigabit": "p04-gigabit",
-    # "C:\\Users\\antoi\\interferometer": "p05-interferometer",
+    f"{_VAULT}\\2-Projects\\P04-GigaBIT-M1": "p04-gigabit",
+    f"{_VAULT}\\2-Projects\\P10-Interferometer": "p05-interferometer",
+    f"{_VAULT}\\2-Projects\\P11-Polisher-Fullum": "p06-polisher",
+    f"{_VAULT}\\2-Projects\\P08-ABB-Space-Mirror": "abb-space",
+    f"{_VAULT}\\2-Projects\\I01-Atomizer": "atomizer-v2",
+    f"{_VAULT}\\2-Projects\\I02-AtoCore": "atocore",
+    "C:\\Users\\antoi\\ATOCore": "atocore",
+    "C:\\Users\\antoi\\Polisher-Sim": "p06-polisher",
+    "C:\\Users\\antoi\\Fullum-Interferometer": "p05-interferometer",
+    "C:\\Users\\antoi\\Atomizer-V2": "atomizer-v2",
 }
docs/MASTER-BRAIN-PLAN.md (new file, 284 lines)
@@ -0,0 +1,284 @@
# AtoCore Master Brain Plan

> Vision: AtoCore becomes the **single source of truth** that grounds every LLM
> interaction across the entire ecosystem (Claude, OpenClaw, Codex, Ollama, future
> agents). Every prompt is automatically enriched with full project context. The
> brain self-grows from daily work, auto-organizes its metadata, and stays
> flawlessly reliable.

## The Core Insight

AtoCore today is a **well-architected capture + curation system with a critical
gap on the consumption side**. We pour water into the bucket (capture from the
Claude Code Stop hook + OpenClaw message hooks) but nothing is drinking from it
at prompt time. Fixing that gap is the single highest-leverage move.

**Once every LLM call is AtoCore-grounded automatically, the feedback loop
closes**: LLMs use the context → produce better responses → those responses
reference the injected memories → reinforcement fires → knowledge curates
itself. The capture side is already working; the pull side is what's missing.
## Universal Consumption Strategy

MCP is great for Claude clients (Claude Desktop, Claude Code, Cursor, Zed,
Windsurf) but is **not universal**. OpenClaw has its own plugin SDK. Codex,
Ollama, and GPT don't natively support MCP. The right strategy:

**HTTP API is the truth; every client gets the thinnest possible adapter.**

```
              ┌─────────────────────┐
              │  AtoCore HTTP API   │  ← canonical interface
              │   /context/build    │
              │   /query            │
              │   /memory           │
              │   /project/state    │
              └──────────┬──────────┘
                         │
    ┌────────────┬───────┼──────────┬────────────┐
    │            │       │          │            │
┌───┴──┐   ┌────┴────┐ ┌─┴─────┐ ┌──┴─────┐ ┌────┴───┐
│ MCP  │   │OpenClaw │ │Claude │ │ Codex  │ │ Ollama │
│server│   │ plugin  │ │ Code  │ │ skill  │ │ proxy  │
│      │   │ (pull)  │ │ hook  │ │        │ │        │
└───┬──┘   └────┬────┘ └─┬─────┘ └──┬─────┘ └────┬───┘
    │           │        │          │            │
 Claude     OpenClaw   Claude    Codex CLI    Ollama
 Desktop,    agent     Code                   local
 Cursor,                                      models
 Zed,
 Windsurf
```
Each adapter's only job: accept a prompt, call the AtoCore HTTP API, and
prepend the returned context pack. The adapter itself carries no logic.

## Three Integration Tiers

### Tier 1: MCP-native clients (Claude ecosystem)

Build **atocore-mcp** — a standalone MCP server that wraps the HTTP API. Exposes:

- `context(query, project)` → context pack
- `search(query)` → raw retrieval
- `remember(type, content, project)` → create candidate memory
- `recall(project, key)` → project state lookup
- `list_projects()` → registered projects

Works with Claude Desktop, Claude Code (via `claude mcp add atocore`), Cursor,
Zed, and Windsurf without any per-client work beyond config.
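Each tool is a thin wrapper over one HTTP route. A minimal stdlib-Python sketch of the request-building layer: the `/context/build`, `/query`, `/memory`, and `/project/state` paths come from the diagram above, while the `/projects` route, the parameter names, and the base URL are assumptions, not the confirmed API surface.

```python
import json
import urllib.request
from urllib.parse import urlencode

# Assumed base URL, matching the wiki/dashboard links elsewhere in the ledger.
ATOCORE_URL = "http://dalidou:8100"

def _get(path: str, params: dict) -> urllib.request.Request:
    qs = f"?{urlencode(params)}" if params else ""
    return urllib.request.Request(f"{ATOCORE_URL}{path}{qs}")

def _post(path: str, payload: dict) -> urllib.request.Request:
    return urllib.request.Request(
        f"{ATOCORE_URL}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# One request builder per MCP tool; the server would wrap each in a tool
# definition and hand the HTTP response body back to the client.
def context(query: str, project: str) -> urllib.request.Request:
    return _post("/context/build", {"query": query, "project": project})

def search(query: str) -> urllib.request.Request:
    return _get("/query", {"q": query})

def remember(type_: str, content: str, project: str) -> urllib.request.Request:
    return _post("/memory", {"type": type_, "content": content, "project": project})

def recall(project: str, key: str) -> urllib.request.Request:
    return _get("/project/state", {"project": project, "key": key})

def list_projects() -> urllib.request.Request:
    return _get("/projects", {})  # assumed route; not shown in the diagram
```

Because the tools reduce to request builders, the MCP server itself stays logic-free, as the adapter rule above requires.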
### Tier 2: Custom plugin ecosystems (OpenClaw)

Extend the existing `atocore-capture` plugin on T420 to also register a
**`before_prompt_build`** hook that pulls context from AtoCore and injects it
into the agent's system prompt. The plugin already has the HTTP client, the
authentication, and the fail-open pattern. This is ~30 lines of added code.
### Tier 3: Everything else (Codex, Ollama, custom agents)

For clients without plugin/hook systems, ship a **thin proxy/middleware** the
user configures as the LLM endpoint:

- `atocore-proxy` listens on `localhost:PORT`
- Intercepts OpenAI-compatible chat/completion calls
- Pulls context from AtoCore, injects into system prompt
- Forwards to the real model endpoint (OpenAI, Ollama, Anthropic, etc.)
- Returns the response, then captures the interaction back to AtoCore

This makes AtoCore a "drop-in" layer for anything that speaks
OpenAI-compatible HTTP — which is nearly every modern LLM runtime.
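The heart of such a proxy is one pure transform on the request body. A sketch, assuming the standard OpenAI chat-completion payload shape (a `messages` list of `role`/`content` dicts):

```python
import copy

def inject_context(chat_payload: dict, context_pack: str) -> dict:
    """Insert an AtoCore context pack into an OpenAI-style chat payload.

    The proxy runs this between receiving the client request and forwarding
    it to the real model endpoint. The input payload is left untouched.
    """
    out = copy.deepcopy(chat_payload)
    msgs = out.setdefault("messages", [])
    if context_pack:
        if msgs and msgs[0].get("role") == "system":
            # Prepend to an existing system message rather than add a second one.
            msgs[0]["content"] = f"{context_pack}\n\n{msgs[0]['content']}"
        else:
            msgs.insert(0, {"role": "system", "content": context_pack})
    return out
```

Everything else in the proxy is plumbing: listen, call `/context/build`, apply this transform, forward, then POST the interaction back for capture.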
## Knowledge Density Plan

The brain is only as smart as what it knows. Current state: 80 active memories
across 6 projects, 324 candidates in the queue being processed. Target:
**1,000+ curated memories** to become a real master brain.

Mechanisms:

1. **Finish the current triage pass** (324 → ~80 more promotions expected).
2. **Re-extract with a stronger prompt on the existing 236 interactions** —
   tune the LLM extractor system prompt to pull more durable facts and fewer
   ephemeral snapshots.
3. **Ingest all drive/vault documents as memory candidates** (not just chunks).
   Every structured markdown section with a decision/fact/requirement header
   becomes a candidate memory.
4. **Multi-source triangulation**: same fact in 3+ sources = auto-promote to
   confidence 0.95.
5. **Cross-project synthesis**: facts appearing in multiple project contexts
   get promoted to global domain knowledge.
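Mechanism 4 reduces to a small rule. A sketch — the 3-source threshold and the 0.95 confidence come from the list above; the source-key format is illustrative:

```python
def triangulated_confidence(sources: set, base_confidence: float):
    """Apply the multi-source triangulation rule to one fact.

    `sources` holds independent provenance keys (e.g. "vault:doc.md",
    "interaction:123" — illustrative format). Returns (confidence,
    triangulated_flag).
    """
    if len(sources) >= 3:
        # Seen in 3+ independent sources: auto-promote and mark triangulated.
        return 0.95, True
    return base_confidence, False
```

The independence check (vault doc vs drive doc vs interaction, not three copies of one doc) is the part that needs real provenance metadata; the rule itself is this simple.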
## Auto-Organization of Metadata

Currently: `type`, `project`, `confidence`, `status`, `reference_count`. For a
master brain we need more structure, inferred automatically:

| Addition | Purpose | Mechanism |
|---|---|---|
| **Domain tags** (optics, mechanics, firmware, business…) | Cross-cutting retrieval | LLM inference during triage |
| **Temporal scope** (permanent, valid_until_X, transient) | Avoid stale truth | LLM classifies during triage |
| **Source refs** (chunk_id[], interaction_id[]) | Provenance for every fact | Enforced at creation time |
| **Relationships** (contradicts, updates, depends_on) | Memory graph | Triage infers during review |
| **Semantic clusters** | Detect duplicates, find gaps | Weekly HDBSCAN pass on embeddings |

Layer these in progressively — none of them require schema rewrites, just
additional fields and batch jobs.
## Self-Growth Mechanisms

Four loops that make AtoCore grow autonomously:

### 1. Drift detection (nightly)

Compare new chunk embeddings to the existing vector distribution. A chunk more
than X cosine distance from every existing centroid signals a new knowledge
area. Log it to the dashboard; a human decides if it's noise or a domain worth
curating.
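A pure-Python sketch of the distance check. The 0.35 threshold stands in for the unspecified "X" above and is illustrative, not tuned:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def is_new_knowledge_area(chunk_vec, centroids, threshold=0.35):
    # Flag a chunk whose embedding is far from *every* existing centroid;
    # the threshold is an assumed placeholder for the plan's ">X".
    return min(cosine_distance(chunk_vec, c) for c in centroids) > threshold
```

In production this would run over the nightly ingest batch with centroids maintained by the weekly clustering pass.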
### 2. Gap identification (continuous)

Every `/context/build` logs `query + chunks_returned + memories_returned`.
Weekly report: "top 10 queries with weak coverage." Those are targeted
curation opportunities.

### 3. Multi-source triangulation (weekly)

Scan memory content similarity across sources. When a fact appears in 3+
independent sources (vault doc + drive doc + interaction), auto-promote it to
high confidence and mark it "triangulated."

### 4. Active learning prompts (monthly)

Surface "you have 200 p06 memories but only 15 p04 memories. Spend 30 min
curating p04?" via a dashboard digest.
## Robustness Strategy (Flawless Operation Bar)

Current: nightly backup, off-host rsync, health endpoint, 303 tests, harness,
and an enhanced dashboard with pipeline health (this session).

To reach "flawless":

| Gap | Fix | Priority |
|---|---|---|
| Silent pipeline failures | Alerting webhook on harness drop / pipeline skip | P1 |
| Memory mutations untracked | Append-only audit log table | P1 |
| Integrity drift | Nightly FK + vector-chunk parity checks | P1 |
| Schema migrations ad-hoc | Formal migration framework with rollback | P2 |
| Single point of failure | Daily backup to user's main computer (new) | P1 |
| No hot standby | Second instance following primary via WAL | P3 |
| No temporal history | Memory audit + valid_until fields | P2 |
|
||||
|
||||
### Daily Backup to Main Computer

Currently: Dalidou → T420 (192.168.86.39) via rsync.

Add: Dalidou → main computer via a pull (the main computer runs the rsync and
pulls from Dalidou). Pull-based is simpler than push — no need for SSH keys
on Dalidou to reach the Windows machine.

```bash
# On main computer, daily scheduled task:
rsync -a papa@dalidou:/srv/storage/atocore/backups/snapshots/ \
  /path/to/local/atocore-backups/
```

Configure via Windows Task Scheduler or a cron-like runner. Verify weekly
that the latest snapshot is present.

## Human Interface Auto-Evolution

Current: wiki at `/wiki`, regenerates on every request from DB. Synthesis
(the "current state" paragraph at top of project pages) runs **weekly on
Sundays only**. That's why it feels stalled.

Fixes:
1. **Run synthesis daily, not weekly.** It's cheap (one claude call per
project) and keeps the human-readable overview fresh.
2. **Trigger synthesis on major events** — when 5+ new memories land for a
project, regenerate its synthesis.
3. **Add "What's New" feed** — wiki homepage shows recent additions across all
projects (last 7 days of memory promotions, state entries, entities).
4. **Memory timeline view** — project page gets a chronological list of what
we learned when.

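The event trigger in fix 2 is just a counter comparison run after each pipeline step. A minimal sketch — the function name and the count-dict shape are assumptions:

```python
SYNTHESIS_THRESHOLD = 5  # assumed: regenerate after 5+ new memories

def should_resynthesize(project, current_counts, last_synthesis_counts):
    """True when `project` gained enough memories since its last
    synthesis run to justify regenerating its overview."""
    gained = (current_counts.get(project, 0)
              - last_synthesis_counts.get(project, 0))
    return gained >= SYNTHESIS_THRESHOLD
```

The per-project count at the time of the last synthesis would be persisted alongside the synthesis output, so the check is cheap enough to run on every triage pass.
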
## Phased Roadmap (8-10 weeks)

### Phase 1 (week 1-2): Universal Consumption
**Goal: every LLM call is AtoCore-grounded automatically.**

- [ ] Build `atocore-mcp` server (wraps HTTP API, stdio transport)
- [ ] Publish to npm, or run via `pipx` / stdlib HTTP
- [ ] Configure in Claude Desktop (`~/.claude/mcp_servers.json`)
- [ ] Configure in Claude Code (`claude mcp add atocore …`)
- [ ] Extend OpenClaw plugin with `before_prompt_build` PULL
- [ ] Write `atocore-proxy` middleware for Codex/Ollama/generic clients
- [ ] Document configuration for each client

**Success:** open a fresh Claude Code session, ask a project question, and verify
the response references AtoCore memories without manual context commands.

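The core of the `atocore-mcp` item is a JSON-RPC-over-stdio loop that proxies the AtoCore HTTP API. A hedged sketch only: the real MCP protocol also requires an initialize handshake, capability negotiation, and tool schemas, all omitted here, and the tool routing, endpoint path, and base URL are assumptions:

```python
import json
import sys
import urllib.request

BASE_URL = "http://dalidou:8100"  # assumed AtoCore base URL

def call_atocore(path, payload):
    """POST a JSON payload to the AtoCore HTTP API and decode the reply."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

def handle_request(request, fetch=call_atocore):
    """Map one JSON-RPC request dict to a JSON-RPC response dict."""
    if request.get("method") == "tools/call":
        args = request["params"]["arguments"]
        # Single illustrative tool: ground a query via /context/build
        result = fetch("/context/build", {"query": args["query"]})
        return {"jsonrpc": "2.0", "id": request["id"], "result": result}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "method not found"}}

def serve():
    """One JSON-RPC message per line on stdin; responses on stdout."""
    for line in sys.stdin:
        if line.strip():
            out = json.dumps(handle_request(json.loads(line)))
            sys.stdout.write(out + "\n")
            sys.stdout.flush()
```

Keeping the transport logic this thin means the MCP server never duplicates retrieval behavior — everything stays behind the HTTP API.
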
### Phase 2 (week 2-3): Knowledge Density + Wiki Evolution
- [ ] Finish current triage pass (324 candidates → active)
- [ ] Tune extractor prompt for higher promotion rate on durable facts
- [ ] Daily synthesis in cron (not just Sundays)
- [ ] Event-triggered synthesis on significant project changes
- [ ] Wiki "What's New" feed
- [ ] Memory timeline per project

**Target:** 300+ active memories, wiki feels alive daily.

### Phase 3 (week 3-4): Auto-Organization
- [ ] Schema: add `domain_tags`, `valid_until`, `source_refs`, `triangulated_count`
- [ ] Triage prompt upgraded: infer tags + temporal scope + relationships
- [ ] Weekly HDBSCAN clustering of embeddings → dup detection + gap reports
- [ ] Relationship edges in a new `memory_relationships` table

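The `memory_relationships` table might look like the following sketch; the column names are assumptions, consistent with the contradicts/updates/depends_on relationship kinds used elsewhere in this plan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT);
    CREATE TABLE memory_relationships (
        from_memory INTEGER NOT NULL REFERENCES memories(id),
        to_memory   INTEGER NOT NULL REFERENCES memories(id),
        kind        TEXT NOT NULL CHECK (kind IN
                    ('contradicts', 'updates', 'depends_on')),
        created_at  TEXT DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (from_memory, to_memory, kind)
    );
""")
conn.execute("INSERT INTO memories VALUES (1, 'old fact'), (2, 'new fact')")
conn.execute(
    "INSERT INTO memory_relationships (from_memory, to_memory, kind) "
    "VALUES (2, 1, 'updates')"
)
edges = conn.execute(
    "SELECT from_memory, to_memory, kind FROM memory_relationships"
).fetchall()
```

The composite primary key makes re-inference idempotent: triage can re-emit the same edge on every review pass without creating duplicates.
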
### Phase 4 (week 4-5): Robustness Hardening
- [ ] Append-only `memory_audit` table + retrofit mutations
- [ ] Nightly integrity checks (FK validation, orphan detection, parity)
- [ ] Alerting webhook (Discord/email) on pipeline anomalies
- [ ] Daily backup to user's main computer (pull-based)
- [ ] Formal migration framework

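The append-only `memory_audit` item can be sketched with two SQLite triggers — one to record mutations, one to forbid edits to the log itself. Table and column names here are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT, status TEXT);
    CREATE TABLE memory_audit (
        memory_id INTEGER, action TEXT, old_status TEXT, new_status TEXT,
        at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    -- Record every status change on memories.
    CREATE TRIGGER audit_memory_update AFTER UPDATE ON memories
    BEGIN
        INSERT INTO memory_audit (memory_id, action, old_status, new_status)
        VALUES (OLD.id, 'update', OLD.status, NEW.status);
    END;
    -- Forbid edits to the audit log itself: append-only by construction.
    CREATE TRIGGER audit_no_update BEFORE UPDATE ON memory_audit
    BEGIN SELECT RAISE(ABORT, 'audit log is append-only'); END;
""")
conn.execute("INSERT INTO memories VALUES (1, 'fact', 'candidate')")
conn.execute("UPDATE memories SET status = 'active' WHERE id = 1")
trail = conn.execute(
    "SELECT memory_id, action, old_status, new_status FROM memory_audit"
).fetchall()
```

Trigger-based auditing retrofits existing mutation paths for free — no application code has to remember to log.
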
### Phase 5 (week 6-7): Engineering V1 Implementation
Execute the 23 acceptance criteria in `docs/architecture/engineering-v1-acceptance.md`
against p06-polisher as the test bed. The ontology and queries are designed;
this phase implements them.

### Phase 6 (week 8-9): Self-Growth Loops
- [ ] Drift detection (nightly)
- [ ] Gap identification from `/context/build` logs
- [ ] Multi-source triangulation
- [ ] Active learning digest (monthly)
- [ ] Cross-project synthesis

### Phase 7 (ongoing): Scale & Polish
- [ ] Multi-model validation (sonnet triages, opus cross-checks on disagreements)
- [ ] AtoDrive integration (Google Drive as trusted source)
- [ ] Hot standby when real production dependence materializes
- [ ] More MCP tools (write-back, memory search, entity queries)

## Success Criteria

AtoCore is a master brain when:

1. **Zero manual context commands.** A fresh Claude/OpenClaw session answers
a project question without being told "use AtoCore context."
2. **1,000+ active memories** with >90% provenance coverage (every fact
traceable to a source).
3. **Every project has a current, human-readable overview** updated within 24h
of significant changes.
4. **Harness stays >95%** across 20+ fixtures covering all active projects.
5. **Zero silent pipeline failures** for 30 consecutive days (all failures
surface via alert within the hour).
6. **Claude on any task knows what we know** — the user asks "what did we decide
about X?" and the answer is grounded in AtoCore, not reconstructed from
scratch.

## Where We Are Now (2026-04-16)

- ✅ Core infrastructure: HTTP API, SQLite, Chroma, deploy pipeline
- ✅ Capture pipes: Claude Code Stop hook, OpenClaw message hooks
- ✅ Nightly pipeline: backup, extract, triage, synthesis, lint, harness, summary
- ✅ Phase 10: auto-promotion from reinforcement + candidate expiry
- ✅ Dashboard shows pipeline health + interaction totals + all projects
- ⚡ 324 candidates being triaged (down from 439), ~80 active memories, growing
- ❌ No consumption at prompt time (capture-only)
- ❌ Wiki auto-evolves only on Sundays (synthesis cadence)
- ❌ No MCP adapter
- ❌ No daily backup to main computer
- ❌ Engineering V1 not implemented
- ❌ No alerting on pipeline failures

The path is clear. Phase 1 is the keystone.
@@ -33,15 +33,21 @@ read-only additive mode.
at 5% budget ratio. Future identity/preference extraction happens
organically via the nightly LLM extraction pipeline.

- Phase 8 - OpenClaw Integration. As of 2026-04-12 the T420 OpenClaw
helper (`t420-openclaw/atocore.py`) is verified end-to-end against
live Dalidou: health check, auto-context with project detection,
Trusted Project State surfacing, project-memory band, fail-open on
unreachable host. Tested from both the development machine and the
T420 via SSH. The helper covers 15 of the 33 API endpoints — the
excluded endpoints (memory management, interactions, backup) are
correctly scoped to the operator client (`scripts/atocore_client.py`)
per the read-only additive integration model.
- Phase 8 - OpenClaw Integration (baseline only, not primary surface).
As of 2026-04-15 the T420 OpenClaw helper (`t420-openclaw/atocore.py`)
is verified end-to-end against live Dalidou: health check, auto-context
with project detection, Trusted Project State surfacing, project-memory
band, fail-open on unreachable host. Tested from both the development
machine and the T420 via SSH. Scope is narrow: **14 request shapes
against ~44 server routes**, predominantly read-oriented plus
`POST/DELETE /project/state` and `POST /ingest/sources`. Memory
management, interactions capture (covered separately by the OpenClaw
capture plugin), admin/backup, entities, triage, and extraction write
paths remain out of this client's surface by design — they are scoped
to the operator client (`scripts/atocore_client.py`) per the
read-heavy additive integration model. "Primary integration" is
therefore an overclaim; "baseline read + project-state write helper" is
the accurate framing.

### Baseline Complete

@@ -120,25 +126,29 @@ This sits implicitly between Phase 8 (OpenClaw) and Phase 11
(multi-model). Memory-review and engineering-entity commands are
deferred from the shared client until their workflows are exercised.

## What Is Real Today (updated 2026-04-12)
## What Is Real Today (updated 2026-04-16)

- canonical AtoCore runtime on Dalidou (build_sha tracked, deploy.sh verified)
- 33,253 vectors across 5 registered projects
- project registry with template, proposal, register, update, refresh
- 5 registered projects:
  - `p04-gigabit` (483 docs, 5 state entries)
  - `p05-interferometer` (109 docs, 9 state entries)
  - `p06-polisher` (564 docs, 9 state entries)
  - `atomizer-v2` (568 docs, newly ingested 2026-04-12)
  - `atocore` (drive source, 38 state entries)
- 47 active memories (16 project, 16 knowledge, 6 adaptation, 3 identity, 3 preference, 3 episodic)
- canonical AtoCore runtime on Dalidou (`775960c`, deploy.sh verified)
- 33,253 vectors across 6 registered projects
- 234 captured interactions (192 claude-code, 38 openclaw, 4 test)
- 6 registered projects:
  - `p04-gigabit` (483 docs, 15 state entries)
  - `p05-interferometer` (109 docs, 18 state entries)
  - `p06-polisher` (564 docs, 19 state entries)
  - `atomizer-v2` (568 docs, 5 state entries)
  - `abb-space` (6 state entries)
  - `atocore` (drive source, 47 state entries)
- 110 Trusted Project State entries across all projects (decisions, requirements, facts, contacts, milestones)
- 84 active memories (31 project, 23 knowledge, 10 episodic, 8 adaptation, 7 preference, 5 identity)
- context pack assembly with 4 tiers: Trusted Project State > identity/preference > project memories > retrieved chunks
- query-relevance memory ranking with overlap-density scoring
- retrieval eval harness: 18 fixtures, 17/18 passing
- 290 tests passing
- nightly pipeline: backup → cleanup → rsync → LLM extraction (sonnet) → auto-triage
- retrieval eval harness: 18 fixtures, 17/18 passing on live
- 303 tests passing
- nightly pipeline: backup → cleanup → rsync → OpenClaw import → vault refresh → extract → triage → **auto-promote/expire** → weekly synth/lint → **retrieval harness** → **pipeline summary to project state**
- Phase 10 operational: reinforcement-based auto-promotion (ref_count ≥ 3, confidence ≥ 0.7) + stale candidate expiry (14 days unreinforced)
- pipeline health visible in dashboard: interaction totals by client, pipeline last_run, harness results, triage stats
- off-host backup to clawdbot (T420) via rsync
- both Claude Code and OpenClaw capture interactions to AtoCore
- both Claude Code and OpenClaw capture interactions to AtoCore (OpenClaw via `before_agent_start` + `llm_output` plugin, verified live)
- DEV-LEDGER.md as shared operating memory between Claude and Codex
- observability dashboard at GET /admin/dashboard

@@ -146,26 +156,28 @@ deferred from the shared client until their workflows are exercised.

These are the current practical priorities.

1. **Observe and stabilize** — let the nightly pipeline run for a week,
check the dashboard daily, verify memories accumulate correctly
from organic Claude Code and OpenClaw use
2. **Multi-model triage** (Phase 11 entry) — switch auto-triage to a
1. **Observe the enhanced pipeline** — let the nightly pipeline run for a
week with the new harness + summary + auto-promote steps. Check the
dashboard daily. Verify pipeline summary populates correctly.
2. **Knowledge density** — run batch extraction over the full 234
interactions (`--since 2026-01-01`) to mine the backlog for knowledge.
Target: 100+ active memories.
3. **Multi-model triage** (Phase 11 entry) — switch auto-triage to a
different model than the extractor for independent validation
3. **Automated eval in cron** (Phase 12 entry) — add retrieval harness
to the nightly cron so regressions are caught automatically
4. **Atomizer-v2 state entries** — curate Trusted Project State for the
newly ingested Atomizer knowledge base
4. **Fix p04-constraints harness failure** — retrieval doesn't surface
"Zerodur" for p04 constraint queries. Investigate if it's a missing
memory or retrieval ranking issue.

## Next

These are the next major layers after the current stabilization pass.

1. Phase 10 Write-back — confidence-based auto-promotion from
reinforcement signal (a memory reinforced N times auto-promotes)
2. Phase 6 AtoDrive — clarify Google Drive as a trusted operational
1. Phase 6 AtoDrive — clarify Google Drive as a trusted operational
source and ingest from it
3. Phase 13 Hardening — Chroma backup policy, monitoring, alerting,
2. Phase 13 Hardening — Chroma backup policy, monitoring, alerting,
failure visibility beyond log files
3. Engineering V1 implementation sprint — once knowledge density is
sufficient and the pipeline feels boring and dependable

## Later

@@ -187,9 +199,10 @@ These remain intentionally deferred.
plugin now exists (`openclaw-plugins/atocore-capture/`), interactions
flow. Write-back of promoted memories back to OpenClaw's own memory
system is still deferred.
- ~~automatic memory promotion~~ — auto-triage now handles promote/reject
for extraction candidates. Reinforcement-based auto-promotion
(Phase 10) is the remaining piece.
- ~~automatic memory promotion~~ — Phase 10 complete: auto-triage handles
extraction candidates, reinforcement-based auto-promotion graduates
candidates referenced 3+ times to active, stale candidates expire
after 14 days unreinforced.
- ~~reflection loop integration~~ — fully operational: capture (both
clients) → reinforce (automatic) → extract (nightly cron, sonnet) →
auto-triage (nightly, sonnet) → only needs_human reaches the user.

140
docs/windows-backup-setup.md
Normal file
@@ -0,0 +1,140 @@
# Windows Main-Computer Backup Setup

The AtoCore backup pipeline runs nightly on Dalidou and already pushes snapshots
off-host to the T420 (`papa@192.168.86.39`). This doc sets up a **second**,
pull-based daily backup to your Windows main computer at
`C:\Users\antoi\Documents\ATOCore_Backups\`.

Pull-based means the Windows machine pulls from Dalidou. This is simpler than
push because Dalidou doesn't need SSH keys to reach Windows, and the backup
only runs when the Windows machine is powered on and can reach Dalidou.

## Prerequisites

- Windows 10/11 with OpenSSH client (built-in since Win10 1809)
- SSH key-based auth to `papa@dalidou` already working (you're using it today)
- `C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1` present

## Test the script manually

```powershell
powershell.exe -ExecutionPolicy Bypass -File `
  C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1
```

Expected output:
```
[timestamp] === AtoCore backup pull starting ===
[timestamp] Dalidou reachable.
[timestamp] Pulling snapshots via scp...
[timestamp] Pulled N snapshots successfully (total X MB, latest: ...)
[timestamp] === backup complete ===
```

Target directory: `C:\Users\antoi\Documents\ATOCore_Backups\snapshots\`
Logs: `C:\Users\antoi\Documents\ATOCore_Backups\_logs\backup-*.log`

## Register the Task Scheduler task

### Option A — automatic registration (recommended)

Run this PowerShell command **as your user** (no admin needed — uses HKCU task):

```powershell
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
  -Argument '-ExecutionPolicy Bypass -NonInteractive -WindowStyle Hidden -File C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1'

# Run daily at 10:00 local time; if missed (computer off), run at next logon
$trigger = New-ScheduledTaskTrigger -Daily -At 10:00AM
$trigger.StartBoundary = (Get-Date -Format 'yyyy-MM-ddTHH:mm:ss')

$settings = New-ScheduledTaskSettingsSet `
  -AllowStartIfOnBatteries `
  -DontStopIfGoingOnBatteries `
  -StartWhenAvailable `
  -ExecutionTimeLimit (New-TimeSpan -Minutes 10) `
  -RestartCount 2 `
  -RestartInterval (New-TimeSpan -Minutes 30)

Register-ScheduledTask -TaskName 'AtoCore Backup Pull' `
  -Description 'Daily pull of AtoCore backup snapshots from Dalidou' `
  -Action $action -Trigger $trigger -Settings $settings `
  -User $env:USERNAME
```

Key settings:
- `-StartWhenAvailable`: if the computer was off at 10:00, run as soon as it
comes online
- `-AllowStartIfOnBatteries`: works on laptop battery too
- `-ExecutionTimeLimit 10min`: kill hung tasks
- `-RestartCount 2`: retry twice if it fails (Dalidou temporarily unreachable)

### Option B — Task Scheduler GUI

1. Open Task Scheduler (`taskschd.msc`)
2. Create Basic Task → name: `AtoCore Backup Pull`
3. Trigger: Daily, 10:00 AM, recur every 1 day
4. Action: Start a program
   - Program: `powershell.exe`
   - Arguments: `-ExecutionPolicy Bypass -NonInteractive -WindowStyle Hidden -File "C:\Users\antoi\ATOCore\scripts\windows\atocore-backup-pull.ps1"`
5. Finish, then edit the task:
   - Settings tab: check "Run task as soon as possible after a scheduled start is missed"
   - Settings tab: "If the task fails, restart every 30 minutes, up to 2 times"
   - Conditions tab: uncheck "Start only if computer is on AC power" (if you want it on battery)

## Verify

After the first scheduled run:

```powershell
# Most recent log
Get-ChildItem C:\Users\antoi\Documents\ATOCore_Backups\_logs\ |
  Sort-Object Name -Descending |
  Select-Object -First 1 |
  Get-Content

# Latest snapshot present?
Get-ChildItem C:\Users\antoi\Documents\ATOCore_Backups\snapshots\ |
  Sort-Object Name -Descending |
  Select-Object -First 3
```

## Unregister (if needed)

```powershell
Unregister-ScheduledTask -TaskName 'AtoCore Backup Pull' -Confirm:$false
```

## How it behaves

- **Computer on, Dalidou reachable**: pulls latest snapshots silently in ~15s
- **Computer on, Dalidou unreachable** (remote work, network down): fail-open,
exits without error, logs "Dalidou unreachable"
- **Computer off at scheduled time**: Task Scheduler runs it as soon as the
computer wakes up
- **Many days off**: one run catches up; scp only transfers files not already
present (snapshots are date-stamped directories, idempotent overwrites)

## What gets backed up

The snapshots tree contains:
- `YYYYMMDDTHHMMSSZ/config/` — project registry, AtoCore config
- `YYYYMMDDTHHMMSSZ/db/` — SQLite snapshot of all memory, state, interactions
- `YYYYMMDDTHHMMSSZ/backup-metadata.json` — SHA, timestamp, source info

Chroma vectors are **not** in the snapshot by default
(`ATOCORE_BACKUP_CHROMA=false` on Dalidou). They can be rebuilt from the
source documents if lost. To include them, set `ATOCORE_BACKUP_CHROMA=true`
in the Dalidou cron environment.

## Three-tier backup summary

After this setup:

| Tier | Location | Cadence | Purpose |
|---|---|---|---|
| Live | Dalidou `/srv/storage/atocore/backups/snapshots/` | Nightly 03:00 UTC | Fast restore |
| Off-host | T420 `papa@192.168.86.39:/home/papa/atocore-backups/` | Nightly after Dalidou | Dalidou dies |
| User machine | `C:\Users\antoi\Documents\ATOCore_Backups\` | Daily 10:00 local | Full home-network failure |

Three independent copies. Any two can be lost simultaneously without data loss.

63
openclaw-plugins/atocore-capture/handler.js
Normal file
@@ -0,0 +1,63 @@
/**
 * AtoCore capture hook for OpenClaw.
 *
 * Listens on message:received (buffer prompt) and message:sent (POST pair).
 * Fail-open: errors are caught silently.
 */

const BASE_URL = process.env.ATOCORE_BASE_URL || "http://dalidou:8100";
const MIN_LEN = 15;
const MAX_RESP = 50000;

let lastPrompt = null; // simple single-slot buffer

const atocoreCaptureHook = async (event) => {
  try {
    if (process.env.ATOCORE_CAPTURE_DISABLED === "1") return;

    if (event.type === "message" && event.action === "received") {
      const content = (event.context?.content || "").trim();
      if (content.length >= MIN_LEN && !content.startsWith("<")) {
        lastPrompt = { text: content, ts: Date.now() };
      }
      return;
    }

    if (event.type === "message" && event.action === "sent") {
      if (!event.context?.success) return;
      const response = (event.context?.content || "").trim();
      if (!response || !lastPrompt) return;

      // Discard stale prompts (>5 min old)
      if (Date.now() - lastPrompt.ts > 300000) {
        lastPrompt = null;
        return;
      }

      const prompt = lastPrompt.text;
      lastPrompt = null;

      const body = JSON.stringify({
        prompt,
        response: response.length > MAX_RESP
          ? response.slice(0, MAX_RESP) + "\n\n[truncated]"
          : response,
        client: "openclaw",
        session_id: event.sessionKey || "",
        project: "",
        reinforce: true,
      });

      fetch(BASE_URL.replace(/\/$/, "") + "/interactions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body,
        signal: AbortSignal.timeout(10000),
      }).catch(() => {});
    }
  } catch {
    // fail-open: never crash the gateway
  }
};

export default atocoreCaptureHook;

79
scripts/auto_promote_reinforced.py
Normal file
@@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""Auto-promote reinforced candidates + expire stale ones.

Phase 10: reinforcement-based auto-promotion. Candidates referenced
by 3+ interactions with confidence >= 0.7 graduate to active.
Candidates unreinforced for 14+ days are auto-rejected.

Usage:
    python3 scripts/auto_promote_reinforced.py [--base-url URL] [--dry-run]
"""

from __future__ import annotations

import argparse
import json
import os
import sys

# Allow importing from src/ when run from repo root
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))

from atocore.memory.service import auto_promote_reinforced, expire_stale_candidates


def main() -> None:
    parser = argparse.ArgumentParser(description="Auto-promote + expire candidates")
    parser.add_argument("--dry-run", action="store_true", help="Report only, don't change anything")
    parser.add_argument("--min-refs", type=int, default=3, help="Min reference_count for promotion")
    parser.add_argument("--min-confidence", type=float, default=0.7, help="Min confidence for promotion")
    parser.add_argument("--expire-days", type=int, default=14, help="Days before unreinforced candidates expire")
    args = parser.parse_args()

    if args.dry_run:
        print("DRY RUN — no changes will be made")
        # For dry-run, query directly and report
        from atocore.models.database import get_connection
        from datetime import datetime, timedelta, timezone

        cutoff_promote = (datetime.now(timezone.utc) - timedelta(days=args.expire_days)).strftime("%Y-%m-%d %H:%M:%S")
        cutoff_expire = cutoff_promote

        with get_connection() as conn:
            promotable = conn.execute(
                "SELECT id, content, memory_type, project, confidence, reference_count "
                "FROM memories WHERE status = 'candidate' "
                "AND COALESCE(reference_count, 0) >= ? AND confidence >= ? "
                "AND last_referenced_at >= ?",
                (args.min_refs, args.min_confidence, cutoff_promote),
            ).fetchall()
            expirable = conn.execute(
                "SELECT id, content, memory_type, project "
                "FROM memories WHERE status = 'candidate' "
                "AND COALESCE(reference_count, 0) = 0 AND created_at < ?",
                (cutoff_expire,),
            ).fetchall()

        print(f"\nWould promote {len(promotable)} candidates:")
        for r in promotable:
            print(f"  [{r['memory_type']}] refs={r['reference_count']} conf={r['confidence']:.2f} | {r['content'][:80]}...")
        print(f"\nWould expire {len(expirable)} stale candidates:")
        for r in expirable:
            print(f"  [{r['memory_type']}] {r['project'] or 'global'} | {r['content'][:80]}...")
        return

    promoted = auto_promote_reinforced(
        min_reference_count=args.min_refs,
        min_confidence=args.min_confidence,
    )
    expired = expire_stale_candidates(max_age_days=args.expire_days)

    print(f"promoted={len(promoted)} expired={len(expired)}")
    if promoted:
        print(f"Promoted IDs: {promoted}")
    if expired:
        print(f"Expired IDs: {expired}")


if __name__ == "__main__":
    main()
@@ -29,6 +29,7 @@ import os
import shutil
import subprocess
import sys
import time
import tempfile
import urllib.error
import urllib.parse
@@ -63,9 +64,11 @@ Rules:

3. CONTRADICTS when the candidate *conflicts* with an existing active memory (not a duplicate, but states something that can't both be true). Set `conflicts_with` to the existing memory id. This flags the tension for human review instead of silently rejecting or double-storing. Examples: "Option A selected" vs "Option B selected" for the same decision; "uses material X" vs "uses material Y" for the same component.

4. NEEDS_HUMAN when you're genuinely unsure — the candidate might be valuable but you can't tell without domain knowledge. This should be rare (< 20% of candidates).
4. OPENCLAW-CURATED content (candidate content starts with "From OpenClaw/"): apply a MUCH LOWER bar. OpenClaw's SOUL.md, USER.md, MEMORY.md, MODEL-ROUTING.md, and dated memory/*.md files are ALREADY curated by OpenClaw as canonical continuity. Promote unless clearly wrong or a genuine duplicate. Do NOT reject OpenClaw content as "process rule belongs elsewhere" or "session log" — that's exactly what AtoCore wants to absorb. Session events, project updates, stakeholder notes, and decisions from OpenClaw daily memory files ARE valuable context and should promote.

5. Output ONLY the JSON object. No prose, no markdown, no explanation outside the reason field."""
5. NEEDS_HUMAN when you're genuinely unsure — the candidate might be valuable but you can't tell without domain knowledge. This should be rare (< 20% of candidates).

6. Output ONLY the JSON object. No prose, no markdown, no explanation outside the reason field."""

_sandbox_cwd = None

@@ -129,22 +132,33 @@ def triage_one(candidate, active_memories, model, timeout_s):
        user_message,
    ]

    try:
        completed = subprocess.run(
            args, capture_output=True, text=True,
            timeout=timeout_s, cwd=get_sandbox_cwd(),
            encoding="utf-8", errors="replace",
        )
    except subprocess.TimeoutExpired:
        return {"verdict": "needs_human", "confidence": 0.0, "reason": "triage model timed out"}
    except Exception as exc:
        return {"verdict": "needs_human", "confidence": 0.0, "reason": f"subprocess error: {exc}"}
    # Retry with exponential backoff on transient failures (rate limits etc)
    last_error = ""
    for attempt in range(3):
        if attempt > 0:
            time.sleep(2 ** attempt)  # 2s, 4s
        try:
            completed = subprocess.run(
                args, capture_output=True, text=True,
                timeout=timeout_s, cwd=get_sandbox_cwd(),
                encoding="utf-8", errors="replace",
            )
        except subprocess.TimeoutExpired:
            last_error = "triage model timed out"
            continue
        except Exception as exc:
            last_error = f"subprocess error: {exc}"
            continue

    if completed.returncode != 0:
        return {"verdict": "needs_human", "confidence": 0.0, "reason": f"claude exit {completed.returncode}"}
        if completed.returncode == 0:
            raw = (completed.stdout or "").strip()
            return parse_verdict(raw)

    raw = (completed.stdout or "").strip()
    return parse_verdict(raw)
        # Capture stderr for diagnostics (truncate to 200 chars)
        stderr = (completed.stderr or "").strip()[:200]
        last_error = f"claude exit {completed.returncode}: {stderr}" if stderr else f"claude exit {completed.returncode}"

    return {"verdict": "needs_human", "confidence": 0.0, "reason": last_error}


def parse_verdict(raw):
@@ -211,6 +225,13 @@ def main():
    promoted = rejected = needs_human = errors = 0

    for i, cand in enumerate(candidates, 1):
        # Light rate-limit pacing: 0.5s between triage calls so a burst
        # doesn't overwhelm the claude CLI's backend. With ~60s per call
        # this is negligible overhead but avoids the "all-failed" pattern
        # we saw on large batches.
        if i > 1:
            time.sleep(0.5)

        project = cand.get("project") or ""
        if project not in active_cache:
            active_cache[project] = fetch_active_memories_for_project(args.base_url, project)
@@ -1,12 +1,15 @@
"""Host-side LLM batch extraction — pure HTTP client, no atocore imports.
"""Host-side LLM batch extraction — HTTP client + shared prompt module.

Fetches interactions from the AtoCore API, runs ``claude -p`` locally
for each, and POSTs candidates back. Zero dependency on atocore source
or Python packages — only uses stdlib + the ``claude`` CLI on PATH.
for each, and POSTs candidates back. Uses stdlib + the ``claude`` CLI
on PATH, plus the stdlib-only shared prompt/parser module at
``atocore.memory._llm_prompt`` to eliminate prompt/parser drift
against the in-container extractor (R12).

This is necessary because the ``claude`` CLI is on the Dalidou HOST
but not inside the Docker container, and the host's Python doesn't
have the container's dependencies (pydantic_settings, etc.).
have the container's dependencies (pydantic_settings, etc.) — so we
only import the one stdlib-only module, not the full atocore package.
"""

from __future__ import annotations
@@ -23,88 +26,26 @@ import urllib.parse
import urllib.request
from datetime import datetime, timezone

# R12: share the prompt + parser with the in-container extractor so
# the two paths can't drift. The imported module is stdlib-only by
# design; see src/atocore/memory/_llm_prompt.py.
_SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
_SRC_DIR = os.path.abspath(os.path.join(_SCRIPT_DIR, "..", "src"))
if _SRC_DIR not in sys.path:
    sys.path.insert(0, _SRC_DIR)

from atocore.memory._llm_prompt import (  # noqa: E402
    MEMORY_TYPES,
    SYSTEM_PROMPT,
    build_user_message,
    normalize_candidate_item,
    parse_llm_json_array,
)
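The sys.path bootstrap above can be exercised in isolation. This sketch fabricates a throwaway stdlib-only module (`shared_prompt` is a hypothetical stand-in for `_llm_prompt`, written to a scratch directory) to show why the guard-then-insert pattern needs no package install:

```python
import importlib
import os
import sys
import tempfile

# Hypothetical stand-in for the stdlib-only shared module: write a tiny
# module into a scratch "src" dir, then import it the same way the host
# script imports atocore.memory._llm_prompt, by prepending the dir to
# sys.path instead of installing any package.
src_dir = tempfile.mkdtemp()
with open(os.path.join(src_dir, "shared_prompt.py"), "w") as f:
    f.write('MEMORY_TYPES = {"project", "knowledge"}\n')

if src_dir not in sys.path:       # idempotent, like the script's guard
    sys.path.insert(0, src_dir)   # front of path so it wins name lookups

shared = importlib.import_module("shared_prompt")
print(sorted(shared.MEMORY_TYPES))  # ['knowledge', 'project']
```

The same constants are then available to the host script without pulling in pydantic_settings or anything else from the container image.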
DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
MAX_RESPONSE_CHARS = 8000
MAX_PROMPT_CHARS = 2000

MEMORY_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}

SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.

AtoCore is the brain for Atomaste's engineering work. Known projects:
p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
abb-space. Unknown project names — still tag them, the system auto-detects.

Your job is to emit SIGNALS that matter for future context. Be aggressive:
err on the side of capturing useful signal. Triage filters noise downstream.

WHAT TO EMIT (in order of importance):

1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
   - "Schott quote received for ABB-Space" (event + project)
   - "Cédric asked about p06 firmware timing" (stakeholder event)
   - "Still waiting on Zygo lead-time from Nabeel" (blocker status)
   - "p05 vendor decision needs to happen this week" (action item)

2. DECISIONS AND CHOICES — anything that commits to a direction:
   - "Going with Zygo Verifire SV for p05" (decision)
   - "Dropping stitching from primary workflow" (design choice)
   - "USB SSD mandatory, not SD card" (architectural commitment)

3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
   - "CTE gradient dominates WFE at F/1.2" (materials insight)
   - "Preston model breaks below 5N because contact assumption fails"
   - "m=1 coma NOT correctable by force modulation" (controls insight)
   Test: would a competent engineer NEED experience to know this?
   If it's textbook/google-findable, skip it.

4. STAKEHOLDER AND VENDOR EVENTS:
   - "Email sent to Nabeel 2026-04-13 asking for lead time"
   - "Meeting with Jason on Table 7 next Tuesday"
   - "Starspec wants updated CAD by Friday"

5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
   - "Antoine prefers OAuth over API keys"
   - "Extraction stays off the capture hot path"

WHAT TO SKIP:

- Pure conversational filler ("ok thanks", "let me check")
- Instructional help content ("run this command", "here's how to...")
- Obvious textbook facts anyone can google in 30 seconds
- Session meta-chatter ("let me commit this", "deploy running")
- Transient system state snapshots ("36 active memories right now")

CANDIDATE TYPES — choose the best fit:

- project — a fact, decision, or event specific to one named project
- knowledge — durable engineering insight (use domain, not project)
- preference — how Antoine works / wants things done
- adaptation — a standing rule or adjustment to behavior
- episodic — a stakeholder event or milestone worth remembering

DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
physics, materials, optics, mechanics, manufacturing, metrology,
controls, software, math, finance, business

TRUST HIERARCHY:

- project-specific: set project to the project id, leave domain empty
- domain knowledge: set domain, leave project empty
- events/activity: use project, type=project or episodic
- one conversation can produce MULTIPLE candidates — emit them all

OUTPUT RULES:

- Each candidate content under 250 characters, stands alone
- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
- Raw JSON array, no prose, no markdown fences
- Empty array [] is fine when the conversation has no durable signal

Each element:
{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""

_sandbox_cwd = None
@@ -175,14 +116,7 @@ def extract_one(prompt, response, project, model, timeout_s):
    if not shutil.which("claude"):
        return [], "claude_cli_missing"

    prompt_excerpt = prompt[:MAX_PROMPT_CHARS]
    response_excerpt = response[:MAX_RESPONSE_CHARS]
    user_message = (
        f"PROJECT HINT (may be empty): {project}\n\n"
        f"USER PROMPT:\n{prompt_excerpt}\n\n"
        f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
        "Return the JSON array now."
    )
    user_message = build_user_message(prompt, response, project)

    args = [
        "claude", "-p",
@@ -192,85 +126,56 @@ def extract_one(prompt, response, project, model, timeout_s):
        user_message,
    ]

    try:
        completed = subprocess.run(
            args, capture_output=True, text=True,
            timeout=timeout_s, cwd=get_sandbox_cwd(),
            encoding="utf-8", errors="replace",
        )
    except subprocess.TimeoutExpired:
        return [], "timeout"
    except Exception as exc:
        return [], f"subprocess_error: {exc}"
    # Retry with exponential backoff on transient failures (rate limits etc)
    import time as _time
    last_error = ""
    for attempt in range(3):
        if attempt > 0:
            _time.sleep(2 ** attempt)  # 2s, 4s
        try:
            completed = subprocess.run(
                args, capture_output=True, text=True,
                timeout=timeout_s, cwd=get_sandbox_cwd(),
                encoding="utf-8", errors="replace",
            )
        except subprocess.TimeoutExpired:
            last_error = "timeout"
            continue
        except Exception as exc:
            last_error = f"subprocess_error: {exc}"
            continue

    if completed.returncode != 0:
        return [], f"exit_{completed.returncode}"
        if completed.returncode == 0:
            raw = (completed.stdout or "").strip()
            return parse_candidates(raw, project), ""

    raw = (completed.stdout or "").strip()
    return parse_candidates(raw, project), ""
        # Capture stderr for diagnostics (truncate to 200 chars)
        stderr = (completed.stderr or "").strip()[:200]
        last_error = f"exit_{completed.returncode}: {stderr}" if stderr else f"exit_{completed.returncode}"

    return [], last_error
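The retry shape this hunk introduces (bounded attempts, `2 ** attempt` sleeps, remember the last error, fall through to a failure return) can be sketched generically. `run_with_retry` and `flaky` are illustrative names, with delays shrunk so the sketch runs instantly:

```python
import time

def run_with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn() up to `attempts` times, sleeping base_delay * 2**attempt
    between tries, the same 2s/4s shape as the extractor, scaled down."""
    last_error = None
    for attempt in range(attempts):
        if attempt > 0:
            time.sleep(base_delay * (2 ** attempt))
        try:
            return fn(), ""
        except RuntimeError as exc:  # stand-in for timeout/subprocess errors
            last_error = f"error: {exc}"
    return None, last_error

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds: simulates transient CLI rate limiting.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "[]"

result, err = run_with_retry(flaky)
print(result, err, calls["n"])  # [] '' 3
```

In the real extractor the retried operation is the `subprocess.run` call; here a `RuntimeError` stands in for both the timeout and generic-exception branches.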
def parse_candidates(raw, interaction_project):
    """Parse model JSON output into candidate dicts."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`")
        nl = text.find("\n")
        if nl >= 0:
            text = text[nl + 1:]
        if text.endswith("```"):
            text = text[:-3]
        text = text.strip()

    if not text or text == "[]":
        return []

    if not text.lstrip().startswith("["):
        start = text.find("[")
        end = text.rfind("]")
        if start >= 0 and end > start:
            text = text[start:end + 1]

    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return []

    if not isinstance(parsed, list):
        return []
    """Parse model JSON output into candidate dicts.

    Stripping + per-item normalization come from the shared
    ``_llm_prompt`` module. Host-side project attribution: interaction
    scope wins, otherwise keep the model's tag (the API's own R9
    registry-check will happen server-side in the container on write;
    here we preserve the signal instead of dropping it).
    """
    results = []
    for item in parsed:
        if not isinstance(item, dict):
    for item in parse_llm_json_array(raw):
        normalized = normalize_candidate_item(item)
        if normalized is None:
            continue
        mem_type = str(item.get("type") or "").strip().lower()
        content = str(item.get("content") or "").strip()
        model_project = str(item.get("project") or "").strip()
        domain = str(item.get("domain") or "").strip().lower()
        # R9 trust hierarchy: interaction scope always wins when set.
        # For unscoped interactions, keep model's project tag even if
        # unregistered — the system will detect new projects/leads.
        if interaction_project:
            project = interaction_project
        elif model_project:
            project = model_project
        else:
            project = ""
        # Domain knowledge: embed tag in content for cross-project retrieval
        if domain and not project:
            content = f"[{domain}] {content}"
        conf = item.get("confidence", 0.5)
        if mem_type not in MEMORY_TYPES or not content:
            continue
        try:
            conf = max(0.0, min(1.0, float(conf)))
        except (TypeError, ValueError):
            conf = 0.5
        project = interaction_project or normalized["project"] or ""
        results.append({
            "memory_type": mem_type,
            "content": content[:1000],
            "memory_type": normalized["type"],
            "content": normalized["content"],
            "project": project,
            "confidence": conf,
            "confidence": normalized["confidence"],
        })
    return results
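The parsing that moved into the shared module follows the shape of the old inline code. The sketch below re-creates that shape for illustration (`parse_json_array` is not the real `parse_llm_json_array`, whose behavior may differ in detail): strip markdown fences, slice out the outermost `[...]` span as a fallback, and fail closed to an empty list:

```python
import json

def parse_json_array(raw):
    """Best-effort parse of a model reply into a JSON list, mirroring the
    old inline logic: drop markdown fences, then fall back to slicing out
    the outermost [...] span before json.loads."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`")
        nl = text.find("\n")
        if nl >= 0:
            text = text[nl + 1:]  # drop the "json" language-tag line
        text = text.strip()
    if not text.lstrip().startswith("["):
        start, end = text.find("["), text.rfind("]")
        if start >= 0 and end > start:
            text = text[start:end + 1]
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return []
    return parsed if isinstance(parsed, list) else []

fenced = '```json\n[{"type": "project", "content": "x", "confidence": 0.5}]\n```'
chatty = 'Here you go: [1, 2, 3] hope that helps'
print(parse_json_array(fenced))      # the fenced array survives
print(parse_json_array(chatty))      # the bracketed span is sliced out
print(parse_json_array("not json"))  # fails closed to []
```

Failing closed matters here: a malformed model reply should yield zero candidates rather than crash the batch run.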
@@ -299,10 +204,14 @@ def main():
    total_persisted = 0
    errors = 0

    for summary in interaction_summaries:
    import time as _time
    for ix, summary in enumerate(interaction_summaries):
        resp_chars = summary.get("response_chars", 0) or 0
        if resp_chars < 50:
            continue
        # Light pacing between calls to avoid bursting the claude CLI
        if ix > 0:
            _time.sleep(0.5)
        iid = summary["id"]
        try:
            raw = api_get(
@@ -42,7 +42,7 @@ from pathlib import Path

DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
DEFAULT_OPENCLAW_HOST = os.environ.get("ATOCORE_OPENCLAW_HOST", "papa@192.168.86.39")
DEFAULT_OPENCLAW_PATH = os.environ.get("ATOCORE_OPENCLAW_PATH", "/home/papa/openclaw-workspace")
DEFAULT_OPENCLAW_PATH = os.environ.get("ATOCORE_OPENCLAW_PATH", "/home/papa/clawd")

# Files to pull and how to classify them
DURABLE_FILES = [
@@ -218,8 +218,8 @@
        "Tailscale"
      ],
      "expect_absent": [
        "GigaBIT"
        "[Source: p04-gigabit/"
      ],
      "notes": "New p06 memory: Tailscale mesh for RPi remote access"
      "notes": "New p06 memory: Tailscale mesh for RPi remote access. Cross-project guard is a source-path check, not a word blacklist: the polisher ARCHITECTURE.md legitimately mentions the GigaBIT M1 mirror (it is what the polisher is built for), so testing for absence of that word produces false positives. The real invariant is that no p04 source chunks are retrieved into p06 context."
    }
  ]
scripts/seed_project_state.py (new file, 159 lines)
@@ -0,0 +1,159 @@
#!/usr/bin/env python3
"""Seed Trusted Project State entries for all active projects.

Populates the project_state table with curated decisions, requirements,
facts, contacts, and milestones so context packs have real content
in the highest-trust tier.

Usage:
    python3 scripts/seed_project_state.py --base-url http://dalidou:8100
    python3 scripts/seed_project_state.py --base-url http://dalidou:8100 --dry-run
"""

from __future__ import annotations

import argparse
import json
import urllib.request
import sys

# Each entry: (project, category, key, value, source)
SEED_ENTRIES: list[tuple[str, str, str, str, str]] = [
    # ---- p04-gigabit (GigaBIT M1 1.2m Primary Mirror) ----
    ("p04-gigabit", "fact", "mirror-spec",
     "1.2m borosilicate primary mirror for GigaBIT telescope. F/1.5, lightweight isogrid back structure.",
     "CDR docs + vault"),
    ("p04-gigabit", "decision", "back-structure",
     "Option B selected: conical isogrid back structure with variable rib density. Chosen over flat-back for stiffness-to-weight ratio.",
     "CDR 2026-01"),
    ("p04-gigabit", "decision", "polishing-vendor",
     "ABB Space (formerly INO) selected as polishing vendor. Contract includes computer-controlled polishing (CCP) and ion beam figuring (IBF).",
     "Entente de service 2026-01"),
    ("p04-gigabit", "requirement", "surface-quality",
     "Surface figure accuracy: < 25nm RMS after final figuring. Microroughness: < 2nm RMS.",
     "CDR requirements"),
    ("p04-gigabit", "contact", "abb-space",
     "ABB Space (INO), Quebec City. Primary contact for mirror polishing, CCP, and IBF. Project lead: coordinating FDR deliverables.",
     "vendor records"),
    ("p04-gigabit", "milestone", "fdr",
     "Final Design Review (FDR) in preparation. Deliverables include interface drawings, thermal analysis, and updated error budget.",
     "project timeline"),

    # ---- p05-interferometer (Fullum Interferometer) ----
    ("p05-interferometer", "fact", "system-overview",
     "Custom Fizeau interferometer for in-situ metrology of large optics. Designed for the Fullum observatory polishing facility.",
     "vault docs"),
    ("p05-interferometer", "decision", "cgh-design",
     "Computer-generated hologram (CGH) selected for null testing of the 1.2m mirror. Vendor: Diffraction International.",
     "vendor correspondence"),
    ("p05-interferometer", "requirement", "measurement-accuracy",
     "Measurement accuracy target: lambda/20 (< 30nm PV) for surface figure verification.",
     "system requirements"),
    ("p05-interferometer", "fact", "laser-source",
     "HeNe laser source at 632.8nm. Beam expansion to cover full 1.2m aperture via diverger + CGH.",
     "optical design docs"),
    ("p05-interferometer", "contact", "diffraction-intl",
     "Diffraction International: CGH vendor. Fabricates the computer-generated hologram for null testing.",
     "vendor records"),

    # ---- p06-polisher (Polisher Suite / P11-Polisher-Fullum) ----
    ("p06-polisher", "fact", "suite-overview",
     "Integrated CNC polishing suite for the Fullum observatory. Includes 3-axis polishing machine, metrology integration, and real-time process control.",
     "vault docs"),
    ("p06-polisher", "decision", "control-architecture",
     "Beckhoff TwinCAT 3 selected for real-time motion control. EtherCAT fieldbus for servo drives and I/O.",
     "architecture docs"),
    ("p06-polisher", "decision", "firmware-split",
     "Firmware split into safety layer (PLC-level interlocks) and application layer (trajectory generation, adaptive dwell-time).",
     "architecture docs"),
    ("p06-polisher", "requirement", "axis-travel",
     "Z-axis: 200mm travel for tool engagement. X/Y: covers 1.2m mirror diameter plus overshoot margin.",
     "mechanical requirements"),
    ("p06-polisher", "fact", "telemetry",
     "Real-time telemetry via MQTT. Metrics: spindle RPM, force sensor, temperature probes, position feedback at 1kHz.",
     "control design docs"),
    ("p06-polisher", "contact", "fullum-observatory",
     "Fullum Observatory: site where the polishing suite will be installed. Provides infrastructure (power, vibration isolation, clean environment).",
     "project records"),

    # ---- atomizer-v2 ----
    ("atomizer-v2", "fact", "product-overview",
     "Atomizer V2: internal project management and multi-agent orchestration platform. War-room based task coordination.",
     "repo docs"),
    ("atomizer-v2", "decision", "projects-first-architecture",
     "Migration to projects-first architecture: each project is a workspace with its own agents, tasks, and knowledge.",
     "war-room-migration-plan-v2.md"),

    # ---- abb-space (P08) ----
    ("abb-space", "fact", "contract-overview",
     "ABB Space mirror polishing contract. Phase 1: spherical mirror polishing (200mm). Schott Zerodur substrate.",
     "quotes + correspondence"),
    ("abb-space", "contact", "schott",
     "Schott AG: substrate supplier for Zerodur mirror blanks. Quote received for 200mm blank.",
     "vendor records"),

    # ---- atocore ----
    ("atocore", "fact", "architecture",
     "AtoCore: runtime memory and knowledge layer. FastAPI + SQLite + ChromaDB. Hosted on Dalidou (Docker). Nightly pipeline: backup, extract, triage, synthesis.",
     "codebase"),
    ("atocore", "decision", "no-api-keys",
     "No API keys allowed in AtoCore. LLM-assisted features use OAuth via 'claude -p' CLI or equivalent CLI-authenticated paths.",
     "DEV-LEDGER 2026-04-12"),
    ("atocore", "decision", "storage-separation",
     "Human-readable sources (vault, drive) and machine operational storage (SQLite, ChromaDB) must remain separate. Machine DB is derived state.",
     "AGENTS.md"),
    ("atocore", "decision", "extraction-off-hot-path",
     "Extraction stays off the capture hot path. Batch/manual only. Never block interaction recording with extraction.",
     "DEV-LEDGER 2026-04-11"),
]


def main() -> None:
    parser = argparse.ArgumentParser(description="Seed Trusted Project State")
    parser.add_argument("--base-url", default="http://dalidou:8100")
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args()

    base = args.base_url.rstrip("/")
    created = 0
    skipped = 0
    errors = 0

    for project, category, key, value, source in SEED_ENTRIES:
        if args.dry_run:
            print(f" [DRY] {project}/{category}/{key}: {value[:60]}...")
            created += 1
            continue

        body = json.dumps({
            "project": project,
            "category": category,
            "key": key,
            "value": value,
            "source": source,
            "confidence": 1.0,
        }).encode()
        req = urllib.request.Request(
            f"{base}/project/state",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        try:
            resp = urllib.request.urlopen(req, timeout=10)
            result = json.loads(resp.read())
            if result.get("created"):
                created += 1
                print(f" + {project}/{category}/{key}")
            else:
                skipped += 1
                print(f" = {project}/{category}/{key} (already exists)")
        except Exception as e:
            errors += 1
            print(f" ! {project}/{category}/{key}: {e}", file=sys.stderr)

    print(f"\nDone: {created} created, {skipped} skipped, {errors} errors")


if __name__ == "__main__":
    main()
scripts/windows/atocore-backup-pull.ps1 (new file, 87 lines)
@@ -0,0 +1,87 @@
# atocore-backup-pull.ps1
#
# Pull the latest AtoCore backup snapshot from Dalidou to this Windows machine.
# Designed to be run by Windows Task Scheduler. Fail-open by design -- if
# Dalidou is unreachable (laptop on the road, etc.), exit cleanly without error.
#
# Usage (manual test):
#   powershell.exe -ExecutionPolicy Bypass -File atocore-backup-pull.ps1
#
# Scheduled task: see docs/windows-backup-setup.md for Task Scheduler config.

$ErrorActionPreference = "Continue"

# --- Configuration ---
$Remote = "papa@dalidou"
$RemoteSnapshots = "/srv/storage/atocore/backups/snapshots"
$LocalBackupDir = "$env:USERPROFILE\Documents\ATOCore_Backups"
$LogDir = "$LocalBackupDir\_logs"
$ReachabilityTest = 5  # seconds timeout for SSH probe

# --- Setup ---
if (-not (Test-Path $LocalBackupDir)) {
    New-Item -ItemType Directory -Path $LocalBackupDir -Force | Out-Null
}
if (-not (Test-Path $LogDir)) {
    New-Item -ItemType Directory -Path $LogDir -Force | Out-Null
}

$Timestamp = Get-Date -Format "yyyy-MM-dd_HHmmss"
$LogFile = "$LogDir\backup-$Timestamp.log"

function Log($msg) {
    $line = "[{0}] {1}" -f (Get-Date -Format "yyyy-MM-dd HH:mm:ss"), $msg
    Write-Host $line
    Add-Content -Path $LogFile -Value $line
}

Log "=== AtoCore backup pull starting ==="
Log "Remote: $Remote"
Log "Local target: $LocalBackupDir"

# --- Reachability check: fail open if Dalidou is offline ---
Log "Checking Dalidou reachability..."
$probe = & ssh -o ConnectTimeout=$ReachabilityTest -o BatchMode=yes `
    -o StrictHostKeyChecking=accept-new `
    $Remote "echo ok" 2>&1
if ($LASTEXITCODE -ne 0 -or $probe -ne "ok") {
    Log "Dalidou unreachable ($probe) -- fail-open exit"
    exit 0
}
Log "Dalidou reachable."

# --- Pull the entire snapshots directory ---
# Dalidou's retention policy (7 daily + 4 weekly + 6 monthly) already caps
# the snapshot count, so pulling the whole dir is bounded and simple. scp
# will overwrite local files -- we rely on this to pick up new snapshots.
Log "Pulling snapshots via scp..."
$LocalSnapshotsDir = Join-Path $LocalBackupDir "snapshots"
if (-not (Test-Path $LocalSnapshotsDir)) {
    New-Item -ItemType Directory -Path $LocalSnapshotsDir -Force | Out-Null
}

& scp -o BatchMode=yes -r "${Remote}:${RemoteSnapshots}/*" "$LocalSnapshotsDir\" 2>&1 |
    ForEach-Object { Add-Content -Path $LogFile -Value $_ }

if ($LASTEXITCODE -ne 0) {
    Log "scp failed with exit $LASTEXITCODE"
    exit 0  # fail-open
}

# --- Stats ---
$snapshots = Get-ChildItem -Path $LocalSnapshotsDir -Directory |
    Where-Object { $_.Name -match "^\d{8}T\d{6}Z$" } |
    Sort-Object Name -Descending

$totalSize = (Get-ChildItem $LocalSnapshotsDir -Recurse -File | Measure-Object -Property Length -Sum).Sum
$SizeMB = [math]::Round($totalSize / 1MB, 2)
$latest = if ($snapshots.Count -gt 0) { $snapshots[0].Name } else { "(none)" }

Log ("Pulled {0} snapshots successfully (total {1} MB, latest: {2})" -f $snapshots.Count, $SizeMB, $latest)
Log "=== backup complete ==="

# --- Log retention: keep last 30 log files ---
Get-ChildItem -Path $LogDir -Filter "backup-*.log" |
    Sort-Object Name -Descending |
    Select-Object -Skip 30 |
    ForEach-Object { Remove-Item $_.FullName -Force -ErrorAction SilentlyContinue }
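The stats block's snapshot selection (match directory names against `^\d{8}T\d{6}Z$`, sort descending, take the first) relies on the timestamp format sorting lexicographically in chronological order. A Python sketch of the same selection, with made-up directory names:

```python
import re

SNAPSHOT_RE = re.compile(r"^\d{8}T\d{6}Z$")  # e.g. 20260416T020000Z

def latest_snapshot(names):
    """Filter to timestamp-shaped snapshot dirs and return the newest,
    the same selection the PowerShell stats block performs."""
    snapshots = sorted((n for n in names if SNAPSHOT_RE.match(n)), reverse=True)
    return snapshots[0] if snapshots else "(none)"

names = ["20260415T020000Z", "_logs", "20260416T020000Z", "readme.txt"]
# Lexicographic sort equals chronological sort for this fixed-width format.
print(latest_snapshot(names))  # 20260416T020000Z
```

The regex filter is what keeps `_logs` and stray files out of the snapshot count in the pulled directory.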
@@ -55,6 +55,7 @@ from atocore.memory.extractor import (
)
from atocore.memory.extractor_llm import (
    LLM_EXTRACTOR_VERSION,
    _cli_available as _llm_cli_available,
    extract_candidates_llm,
)
from atocore.memory.reinforcement import reinforce_from_interaction
@@ -832,6 +833,18 @@ def api_extract_batch(req: ExtractBatchRequest | None = None) -> dict:
    invoke this endpoint explicitly (cron, manual curl, CLI).
    """
    payload = req or ExtractBatchRequest()

    if payload.mode == "llm" and not _llm_cli_available():
        raise HTTPException(
            status_code=503,
            detail=(
                "LLM extraction unavailable in this runtime: the `claude` CLI "
                "is not on PATH. Run host-side via "
                "`scripts/batch_llm_extract_live.py` instead, or call this "
                "endpoint with mode=\"rule\"."
            ),
        )

    since = payload.since

    if not since:
@@ -916,11 +929,14 @@ def api_dashboard() -> dict:
    """One-shot system observability dashboard.

    Returns memory counts by type/project/status, project state
    entry counts, recent interaction volume, and extraction pipeline
    entry counts, interaction volume by client, pipeline health
    (harness, triage stats, last run), and extraction pipeline
    status — everything an operator needs to understand AtoCore's
    health beyond the basic /health endpoint.
    """
    import json as _json
    from collections import Counter
    from datetime import datetime as _dt, timezone as _tz

    all_memories = get_memories(active_only=False, limit=500)
    active = [m for m in all_memories if m.status == "active"]

@@ -930,27 +946,81 @@ def api_dashboard() -> dict:
    project_counts = dict(Counter(m.project or "(none)" for m in active))
    reinforced = [m for m in active if m.reference_count > 0]

    interactions = list_interactions(limit=1)
    recent_interaction = interactions[0].created_at if interactions else None
    # Interaction stats — total + by_client from DB directly
    interaction_stats: dict = {"most_recent": None, "total": 0, "by_client": {}}
    try:
        from atocore.models.database import get_connection as _gc

    # Extraction pipeline status
    extract_state = {}
        with _gc() as conn:
            row = conn.execute("SELECT count(*) FROM interactions").fetchone()
            interaction_stats["total"] = row[0] if row else 0
            rows = conn.execute(
                "SELECT client, count(*) FROM interactions GROUP BY client"
            ).fetchall()
            interaction_stats["by_client"] = {r[0]: r[1] for r in rows}
            row = conn.execute(
                "SELECT created_at FROM interactions ORDER BY created_at DESC LIMIT 1"
            ).fetchone()
            interaction_stats["most_recent"] = row[0] if row else None
    except Exception:
        interactions = list_interactions(limit=1)
        interaction_stats["most_recent"] = (
            interactions[0].created_at if interactions else None
        )

    # Pipeline health from project state
    pipeline: dict = {}
    extract_state: dict = {}
    try:
        state_entries = get_state("atocore")
        for entry in state_entries:
            if entry.category == "status" and entry.key == "last_extract_batch_run":
            if entry.category != "status":
                continue
            if entry.key == "last_extract_batch_run":
                extract_state["last_run"] = entry.value
            elif entry.key == "pipeline_last_run":
                pipeline["last_run"] = entry.value
                try:
                    last = _dt.fromisoformat(entry.value.replace("Z", "+00:00"))
                    delta = _dt.now(_tz.utc) - last
                    pipeline["hours_since_last_run"] = round(
                        delta.total_seconds() / 3600, 1
                    )
                except Exception:
                    pass
            elif entry.key == "pipeline_summary":
                try:
                    pipeline["summary"] = _json.loads(entry.value)
                except Exception:
                    pipeline["summary_raw"] = entry.value
            elif entry.key == "retrieval_harness_result":
                try:
                    pipeline["harness"] = _json.loads(entry.value)
                except Exception:
                    pipeline["harness_raw"] = entry.value
    except Exception:
        pass

    # Project state counts
    # Project state counts — include all registered projects
    ps_counts = {}
    for proj_id in ["p04-gigabit", "p05-interferometer", "p06-polisher", "atocore"]:
        try:
            entries = get_state(proj_id)
            ps_counts[proj_id] = len(entries)
        except Exception:
            pass
    try:
        from atocore.projects.registry import load_project_registry as _lpr

        for proj in _lpr():
            try:
                entries = get_state(proj.project_id)
                ps_counts[proj.project_id] = len(entries)
            except Exception:
                pass
    except Exception:
        for proj_id in [
            "p04-gigabit", "p05-interferometer", "p06-polisher", "atocore",
        ]:
            try:
                entries = get_state(proj_id)
                ps_counts[proj_id] = len(entries)
            except Exception:
                pass

    return {
        "memories": {
@@ -964,10 +1034,9 @@ def api_dashboard() -> dict:
            "counts": ps_counts,
            "total": sum(ps_counts.values()),
        },
        "interactions": {
            "most_recent": recent_interaction,
        },
        "interactions": interaction_stats,
        "extraction_pipeline": extract_state,
        "pipeline": pipeline,
    }
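The dashboard's freshness math (an ISO-8601 value with a trailing `Z` normalized to `+00:00`, subtracted from the current UTC time, rounded to one decimal hour) can be sketched standalone; the timestamp values here are invented:

```python
from datetime import datetime, timezone

def hours_since(iso_value, now=None):
    """Parse an ISO-8601 timestamp (allowing a trailing 'Z') and return
    hours elapsed, rounded to one decimal, the dashboard's freshness math."""
    # fromisoformat() in older Pythons rejects 'Z', hence the replace().
    last = datetime.fromisoformat(iso_value.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return round((now - last).total_seconds() / 3600, 1)

fixed_now = datetime(2026, 4, 16, 18, 0, tzinfo=timezone.utc)
print(hours_since("2026-04-16T06:30:00Z", now=fixed_now))  # 11.5
```

Pinning `now` makes the computation deterministic for testing; the endpoint itself uses the live clock, which is why it wraps the whole thing in a try/except rather than trusting stored values to parse.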
@@ -104,6 +104,21 @@ class Settings(BaseSettings):

    @property
    def resolved_project_registry_path(self) -> Path:
        """Path to the project registry JSON file.

        If ``ATOCORE_PROJECT_REGISTRY_DIR`` env var is set, the registry
        lives at ``<that dir>/project-registry.json``. Otherwise falls
        back to the configured ``project_registry_path`` field.

        This lets Docker deployments point at a mounted volume via env
        var without the ephemeral in-image ``/app/config/`` getting
        wiped on every rebuild.
        """
        import os

        registry_dir = os.environ.get("ATOCORE_PROJECT_REGISTRY_DIR", "").strip()
        if registry_dir:
            return Path(registry_dir) / "project-registry.json"
        return self._resolve_path(self.project_registry_path)

    @property
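The precedence in `resolved_project_registry_path` reduces to a small pure function. In this sketch the `fallback` argument stands in for the configured `project_registry_path` field and the `_resolve_path` helper, which are not reproduced here:

```python
import os
from pathlib import Path

def resolved_registry_path(fallback: Path) -> Path:
    """Env-var directory wins; otherwise use the configured fallback,
    the same precedence as Settings.resolved_project_registry_path."""
    registry_dir = os.environ.get("ATOCORE_PROJECT_REGISTRY_DIR", "").strip()
    if registry_dir:
        return Path(registry_dir) / "project-registry.json"
    return fallback

# With the env var set, the mounted-volume path wins:
os.environ["ATOCORE_PROJECT_REGISTRY_DIR"] = "/data/config"
print(resolved_registry_path(Path("/app/config/project-registry.json")))

# Without it, the configured fallback is used:
del os.environ["ATOCORE_PROJECT_REGISTRY_DIR"]
print(resolved_registry_path(Path("/app/config/project-registry.json")))
```

Note the `.strip()` guard: an env var set to whitespace falls through to the fallback rather than producing a bogus path.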
183
src/atocore/memory/_llm_prompt.py
Normal file
183
src/atocore/memory/_llm_prompt.py
Normal file
@@ -0,0 +1,183 @@
"""Shared LLM-extractor prompt + parser (stdlib-only).

R12: single source of truth for the system prompt, memory type set,
size limits, and raw JSON parsing used by both paths that shell out
to ``claude -p``:

- ``atocore.memory.extractor_llm`` (in-container extractor, wraps the
  parsed dicts in ``MemoryCandidate`` with registry-checked project
  attribution)
- ``scripts/batch_llm_extract_live.py`` (host-side extractor, can't
  import the full atocore package because Dalidou's host Python lacks
  the container's deps; imports this module via ``sys.path``)

This module MUST stay stdlib-only. No ``atocore`` imports, no third-
party packages. Callers apply their own project-attribution policy on
top of the normalized dicts this module emits.
"""

from __future__ import annotations

import json
from typing import Any

LLM_EXTRACTOR_VERSION = "llm-0.4.0"
MAX_RESPONSE_CHARS = 8000
MAX_PROMPT_CHARS = 2000
MEMORY_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}

SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.

AtoCore is the brain for Atomaste's engineering work. Known projects:
p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
abb-space. Unknown project names — still tag them, the system auto-detects.

Your job is to emit SIGNALS that matter for future context. Be aggressive:
err on the side of capturing useful signal. Triage filters noise downstream.

WHAT TO EMIT (in order of importance):

1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
   - "Schott quote received for ABB-Space" (event + project)
   - "Cédric asked about p06 firmware timing" (stakeholder event)
   - "Still waiting on Zygo lead-time from Nabeel" (blocker status)
   - "p05 vendor decision needs to happen this week" (action item)

2. DECISIONS AND CHOICES — anything that commits to a direction:
   - "Going with Zygo Verifire SV for p05" (decision)
   - "Dropping stitching from primary workflow" (design choice)
   - "USB SSD mandatory, not SD card" (architectural commitment)

3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
   - "CTE gradient dominates WFE at F/1.2" (materials insight)
   - "Preston model breaks below 5N because contact assumption fails"
   - "m=1 coma NOT correctable by force modulation" (controls insight)
   Test: would a competent engineer NEED experience to know this?
   If it's textbook/google-findable, skip it.

4. STAKEHOLDER AND VENDOR EVENTS:
   - "Email sent to Nabeel 2026-04-13 asking for lead time"
   - "Meeting with Jason on Table 7 next Tuesday"
   - "Starspec wants updated CAD by Friday"

5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
   - "Antoine prefers OAuth over API keys"
   - "Extraction stays off the capture hot path"

WHAT TO SKIP:

- Pure conversational filler ("ok thanks", "let me check")
- Instructional help content ("run this command", "here's how to...")
- Obvious textbook facts anyone can google in 30 seconds
- Session meta-chatter ("let me commit this", "deploy running")
- Transient system state snapshots ("36 active memories right now")

CANDIDATE TYPES — choose the best fit:

- project — a fact, decision, or event specific to one named project
- knowledge — durable engineering insight (use domain, not project)
- preference — how Antoine works / wants things done
- adaptation — a standing rule or adjustment to behavior
- episodic — a stakeholder event or milestone worth remembering

DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
physics, materials, optics, mechanics, manufacturing, metrology,
controls, software, math, finance, business

TRUST HIERARCHY:

- project-specific: set project to the project id, leave domain empty
- domain knowledge: set domain, leave project empty
- events/activity: use project, type=project or episodic
- one conversation can produce MULTIPLE candidates — emit them all

OUTPUT RULES:

- Each candidate content under 250 characters, stands alone
- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
- Raw JSON array, no prose, no markdown fences
- Empty array [] is fine when the conversation has no durable signal

Each element:
{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""


def build_user_message(prompt: str, response: str, project_hint: str) -> str:
    prompt_excerpt = (prompt or "")[:MAX_PROMPT_CHARS]
    response_excerpt = (response or "")[:MAX_RESPONSE_CHARS]
    return (
        f"PROJECT HINT (may be empty): {project_hint or ''}\n\n"
        f"USER PROMPT:\n{prompt_excerpt}\n\n"
        f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
        "Return the JSON array now."
    )


def parse_llm_json_array(raw_output: str) -> list[dict[str, Any]]:
    """Strip markdown fences / leading prose and return the parsed JSON
    array as a list of raw dicts. Returns an empty list on any parse
    failure — callers decide whether to log."""
    text = (raw_output or "").strip()
    if text.startswith("```"):
        text = text.strip("`")
        nl = text.find("\n")
        if nl >= 0:
            text = text[nl + 1:]
        if text.endswith("```"):
            text = text[:-3]
        text = text.strip()

    if not text or text == "[]":
        return []

    if not text.lstrip().startswith("["):
        start = text.find("[")
        end = text.rfind("]")
        if start >= 0 and end > start:
            text = text[start:end + 1]

    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return []

    if not isinstance(parsed, list):
        return []
    return [item for item in parsed if isinstance(item, dict)]


def normalize_candidate_item(item: dict[str, Any]) -> dict[str, Any] | None:
    """Validate and normalize one raw model item into a candidate dict.

    Returns None if the item fails basic validation (unknown type,
    empty content). Does NOT apply project-attribution policy — that's
    the caller's job, since the registry-check differs between the
    in-container path and the host path.

    Output keys: type, content, project (raw model value), domain,
    confidence.
    """
    mem_type = str(item.get("type") or "").strip().lower()
    content = str(item.get("content") or "").strip()
    if mem_type not in MEMORY_TYPES or not content:
        return None

    model_project = str(item.get("project") or "").strip()
    domain = str(item.get("domain") or "").strip().lower()

    try:
        confidence = float(item.get("confidence", 0.5))
    except (TypeError, ValueError):
        confidence = 0.5
    confidence = max(0.0, min(1.0, confidence))

    if domain and not model_project:
        content = f"[{domain}] {content}"

    return {
        "type": mem_type,
        "content": content[:1000],
        "project": model_project,
        "domain": domain,
        "confidence": confidence,
    }
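The fence-stripping path above can be exercised standalone. Since the module is stdlib-only, a trimmed copy of `parse_llm_json_array` runs without the atocore package installed; the inputs below are hypothetical model outputs, and this copy exists only so the snippet is self-contained (the real function lives in `src/atocore/memory/_llm_prompt.py`):

```python
import json
from typing import Any

def parse_llm_json_array(raw_output: str) -> list[dict[str, Any]]:
    # Mirror of the module's parser: strip markdown fences, slice to
    # the outermost [...], drop non-dict elements, [] on any failure.
    text = (raw_output or "").strip()
    if text.startswith("```"):
        text = text.strip("`")
        nl = text.find("\n")
        if nl >= 0:
            text = text[nl + 1:]
        if text.endswith("```"):
            text = text[:-3]
        text = text.strip()
    if not text or text == "[]":
        return []
    if not text.lstrip().startswith("["):
        start, end = text.find("["), text.rfind("]")
        if start >= 0 and end > start:
            text = text[start:end + 1]
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return []
    if not isinstance(parsed, list):
        return []
    return [item for item in parsed if isinstance(item, dict)]

fenced = '```json\n[{"type": "project", "content": "x"}]\n```'
prose = 'Here you go:\n[{"type": "knowledge", "content": "y"}] hope that helps'
print(parse_llm_json_array(fenced))      # fence stripped, array recovered
print(parse_llm_json_array(prose))       # leading/trailing prose sliced off
print(parse_llm_json_array("not json"))  # parse failure -> []
```

This tolerance matters because `claude -p` occasionally wraps the array in fences or adds a sentence of prose despite the "no markdown fences" output rule.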
@@ -49,7 +49,6 @@ Implementation notes:

 from __future__ import annotations

-import json
 import os
 import shutil
 import subprocess
@@ -58,92 +57,21 @@ from dataclasses import dataclass
 from functools import lru_cache

 from atocore.interactions.service import Interaction
+from atocore.memory._llm_prompt import (
+    LLM_EXTRACTOR_VERSION,
+    SYSTEM_PROMPT as _SYSTEM_PROMPT,
+    build_user_message,
+    normalize_candidate_item,
+    parse_llm_json_array,
+)
 from atocore.memory.extractor import MemoryCandidate
 from atocore.memory.service import MEMORY_TYPES
 from atocore.observability.logger import get_logger

 log = get_logger("extractor_llm")

-LLM_EXTRACTOR_VERSION = "llm-0.4.0"
 DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
 DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
-MAX_RESPONSE_CHARS = 8000
-MAX_PROMPT_CHARS = 2000
-
-_SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.
-[... prompt body identical to SYSTEM_PROMPT in _llm_prompt.py above ...]
-{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""


 @dataclass
@@ -206,13 +134,10 @@ def extract_candidates_llm_verbose(
     if not response_text:
         return LLMExtractionResult(candidates=[], raw_output="", error="empty_response")

-    prompt_excerpt = (interaction.prompt or "")[:MAX_PROMPT_CHARS]
-    response_excerpt = response_text[:MAX_RESPONSE_CHARS]
-    user_message = (
-        f"PROJECT HINT (may be empty): {interaction.project or ''}\n\n"
-        f"USER PROMPT:\n{prompt_excerpt}\n\n"
-        f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
-        "Return the JSON array now."
+    user_message = build_user_message(
+        interaction.prompt or "",
+        response_text,
+        interaction.project or "",
     )

     args = [
@@ -270,50 +195,25 @@ def extract_candidates_llm_verbose(
 def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryCandidate]:
     """Parse the model's JSON output into MemoryCandidate objects.

-    Tolerates common model glitches: surrounding whitespace, stray
-    markdown fences, leading/trailing prose. Silently drops malformed
-    array elements rather than raising.
+    Shared stripping + per-item validation live in
+    ``atocore.memory._llm_prompt``. This function adds the container-
+    only R9 project attribution: registry-check model_project and fall
+    back to the interaction scope when set.
     """
-    text = raw_output.strip()
-    if text.startswith("```"):
-        text = text.strip("`")
-        first_newline = text.find("\n")
-        if first_newline >= 0:
-            text = text[first_newline + 1 :]
-        if text.endswith("```"):
-            text = text[:-3]
-        text = text.strip()
-
-    if not text or text == "[]":
-        return []
-
-    if not text.lstrip().startswith("["):
-        start = text.find("[")
-        end = text.rfind("]")
-        if start >= 0 and end > start:
-            text = text[start : end + 1]
-
-    try:
-        parsed = json.loads(text)
-    except json.JSONDecodeError as exc:
-        log.error("llm_extractor_parse_failed", error=str(exc), raw_prefix=raw_output[:120])
-        return []
-
-    if not isinstance(parsed, list):
-        return []
+    raw_items = parse_llm_json_array(raw_output)
+    if not raw_items and raw_output.strip() not in ("", "[]"):
+        log.error("llm_extractor_parse_failed", raw_prefix=raw_output[:120])

     results: list[MemoryCandidate] = []
-    for item in parsed:
-        if not isinstance(item, dict):
+    for raw_item in raw_items:
+        normalized = normalize_candidate_item(raw_item)
+        if normalized is None:
             continue
-        mem_type = str(item.get("type") or "").strip().lower()
-        content = str(item.get("content") or "").strip()
-        model_project = str(item.get("project") or "").strip()
-        # R9 trust hierarchy for project attribution:
-        #   1. Interaction scope always wins when set (strongest signal)
-        #   2. Model project used only when interaction is unscoped
-        #      AND model project resolves to a registered project
-        #   3. Empty string when both are empty/unregistered
+
+        model_project = normalized["project"]
+        # R9 trust hierarchy: interaction scope wins; else registry-
+        # resolve the model's tag; else keep the model's tag so auto-
+        # triage can surface unregistered projects.
         if interaction.project:
             project = interaction.project
         elif model_project:
@@ -328,9 +228,6 @@ def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryC
                 if resolved in registered_ids:
                     project = resolved
                 else:
-                    # Unregistered project — keep the model's tag so
-                    # auto-triage / the operator can see it and decide
-                    # whether to register it as a new project or lead.
                     project = model_project
                     log.info(
                         "unregistered_project_detected",
@@ -338,34 +235,19 @@ def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryC
                         interaction_id=interaction.id,
                     )
             except Exception:
-                project = model_project if model_project else ""
+                project = model_project
         else:
             project = ""
-        domain = str(item.get("domain") or "").strip().lower()
-        confidence_raw = item.get("confidence", 0.5)
-        if mem_type not in MEMORY_TYPES:
-            continue
-        if not content:
-            continue
-        # Domain knowledge: embed the domain tag in the content so it
-        # survives without a schema migration. The context builder
-        # can match on it via query-relevance ranking, and a future
-        # migration can parse it into a proper column.
-        if domain and not project:
-            content = f"[{domain}] {content}"
-        try:
-            confidence = float(confidence_raw)
-        except (TypeError, ValueError):
-            confidence = 0.5
-        confidence = max(0.0, min(1.0, confidence))

+        content = normalized["content"]
         results.append(
             MemoryCandidate(
-                memory_type=mem_type,
-                content=content[:1000],
+                memory_type=normalized["type"],
+                content=content,
                 rule="llm_extraction",
                 source_span=content[:200],
                 project=project,
-                confidence=confidence,
+                confidence=normalized["confidence"],
                 source_interaction_id=interaction.id,
                 extractor_version=LLM_EXTRACTOR_VERSION,
             )
@@ -340,6 +340,84 @@ def reinforce_memory(
     return True, old_confidence, new_confidence


+def auto_promote_reinforced(
+    min_reference_count: int = 3,
+    min_confidence: float = 0.7,
+    max_age_days: int = 14,
+) -> list[str]:
+    """Auto-promote candidate memories with strong reinforcement signals.
+
+    Phase 10: memories that have been reinforced by multiple interactions
+    graduate from candidate to active without human review. This rewards
+    knowledge that the system keeps referencing organically.
+
+    Returns a list of promoted memory IDs.
+    """
+    from datetime import timedelta
+
+    cutoff = (
+        datetime.now(timezone.utc) - timedelta(days=max_age_days)
+    ).strftime("%Y-%m-%d %H:%M:%S")
+    promoted: list[str] = []
+    with get_connection() as conn:
+        rows = conn.execute(
+            "SELECT id, content, memory_type, project, confidence, "
+            "reference_count FROM memories "
+            "WHERE status = 'candidate' "
+            "AND COALESCE(reference_count, 0) >= ? "
+            "AND confidence >= ? "
+            "AND last_referenced_at >= ?",
+            (min_reference_count, min_confidence, cutoff),
+        ).fetchall()
+
+    for row in rows:
+        mid = row["id"]
+        ok = promote_memory(mid)
+        if ok:
+            promoted.append(mid)
+            log.info(
+                "memory_auto_promoted",
+                memory_id=mid,
+                memory_type=row["memory_type"],
+                project=row["project"] or "(global)",
+                reference_count=row["reference_count"],
+                confidence=round(row["confidence"], 3),
+            )
+    return promoted
+
+
+def expire_stale_candidates(
+    max_age_days: int = 14,
+) -> list[str]:
+    """Reject candidate memories that sat in queue too long unreinforced.
+
+    Candidates older than ``max_age_days`` with zero reinforcement are
+    auto-rejected to prevent unbounded queue growth. Returns rejected IDs.
+    """
+    from datetime import timedelta
+
+    cutoff = (
+        datetime.now(timezone.utc) - timedelta(days=max_age_days)
+    ).strftime("%Y-%m-%d %H:%M:%S")
+    expired: list[str] = []
+    with get_connection() as conn:
+        rows = conn.execute(
+            "SELECT id FROM memories "
+            "WHERE status = 'candidate' "
+            "AND COALESCE(reference_count, 0) = 0 "
+            "AND created_at < ?",
+            (cutoff,),
+        ).fetchall()
+
+    for row in rows:
+        mid = row["id"]
+        ok = reject_candidate_memory(mid)
+        if ok:
+            expired.append(mid)
+            log.info("memory_expired", memory_id=mid)
+    return expired
+
+
 def get_memories_for_context(
     memory_types: list[str] | None = None,
     project: str | None = None,
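The expiry predicate in `expire_stale_candidates` can be sketched against a throwaway SQLite table. The three-column schema below is a toy stand-in for the real `memories` table (only the columns the WHERE clause touches), and the timestamps compare correctly as strings because `%Y-%m-%d %H:%M:%S` sorts lexicographically:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Toy memories table with just the columns the expiry query reads.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memories "
    "(id TEXT, status TEXT, reference_count INTEGER, created_at TEXT)"
)
fmt = "%Y-%m-%d %H:%M:%S"
now = datetime.now(timezone.utc)
old = (now - timedelta(days=30)).strftime(fmt)
fresh = now.strftime(fmt)
conn.executemany(
    "INSERT INTO memories VALUES (?, ?, ?, ?)",
    [
        ("m1", "candidate", 0, old),    # stale AND unreferenced -> expires
        ("m2", "candidate", 1, old),    # old but reinforced -> kept
        ("m3", "candidate", 0, fresh),  # unreferenced but young -> kept
    ],
)

cutoff = (now - timedelta(days=14)).strftime(fmt)
rows = conn.execute(
    "SELECT id FROM memories WHERE status = 'candidate' "
    "AND COALESCE(reference_count, 0) = 0 AND created_at < ?",
    (cutoff,),
).fetchall()
print([r[0] for r in rows])  # -> ['m1']
```

`COALESCE(reference_count, 0)` matters in the real table because rows created before the reinforcement column existed may carry NULL there.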
@@ -171,3 +171,38 @@ def test_llm_extraction_failure_returns_empty(tmp_data_dir, monkeypatch):
     # Nothing in the candidate queue
     queue = get_memories(status="candidate", limit=10)
     assert len(queue) == 0
+
+
+def test_extract_batch_api_503_when_cli_missing(tmp_data_dir, monkeypatch):
+    """R11: POST /admin/extract-batch with mode=llm must fail loud when
+    the `claude` CLI is unavailable, instead of silently returning a
+    success-with-0-candidates payload (which masked host-vs-container
+    truth for operators)."""
+    from fastapi.testclient import TestClient
+    from atocore.main import app
+    import atocore.api.routes as routes
+
+    init_db()
+    monkeypatch.setattr(routes, "_llm_cli_available", lambda: False)
+
+    client = TestClient(app)
+    response = client.post("/admin/extract-batch", json={"mode": "llm"})
+
+    assert response.status_code == 503
+    assert "claude" in response.json()["detail"].lower()
+
+
+def test_extract_batch_api_rule_mode_ok_without_cli(tmp_data_dir, monkeypatch):
+    """Rule mode must still work when the LLM CLI is missing — R11 only
+    affects mode=llm."""
+    from fastapi.testclient import TestClient
+    from atocore.main import app
+    import atocore.api.routes as routes
+
+    init_db()
+    monkeypatch.setattr(routes, "_llm_cli_available", lambda: False)
+
+    client = TestClient(app)
+    response = client.post("/admin/extract-batch", json={"mode": "rule"})
+
+    assert response.status_code == 200
@@ -186,3 +186,98 @@ def test_memories_for_context_empty(isolated_db):
     text, chars = get_memories_for_context()
     assert text == ""
     assert chars == 0
+
+
+# --- Phase 10: auto-promotion + candidate expiry ---
+
+
+def _get_memory_by_id(memory_id):
+    """Helper: fetch a single memory by ID."""
+    from atocore.models.database import get_connection
+    with get_connection() as conn:
+        row = conn.execute("SELECT * FROM memories WHERE id = ?", (memory_id,)).fetchone()
+    return dict(row) if row else None
+
+
+def test_auto_promote_reinforced_basic(isolated_db):
+    from atocore.memory.service import (
+        auto_promote_reinforced,
+        create_memory,
+        reinforce_memory,
+    )
+
+    mem_obj = create_memory("knowledge", "Zerodur has near-zero CTE", status="candidate", confidence=0.7)
+    mid = mem_obj.id
+    # reinforce_memory only touches active memories, so rather than
+    # promoting, reinforcing, and demoting back to candidate, bump
+    # reference_count + last_referenced_at directly.
+    from atocore.models.database import get_connection
+    from datetime import datetime, timezone
+    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
+    with get_connection() as conn:
+        conn.execute(
+            "UPDATE memories SET reference_count = 3, last_referenced_at = ? WHERE id = ?",
+            (now, mid),
+        )
+
+    promoted = auto_promote_reinforced(min_reference_count=3, min_confidence=0.7)
+    assert mid in promoted
+    mem = _get_memory_by_id(mid)
+    assert mem["status"] == "active"
+
+
+def test_auto_promote_reinforced_ignores_low_refs(isolated_db):
+    from atocore.memory.service import auto_promote_reinforced, create_memory
+    from atocore.models.database import get_connection
+    from datetime import datetime, timezone
+
+    mem_obj = create_memory("knowledge", "Some knowledge", status="candidate", confidence=0.7)
+    mid = mem_obj.id
+    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
+    with get_connection() as conn:
+        conn.execute(
+            "UPDATE memories SET reference_count = 1, last_referenced_at = ? WHERE id = ?",
+            (now, mid),
+        )
+
+    promoted = auto_promote_reinforced(min_reference_count=3, min_confidence=0.7)
+    assert mid not in promoted
+    mem = _get_memory_by_id(mid)
+    assert mem["status"] == "candidate"
+
+
+def test_expire_stale_candidates(isolated_db):
+    from atocore.memory.service import create_memory, expire_stale_candidates
+    from atocore.models.database import get_connection
+
+    mem_obj = create_memory("knowledge", "Old unreferenced fact", status="candidate")
+    mid = mem_obj.id
+    with get_connection() as conn:
+        conn.execute(
+            "UPDATE memories SET created_at = datetime('now', '-30 days') WHERE id = ?",
+            (mid,),
+        )
+
+    expired = expire_stale_candidates(max_age_days=14)
+    assert mid in expired
+    mem = _get_memory_by_id(mid)
+    assert mem["status"] == "invalid"
+
+
+def test_expire_stale_candidates_keeps_reinforced(isolated_db):
+    from atocore.memory.service import create_memory, expire_stale_candidates
+    from atocore.models.database import get_connection
+
+    mem_obj = create_memory("knowledge", "Referenced fact", status="candidate")
+    mid = mem_obj.id
+    with get_connection() as conn:
+        conn.execute(
+            "UPDATE memories SET reference_count = 1, "
+            "created_at = datetime('now', '-30 days') WHERE id = ?",
+            (mid,),
+        )
+
+    expired = expire_stale_candidates(max_age_days=14)
+    assert mid not in expired
+    mem = _get_memory_by_id(mid)
+    assert mem["status"] == "candidate"