Compare commits
31 Commits
codex/dali
...
codex/audi
| SHA1 |
|------|
| 89c7964237 |
| 146f2e4a5e |
| 5c69f77b45 |
| 3921c5ffc7 |
| 93f796207f |
| b98a658831 |
| 06792d862e |
| 95daa5c040 |
| 3a7e8ccba4 |
| a29b5e22f2 |
| b309e7fd49 |
| 330ecfb6a6 |
| 7d8d599030 |
| d9dc55f841 |
| 81307cec47 |
| 59331e522d |
| b3253f35ee |
| 30ee857d62 |
| 38f6e525af |
| 37331d53ef |
| 5aeeb1cad1 |
| 4da81c9e4e |
| 7bf83bf46a |
| 1161645415 |
| 5913da53c5 |
| 8ea53f4003 |
| 9366ba7879 |
| c5bad996a7 |
| 0b1742770a |
| 2829d5ec1c |
| f49637b5cc |
@@ -1,5 +1,13 @@

# AGENTS.md

## Session protocol (read first, every session)

**Before doing anything else, read `DEV-LEDGER.md` at the repo root.** It is the one-file source of truth for "what is currently true" — live SHA, active plan, open review findings, recent decisions. The narrative docs under `docs/` may lag; the ledger does not.

**Before ending a session, append a Session Log line to `DEV-LEDGER.md`** with what you did and which commit range it covers, and bump the Orientation section if anything there changed.

This rule applies equally to Claude, Codex, and any future agent working in this repo.

## Project role

This repository is AtoCore, the runtime and machine-memory layer of the Ato ecosystem.

30 CLAUDE.md Normal file
@@ -0,0 +1,30 @@

# CLAUDE.md — project instructions for AtoCore

## Session protocol

Before doing anything else in this repo, read `DEV-LEDGER.md` at the repo root. It is the shared operating memory between Claude, Codex, and the human operator — live Dalidou SHA, active plan, open P1/P2 review findings, recent decisions, and session log. The narrative docs under `docs/` sometimes lag; the ledger does not.

Before ending a session, append a Session Log line to `DEV-LEDGER.md` covering:

- which commits you produced (SHA range)
- what changed at a high level
- any harness / test count deltas
- anything you overclaimed and later corrected

Bump the **Orientation** section if `live_sha`, `main_tip`, `test_count`, or `harness` changed.

`AGENTS.md` at the repo root carries the broader project principles (storage separation, deployment model, coding guidance). Read it when you need the "why" behind a constraint.

## Deploy workflow

```bash
git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/dalidou/deploy.sh"
```

The deploy script self-verifies via `/health` build_sha — if it exits non-zero, do not assume the change is live.
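The same invariant can be spot-checked by hand. A minimal sketch, assuming `/health` returns JSON with a `build_sha` field (the field name is taken from the ledger's wording; the exact payload shape is an assumption):

```python
import json


def deploy_is_live(local_sha: str, health_body: str) -> bool:
    """Compare a local commit SHA against the build_sha reported by /health.

    The deployed value is typically a short SHA, so compare by prefix
    in both directions. Returns False when /health reports no build_sha.
    """
    build_sha = json.loads(health_body).get("build_sha", "")
    if not build_sha:
        return False
    return local_sha.startswith(build_sha) or build_sha.startswith(local_sha)
```

Feed it the output of `git rev-parse HEAD` and the body of `ssh papa@dalidou "curl -s http://localhost:8100/health"`; a `False` result means do not assume the change is live.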

## Working model

- Claude builds; Codex audits. No parallel work on the same files.
- P1 review findings block further `main` commits until acknowledged in the ledger's **Open Review Findings** table.
- Codex branches must fork from `origin/main` (no orphan commits that require `--allow-unrelated-histories`).

184 DEV-LEDGER.md Normal file
@@ -0,0 +1,184 @@

# AtoCore Dev Ledger

> Shared operating memory between humans, Claude, and Codex.
> **Every session MUST read this file at start and append a Session Log entry before ending.**
> Section headers are stable - do not rename them. Trim Session Log and Recent Decisions to the last 20 entries at session end; older history lives in `git log` and `docs/`.

## Orientation

- **live_sha** (Dalidou `/health` build_sha): `5c69f77`
- **last_updated**: 2026-04-12 by Codex (audit branch `codex/audit-2026-04-12`)
- **main_tip**: `146f2e4`
- **test_count**: 278 passing
- **harness**: `15/18 PASS` (remaining failures are mixed: p06-firmware-interface exposes a lexical-ranking tie, p06-offline-design is a live triage scoping miss, p06-tailscale still has retrieved-chunk bleed)
- **active_memories**: 36 (was 20 before the mini-phase; p06-polisher 2 -> 16, atocore 0 -> 5)
- **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron env `ATOCORE_BACKUP_RSYNC`, verified

## Active Plan

**Mini-phase**: Extractor improvement (eval-driven) + retrieval harness expansion.
**Duration**: 8 days, hard gates at each day boundary.
**Plan author**: Codex (2026-04-11). **Executor**: Claude. **Audit**: Codex.

### Preflight (before Day 1)

Stop if any of these fail:

- `git rev-parse HEAD` on `main` matches the expected branching tip
- Live `/health` on Dalidou reports the SHA you think is deployed
- `python scripts/retrieval_eval.py --json` still passes at the current baseline
- `batch-extract` over the known 42-capture slice reproduces the current low-yield baseline
- A frozen sample set exists for extractor labeling so the target does not move mid-phase

Success: baseline eval output saved, baseline extract output saved, working branch created from `origin/main`.
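The stop/go decision for the checklist above can be sketched as a pure gate. Function and parameter names here are illustrative, not from the repo:

```python
def preflight_failures(
    head_sha: str,
    expected_tip: str,
    deployed_sha: str,
    ledger_live_sha: str,
    eval_passed: bool,
) -> list:
    """Return the list of failed preflight checks; an empty list means proceed.

    Inputs mirror the checklist: `head_sha` from `git rev-parse HEAD`,
    `expected_tip` and `ledger_live_sha` from the ledger's Orientation
    section, `deployed_sha` from Dalidou's /health, and `eval_passed`
    from running `scripts/retrieval_eval.py --json`.
    """
    failures = []
    # Ledger SHAs are short, git SHAs are full, so compare by prefix.
    if not head_sha.startswith(expected_tip):
        failures.append(f"main HEAD {head_sha[:7]} != expected tip {expected_tip}")
    if not deployed_sha.startswith(ledger_live_sha):
        failures.append(f"/health {deployed_sha} != ledger live_sha {ledger_live_sha}")
    if not eval_passed:
        failures.append("retrieval_eval.py no longer passes at baseline")
    return failures
```

The remaining two checks (low-yield `batch-extract` baseline, frozen sample set) would slot in the same way as extra boolean inputs.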

### Day 1 - Labeled extractor eval set

Pick 30 real captures: 10 that should produce 0 candidates, 10 that should plausibly produce 1, 10 ambiguous/hard. Store as a stable artifact (interaction id, expected count, expected type, notes). Add a runner that scores extractor output against labels.

Success: 30 labeled interactions in a stable artifact, one-command precision/recall output.
Fail-early: if labeling 30 takes more than a day because the concept is unclear, tighten the extraction target before touching code.
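The scoring half of that runner can be sketched from candidate counts alone (the real runner would need per-candidate matching; names here are illustrative):

```python
def score_extractor(labels: dict, predicted: dict) -> dict:
    """Score extracted-candidate counts against Day 1 labels.

    `labels` and `predicted` map interaction id -> expected / extracted
    candidate count. Only ids present in `labels` are scored.
    """
    tp = sum(min(labels[i], predicted.get(i, 0)) for i in labels)
    fp = sum(max(predicted.get(i, 0) - labels[i], 0) for i in labels)
    fn = sum(max(labels[i] - predicted.get(i, 0), 0) for i in labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"tp": tp, "fp": fp, "fn": fn, "precision": precision, "recall": recall}
```

A one-command wrapper would load the frozen labeled artifact, run the extractor over each interaction, and print this dict.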

### Day 2 - Measure current extractor

Run the rule-based extractor on all 30. Record yield, TP, FP, FN. Bucket misses by class (conversational preference, decision summary, status/constraint, meta chatter).

Success: short scorecard with counts by miss type, top 2 miss classes obvious.
Fail-early: if the labeled set shows fewer than 5 plausible positives total, the corpus is too weak - relabel before tuning.

### Day 3 - Smallest rule expansion for top miss class

Add 1-2 narrow, explainable rules for the worst miss class. Add unit tests from real paraphrase examples in the labeled set. Then rerun the eval.

Success: recall up on the labeled set, false positives do not materially rise, new tests cover the new cue class.
Fail-early: if one rule expansion raises FP above ~20% of extracted candidates, revert or narrow before adding more.

### Day 4 - Decision gate: more rules or LLM-assisted prototype

If rule expansion reaches a **meaningfully reviewable queue**, keep going with rules. Otherwise prototype an LLM-assisted extraction mode behind a flag.

"Meaningfully reviewable queue":

- >= 15-25% candidate yield on the 30 labeled captures
- FP rate low enough that manual triage feels tolerable
- >= 2 real non-synthetic candidates worth reviewing

Hard stop: if candidate yield is still under 10% after this point, stop rule tinkering and switch to architecture review (LLM-assisted OR narrower extraction scope).

### Day 5 - Stabilize and document

Add the remaining focused rules or the flagged LLM-assisted path. Write down in-scope and out-of-scope utterance kinds.

Success: labeled eval green against the target threshold, extractor scope explainable in <= 5 bullets.

### Day 6 - Retrieval harness expansion (6 -> 15-20 fixtures)

Grow across p04/p05/p06. Include short ambiguous prompts, cross-project collision cases, expected project-state wins, expected project-memory wins, and 1-2 "should fail open / low confidence" cases.

Success: >= 15 fixtures; each active project has easy + medium + hard cases.
Fail-early: if fixtures are mostly obvious wins, add harder adversarial cases before claiming coverage.

### Day 7 - Regression pass and calibration

Run the harness on current code vs live Dalidou. Inspect failures (ranking, ingestion gap, project bleed, budget). Make at most ONE ranking/budget tweak if the harness clearly justifies it. Do not mix harness expansion and ranking changes in a single commit unless tightly coupled.

Success: harness still passes or improves after extractor work; any ranking tweak is justified by a concrete fixture delta.
Fail-early: if > 20-25% of harness fixtures regress after extractor changes, separate concerns before merging.

### Day 8 - Merge and close

Clean commit sequence. Save before/after metrics (extractor scorecard, harness results). Update docs only with claims the metrics support.

Merge order: labeled corpus + runner -> extractor improvements + tests -> harness expansion -> any justified ranking tweak -> docs sync last.

Success: point to a before/after delta for both extraction and retrieval; docs do not overclaim.

### Hard Gates (stop/rethink points)

- Extractor yield < 10% after 30 labeled interactions -> stop, reconsider rule-only extraction
- FP rate > 20% on labeled set -> narrow rules before adding more
- Harness expansion finds < 3 genuinely hard cases -> harness still too soft
- Ranking change improves one project but regresses another -> do not merge without an explicit tradeoff note

### Branching

One branch `codex/extractor-eval-loop` for Days 1-5, a second `codex/retrieval-harness-expansion` for Days 6-7. This keeps extraction and retrieval judgments auditable.

## Review Protocol

- Codex records review findings in **Open Review Findings**.
- Claude must read **Open Review Findings** at session start before coding.
- Codex owns finding text. Claude may update operational fields only:
  - `status`
  - `owner`
  - `resolved_by`
- If Claude disagrees with a finding, do not rewrite it. Mark it `declined` and explain why in the **Session Log**.
- Any commit or session that addresses a finding should reference the finding id in the commit message or **Session Log**.
- `P1` findings block further commits in the affected area until they are at least acknowledged and explicitly tracked.
- Findings may be code-level, claim-level, or ops-level. If the implementation boundary changes, retarget the finding instead of silently closing it.

## Open Review Findings

| id | finder | severity | file:line | summary | status | owner | opened_at | resolved_by |
|----|--------|----------|-----------|---------|--------|-------|-----------|-------------|
| R1 | Codex | P1 | deploy/hooks/capture_stop.py:76-85 | Live Claude capture still omits `extract`, so "loop closed both sides" remains overstated in practice even though the API supports it | acknowledged | Claude | 2026-04-11 | |
| R2 | Codex | P1 | src/atocore/context/builder.py | Project memories excluded from pack | fixed | Claude | 2026-04-11 | 8ea53f4 |
| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | open | Claude | 2026-04-11 | |
| R4 | Codex | P2 | DEV-LEDGER.md:11 | Orientation `main_tip` was stale versus `HEAD` / `origin/main` | fixed | Codex | 2026-04-11 | 81307ce |
| R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | open | Claude | 2026-04-12 | |
| R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | open | Claude | 2026-04-12 | |
| R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | open | Claude | 2026-04-12 | |
| R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | open | Claude | 2026-04-12 | |
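The fallback that finding R6 calls for could be as small as the sketch below. The function and field names are assumed from the finding's wording, not taken from the real `extractor_llm.py`:

```python
def resolve_project(model_project, interaction_project: str) -> str:
    """Prefer the model-supplied project only when it is non-empty;
    otherwise fall back to the capturing interaction's project, so
    triage never promotes a memory with project="".
    """
    candidate = (model_project or "").strip()
    return candidate if candidate else interaction_project
```

Applied at the point where the parsed LLM candidate is persisted, this would have kept the offline/network rule scoped to p06.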

## Recent Decisions

- **2026-04-12** Day 4 gate cleared: LLM-assisted extraction via `claude -p` (OAuth, no API key) is the path forward. Rule extractor stays as default for structural cues. *Proposed by:* Claude. *Ratified by:* Antoine.
- **2026-04-12** First live triage: 16 promoted, 35 rejected from 51 LLM-extracted candidates. 31% accept rate. Active memory count 20 -> 36. *Executed by:* Claude. *Ratified by:* Antoine.
- **2026-04-12** No API keys allowed in AtoCore — LLM-assisted features use OAuth via `claude -p` or equivalent CLI-authenticated paths. *Proposed by:* Antoine.
- **2026-04-12** Multi-model extraction direction: extraction/triage should be model-agnostic, with Codex/Gemini/Ollama as second-pass reviewers for robustness. *Proposed by:* Antoine.
- **2026-04-11** Adopt this ledger as shared operating memory between Claude and Codex. *Proposed by:* Antoine. *Ratified by:* Antoine.
- **2026-04-11** Accept Codex's 8-day mini-phase plan verbatim as Active Plan. *Proposed by:* Codex. *Ratified by:* Antoine.
- **2026-04-11** Review findings live in `DEV-LEDGER.md` with Codex owning finding text and Claude updating status fields only. *Proposed by:* Codex. *Ratified by:* Antoine.
- **2026-04-11** Project memories land in the pack under `--- Project Memories ---` at 25% budget ratio, gated on canonical project hint. *Proposed by:* Claude.
- **2026-04-11** Extraction stays off the capture hot path. Batch / manual only. *Proposed by:* Antoine.
- **2026-04-11** 4-step roadmap: extractor -> harness expansion -> Wave 2 ingestion -> OpenClaw finish. Steps 1+2 as one mini-phase. *Ratified by:* Antoine.
- **2026-04-11** Codex branches must fork from `main`, not be orphan commits. *Proposed by:* Claude. *Agreed by:* Codex.

## Session Log

- **2026-04-12 Codex (audit branch `codex/audit-2026-04-12`)** audited `c5bad99..146f2e4` against code, live Dalidou, and the 36 active memories. Confirmed: `claude -p` invocation is not shell-injection-prone (`subprocess.run(args)` with no shell), off-host backup wiring matches the ledger, and R1 remains unresolved in practice. Added R5-R8. Corrected Orientation `main_tip` (`146f2e4`, not `5c69f77`) and tightened the harness note: p06-firmware-interface is a ranking-tie issue, p06-offline-design comes from a project-scope miss in live triage, and p06-tailscale is retrieved-chunk bleed rather than memory-band budget contention.
- **2026-04-12 Claude** `06792d8..5c69f77` Day 5-8 close. Documented extractor scope (5 in-scope, 6 out-of-scope categories). Expanded harness from 6 to 18 fixtures (p04 +1, p05 +1, p06 +7, adversarial +2). Per-entry memory cap at 250 chars fixed 1 of 4 budget-contention failures. Final harness: 15/18 PASS. Mini-phase complete. Before/after: rule extractor 0% recall -> LLM 100%; harness 6/6 -> 15/18; active memories 20 -> 36.
- **2026-04-12 Claude** `330ecfb..06792d8` (merged eval-loop branch + triage). Day 1-4 of the mini-phase completed in one session. Day 2 baseline: rule extractor 0% recall, 5 distinct miss classes. Day 4 gate cleared: LLM extractor (claude -p haiku, OAuth) hit 100% recall, 2.55 yield/interaction. Refactored from anthropic SDK to subprocess after "no API key" rule. First live triage: 51 candidates -> 16 promoted, 35 rejected. Active memories 20 -> 36. p06-polisher went from 2 to 16 memories (firmware/telemetry architecture set). POST /memory now accepts status field. Test count 264 -> 278.
- **2026-04-11 Claude** `claude/extractor-eval-loop @ 7d8d599` — Day 1+2 of the mini-phase. Froze a 64-interaction snapshot (`scripts/eval_data/interactions_snapshot_2026-04-11.json`) and labeled 20 by length-stratified random sample (5 positive, 15 zero; 7 total expected candidates). Built `scripts/extractor_eval.py` as a file-based eval runner. **Day 2 baseline: rule extractor hit 0% yield / 0% recall / 0% precision on the labeled set; 5 false negatives across 5 distinct miss classes (recommendation_prose, architectural_change_summary, spec_update_announcement, layered_recommendation, alignment_assertion).** This is the Day 4 hard-stop signal arriving two days early — a single rule expansion cannot close a 5-way miss, and widening rules blindly will collapse precision. The Day 4 decision gate is escalated to Antoine for ratification before Day 3 touches any extractor code. No extractor code on main has changed.
- **2026-04-11 Codex (ledger audit)** fixed stale `main_tip`, retargeted R1 from the API surface to the live Claude Stop hook, and formalized the review write protocol so Claude can consume findings without rewriting them.
- **2026-04-11 Claude** `b3253f3..59331e5` (1 commit). Wired the DEV-LEDGER, added session protocol to AGENTS.md, created project-local CLAUDE.md, deleted stale `codex/port-atocore-ops-client` remote branch. No code changes, no redeploy needed.
- **2026-04-11 Claude** `c5bad99..b3253f3` (11 commits + 1 merge). Length-aware reinforcement, project memories in pack, query-relevance memory ranking, hyphenated-identifier tokenizer, retrieval eval harness seeded, off-host backup wired end-to-end, docs synced, codex integration-pass branch merged. Harness went 0 -> 6/6 on live Dalidou.
- **2026-04-11 Codex (async review)** identified 2 P1s against a stale checkout. R1 was fair (extraction not automated), R2 was outdated (project memories already landed on main). Delivered the 8-day execution plan now in Active Plan.
- **2026-04-06 Antoine** created `codex/atocore-integration-pass` with the `t420-openclaw/` workspace (merged 2026-04-11).

## Working Rules

- Claude builds; Codex audits. No parallel work on the same files.
- Codex branches fork from `main`: `git fetch origin && git checkout -b codex/<topic> origin/main`.
- P1 findings block further main commits until acknowledged in Open Review Findings.
- Every session appends at least one Session Log line and bumps Orientation.
- Trim Session Log and Recent Decisions to the last 20 at session end.
- Docs in `docs/` may overclaim stale status; the ledger is the one-file source of truth for "what is true right now."

## Quick Commands

```bash
# Check live state
ssh papa@dalidou "curl -s http://localhost:8100/health"

# Run the retrieval harness
python scripts/retrieval_eval.py          # human-readable
python scripts/retrieval_eval.py --json   # machine-readable

# Deploy a new main tip
git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/dalidou/deploy.sh"

# Reflection-loop ops
python scripts/atocore_client.py batch-extract '' '' 200 false   # preview
python scripts/atocore_client.py batch-extract '' '' 200 true    # persist
python scripts/atocore_client.py triage
```

85 deploy/dalidou/cron-backup.sh Executable file
@@ -0,0 +1,85 @@

```bash
#!/usr/bin/env bash
#
# deploy/dalidou/cron-backup.sh
# ------------------------------
# Daily backup + retention cleanup via the AtoCore API.
#
# Intended to run from cron on Dalidou:
#
#   # Daily at 03:00 UTC
#   0 3 * * * /srv/storage/atocore/app/deploy/dalidou/cron-backup.sh >> /var/log/atocore-backup.log 2>&1
#
# What it does:
#   1. Creates a runtime backup (db + registry, no chroma by default)
#   2. Runs retention cleanup with --confirm to delete old snapshots
#   3. Logs results to stdout (captured by cron into the log file)
#
# Fail-open: exits 0 even on API errors so cron doesn't send noise
# emails. Check /var/log/atocore-backup.log for diagnostics.
#
# Environment variables:
#   ATOCORE_URL            default http://127.0.0.1:8100
#   ATOCORE_BACKUP_CHROMA  default false (set to "true" for cold chroma copy)
#   ATOCORE_BACKUP_DIR     default /srv/storage/atocore/backups
#   ATOCORE_BACKUP_RSYNC   optional rsync destination for off-host copies
#                          (e.g. papa@laptop:/home/papa/atocore-backups/).
#                          When set, the local snapshots tree is rsynced to
#                          the destination after cleanup. Unset = skip.
#                          SSH key auth must already be configured from this
#                          host to the destination.

set -euo pipefail

ATOCORE_URL="${ATOCORE_URL:-http://127.0.0.1:8100}"
INCLUDE_CHROMA="${ATOCORE_BACKUP_CHROMA:-false}"
BACKUP_DIR="${ATOCORE_BACKUP_DIR:-/srv/storage/atocore/backups}"
RSYNC_TARGET="${ATOCORE_BACKUP_RSYNC:-}"

# Timestamp each log line at call time rather than reusing one value
# computed at startup, so long runs don't log a stale timestamp.
log() { printf '[%s] %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*"; }

log "=== AtoCore daily backup starting ==="

# Step 1: Create backup
log "Step 1: creating backup (chroma=$INCLUDE_CHROMA)"
BACKUP_RESULT=$(curl -sf -X POST \
  -H "Content-Type: application/json" \
  -d "{\"include_chroma\": $INCLUDE_CHROMA}" \
  "$ATOCORE_URL/admin/backup" 2>&1) || {
  log "ERROR: backup creation failed: $BACKUP_RESULT"
  exit 0
}
log "Backup created: $BACKUP_RESULT"

# Step 2: Retention cleanup (confirm=true to actually delete)
log "Step 2: running retention cleanup"
CLEANUP_RESULT=$(curl -sf -X POST \
  -H "Content-Type: application/json" \
  -d '{"confirm": true}' \
  "$ATOCORE_URL/admin/backup/cleanup" 2>&1) || {
  log "ERROR: cleanup failed: $CLEANUP_RESULT"
  exit 0
}
log "Cleanup result: $CLEANUP_RESULT"

# Step 3: Off-host rsync (optional). Fail-open: log but don't abort
# the cron so a laptop being offline at 03:00 UTC never turns the
# local backup path red.
if [[ -n "$RSYNC_TARGET" ]]; then
  log "Step 3: rsyncing snapshots to $RSYNC_TARGET"
  if [[ ! -d "$BACKUP_DIR/snapshots" ]]; then
    log "WARN: $BACKUP_DIR/snapshots does not exist, skipping rsync"
  else
    RSYNC_OUTPUT=$(rsync -a --delete \
      -e "ssh -o ConnectTimeout=10 -o BatchMode=yes -o StrictHostKeyChecking=accept-new" \
      "$BACKUP_DIR/snapshots/" "$RSYNC_TARGET" 2>&1) && {
      log "Rsync complete"
    } || {
      log "WARN: rsync to $RSYNC_TARGET failed (offline or auth?): $RSYNC_OUTPUT"
    }
  fi
else
  log "Step 3: ATOCORE_BACKUP_RSYNC not set, skipping off-host copy"
fi

log "=== AtoCore daily backup complete ==="
```
deploy/hooks/capture_stop.py

@@ -3,7 +3,7 @@

Reads the Stop hook JSON from stdin, extracts the last user prompt
from the transcript JSONL, and POSTs to the AtoCore /interactions
-endpoint in conservative mode (reinforce=false, no extraction).
+endpoint with reinforcement enabled (no extraction).

Fail-open: always exits 0, logs errors to stderr only.

@@ -81,7 +81,7 @@ def _capture() -> None:

        "client": "claude-code",
        "session_id": session_id,
        "project": project,
-       "reinforce": False,
+       "reinforce": True,
    }

    body = json.dumps(payload, ensure_ascii=True).encode("utf-8")

@@ -32,7 +32,18 @@ read-only additive mode.

### Baseline Complete

- Phase 9 - Reflection (all three foundation commits landed:
-  A capture, B reinforcement, C candidate extraction + review queue)
+  A capture, B reinforcement, C candidate extraction + review queue).
+  As of 2026-04-11 the capture → reinforce half runs automatically on
+  every Stop-hook capture (length-aware token-overlap matcher handles
+  paragraph-length memories), and project-scoped memories now reach
+  the context pack via a dedicated `--- Project Memories ---` band
+  between identity/preference and retrieved chunks. The extract half
+  is still a manual / batch flow by design (`scripts/atocore_client.py
+  batch-extract` + `triage`). First live batch-extract run over 42
+  captured interactions produced 1 candidate (rule extractor is
+  conservative and keys on structural cues like `## Decision:`
+  headings that rarely appear in conversational LLM responses) —
+  extractor tuning is a known follow-up.

### Not Yet Complete In The Intended Sense

@@ -167,7 +178,9 @@ These remain intentionally deferred.

- automatic write-back from OpenClaw into AtoCore
- automatic memory promotion
--reflection loop integration
+- ~~reflection loop integration~~ — baseline now in (capture→reinforce
+  auto, extract batch/manual). Extractor tuning and scheduled batch
+  extraction still open.
- replacing OpenClaw's own memory system
- live machine-DB sync between machines
- full ontology / graph expansion before the current baseline is stable

@@ -137,7 +137,12 @@ P06:

- automatic write-back from OpenClaw into AtoCore
- automatic memory promotion
--reflection loop integration
+- ~~reflection loop integration~~ — baseline now landed (2026-04-11):
+  Stop hook runs reinforce automatically, project memories are folded
+  into the context pack, batch-extract and triage CLIs exist. What
+  remains deferred: scheduled/automatic batch extraction and extractor
+  rule tuning (rule-based extractor produced 1 candidate from 42 real
+  captures — needs new cues for conversational LLM content).
- replacing OpenClaw's own memory system
- syncing the live machine DB between machines

@@ -159,6 +164,116 @@ The next batch is successful if:

- project ingestion remains controlled rather than noisy
- the canonical Dalidou instance stays stable

## Retrieval Quality Review — 2026-04-11

First sweep with real project-hinted queries on Dalidou. Used
`POST /context/build` against p04, p05, p06 with representative
questions and inspected `formatted_context`.

Findings:

- **Trusted Project State is surfacing correctly.** The DECISION and
  REQUIREMENT categories appear at the top of the pack and include
  the expected key facts (e.g. p04 "Option B conical-back mirror
  architecture"). This is the strongest signal in the pack today.
- **Chunk retrieval is relevant on-topic but broad.** Top chunks for
  the p04 architecture query are PDR intro, CAD assembly overview,
  and the index — all on the right project but none of them directly
  answer the "why was Option B chosen" question. The authoritative
  answer sits in Project State, not in the chunks.
- **Active memories are NOT reaching the pack.** The context builder
  surfaces Trusted Project State and retrieved chunks but does not
  include the 21 active project/knowledge memories. Reinforcement
  (Phase 9 Commit B) bumps memory confidence without the memory ever
  being read back into a prompt — the reflection loop has no outlet
  on the retrieval side. This is a design gap, not a bug: needs a
  decision on whether memories should feed into context assembly,
  and if so at what trust level (below project_state, above chunks).
- **Cross-project bleed is low.** The p04 query did pull one p05
  chunk (CGH_Design_Input_for_AOM) as the bottom hit but the top-4
  were all p04.

Proposed follow-ups (not yet scheduled):

1. ~~Decide whether memories should be folded into `formatted_context`
   and under what section header.~~ DONE 2026-04-11 (commits 8ea53f4,
   5913da5, 1161645). A `--- Project Memories ---` band now sits
   between identity/preference and retrieved chunks, gated on a
   canonical project hint to prevent cross-project bleed. Budget
   ratio 0.25 (tuned empirically — paragraph memories are ~400 chars
   and the earlier 0.15 ratio starved the first entry by one char).
   Verified live: p04 architecture query surfaces the Option B memory.
2. Re-run the same three queries after any builder change and compare
   `formatted_context` diffs — still open, and is the natural entry
   point for the retrieval eval harness on the roadmap.
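The arithmetic behind that ratio tuning can be sketched as below. This is illustrative only: the real builder's budget units and per-entry cap logic may differ.

```python
def memory_entries_that_fit(pack_chars: int, ratio: float, entry_chars: int = 400) -> int:
    """How many fixed-size memory entries fit in the memory band.

    With paragraph memories around 400 chars, a small band budget can
    fall just short of the first entry, which is how a 0.15 ratio can
    starve the band entirely while 0.25 leaves room.
    """
    band_budget = int(pack_chars * ratio)
    return band_budget // entry_chars
```

For example, with a 2660-char pack the 0.15 band is just under one entry, while 0.25 of a 6000-char pack fits three.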

## Reflection Loop Live Check — 2026-04-11

First real run of `batch-extract` across 42 captured Claude Code
interactions on Dalidou produced exactly **1 candidate**, and that
candidate was a synthetic test capture from earlier in the session
(rejected). Findings:

- The rule-based extractor in `src/atocore/memory/extractor.py` keys
  on explicit structural cues (decision headings like
  `## Decision: ...`, preference sentences, etc.). Real Claude Code
  responses are conversational and almost never contain those cues.
- This means the capture → extract half of the reflection loop is
  effectively inert against organic LLM sessions until either the
  rules are broadened (new cue families: "we chose X because...",
  "the selected approach is...", etc.) or an LLM-assisted extraction
  path is added alongside the rule-based one.
- Capture → reinforce is working correctly on live data (length-aware
  matcher verified on live paraphrase of a p04 memory).
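The inertness described above is easy to see with a cue in the same spirit as the rule extractor's structural triggers. The pattern here is illustrative, not the actual one from `src/atocore/memory/extractor.py`:

```python
import re

# Structural cue: a markdown decision heading, matched per line.
DECISION_HEADING = re.compile(r"^##\s*Decision:\s*(?P<body>.+)$", re.MULTILINE)


def rule_candidates(text: str) -> list:
    """Return decision bodies found via the structural cue."""
    return [m.group("body").strip() for m in DECISION_HEADING.finditer(text)]
```

`rule_candidates("## Decision: adopt Option B")` fires, while the conversational paraphrase "we chose Option B because..." yields nothing, which is exactly why organic sessions produced 1 candidate in 42 captures.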

Follow-up candidates:

1. ~~Extractor rule expansion~~ — Day 2 baseline showed 0% recall
   across 5 distinct miss classes; rule expansion cannot close a
   5-way miss. Deprioritized.
2. ~~LLM-assisted extractor~~ — DONE 2026-04-12. `extractor_llm.py`
   shells out to `claude -p` (Haiku, OAuth, no API key). First live
   run: 100% recall, 2.55 yield/interaction on a 20-interaction
   labeled set. First triage: 51 candidates → 16 promoted, 35
   rejected (31% accept rate). Active memories 20 → 36.
3. ~~Retrieval eval harness~~ — DONE 2026-04-11 (`scripts/retrieval_eval.py`,
   6/6 passing). Expansion to 15-20 fixtures is mini-phase Day 6.

## Extractor Scope — 2026-04-12

What the LLM-assisted extractor (`src/atocore/memory/extractor_llm.py`)
extracts from conversational Claude Code captures:

**In scope:**

- Architectural commitments (e.g. "Z-axis is engage/retract, not
  continuous position")
- Ratified decisions with project scope (e.g. "USB SSD mandatory on
  RPi for telemetry storage")
- Durable engineering facts (e.g. "telemetry data rate ~29 MB/hour")
- Working rules and adaptation patterns (e.g. "extraction stays off
  the capture hot path")
- Interface invariants (e.g. "controller-job.v1 in, run-log.v1 out;
  no firmware change needed")

**Out of scope (intentionally rejected by triage):**

- Transient roadmap / plan steps that will be stale in a week
- Operational instructions ("run this command to deploy")
- Process rules that live in DEV-LEDGER.md / AGENTS.md, not in memory
- Implementation details that are too granular (individual field names
  when the parent concept is already captured)
- Already-fixed review findings (P1/P2 that no longer apply)
- Duplicates of existing active memories with wrong project tags

**Trust model:**

- Extraction stays off the capture hot path (batch / manual only)
- All candidates land as `status=candidate`, never auto-promoted
- Human or auto-triage reviews before promotion to active
- Future direction: multi-model extraction + triage (Codex/Gemini as
  second-pass reviewers for robustness against single-model bias)
|
||||
|
||||
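The trust model amounts to a small status state machine: extraction only ever creates candidates, and only a review step may move one forward. A minimal sketch — the `Memory` class and its method names here are illustrative, not AtoCore's actual model:

```python
class Memory:
    """Hypothetical sketch of the candidate lifecycle described above."""

    # The only legal moves: a candidate becomes active or rejected,
    # and only via an explicit review decision. Terminal states allow
    # no further transitions.
    ALLOWED = {"candidate": {"active", "rejected"}}

    def __init__(self, content: str) -> None:
        self.content = content
        self.status = "candidate"  # extraction never creates active memories

    def transition(self, new_status: str) -> None:
        if new_status not in self.ALLOWED.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status


m = Memory("Z-axis is engage/retract")
m.transition("active")  # promotion after human or auto-triage review
```

The point of the guard is that nothing — not even a re-run of extraction — can silently flip a rejected or active memory back to candidate.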
## Long-Run Goal

The long-run target is:
@@ -340,6 +340,22 @@ def build_parser() -> argparse.ArgumentParser:
    p = sub.add_parser("reject")
    p.add_argument("memory_id")

    # batch-extract: fan out /interactions/{id}/extract?persist=true across
    # recent interactions. Idempotent — the extractor create_memory path
    # silently skips duplicates, so re-running is safe.
    p = sub.add_parser("batch-extract")
    p.add_argument("since", nargs="?", default="")
    p.add_argument("project", nargs="?", default="")
    p.add_argument("limit", nargs="?", type=int, default=100)
    p.add_argument("persist", nargs="?", default="true")

    # triage: interactive candidate review loop. Fetches the queue, shows
    # each candidate, accepts p/r/s (promote / reject / skip) / q (quit).
    p = sub.add_parser("triage")
    p.add_argument("memory_type", nargs="?", default="")
    p.add_argument("project", nargs="?", default="")
    p.add_argument("limit", nargs="?", type=int, default=50)

    return parser

@@ -474,10 +490,141 @@ def main() -> int:
                {},
            )
        )
    elif cmd == "batch-extract":
        print_json(run_batch_extract(args.since, args.project, args.limit, args.persist))
    elif cmd == "triage":
        return run_triage(args.memory_type, args.project, args.limit)
    else:
        return 1
    return 0


def run_batch_extract(since: str, project: str, limit: int, persist_flag: str) -> dict:
    """Fetch recent interactions and run the extractor against each one.

    Returns an aggregated summary. Safe to re-run: the server-side
    persist path catches ValueError on duplicates and the endpoint
    reports per-interaction candidate counts either way.
    """
    persist = persist_flag.lower() in {"1", "true", "yes", "y"}
    query_parts: list[str] = []
    if project:
        query_parts.append(f"project={urllib.parse.quote(project)}")
    if since:
        query_parts.append(f"since={urllib.parse.quote(since)}")
    query_parts.append(f"limit={int(limit)}")
    query = "?" + "&".join(query_parts)

    listing = request("GET", f"/interactions{query}")
    interactions = listing.get("interactions", []) if isinstance(listing, dict) else []

    processed = 0
    total_candidates = 0
    total_persisted = 0
    errors: list[dict] = []
    per_interaction: list[dict] = []

    for item in interactions:
        iid = item.get("id") or ""
        if not iid:
            continue
        try:
            result = request(
                "POST",
                f"/interactions/{urllib.parse.quote(iid, safe='')}/extract",
                {"persist": persist},
            )
        except Exception as exc:  # pragma: no cover - network errors land here
            errors.append({"interaction_id": iid, "error": str(exc)})
            continue
        processed += 1
        count = int(result.get("candidate_count", 0) or 0)
        persisted_ids = result.get("persisted_ids") or []
        total_candidates += count
        total_persisted += len(persisted_ids)
        if count:
            per_interaction.append(
                {
                    "interaction_id": iid,
                    "candidate_count": count,
                    "persisted_count": len(persisted_ids),
                    "project": item.get("project") or "",
                }
            )

    return {
        "processed": processed,
        "total_candidates": total_candidates,
        "total_persisted": total_persisted,
        "persist": persist,
        "errors": errors,
        "interactions_with_candidates": per_interaction,
    }


def run_triage(memory_type: str, project: str, limit: int) -> int:
    """Interactive review of candidate memories.

    Loads the queue once, walks through entries, prompts for
    (p)romote / (r)eject / (s)kip / (q)uit. Stateless between runs —
    re-running picks up whatever is still status=candidate.
    """
    query_parts = ["status=candidate"]
    if memory_type:
        query_parts.append(f"memory_type={urllib.parse.quote(memory_type)}")
    if project:
        query_parts.append(f"project={urllib.parse.quote(project)}")
    query_parts.append(f"limit={int(limit)}")
    listing = request("GET", "/memory?" + "&".join(query_parts))
    memories = listing.get("memories", []) if isinstance(listing, dict) else []

    if not memories:
        print_json({"status": "empty_queue", "count": 0})
        return 0

    promoted = 0
    rejected = 0
    skipped = 0
    stopped_early = False

    print(f"Triage queue: {len(memories)} candidate(s)\n", file=sys.stderr)
    for idx, mem in enumerate(memories, 1):
        mid = mem.get("id", "")
        print(
            f"[{idx}/{len(memories)}] {mem.get('memory_type', '?')} "
            f"project={mem.get('project', '')} conf={mem.get('confidence', '?')}",
            file=sys.stderr,
        )
        print(f"  id: {mid}", file=sys.stderr)
        print(f"  {mem.get('content', '')}", file=sys.stderr)
        try:
            choice = input("  (p)romote / (r)eject / (s)kip / (q)uit > ").strip().lower()
        except EOFError:
            stopped_early = True
            break
        if choice in {"q", "quit"}:
            stopped_early = True
            break
        if choice in {"p", "promote"}:
            request("POST", f"/memory/{urllib.parse.quote(mid, safe='')}/promote", {})
            promoted += 1
            print("  -> promoted", file=sys.stderr)
        elif choice in {"r", "reject"}:
            request("POST", f"/memory/{urllib.parse.quote(mid, safe='')}/reject", {})
            rejected += 1
            print("  -> rejected", file=sys.stderr)
        else:
            skipped += 1
            print("  -> skipped", file=sys.stderr)

    print_json(
        {
            "reviewed": promoted + rejected + skipped,
            "promoted": promoted,
            "rejected": rejected,
            "skipped": skipped,
            "stopped_early": stopped_early,
            # Anything not acted on — including the candidate displayed at
            # quit time — is still status=candidate in the queue.
            "remaining_in_queue": len(memories) - (promoted + rejected + skipped),
        }
    )
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
51	scripts/eval_data/candidate_queue_snapshot.jsonl	Normal file
@@ -0,0 +1,51 @@
{"id": "0dd85386-cace-4f9a-9098-c6732f3c64fa", "type": "project", "project": "atocore", "confidence": 0.5, "content": "AtoCore roadmap: (1) extractor improvement, (2) harness expansion, (3) Wave 2 ingestion, (4) OpenClaw finish; steps 1+2 are current mini-phase"}
{"id": "8939b875-152c-4c90-8614-3cfdc64cd1d6", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "AtoCore is FastAPI (Python 3.12, SQLite + ChromaDB) on Dalidou home server (dalidou:8100), repo C:\\Users\\antoi\\ATOCore, data /srv/storage/atocore/, ingests Obsidian vault + Google Drive into vector memory system."}
{"id": "93e37d2a-b512-4a97-b230-e64ac913d087", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "Deploy AtoCore: git push origin main, then ssh papa@dalidou and run /srv/storage/atocore/app/deploy/dalidou/deploy.sh"}
{"id": "4b82fe01-4393-464a-b935-9ad5d112d3d8", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Do not add memory extraction to interaction capture hot path; keep extraction as separate batch/manual step. Reason: latency and queue noise before review rhythm is comfortable."}
{"id": "c873ec00-063e-488c-ad32-1233290a3feb", "type": "project", "project": "atocore", "confidence": 0.5, "content": "As of 2026-04-11, approved roadmap in order: observe reinforcement, batch extraction, candidate triage, off-Dalidou backup, retrieval quality review."}
{"id": "665cdd27-0057-4e73-82f5-5d4f47189b5d", "type": "project", "project": "atocore", "confidence": 0.5, "content": "AtoCore adopts DEV-LEDGER.md as shared operating memory with stable headers; updated at session boundaries"}
{"id": "5f89c51d-7e8b-4fb9-830d-a35bb649f9f7", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Codex branches for AtoCore fork from main (never orphan); use naming pattern codex/<topic>"}
{"id": "25ac367c-8bbe-4ba4-8d8e-d533db33f2d9", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "In AtoCore, Claude builds and Codex audits; never work in parallel on same files"}
{"id": "89446ebe-fd42-4177-80db-3657bc41d048", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "In AtoCore, P1-severity findings in DEV-LEDGER.md block further main commits until acknowledged"}
{"id": "1f077e98-f945-4480-96ab-110b0671ebc6", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Every AtoCore session appends to DEV-LEDGER.md Session Log and updates Orientation before ending"}
{"id": "89f60018-c23b-4b2f-80ca-e6f7d02c5cd3", "type": "preference", "project": "atocore", "confidence": 0.5, "content": "User prefers receiving standalone testing prompts they can paste into Claude Code on target deployments rather than having the assistant run tests directly."}
{"id": "2f69a6ed-6de2-4565-87df-1ea3e8c42963", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "USB SSD on RPi is mandatory for polishing telemetry storage; must be independent of network for data integrity during runs."}
{"id": "6bcaebde-9e45-4de5-a220-65d9c4cd451e", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Use Tailscale mesh for RPi remote access to provide SSH, file transfer, and NAT traversal without port forwarding."}
{"id": "82f17880-92da-485e-a24a-0599ab1836e7", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Auto-sync telemetry data via rsync over Tailscale after runs complete; fire-and-forget pattern with automatic retry on network interruption."}
{"id": "2dd36f74-db47-4c72-a185-fec025d07d4f", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Real-time telemetry monitoring should target 10 Hz downsampling; full 100 Hz streaming over network is not necessary."}
{"id": "7519d82b-8065-41f0-812e-9c1a3573d7b9", "type": "knowledge", "project": "p06-polisher", "confidence": 0.5, "content": "Polishing telemetry data rate is approximately 29 MB per hour (100 Hz × 20 channels × 4 bytes = 8 KB/s)."}
{"id": "78678162-5754-478b-b1fc-e25f22e0ee03", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Machine spec (shareable) + Atomaste spec (internal) separate concerns. Machine spec hides program generation as 'separate scope' to protect IP/business strategy."}
{"id": "6657b4ae-d4ec-4fec-a66f-2975cdb10d13", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Firmware interface contract is invariant: controller-job.v1 input, run-log.v1 + telemetry output. No firmware changes needed regardless of program generation implementation."}
{"id": "6d6f4fe9-73e5-449f-a802-6dc0a974f87b", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Atomaste sim spec documents forward/return paths, calibration model (Preston k), translation loss, and service/IP strategy—details hidden from shareable machine spec."}
{"id": "932f38df-58f3-49c2-9968-8d422dc54b42", "type": "project", "project": "", "confidence": 0.5, "content": "USB SSD mandatory for storage (not SD card); directory structure /data/runs/{id}/, /data/manual/{id}/; status.json for machine state"}
{"id": "2b3178e8-fe38-4338-b2b0-75a01da18cea", "type": "project", "project": "", "confidence": 0.5, "content": "RPi joins Tailscale mesh for remote access over SSH VPN; no public IP or port forwarding; fully offline operation"}
{"id": "254c394d-3f80-4b34-a891-9f1cbfec74d7", "type": "project", "project": "", "confidence": 0.5, "content": "Data synchronization via rsync over Tailscale, failure-tolerant and non-blocking; USB stick as manual fallback"}
{"id": "ee626650-1ee0-439c-85c9-6d32a876f239", "type": "project", "project": "", "confidence": 0.5, "content": "Machine design principle: works fully offline and independently; network connection is for remote access only"}
{"id": "34add99d-8d2e-4586-b002-fc7b7d22bcb3", "type": "project", "project": "", "confidence": 0.5, "content": "No cloud, no real-time streaming, no remote control features in design scope"}
{"id": "993e0afe-9910-4984-b608-f5e9de7c0453", "type": "project", "project": "atocore", "confidence": 0.5, "content": "P1: Reflection loop integration incomplete—extraction remains manual (POST /interactions/{id}/extract), not auto-triggered with reinforcement. Live capture won't auto-populate candidate review queue."}
{"id": "bdf488d7-9200-441e-afbf-5335020ea78b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "P1: Project memories excluded from context injection; build_context() requests [\"identity\", \"preference\"] only. Reinforcement signal doesn't reach assembled context packs."}
{"id": "188197af-a61d-4616-9e39-712aeaaadf61", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Current batch-extract rules produce only 1 candidate from 42 real captures. Extractor needs conversational-cue detection or LLM-assisted path to improve yield."}
{"id": "acffcaa4-5966-4ec1-a0b2-3b8dcebe75bd", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Next priority: extractor rule expansion (cheapest validation of reflection loop), then Wave 2 trusted operational ingestion (master-plan priority). Defer retrieval eval harness focus."}
{"id": "1b44a886-a5af-4426-bf10-a92baf3a6502", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "Alias canonicalization fix (resolve_project_name() boundary) is consistently applied across project state, memories, interactions, and context lookup. Code review approved directionally."}
{"id": "e8f4e704-367b-4759-b20c-da0ccf06cf7d", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Machine capabilities now define z_type: engage_retract and cam_type: mechanical_with_encoder instead of actuator-driven setpoints."}
{"id": "ab2b607c-52b1-405f-a874-c6078393c21c", "type": "knowledge", "project": "", "confidence": 0.5, "content": "Codex is an audit agent; communicate with it via markdown prompts with numbered steps; it updates findings via commits to codex/* branches or direct messages."}
{"id": "5a5fd29d-291f-4e22-88fe-825cf55f745a", "type": "preference", "project": "", "confidence": 0.5, "content": "Audit-first workflow recommended: have codex audit DEV-LEDGER.md and recent commits before execution; validates round-trip, catches errors early."}
{"id": "4c238106-017e-4283-99a1-639497b6ddde", "type": "knowledge", "project": "", "confidence": 0.5, "content": "DEV-LEDGER.md at repo root is the shared coordination document with Orientation, Active Plan, and Open Review Findings sections."}
{"id": "83aed988-4257-4220-b612-6c725d6cd95a", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Roadmap: Extractor improvement → Harness expansion → Wave 2 trusted operational ingestion → Finish OpenClaw integration (in that order)"}
{"id": "95d87d1a-5daa-414d-95ff-a344a62e0b6b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Phase 1 (Extractor): eval-driven loop—label captures, improve rules/add LLM mode, measure yield & FP, stop when queue reviewable (not coverage metrics)"}
{"id": "7aafb588-51b0-4536-a414-ebaaea924b98", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Phases 1 & 2 (Extractor + Harness) are a mini-phase; without harness, extractor improvements are blind edits"}
{"id": "aa50c51a-27d7-4db9-b7a3-7ca75dba2118", "type": "knowledge", "project": "", "confidence": 0.5, "content": "Dalidou stores Claude Code interactions via a Stop hook that fires after each turn and POSTs to http://dalidou:8100/interactions with client=claude-code parameter"}
{"id": "5951108b-3a5e-49d0-9308-dfab449664d3", "type": "adaptation", "project": "", "confidence": 0.5, "content": "Interaction capture system is passive and automatic; no manual action required, interactions accumulate automatically during normal Claude Code usage"}
{"id": "9d2cbbe9-cf2e-4aab-9cb8-c4951da70826", "type": "project", "project": "", "confidence": 0.5, "content": "Session Log/Ledger system tracks work state across sessions so future sessions immediately know what is true and what is next; phases marked by git SHAs."}
{"id": "db88eecf-e31a-4fee-b07d-0b51db7e315e", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore uses multi-model coordination: Claude and codex share DEV-LEDGER.md (current state / active plan / P1+P2 findings / recent decisions / commit log) read at session start, appended at session end"}
{"id": "8748f071-ff28-47a6-8504-65ca30a8336a", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore starts with manual-event-loop (/audit or /status prompts) using DEV-LEDGER.md before upgrading to automated git hooks/CI review"}
{"id": "f9210883-67a8-4dae-9f27-6b5ae7bd8a6b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore development involves coordinating between Claude and codex models with shared plan/review strategy and counter-validation to improve system quality"}
{"id": "85f008b9-2d6d-49ad-81a1-e254dac2a2ac", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Z-axis is a binary engage/retract mechanism (z_engaged bool), not continuous position control; confirmation timeout z_engage_timeout_s required."}
{"id": "0cc417ed-ac38-4231-9786-a9582ac6a60f", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Cam amplitude and offset are mechanically set by operator and read via encoders; no actuators control them, controller receives encoder telemetry only."}
{"id": "2e001aaf-0c5c-4547-9b96-ebc4172b258d", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Cam parameters in controller are expected_cam_amplitude_deg and expected_cam_offset_deg (read-only reference for verification), not command setpoints."}
{"id": "47778126-b0cf-41d9-9e21-f2418f53e792", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Manual mode UI displays cam encoder readings (cam_amplitude_deg, cam_offset_deg) as read-only for operator verification of mechanical setting."}
{"id": "410e4a70-ae12-4de2-8f31-071ffee3cad4", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Manual session log records cam_setting measured at session start; run-log segment actual block includes cam_amplitude_deg_mean and cam_offset_deg_mean."}
{"id": "e94f94f0-3538-40dd-aef2-0189eacc7eb7", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "AtoCore deployments to dalidou use the script /srv/storage/atocore/app/deploy/dalidou/deploy.sh instead of manual docker commands"}
{"id": "23fa6fdf-cfb9-4850-ad04-3ea56551c30a", "type": "project", "project": "", "confidence": 0.5, "content": "Retrieval/extraction evaluation follows 8-day mini-phase plan with hard gates to prevent scope drift. Preflight checks must validate git SHAs, baselines, and fixture stability before coding."}
{"id": "3e1fad28-031b-4670-a9d0-0af2e8ba1361", "type": "project", "project": "", "confidence": 0.5, "content": "Day 1: Create labeled extractor eval set from 30 captures (10 zero-candidate, 10 single-candidate, 10 ambiguous) with metadata; create scoring tool to measure precision/recall."}
{"id": "d49378a4-d03c-4730-be87-f0fcb2d199db", "type": "project", "project": "", "confidence": 0.5, "content": "Day 2: Measure current extractor against labeled set, recording yield, true/false positives, and false negatives by pattern."}
145	scripts/eval_data/extractor_labels_2026-04-11.json	Normal file
@@ -0,0 +1,145 @@
{
  "version": "0.1",
  "frozen_at": "2026-04-11",
  "snapshot_file": "scripts/eval_data/interactions_snapshot_2026-04-11.json",
  "labeled_count": 20,
  "plan_deviation": "Codex's plan called for 30 labeled interactions (10 zero / 10 plausible / 10 ambiguous). Actual corpus is heavily skewed toward instructional/status content; after reading 20 drawn by length-stratified random sample, the honest positive rate is ~25% (5/20). Labeling more would mostly add zeros; the Day 2 measurement is not bottlenecked on sample size.",
  "positive_count": 5,
  "labels": [
    {
      "id": "ab239158-d6ac-4c51-b6e4-dd4ccea384a2",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Instructional deploy guidance. No durable claim."
    },
    {
      "id": "da153f2a-b20a-4dee-8c72-431ebb71f08c",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "'Deploy still in progress.' Pure status."
    },
    {
      "id": "7d8371ee-c6d3-4dfe-a7b0-2d091f075c15",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Git command walkthrough. No durable claim."
    },
    {
      "id": "14bf3f90-e318-466e-81ac-d35522741ba5",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Ledger status update. Transient fact, not a durable memory candidate."
    },
    {
      "id": "8f855235-c38d-4c27-9f2b-8530ebe1a2d8",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Short-term recommendation ('merge to main and deploy'), not a standing decision."
    },
    {
      "id": "04a96eb5-cd00-4e9f-9252-b2cc919000a4",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Dev server config table. Operational detail, not a memory."
    },
    {
      "id": "79d606ed-8981-454a-83af-c25226b1b65c",
      "expected_count": 1,
      "expected_type": "adaptation",
      "expected_project": "",
      "expected_snippet": "shared DEV-LEDGER as operating memory",
      "miss_class": "recommendation_prose",
      "notes": "A recommendation that later became a ratified decision. Rule extractor would need a 'simplest version that could work today' / 'I'd start with' cue class."
    },
    {
      "id": "a6b0d279-c564-4bce-a703-e476f4a148ad",
      "expected_count": 2,
      "expected_type": "project",
      "expected_project": "p06-polisher",
      "expected_snippet": "z_engaged bool; cam amplitude set mechanically and read by encoders",
      "miss_class": "architectural_change_summary",
      "notes": "Two durable architectural facts about the polisher machine (Z-axis is engage/retract, cam is read-only). Extractor would need to recognize 'A is now B' / 'X removed, Y added' patterns."
    },
    {
      "id": "4e00e398-2e89-4653-8ee5-3f65c7f4d2d3",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Clarification question to user."
    },
    {
      "id": "a6a7816a-7590-4616-84f4-49d9054c2a91",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Instructional response offering two next moves."
    },
    {
      "id": "03527502-316a-4a3e-989c-00719392c7d1",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Troubleshooting a paste failure. Ephemeral."
    },
    {
      "id": "1fff59fc-545f-42df-9dd1-a0e6dec1b7ee",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Agreement + follow-up question. No durable claim."
    },
    {
      "id": "eb65dc18-0030-4720-ace7-f55af9df719d",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Explanation of how the capture hook works. Instructional."
    },
    {
      "id": "52c8c0f3-32fb-4b48-9065-73c778a08417",
      "expected_count": 1,
      "expected_type": "project",
      "expected_project": "p06-polisher",
      "expected_snippet": "USB SSD mandatory on RPi; Tailscale for remote access",
      "miss_class": "spec_update_announcement",
      "notes": "Concrete architectural commitments just added to the polisher spec. Phrased as '§17.1 Local Storage - USB SSD mandatory, not SD card.' The '§' section markers could be a new cue."
    },
    {
      "id": "32d40414-15af-47ee-944b-2cceae9574b8",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Session recap. Historical summary, not a durable memory."
    },
    {
      "id": "b6d2cdfc-37fb-459a-96bd-caefb9beaab4",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Deployment prompt for Dalidou. Operational, not a memory."
    },
    {
      "id": "ee03d823-931b-4d4e-9258-88b4ed5eeb07",
      "expected_count": 2,
      "expected_type": "knowledge",
      "expected_project": "p06-polisher",
      "expected_snippet": "USB SSD is non-negotiable for local storage; Tailscale mesh for SSH/file transfer",
      "miss_class": "layered_recommendation",
      "notes": "Layered infra recommendation with 'non-negotiable' / 'strongly recommended' strength markers. The 'non-negotiable' token could be a new cue class."
    },
    {
      "id": "dd234d9f-0d1c-47e8-b01c-eebcb568c7e7",
      "expected_count": 1,
      "expected_type": "project",
      "expected_project": "p06-polisher",
      "expected_snippet": "interface contract is identical regardless of who generates the programs; machine is a standalone box",
      "miss_class": "alignment_assertion",
      "notes": "Architectural invariant assertion. '**Alignment verified**' / 'nothing changes for X' style. Likely too subtle for rule matching without LLM assistance."
    },
    {
      "id": "1f95891a-cf37-400e-9d68-4fad8e04dcbb",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Huge session handoff prompt. Informational only."
    },
    {
      "id": "5580950f-d010-4544-be4b-b3071271a698",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Ledger schema sketch. Structural design proposal, later ratified — but the same idea was already captured as a ratified decision in the recent decisions section, so not worth re-extracting from this conversational form."
    }
  ]
}
518	scripts/eval_data/extractor_llm_baseline_2026-04-11.json	Normal file
@@ -0,0 +1,518 @@
{
  "summary": {
    "total": 20,
    "exact_match": 6,
    "positive_expected": 5,
    "total_expected_candidates": 7,
    "total_actual_candidates": 51,
    "yield_rate": 2.55,
    "recall": 1.0,
    "precision": 0.357,
    "false_positive_interactions": 9,
    "false_negative_interactions": 0,
    "miss_classes": {},
    "mode": "llm"
  },
  "results": [
    {
      "id": "ab239158-d6ac-4c51-b6e4-dd4ccea384a2",
      "expected_count": 0,
      "actual_count": 1,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Instructional deploy guidance. No durable claim.",
      "actual_candidates": [
        {
          "memory_type": "knowledge",
          "content": "AtoCore deployments to dalidou use the script /srv/storage/atocore/app/deploy/dalidou/deploy.sh instead of manual docker commands",
          "project": "atocore",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "da153f2a-b20a-4dee-8c72-431ebb71f08c",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "'Deploy still in progress.' Pure status.",
      "actual_candidates": []
    },
    {
      "id": "7d8371ee-c6d3-4dfe-a7b0-2d091f075c15",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "Git command walkthrough. No durable claim.",
      "actual_candidates": []
    },
    {
      "id": "14bf3f90-e318-466e-81ac-d35522741ba5",
      "expected_count": 0,
      "actual_count": 4,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Ledger status update. Transient fact, not a durable memory candidate.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "Retrieval/extraction evaluation follows 8-day mini-phase plan with hard gates to prevent scope drift. Preflight checks must validate git SHAs, baselines, and fixture stability before coding.",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Day 1: Create labeled extractor eval set from 30 captures (10 zero-candidate, 10 single-candidate, 10 ambiguous) with metadata; create scoring tool to measure precision/recall.",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Day 2: Measure current extractor against labeled set, recording yield, true/false positives, and false negatives by pattern.",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Session Log/Ledger system tracks work state across sessions so future sessions immediately know what is true and what is next; phases marked by git SHAs.",
          "project": "",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "8f855235-c38d-4c27-9f2b-8530ebe1a2d8",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "Short-term recommendation ('merge to main and deploy'), not a standing decision.",
      "actual_candidates": []
    },
    {
      "id": "04a96eb5-cd00-4e9f-9252-b2cc919000a4",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "Dev server config table. Operational detail, not a memory.",
      "actual_candidates": []
    },
    {
      "id": "79d606ed-8981-454a-83af-c25226b1b65c",
      "expected_count": 1,
      "actual_count": 3,
      "ok": false,
      "miss_class": "recommendation_prose",
      "notes": "A recommendation that later became a ratified decision. Rule extractor would need a 'simplest version that could work today' / 'I'd start with' cue class.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "atocore uses multi-model coordination: Claude and codex share DEV-LEDGER.md (current state / active plan / P1+P2 findings / recent decisions / commit log) read at session start, appended at session end",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "atocore starts with manual-event-loop (/audit or /status prompts) using DEV-LEDGER.md before upgrading to automated git hooks/CI review",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "atocore development involves coordinating between Claude and codex models with shared plan/review strategy and counter-validation to improve system quality",
          "project": "atocore",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "a6b0d279-c564-4bce-a703-e476f4a148ad",
      "expected_count": 2,
      "actual_count": 6,
      "ok": false,
      "miss_class": "architectural_change_summary",
      "notes": "Two durable architectural facts about the polisher machine (Z-axis is engage/retract, cam is read-only). Extractor would need to recognize 'A is now B' / 'X removed, Y added' patterns.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "Z-axis is a binary engage/retract mechanism (z_engaged bool), not continuous position control; confirmation timeout z_engage_timeout_s required.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Cam amplitude and offset are mechanically set by operator and read via encoders; no actuators control them, controller receives encoder telemetry only.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Cam parameters in controller are expected_cam_amplitude_deg and expected_cam_offset_deg (read-only reference for verification), not command setpoints.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Manual mode UI displays cam encoder readings (cam_amplitude_deg, cam_offset_deg) as read-only for operator verification of mechanical setting.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Manual session log records cam_setting measured at session start; run-log segment actual block includes cam_amplitude_deg_mean and cam_offset_deg_mean.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Machine capabilities now define z_type: engage_retract and cam_type: mechanical_with_encoder instead of actuator-driven setpoints.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "4e00e398-2e89-4653-8ee5-3f65c7f4d2d3",
|
||||
"expected_count": 0,
|
||||
"actual_count": 0,
|
||||
"ok": true,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Clarification question to user.",
|
||||
"actual_candidates": []
|
||||
},
|
||||
{
|
||||
"id": "a6a7816a-7590-4616-84f4-49d9054c2a91",
|
||||
"expected_count": 0,
|
||||
"actual_count": 3,
|
||||
"ok": false,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Instructional response offering two next moves.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "knowledge",
|
||||
"content": "Codex is an audit agent; communicate with it via markdown prompts with numbered steps; it updates findings via commits to codex/* branches or direct messages.",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "preference",
|
||||
"content": "Audit-first workflow recommended: have codex audit DEV-LEDGER.md and recent commits before execution; validates round-trip, catches errors early.",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "knowledge",
|
||||
"content": "DEV-LEDGER.md at repo root is the shared coordination document with Orientation, Active Plan, and Open Review Findings sections.",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "03527502-316a-4a3e-989c-00719392c7d1",
|
||||
"expected_count": 0,
|
||||
"actual_count": 0,
|
||||
"ok": true,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Troubleshooting a paste failure. Ephemeral.",
|
||||
"actual_candidates": []
|
||||
},
|
||||
{
|
||||
"id": "1fff59fc-545f-42df-9dd1-a0e6dec1b7ee",
|
||||
"expected_count": 0,
|
||||
"actual_count": 3,
|
||||
"ok": false,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Agreement + follow-up question. No durable claim.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Roadmap: Extractor improvement → Harness expansion → Wave 2 trusted operational ingestion → Finish OpenClaw integration (in that order)",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Phase 1 (Extractor): eval-driven loop—label captures, improve rules/add LLM mode, measure yield & FP, stop when queue reviewable (not coverage metrics)",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Phases 1 & 2 (Extractor + Harness) are a mini-phase; without harness, extractor improvements are blind edits",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "eb65dc18-0030-4720-ace7-f55af9df719d",
|
||||
"expected_count": 0,
|
||||
"actual_count": 2,
|
||||
"ok": false,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Explanation of how the capture hook works. Instructional.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "knowledge",
|
||||
"content": "Dalidou stores Claude Code interactions via a Stop hook that fires after each turn and POSTs to http://dalidou:8100/interactions with client=claude-code parameter",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "adaptation",
|
||||
"content": "Interaction capture system is passive and automatic; no manual action required, interactions accumulate automatically during normal Claude Code usage",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "52c8c0f3-32fb-4b48-9065-73c778a08417",
|
||||
"expected_count": 1,
|
||||
"actual_count": 5,
|
||||
"ok": false,
|
||||
"miss_class": "spec_update_announcement",
|
||||
"notes": "Concrete architectural commitments just added to the polisher spec. Phrased as '§17.1 Local Storage - USB SSD mandatory, not SD card.' The '§' section markers could be a new cue.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "USB SSD mandatory for storage (not SD card); directory structure /data/runs/{id}/, /data/manual/{id}/; status.json for machine state",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "RPi joins Tailscale mesh for remote access over SSH VPN; no public IP or port forwarding; fully offline operation",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Data synchronization via rsync over Tailscale, failure-tolerant and non-blocking; USB stick as manual fallback",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Machine design principle: works fully offline and independently; network connection is for remote access only",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "No cloud, no real-time streaming, no remote control features in design scope",
|
||||
"project": "",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "32d40414-15af-47ee-944b-2cceae9574b8",
|
||||
"expected_count": 0,
|
||||
"actual_count": 5,
|
||||
"ok": false,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Session recap. Historical summary, not a durable memory.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "P1: Reflection loop integration incomplete—extraction remains manual (POST /interactions/{id}/extract), not auto-triggered with reinforcement. Live capture won't auto-populate candidate review queue.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "P1: Project memories excluded from context injection; build_context() requests [\"identity\", \"preference\"] only. Reinforcement signal doesn't reach assembled context packs.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Current batch-extract rules produce only 1 candidate from 42 real captures. Extractor needs conversational-cue detection or LLM-assisted path to improve yield.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Next priority: extractor rule expansion (cheapest validation of reflection loop), then Wave 2 trusted operational ingestion (master-plan priority). Defer retrieval eval harness focus.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "knowledge",
|
||||
"content": "Alias canonicalization fix (resolve_project_name() boundary) is consistently applied across project state, memories, interactions, and context lookup. Code review approved directionally.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "b6d2cdfc-37fb-459a-96bd-caefb9beaab4",
|
||||
"expected_count": 0,
|
||||
"actual_count": 1,
|
||||
"ok": false,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Deployment prompt for Dalidou. Operational, not a memory.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "preference",
|
||||
"content": "User prefers receiving standalone testing prompts they can paste into Claude Code on target deployments rather than having the assistant run tests directly.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "ee03d823-931b-4d4e-9258-88b4ed5eeb07",
|
||||
"expected_count": 2,
|
||||
"actual_count": 5,
|
||||
"ok": false,
|
||||
"miss_class": "layered_recommendation",
|
||||
"notes": "Layered infra recommendation with 'non-negotiable' / 'strongly recommended' strength markers. The 'non-negotiable' token could be a new cue class.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "USB SSD on RPi is mandatory for polishing telemetry storage; must be independent of network for data integrity during runs.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Use Tailscale mesh for RPi remote access to provide SSH, file transfer, and NAT traversal without port forwarding.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Auto-sync telemetry data via rsync over Tailscale after runs complete; fire-and-forget pattern with automatic retry on network interruption.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Real-time telemetry monitoring should target 10 Hz downsampling; full 100 Hz streaming over network is not necessary.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "knowledge",
|
||||
"content": "Polishing telemetry data rate is approximately 29 MB per hour (100 Hz × 20 channels × 4 bytes = 8 KB/s).",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "dd234d9f-0d1c-47e8-b01c-eebcb568c7e7",
|
||||
"expected_count": 1,
|
||||
"actual_count": 3,
|
||||
"ok": false,
|
||||
"miss_class": "alignment_assertion",
|
||||
"notes": "Architectural invariant assertion. '**Alignment verified**' / 'nothing changes for X' style. Likely too subtle for rule matching without LLM assistance.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Machine spec (shareable) + Atomaste spec (internal) separate concerns. Machine spec hides program generation as 'separate scope' to protect IP/business strategy.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Firmware interface contract is invariant: controller-job.v1 input, run-log.v1 + telemetry output. No firmware changes needed regardless of program generation implementation.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "Atomaste sim spec documents forward/return paths, calibration model (Preston k), translation loss, and service/IP strategy—details hidden from shareable machine spec.",
|
||||
"project": "p06-polisher",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "1f95891a-cf37-400e-9d68-4fad8e04dcbb",
|
||||
"expected_count": 0,
|
||||
"actual_count": 4,
|
||||
"ok": false,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Huge session handoff prompt. Informational only.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "knowledge",
|
||||
"content": "AtoCore is FastAPI (Python 3.12, SQLite + ChromaDB) on Dalidou home server (dalidou:8100), repo C:\\Users\\antoi\\ATOCore, data /srv/storage/atocore/, ingests Obsidian vault + Google Drive into vector memory system.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "knowledge",
|
||||
"content": "Deploy AtoCore: git push origin main, then ssh papa@dalidou and run /srv/storage/atocore/app/deploy/dalidou/deploy.sh",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "adaptation",
|
||||
"content": "Do not add memory extraction to interaction capture hot path; keep extraction as separate batch/manual step. Reason: latency and queue noise before review rhythm is comfortable.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "As of 2026-04-11, approved roadmap in order: observe reinforcement, batch extraction, candidate triage, off-Dalidou backup, retrieval quality review.",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "5580950f-d010-4544-be4b-b3071271a698",
|
||||
"expected_count": 0,
|
||||
"actual_count": 6,
|
||||
"ok": false,
|
||||
"miss_class": "n/a",
|
||||
"notes": "Ledger schema sketch. Structural design proposal, later ratified — but the same idea was already captured as a ratified decision in the recent decisions section, so not worth re-extracting from this conversational form.",
|
||||
"actual_candidates": [
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "AtoCore adopts DEV-LEDGER.md as shared operating memory with stable headers; updated at session boundaries",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "adaptation",
|
||||
"content": "Codex branches for AtoCore fork from main (never orphan); use naming pattern codex/<topic>",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "adaptation",
|
||||
"content": "In AtoCore, Claude builds and Codex audits; never work in parallel on same files",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "adaptation",
|
||||
"content": "In AtoCore, P1-severity findings in DEV-LEDGER.md block further main commits until acknowledged",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "adaptation",
|
||||
"content": "Every AtoCore session appends to DEV-LEDGER.md Session Log and updates Orientation before ending",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
},
|
||||
{
|
||||
"memory_type": "project",
|
||||
"content": "AtoCore roadmap: (1) extractor improvement, (2) harness expansion, (3) Wave 2 ingestion, (4) OpenClaw finish; steps 1+2 are current mini-phase",
|
||||
"project": "atocore",
|
||||
"rule": "llm_extraction"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
1 scripts/eval_data/interactions_snapshot_2026-04-11.json Normal file
File diff suppressed because one or more lines are too long

1 scripts/eval_data/triage_verdict_2026-04-12.json Normal file
@@ -0,0 +1 @@
{"promote": ["4b82fe01-4393-464a-b935-9ad5d112d3d8", "665cdd27-0057-4e73-82f5-5d4f47189b5d", "5f89c51d-7e8b-4fb9-830d-a35bb649f9f7", "25ac367c-8bbe-4ba4-8d8e-d533db33f2d9", "2f69a6ed-6de2-4565-87df-1ea3e8c42963", "6bcaebde-9e45-4de5-a220-65d9c4cd451e", "2dd36f74-db47-4c72-a185-fec025d07d4f", "7519d82b-8065-41f0-812e-9c1a3573d7b9", "78678162-5754-478b-b1fc-e25f22e0ee03", "6657b4ae-d4ec-4fec-a66f-2975cdb10d13", "ee626650-1ee0-439c-85c9-6d32a876f239", "1b44a886-a5af-4426-bf10-a92baf3a6502", "aa50c51a-27d7-4db9-b7a3-7ca75dba2118", "5951108b-3a5e-49d0-9308-dfab449664d3", "85f008b9-2d6d-49ad-81a1-e254dac2a2ac", "0cc417ed-ac38-4231-9786-a9582ac6a60f"], "reject": ["0dd85386-cace-4f9a-9098-c6732f3c64fa", "8939b875-152c-4c90-8614-3cfdc64cd1d6", "93e37d2a-b512-4a97-b230-e64ac913d087", "c873ec00-063e-488c-ad32-1233290a3feb", "89446ebe-fd42-4177-80db-3657bc41d048", "1f077e98-f945-4480-96ab-110b0671ebc6", "89f60018-c23b-4b2f-80ca-e6f7d02c5cd3", "82f17880-92da-485e-a24a-0599ab1836e7", "6d6f4fe9-73e5-449f-a802-6dc0a974f87b", "932f38df-58f3-49c2-9968-8d422dc54b42", "2b3178e8-fe38-4338-b2b0-75a01da18cea", "254c394d-3f80-4b34-a891-9f1cbfec74d7", "34add99d-8d2e-4586-b002-fc7b7d22bcb3", "993e0afe-9910-4984-b608-f5e9de7c0453", "bdf488d7-9200-441e-afbf-5335020ea78b", "188197af-a61d-4616-9e39-712aeaaadf61", "acffcaa4-5966-4ec1-a0b2-3b8dcebe75bd", "e8f4e704-367b-4759-b20c-da0ccf06cf7d", "ab2b607c-52b1-405f-a874-c6078393c21c", "5a5fd29d-291f-4e22-88fe-825cf55f745a", "4c238106-017e-4283-99a1-639497b6ddde", "83aed988-4257-4220-b612-6c725d6cd95a", "95d87d1a-5daa-414d-95ff-a344a62e0b6b", "7aafb588-51b0-4536-a414-ebaaea924b98", "9d2cbbe9-cf2e-4aab-9cb8-c4951da70826", "db88eecf-e31a-4fee-b07d-0b51db7e315e", "8748f071-ff28-47a6-8504-65ca30a8336a", "f9210883-67a8-4dae-9f27-6b5ae7bd8a6b", "2e001aaf-0c5c-4547-9b96-ebc4172b258d", "47778126-b0cf-41d9-9e21-f2418f53e792", "410e4a70-ae12-4de2-8f31-071ffee3cad4", "e94f94f0-3538-40dd-aef2-0189eacc7eb7", "23fa6fdf-cfb9-4850-ad04-3ea56551c30a", "3e1fad28-031b-4670-a9d0-0af2e8ba1361", "d49378a4-d03c-4730-be87-f0fcb2d199db"]}

274 scripts/extractor_eval.py Normal file
@@ -0,0 +1,274 @@
"""Extractor eval runner — scores the rule-based extractor against a
labeled interaction corpus.

Pulls full interaction content from a frozen snapshot, runs each through
``extract_candidates_from_interaction``, and compares the output to the
expected counts from a labels file. Produces a per-label scorecard plus
aggregate precision / recall / yield numbers.

This harness deliberately stays file-based: snapshot + labels + this
runner. No Dalidou HTTP dependency once the snapshot is frozen, so the
eval is reproducible run-to-run even as live captures drift.

Usage:

    python scripts/extractor_eval.py            # human report
    python scripts/extractor_eval.py --json     # machine-readable
    python scripts/extractor_eval.py \\
        --snapshot scripts/eval_data/interactions_snapshot_2026-04-11.json \\
        --labels scripts/eval_data/extractor_labels_2026-04-11.json
"""

from __future__ import annotations

import argparse
import io
import json
import sys
from dataclasses import dataclass, field
from pathlib import Path

# Force UTF-8 on stdout so real LLM output (arrows, em-dashes, CJK)
# doesn't crash the human report on Windows cp1252 consoles.
if hasattr(sys.stdout, "buffer"):
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8", errors="replace", line_buffering=True)

# Make src/ importable without requiring an install.
_REPO_ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(_REPO_ROOT / "src"))

from atocore.interactions.service import Interaction  # noqa: E402
from atocore.memory.extractor import extract_candidates_from_interaction  # noqa: E402
from atocore.memory.extractor_llm import extract_candidates_llm  # noqa: E402

DEFAULT_SNAPSHOT = _REPO_ROOT / "scripts" / "eval_data" / "interactions_snapshot_2026-04-11.json"
DEFAULT_LABELS = _REPO_ROOT / "scripts" / "eval_data" / "extractor_labels_2026-04-11.json"


@dataclass
class LabelResult:
    id: str
    expected_count: int
    actual_count: int
    ok: bool
    miss_class: str
    notes: str
    actual_candidates: list[dict] = field(default_factory=list)


def load_snapshot(path: Path) -> dict[str, dict]:
    data = json.loads(path.read_text(encoding="utf-8"))
    return {item["id"]: item for item in data.get("interactions", [])}


def load_labels(path: Path) -> dict:
    return json.loads(path.read_text(encoding="utf-8"))


def interaction_from_snapshot(snap: dict) -> Interaction:
    return Interaction(
        id=snap["id"],
        prompt=snap.get("prompt", "") or "",
        response=snap.get("response", "") or "",
        response_summary="",
        project=snap.get("project", "") or "",
        client=snap.get("client", "") or "",
        session_id=snap.get("session_id", "") or "",
        created_at=snap.get("created_at", "") or "",
    )


def score(snapshot: dict[str, dict], labels_doc: dict, mode: str = "rule") -> list[LabelResult]:
    results: list[LabelResult] = []
    for label in labels_doc["labels"]:
        iid = label["id"]
        snap = snapshot.get(iid)
        if snap is None:
            results.append(
                LabelResult(
                    id=iid,
                    expected_count=int(label.get("expected_count", 0)),
                    actual_count=-1,
                    ok=False,
                    miss_class="not_in_snapshot",
                    notes=label.get("notes", ""),
                )
            )
            continue
        interaction = interaction_from_snapshot(snap)
        if mode == "llm":
            candidates = extract_candidates_llm(interaction)
        else:
            candidates = extract_candidates_from_interaction(interaction)
        actual_count = len(candidates)
        expected_count = int(label.get("expected_count", 0))
        results.append(
            LabelResult(
                id=iid,
                expected_count=expected_count,
                actual_count=actual_count,
                ok=(actual_count == expected_count),
                miss_class=label.get("miss_class", "n/a"),
                notes=label.get("notes", ""),
                actual_candidates=[
                    {
                        "memory_type": c.memory_type,
                        "content": c.content,
                        "project": c.project,
                        "rule": c.rule,
                    }
                    for c in candidates
                ],
            )
        )
    return results


def aggregate(results: list[LabelResult]) -> dict:
    total = len(results)
    exact_match = sum(1 for r in results if r.ok)
    true_positive = sum(1 for r in results if r.expected_count > 0 and r.actual_count > 0)
    false_positive_interactions = sum(
        1 for r in results if r.expected_count == 0 and r.actual_count > 0
    )
    false_negative_interactions = sum(
        1 for r in results if r.expected_count > 0 and r.actual_count == 0
    )
    positive_expected = sum(1 for r in results if r.expected_count > 0)
    total_expected_candidates = sum(r.expected_count for r in results)
    total_actual_candidates = sum(max(r.actual_count, 0) for r in results)
    yield_rate = total_actual_candidates / total if total else 0.0
    # Recall over interaction count that had at least one expected candidate:
    recall = true_positive / positive_expected if positive_expected else 0.0
    # Precision over interaction count that produced any candidate:
    precision_denom = true_positive + false_positive_interactions
    precision = true_positive / precision_denom if precision_denom else 0.0
    # Miss class breakdown
    miss_classes: dict[str, int] = {}
    for r in results:
        if r.expected_count > 0 and r.actual_count == 0:
            key = r.miss_class or "unlabeled"
            miss_classes[key] = miss_classes.get(key, 0) + 1
    return {
        "total": total,
        "exact_match": exact_match,
        "positive_expected": positive_expected,
        "total_expected_candidates": total_expected_candidates,
        "total_actual_candidates": total_actual_candidates,
        "yield_rate": round(yield_rate, 3),
        "recall": round(recall, 3),
        "precision": round(precision, 3),
        "false_positive_interactions": false_positive_interactions,
        "false_negative_interactions": false_negative_interactions,
        "miss_classes": miss_classes,
    }


def print_human(results: list[LabelResult], summary: dict) -> None:
    print("=== Extractor eval ===")
    print(
        f"labeled={summary['total']} "
        f"exact_match={summary['exact_match']} "
        f"positive_expected={summary['positive_expected']}"
    )
    print(
        f"yield={summary['yield_rate']} "
        f"recall={summary['recall']} "
        f"precision={summary['precision']}"
    )
    print(
        f"false_positives={summary['false_positive_interactions']} "
        f"false_negatives={summary['false_negative_interactions']}"
    )
    print()
    print("miss class breakdown (FN):")
    if summary["miss_classes"]:
        for k, v in sorted(summary["miss_classes"].items(), key=lambda kv: -kv[1]):
            print(f"  {v:3d}  {k}")
    else:
        print("  (none)")
    print()
    print("per-interaction:")
    for r in results:
        marker = "OK  " if r.ok else "MISS"
        iid_short = r.id[:8]
        print(f"  {marker} {iid_short} expected={r.expected_count} actual={r.actual_count} class={r.miss_class}")
        if r.actual_candidates:
            for c in r.actual_candidates:
                preview = (c["content"] or "")[:80]
                print(f"         [{c['memory_type']}] {preview}")


def print_json(results: list[LabelResult], summary: dict) -> None:
    payload = {
        "summary": summary,
        "results": [
            {
                "id": r.id,
                "expected_count": r.expected_count,
                "actual_count": r.actual_count,
                "ok": r.ok,
                "miss_class": r.miss_class,
                "notes": r.notes,
                "actual_candidates": r.actual_candidates,
            }
            for r in results
        ],
    }
    json.dump(payload, sys.stdout, indent=2)
    sys.stdout.write("\n")


def main() -> int:
    parser = argparse.ArgumentParser(description="AtoCore extractor eval")
    parser.add_argument("--snapshot", type=Path, default=DEFAULT_SNAPSHOT)
    parser.add_argument("--labels", type=Path, default=DEFAULT_LABELS)
    parser.add_argument("--json", action="store_true", help="emit machine-readable JSON")
    parser.add_argument(
        "--output",
        type=Path,
        default=None,
        help="write JSON result to this file (bypasses log/stdout interleaving)",
    )
    parser.add_argument(
        "--mode",
        choices=["rule", "llm"],
        default="rule",
        help="which extractor to score (default: rule)",
    )
    args = parser.parse_args()

    snapshot = load_snapshot(args.snapshot)
    labels = load_labels(args.labels)
    results = score(snapshot, labels, mode=args.mode)
    summary = aggregate(results)
    summary["mode"] = args.mode

    if args.output is not None:
        payload = {
            "summary": summary,
            "results": [
                {
                    "id": r.id,
                    "expected_count": r.expected_count,
                    "actual_count": r.actual_count,
                    "ok": r.ok,
                    "miss_class": r.miss_class,
                    "notes": r.notes,
                    "actual_candidates": r.actual_candidates,
                }
                for r in results
            ],
        }
        args.output.write_text(json.dumps(payload, indent=2, ensure_ascii=False), encoding="utf-8")
        print(f"wrote {args.output} ({summary['mode']}: recall={summary['recall']} precision={summary['precision']})")
    elif args.json:
        print_json(results, summary)
    else:
        print_human(results, summary)

    return 0 if summary["false_negative_interactions"] == 0 and summary["false_positive_interactions"] == 0 else 1


if __name__ == "__main__":
    raise SystemExit(main())
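Note that the recall and precision computed in `aggregate()` are interaction-level, not candidate-level: an interaction counts as a true positive if it was expected to yield at least one candidate and yielded at least one, regardless of how many. A minimal sketch of the same counting on a toy scorecard (the numbers are made up for illustration):

```python
# Toy scorecard: (expected_count, actual_count) per labeled interaction,
# mirroring the counting rules in extractor_eval's aggregate().
rows = [
    (1, 2),  # expected something, got something -> true positive
    (2, 0),  # expected something, got nothing   -> false negative
    (0, 3),  # expected nothing, got something   -> false positive
    (0, 0),  # expected nothing, got nothing     -> not counted either way
]

tp = sum(1 for exp, act in rows if exp > 0 and act > 0)
fp = sum(1 for exp, act in rows if exp == 0 and act > 0)
positive_expected = sum(1 for exp, act in rows if exp > 0)

recall = tp / positive_expected   # 1 / 2 = 0.5
precision = tp / (tp + fp)        # 1 / 2 = 0.5

print(recall, precision)  # → 0.5 0.5
```

This is why an interaction that over-extracts (expected 1, got 3) still counts toward recall: the harness tracks exact-match and candidate totals separately to catch that case.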
89 scripts/persist_llm_candidates.py Normal file
@@ -0,0 +1,89 @@
"""Persist LLM-extracted candidates from a baseline JSON to Dalidou.

One-shot script: reads a saved extractor eval output file, filters to
candidates the LLM actually produced, and POSTs each to the Dalidou
memory API with ``status=candidate``. Deduplicates against already-
existing candidate content so the script is safe to re-run.

Usage:

    python scripts/persist_llm_candidates.py \\
        scripts/eval_data/extractor_llm_baseline_2026-04-11.json

Then triage via:

    python scripts/atocore_client.py triage
"""

from __future__ import annotations

import json
import os
import sys
import urllib.error
import urllib.parse
import urllib.request
from pathlib import Path

BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100")
TIMEOUT = int(os.environ.get("ATOCORE_TIMEOUT_SECONDS", "10"))


def post_json(path: str, body: dict) -> dict:
    data = json.dumps(body).encode("utf-8")
    req = urllib.request.Request(
        url=f"{BASE_URL}{path}",
        method="POST",
        headers={"Content-Type": "application/json"},
        data=data,
    )
    with urllib.request.urlopen(req, timeout=TIMEOUT) as resp:
        return json.loads(resp.read().decode("utf-8"))


def main() -> int:
    if len(sys.argv) < 2:
        print(f"usage: {sys.argv[0]} <baseline_json>", file=sys.stderr)
        return 1

    # Path.read_text closes the file handle promptly.
    data = json.loads(Path(sys.argv[1]).read_text(encoding="utf-8"))
    results = data.get("results", [])

    persisted = 0
    skipped = 0
    errors = 0

    for r in results:
        for c in r.get("actual_candidates", []):
            content = (c.get("content") or "").strip()
            if not content:
                continue
            mem_type = c.get("memory_type", "knowledge")
            project = c.get("project", "")
            confidence = c.get("confidence", 0.5)

            try:
                resp = post_json("/memory", {
                    "memory_type": mem_type,
                    "content": content,
                    "project": project,
                    "confidence": float(confidence),
                    "status": "candidate",
                })
                persisted += 1
                print(f"  + {resp.get('id', '?')[:8]} [{mem_type}] {content[:80]}")
            except urllib.error.HTTPError as exc:
                if exc.code == 400:
                    skipped += 1
                else:
                    errors += 1
                    print(f"  ! error {exc.code}: {content[:60]}", file=sys.stderr)
            except Exception as exc:
                errors += 1
                print(f"  ! {exc}: {content[:60]}", file=sys.stderr)

    print(f"\npersisted={persisted} skipped={skipped} errors={errors}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
scripts/retrieval_eval.py (194 lines, new file)
@@ -0,0 +1,194 @@
"""Retrieval quality eval harness.

Runs a fixed set of project-hinted questions against
``POST /context/build`` on a live AtoCore instance and scores the
resulting ``formatted_context`` against per-question expectations.
The goal is a diffable scorecard that tells you, run-to-run,
whether a retrieval / builder / ingestion change moved the needle.

Design notes
------------
- Fixtures live in ``scripts/retrieval_eval_fixtures.json`` so new
  questions can be added without touching Python. Each fixture
  names the project, the prompt, and a checklist of substrings that
  MUST appear in ``formatted_context`` (``expect_present``) and
  substrings that MUST NOT appear (``expect_absent``). The absent
  list catches cross-project bleed and stale content.
- The checklist is deliberately substring-based (not regex, not
  embedding-similarity) so a failure is always a trivially
  reproducible "this string is not in that string". Richer scoring
  can come later once we know the harness is useful.
- The harness is external to the app runtime and talks to AtoCore
  over HTTP, so it works against dev, staging, or prod. It follows
  the same environment-variable contract as ``atocore_client.py``
  (``ATOCORE_BASE_URL``, ``ATOCORE_TIMEOUT_SECONDS``).
- Exit code 0 on all-pass, 1 on any fixture failure. Intended for
  manual runs today; a future cron / CI hook can consume the
  JSON output via ``--json``.

Usage
-----

    python scripts/retrieval_eval.py                    # human-readable report
    python scripts/retrieval_eval.py --json             # machine-readable
    python scripts/retrieval_eval.py --fixtures path/to/custom.json
"""

from __future__ import annotations

import argparse
import json
import os
import sys
import urllib.error
import urllib.parse
import urllib.request
from dataclasses import dataclass, field
from pathlib import Path

DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100")
DEFAULT_TIMEOUT = int(os.environ.get("ATOCORE_TIMEOUT_SECONDS", "30"))
DEFAULT_BUDGET = 3000
DEFAULT_FIXTURES = Path(__file__).parent / "retrieval_eval_fixtures.json"


@dataclass
class Fixture:
    name: str
    project: str
    prompt: str
    budget: int = DEFAULT_BUDGET
    expect_present: list[str] = field(default_factory=list)
    expect_absent: list[str] = field(default_factory=list)
    notes: str = ""


@dataclass
class FixtureResult:
    fixture: Fixture
    ok: bool
    missing_present: list[str]
    unexpected_absent: list[str]
    total_chars: int
    error: str = ""


def load_fixtures(path: Path) -> list[Fixture]:
    data = json.loads(path.read_text(encoding="utf-8"))
    if not isinstance(data, list):
        raise ValueError(f"{path} must contain a JSON array of fixtures")
    fixtures: list[Fixture] = []
    for i, raw in enumerate(data):
        if not isinstance(raw, dict):
            raise ValueError(f"fixture {i} is not an object")
        fixtures.append(
            Fixture(
                name=raw["name"],
                project=raw.get("project", ""),
                prompt=raw["prompt"],
                budget=int(raw.get("budget", DEFAULT_BUDGET)),
                expect_present=list(raw.get("expect_present", [])),
                expect_absent=list(raw.get("expect_absent", [])),
                notes=raw.get("notes", ""),
            )
        )
    return fixtures


def run_fixture(fixture: Fixture, base_url: str, timeout: int) -> FixtureResult:
    payload = {
        "prompt": fixture.prompt,
        "project": fixture.project or None,
        "budget": fixture.budget,
    }
    req = urllib.request.Request(
        url=f"{base_url}/context/build",
        method="POST",
        headers={"Content-Type": "application/json"},
        data=json.dumps(payload).encode("utf-8"),
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8"))
    except urllib.error.URLError as exc:
        return FixtureResult(
            fixture=fixture,
            ok=False,
            missing_present=list(fixture.expect_present),
            unexpected_absent=[],
            total_chars=0,
            error=f"http_error: {exc}",
        )

    formatted = body.get("formatted_context") or ""
    missing = [s for s in fixture.expect_present if s not in formatted]
    unexpected = [s for s in fixture.expect_absent if s in formatted]
    return FixtureResult(
        fixture=fixture,
        ok=not missing and not unexpected,
        missing_present=missing,
        unexpected_absent=unexpected,
        total_chars=len(formatted),
    )


def print_human_report(results: list[FixtureResult]) -> None:
    total = len(results)
    passed = sum(1 for r in results if r.ok)
    print(f"Retrieval eval: {passed}/{total} fixtures passed")
    print()
    for r in results:
        marker = "PASS" if r.ok else "FAIL"
        print(f"[{marker}] {r.fixture.name} project={r.fixture.project} chars={r.total_chars}")
        if r.error:
            print(f"    error: {r.error}")
        for miss in r.missing_present:
            print(f"    missing expected: {miss!r}")
        for bleed in r.unexpected_absent:
            print(f"    unexpected present: {bleed!r}")
        if r.fixture.notes and not r.ok:
            print(f"    notes: {r.fixture.notes}")


def print_json_report(results: list[FixtureResult]) -> None:
    payload = {
        "total": len(results),
        "passed": sum(1 for r in results if r.ok),
        "fixtures": [
            {
                "name": r.fixture.name,
                "project": r.fixture.project,
                "ok": r.ok,
                "total_chars": r.total_chars,
                "missing_present": r.missing_present,
                "unexpected_absent": r.unexpected_absent,
                "error": r.error,
            }
            for r in results
        ],
    }
    json.dump(payload, sys.stdout, indent=2)
    sys.stdout.write("\n")


def main() -> int:
    parser = argparse.ArgumentParser(description="AtoCore retrieval quality eval harness")
    parser.add_argument("--base-url", default=DEFAULT_BASE_URL)
    parser.add_argument("--timeout", type=int, default=DEFAULT_TIMEOUT)
    parser.add_argument("--fixtures", type=Path, default=DEFAULT_FIXTURES)
    parser.add_argument("--json", action="store_true", help="emit machine-readable JSON")
    args = parser.parse_args()

    fixtures = load_fixtures(args.fixtures)
    results = [run_fixture(f, args.base_url, args.timeout) for f in fixtures]

    if args.json:
        print_json_report(results)
    else:
        print_human_report(results)

    return 0 if all(r.ok for r in results) else 1


if __name__ == "__main__":
    raise SystemExit(main())
scripts/retrieval_eval_fixtures.json (225 lines, new file)
@@ -0,0 +1,225 @@
[
  {
    "name": "p04-architecture-decision",
    "project": "p04-gigabit",
    "prompt": "what mirror architecture was selected for GigaBIT M1 and why",
    "expect_present": [
      "--- Trusted Project State ---",
      "Option B",
      "conical",
      "--- Project Memories ---"
    ],
    "expect_absent": [
      "p06-polisher",
      "folded-beam"
    ],
    "notes": "Canonical p04 decision — should surface both Trusted Project State and the project-memory band"
  },
  {
    "name": "p04-constraints",
    "project": "p04-gigabit",
    "prompt": "what are the key GigaBIT M1 program constraints",
    "expect_present": [
      "--- Trusted Project State ---",
      "Zerodur",
      "1.2"
    ],
    "expect_absent": [
      "polisher suite"
    ],
    "notes": "Key constraints are in Trusted Project State and in the mission-framing memory"
  },
  {
    "name": "p04-short-ambiguous",
    "project": "p04-gigabit",
    "prompt": "current status",
    "expect_present": [
      "--- Trusted Project State ---"
    ],
    "expect_absent": [],
    "notes": "Short ambiguous prompt — at minimum project state should surface. Hard case: the prompt is generic enough that chunks may not rank well."
  },
  {
    "name": "p05-configuration",
    "project": "p05-interferometer",
    "prompt": "what is the selected interferometer configuration",
    "expect_present": [
      "folded-beam",
      "CGH"
    ],
    "expect_absent": [
      "Option B",
      "conical back",
      "polisher suite"
    ],
    "notes": "P05 architecture memory covers folded-beam + CGH. GigaBIT M1 legitimately appears in p05 source docs."
  },
  {
    "name": "p05-vendor-signal",
    "project": "p05-interferometer",
    "prompt": "what is the current vendor signal for the interferometer procurement",
    "expect_present": [
      "4D",
      "Zygo"
    ],
    "expect_absent": [
      "polisher"
    ],
    "notes": "Vendor memory mentions 4D as strongest technical candidate and Zygo Verifire SV as value path"
  },
  {
    "name": "p05-cgh-calibration",
    "project": "p05-interferometer",
    "prompt": "how does CGH calibration work for the interferometer",
    "expect_present": [
      "CGH"
    ],
    "expect_absent": [
      "polisher-sim",
      "polisher-post"
    ],
    "notes": "CGH is a core p05 concept. Should surface via chunks and possibly the architecture memory. Must not bleed p06 polisher-suite terms."
  },
  {
    "name": "p06-suite-split",
    "project": "p06-polisher",
    "prompt": "how is the polisher software suite split across layers",
    "expect_present": [
      "polisher-sim",
      "polisher-post",
      "polisher-control"
    ],
    "expect_absent": [
      "GigaBIT"
    ],
    "notes": "The three-layer split is in multiple p06 memories"
  },
  {
    "name": "p06-control-rule",
    "project": "p06-polisher",
    "prompt": "what is the polisher control design rule",
    "expect_present": [
      "interlocks"
    ],
    "expect_absent": [
      "interferometer"
    ],
    "notes": "Control design rule memory mentions interlocks and state transitions"
  },
  {
    "name": "p06-firmware-interface",
    "project": "p06-polisher",
    "prompt": "what is the firmware interface contract for the polisher machine",
    "expect_present": [
      "controller-job"
    ],
    "expect_absent": [
      "interferometer",
      "GigaBIT"
    ],
    "notes": "New p06 memory from the first triage: firmware interface contract is invariant controller-job.v1 in, run-log.v1 out"
  },
  {
    "name": "p06-z-axis",
    "project": "p06-polisher",
    "prompt": "how does the polisher Z-axis work",
    "expect_present": [
      "engage"
    ],
    "expect_absent": [
      "interferometer"
    ],
    "notes": "New p06 memory: Z-axis is binary engage/retract, not continuous position. The word 'engage' should appear."
  },
  {
    "name": "p06-cam-mechanism",
    "project": "p06-polisher",
    "prompt": "how is cam amplitude controlled on the polisher",
    "expect_present": [
      "encoder"
    ],
    "expect_absent": [
      "GigaBIT"
    ],
    "notes": "New p06 memory: cam set mechanically by operator, read by encoders. The word 'encoder' should appear."
  },
  {
    "name": "p06-telemetry-rate",
    "project": "p06-polisher",
    "prompt": "what is the expected polishing telemetry data rate",
    "expect_present": [
      "29 MB"
    ],
    "expect_absent": [
      "interferometer"
    ],
    "notes": "New p06 knowledge memory: approximately 29 MB per hour at 100 Hz"
  },
  {
    "name": "p06-offline-design",
    "project": "p06-polisher",
    "prompt": "does the polisher machine need network to operate",
    "expect_present": [
      "offline"
    ],
    "expect_absent": [
      "CGH"
    ],
    "notes": "New p06 memory: machine works fully offline and independently; network is for remote access only"
  },
  {
    "name": "p06-short-ambiguous",
    "project": "p06-polisher",
    "prompt": "current status",
    "expect_present": [
      "--- Trusted Project State ---"
    ],
    "expect_absent": [],
    "notes": "Short ambiguous prompt — project state should surface at minimum"
  },
  {
    "name": "cross-project-no-bleed",
    "project": "p04-gigabit",
    "prompt": "what telemetry rate should we target",
    "expect_present": [],
    "expect_absent": [
      "29 MB",
      "polisher"
    ],
    "notes": "Adversarial: telemetry rate is a p06 fact. A p04 query for 'telemetry rate' must NOT surface p06 memories. Tests cross-project gating."
  },
  {
    "name": "no-project-hint",
    "project": "",
    "prompt": "tell me about the current projects",
    "expect_present": [],
    "expect_absent": [
      "--- Project Memories ---"
    ],
    "notes": "Without a project hint, project memories must not appear (cross-project bleed guard). Chunks may appear if any match."
  },
  {
    "name": "p06-usb-ssd",
    "project": "p06-polisher",
    "prompt": "what storage solution is specified for the polisher RPi",
    "expect_present": [
      "USB SSD"
    ],
    "expect_absent": [
      "interferometer"
    ],
    "notes": "New p06 memory from triage: USB SSD mandatory, not SD card"
  },
  {
    "name": "p06-tailscale",
    "project": "p06-polisher",
    "prompt": "how do we access the polisher machine remotely",
    "expect_present": [
      "Tailscale"
    ],
    "expect_absent": [
      "GigaBIT"
    ],
    "notes": "New p06 memory: Tailscale mesh for RPi remote access"
  }
]
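The pass/fail rule each fixture encodes can be exercised standalone. A minimal sketch of the substring scoring, using a hypothetical `formatted_context` string (stand-in data, not real AtoCore output) and the checklist shape from the first fixture above:

```python
# Stand-in for the formatted_context returned by POST /context/build.
formatted_context = (
    "--- Trusted Project State ---\n"
    "Option B conical back selected for GigaBIT M1\n"
    "--- Project Memories ---\n"
)
expect_present = ["--- Trusted Project State ---", "Option B", "conical"]
expect_absent = ["p06-polisher", "folded-beam"]

# Fixture passes only when every expected substring is present
# and no forbidden substring appears (same rule as run_fixture).
missing = [s for s in expect_present if s not in formatted_context]
unexpected = [s for s in expect_absent if s in formatted_context]
ok = not missing and not unexpected
print(ok, missing, unexpected)
# True [] []
```

The deliberate simplicity is the point: a failure is always "this exact string is not in that string", which keeps scorecard diffs trivially reproducible.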
@@ -49,6 +49,7 @@ from atocore.memory.service import (
)
from atocore.observability.logger import get_logger
from atocore.ops.backup import (
    cleanup_old_backups,
    create_runtime_backup,
    list_runtime_backups,
    validate_backup,
@@ -140,6 +141,7 @@ class MemoryCreateRequest(BaseModel):
    content: str
    project: str = ""
    confidence: float = 1.0
    status: str = "active"


class MemoryUpdateRequest(BaseModel):
@@ -343,6 +345,7 @@ def api_create_memory(req: MemoryCreateRequest) -> dict:
            content=req.content,
            project=req.project,
            confidence=req.confidence,
            status=req.status,
        )
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
@@ -511,6 +514,7 @@ class InteractionRecordRequest(BaseModel):
    chunks_used: list[str] = []
    context_pack: dict | None = None
    reinforce: bool = True
    extract: bool = False


@router.post("/interactions")
@@ -536,6 +540,7 @@ def api_record_interaction(req: InteractionRecordRequest) -> dict:
            chunks_used=req.chunks_used,
            context_pack=req.context_pack,
            reinforce=req.reinforce,
            extract=req.extract,
        )
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
@@ -731,6 +736,25 @@ def api_list_backups() -> dict:
    }


class BackupCleanupRequest(BaseModel):
    confirm: bool = False


@router.post("/admin/backup/cleanup")
def api_cleanup_backups(req: BackupCleanupRequest | None = None) -> dict:
    """Apply retention policy to old backup snapshots.

    Dry-run by default. Pass ``confirm: true`` to actually delete.
    Retention: last 7 daily, last 4 weekly (Sundays), last 6 monthly (1st).
    """
    payload = req or BackupCleanupRequest()
    try:
        return cleanup_old_backups(confirm=payload.confirm)
    except Exception as e:
        log.error("admin_cleanup_failed", error=str(e))
        raise HTTPException(status_code=500, detail=f"Cleanup failed: {e}")


@router.get("/admin/backup/{stamp}/validate")
def api_validate_backup(stamp: str) -> dict:
    """Validate that a previously created backup is structurally usable."""

@@ -30,6 +30,12 @@ SYSTEM_PREFIX = (
# identity: 5%, preferences: 5%, project state: 20%, retrieval: 60%+
PROJECT_STATE_BUDGET_RATIO = 0.20
MEMORY_BUDGET_RATIO = 0.10  # 5% identity + 5% preference
# Project-scoped memories (project/knowledge/episodic) are the outlet
# for the Phase 9 reflection loop on the retrieval side. Budget sits
# between identity/preference and retrieved chunks so a reinforced
# memory can actually reach the model.
PROJECT_MEMORY_BUDGET_RATIO = 0.25
PROJECT_MEMORY_TYPES = ["project", "knowledge", "episodic"]

# Last built context pack for debug inspection
_last_context_pack: "ContextPack | None" = None
@@ -51,6 +57,8 @@ class ContextPack:
    project_state_chars: int = 0
    memory_text: str = ""
    memory_chars: int = 0
    project_memory_text: str = ""
    project_memory_chars: int = 0
    total_chars: int = 0
    budget: int = 0
    budget_remaining: int = 0
@@ -107,10 +115,32 @@ def build_context(
    memory_text, memory_chars = get_memories_for_context(
        memory_types=["identity", "preference"],
        budget=memory_budget,
        query=user_prompt,
    )

    # 2b. Get project-scoped memories (third precedence). Only
    # populated when a canonical project is in scope — cross-project
    # memory bleed would rot the pack. Active-only filtering is
    # handled by the shared min_confidence=0.5 gate inside
    # get_memories_for_context.
    project_memory_text = ""
    project_memory_chars = 0
    if canonical_project:
        project_memory_budget = min(
            int(budget * PROJECT_MEMORY_BUDGET_RATIO),
            max(budget - project_state_chars - memory_chars, 0),
        )
        project_memory_text, project_memory_chars = get_memories_for_context(
            memory_types=PROJECT_MEMORY_TYPES,
            project=canonical_project,
            budget=project_memory_budget,
            header="--- Project Memories ---",
            footer="--- End Project Memories ---",
            query=user_prompt,
        )

    # 3. Calculate remaining budget for retrieval
    retrieval_budget = budget - project_state_chars - memory_chars
    retrieval_budget = budget - project_state_chars - memory_chars - project_memory_chars

    # 4. Retrieve candidates
    candidates = (
@@ -130,11 +160,14 @@ def build_context(
    selected = _select_within_budget(scored, max(retrieval_budget, 0))

    # 7. Format full context
    formatted = _format_full_context(project_state_text, memory_text, selected)
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text, selected
    )
    if len(formatted) > budget:
        formatted, selected = _trim_context_to_budget(
            project_state_text,
            memory_text,
            project_memory_text,
            selected,
            budget,
        )
@@ -144,6 +177,7 @@ def build_context(

    project_state_chars = len(project_state_text)
    memory_chars = len(memory_text)
    project_memory_chars = len(project_memory_text)
    retrieval_chars = sum(c.char_count for c in selected)
    total_chars = len(formatted)
    duration_ms = int((time.time() - start) * 1000)
@@ -154,6 +188,8 @@ def build_context(
        project_state_chars=project_state_chars,
        memory_text=memory_text,
        memory_chars=memory_chars,
        project_memory_text=project_memory_text,
        project_memory_chars=project_memory_chars,
        total_chars=total_chars,
        budget=budget,
        budget_remaining=budget - total_chars,
@@ -171,6 +207,7 @@ def build_context(
        chunks_used=len(selected),
        project_state_chars=project_state_chars,
        memory_chars=memory_chars,
        project_memory_chars=project_memory_chars,
        retrieval_chars=retrieval_chars,
        total_chars=total_chars,
        budget_remaining=budget - total_chars,
@@ -250,6 +287,7 @@ def _select_within_budget(
def _format_full_context(
    project_state_text: str,
    memory_text: str,
    project_memory_text: str,
    chunks: list[ContextChunk],
) -> str:
    """Format project state + memories + retrieved chunks into full context block."""
@@ -265,7 +303,12 @@ def _format_full_context(
        parts.append(memory_text)
        parts.append("")

    # 3. Retrieved chunks (lowest trust)
    # 3. Project-scoped memories (third trust level)
    if project_memory_text:
        parts.append(project_memory_text)
        parts.append("")

    # 4. Retrieved chunks (lowest trust)
    if chunks:
        parts.append("--- AtoCore Retrieved Context ---")
        if project_state_text:
@@ -277,7 +320,7 @@ def _format_full_context(
            parts.append(chunk.content)
            parts.append("")
        parts.append("--- End Context ---")
    elif not project_state_text and not memory_text:
    elif not project_state_text and not memory_text and not project_memory_text:
        parts.append("--- AtoCore Context ---\nNo relevant context found.\n--- End Context ---")

    return "\n".join(parts)
@@ -299,6 +342,7 @@ def _pack_to_dict(pack: ContextPack) -> dict:
        "project_hint": pack.project_hint,
        "project_state_chars": pack.project_state_chars,
        "memory_chars": pack.memory_chars,
        "project_memory_chars": pack.project_memory_chars,
        "chunks_used": len(pack.chunks_used),
        "total_chars": pack.total_chars,
        "budget": pack.budget,
@@ -306,6 +350,7 @@ def _pack_to_dict(pack: ContextPack) -> dict:
        "duration_ms": pack.duration_ms,
        "has_project_state": bool(pack.project_state_text),
        "has_memories": bool(pack.memory_text),
        "has_project_memories": bool(pack.project_memory_text),
        "chunks": [
            {
                "source_file": c.source_file,
@@ -335,26 +380,45 @@ def _truncate_text_block(text: str, budget: int) -> tuple[str, int]:
def _trim_context_to_budget(
    project_state_text: str,
    memory_text: str,
    project_memory_text: str,
    chunks: list[ContextChunk],
    budget: int,
) -> tuple[str, list[ContextChunk]]:
    """Trim retrieval first, then memory, then project state until formatted context fits."""
    """Trim retrieval → project memories → identity/preference → project state."""
    kept_chunks = list(chunks)
    formatted = _format_full_context(project_state_text, memory_text, kept_chunks)
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text, kept_chunks
    )
    while len(formatted) > budget and kept_chunks:
        kept_chunks.pop()
        formatted = _format_full_context(project_state_text, memory_text, kept_chunks)
        formatted = _format_full_context(
            project_state_text, memory_text, project_memory_text, kept_chunks
        )

    if len(formatted) <= budget:
        return formatted, kept_chunks

    # Drop project memories next (they were the most recently added
    # tier and carry less trust than identity/preference).
    project_memory_text, _ = _truncate_text_block(
        project_memory_text,
        max(budget - len(project_state_text) - len(memory_text), 0),
    )
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text, kept_chunks
    )
    if len(formatted) <= budget:
        return formatted, kept_chunks

    memory_text, _ = _truncate_text_block(memory_text, max(budget - len(project_state_text), 0))
    formatted = _format_full_context(project_state_text, memory_text, kept_chunks)
    formatted = _format_full_context(
        project_state_text, memory_text, project_memory_text, kept_chunks
    )
    if len(formatted) <= budget:
        return formatted, kept_chunks

    project_state_text, _ = _truncate_text_block(project_state_text, budget)
    formatted = _format_full_context(project_state_text, "", [])
    formatted = _format_full_context(project_state_text, "", "", [])
    if len(formatted) > budget:
        formatted, _ = _truncate_text_block(formatted, budget)
    return formatted, []
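The tiered budget split in the builder diff above can be made concrete with a small worked example. This is a sketch of the ratio arithmetic only, assuming the first two tiers consume their full allocations; in the real builder the retrieval budget subtracts the characters actually consumed, not the nominal budgets:

```python
# Ratios from the builder: project state 20%, identity/preference 10%,
# project memories 25%, remainder to retrieval.
budget = 3000
PROJECT_STATE_BUDGET_RATIO = 0.20
MEMORY_BUDGET_RATIO = 0.10
PROJECT_MEMORY_BUDGET_RATIO = 0.25

project_state_budget = int(budget * PROJECT_STATE_BUDGET_RATIO)  # 600
memory_budget = int(budget * MEMORY_BUDGET_RATIO)                # 300

# Assume both earlier tiers fill completely (hypothetical numbers).
project_state_chars, memory_chars = 600, 300

# Project-memory tier is capped both by its ratio and by what is left.
project_memory_budget = min(
    int(budget * PROJECT_MEMORY_BUDGET_RATIO),            # 750
    max(budget - project_state_chars - memory_chars, 0),  # 2100
)
retrieval_budget = budget - project_state_chars - memory_chars - project_memory_budget
print(project_state_budget, memory_budget, project_memory_budget, retrieval_budget)
# 600 300 750 1350
```

The `min(...)` cap is what keeps the new tier from starving retrieval when the earlier tiers already ran long.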
@@ -63,6 +63,7 @@ def record_interaction(
    chunks_used: list[str] | None = None,
    context_pack: dict | None = None,
    reinforce: bool = True,
    extract: bool = False,
) -> Interaction:
    """Persist a single interaction to the audit trail.

@@ -163,6 +164,30 @@ def record_interaction(
            error=str(exc),
        )

    if extract and (response or response_summary):
        try:
            from atocore.memory.extractor import extract_candidates_from_interaction
            from atocore.memory.service import create_memory

            candidates = extract_candidates_from_interaction(interaction)
            for candidate in candidates:
                try:
                    create_memory(
                        memory_type=candidate.memory_type,
                        content=candidate.content,
                        project=candidate.project,
                        confidence=candidate.confidence,
                        status="candidate",
                    )
                except ValueError:
                    pass  # duplicate or validation error — skip silently
        except Exception as exc:  # pragma: no cover - extraction must never block capture
            log.error(
                "extraction_failed_on_capture",
                interaction_id=interaction_id,
                error=str(exc),
            )

    return interaction
src/atocore/memory/extractor_llm.py (281 lines, new file)
@@ -0,0 +1,281 @@
"""LLM-assisted candidate-memory extraction via the Claude Code CLI.

Day 4 of the 2026-04-11 mini-phase: the rule-based extractor hit 0%
recall against real conversational claude-code captures (Day 2 baseline
scorecard in ``scripts/eval_data/extractor_labels_2026-04-11.json``),
with false negatives spread across 5 distinct miss classes. A single
rule expansion cannot close that gap, so this module adds an optional
LLM-assisted mode that shells out to the ``claude -p`` (Claude Code
non-interactive) CLI with a focused extraction system prompt. That
path reuses the user's existing Claude.ai OAuth credentials — no API
key anywhere, per the 2026-04-11 decision.

Trust rules carried forward from the rule-based extractor:

- Candidates are NEVER auto-promoted. Caller persists with
  ``status="candidate"`` and a human reviews via the triage CLI.
- This path is additive. The rule-based extractor keeps working
  exactly as before; callers opt in by importing this module.
- Extraction stays off the capture hot path — this is batch / manual
  only, per the 2026-04-11 decision.
- Failure is silent. Missing CLI, non-zero exit, malformed JSON,
  timeout — all return an empty list and log an error. Never raises
  into the caller; the capture audit trail must not break on an
  optional side effect.

Configuration:

- Requires the ``claude`` CLI on PATH (``claude --version`` should work).
- ``ATOCORE_LLM_EXTRACTOR_MODEL`` overrides the model alias (default
  ``haiku``).
- ``ATOCORE_LLM_EXTRACTOR_TIMEOUT_S`` overrides the per-call timeout
  (default 90 seconds — first invocation is slow because Node.js
  startup plus OAuth check is non-trivial).

Implementation notes:

- We run ``claude -p`` with ``--model <alias>``,
  ``--append-system-prompt`` for the extraction instructions,
  ``--no-session-persistence`` so we don't pollute session history,
  and ``--disable-slash-commands`` so stray ``/foo`` in an extracted
  response never triggers something.
- The CLI is invoked from a temp working directory so it does not
  auto-discover ``CLAUDE.md`` / ``DEV-LEDGER.md`` / ``AGENTS.md``
  from the repo root. We want a bare extraction context, not the
  full project briefing. We can't use ``--bare`` because that
  forces API-key auth; the temp-cwd trick is the lightest way to
  keep OAuth auth while skipping project context loading.
"""

from __future__ import annotations

import json
import os
import shutil
import subprocess
import tempfile
from dataclasses import dataclass
from functools import lru_cache

from atocore.interactions.service import Interaction
from atocore.memory.extractor import MemoryCandidate
from atocore.memory.service import MEMORY_TYPES
from atocore.observability.logger import get_logger

log = get_logger("extractor_llm")

LLM_EXTRACTOR_VERSION = "llm-0.2.0"
DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "haiku")
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
MAX_RESPONSE_CHARS = 8000
MAX_PROMPT_CHARS = 2000

_SYSTEM_PROMPT = """You extract durable memory candidates from LLM conversation turns for a personal context engine called AtoCore.

Your job is to read one user prompt plus the assistant's response and decide which durable facts, decisions, preferences, architectural rules, or project invariants should be remembered across future sessions.

Rules:

1. Only surface durable claims. Skip transient status ("deploy is still running"), instructional guidance ("here is how to run the command"), troubleshooting tactics, ephemeral recommendations ("merge this PR now"), and session recaps.
2. A candidate is durable when a reader coming back in two weeks would still need to know it. Architectural choices, named rules, ratified decisions, invariants, procurement commitments, and project-level constraints qualify. Conversational fillers and step-by-step instructions do not.
3. Each candidate must stand alone. Rewrite the claim in one sentence under 200 characters with enough context that a reader without the conversation understands it.
4. Each candidate must have a type from this closed set: project, knowledge, preference, adaptation.
5. If the conversation is clearly scoped to a project (p04-gigabit, p05-interferometer, p06-polisher, atocore), set ``project`` to that id. Otherwise leave ``project`` empty.
6. If the response makes no durable claim, return an empty list. It is correct and expected to return [] on most conversational turns.
7. Confidence should be 0.5 by default so human review workload is honest. Raise to 0.6 only when the response states the claim in an unambiguous, committed form (e.g. "the decision is X", "the selected approach is Y", "X is non-negotiable").
8. Output must be a raw JSON array and nothing else. No prose before or after. No markdown fences. No explanations.

Each array element has exactly this shape:

{"type": "project|knowledge|preference|adaptation", "content": "...", "project": "...", "confidence": 0.5}

Return [] when there is nothing to extract."""


@dataclass
class LLMExtractionResult:
    candidates: list[MemoryCandidate]
    raw_output: str
    error: str = ""


@lru_cache(maxsize=1)
def _sandbox_cwd() -> str:
    """Return a stable temp directory for ``claude -p`` invocations.

    We want the CLI to run from a directory that does NOT contain
    ``CLAUDE.md`` / ``DEV-LEDGER.md`` / ``AGENTS.md``, so every
    extraction call starts with a clean context instead of the full
    AtoCore project briefing. Cached so the directory persists for
    the lifetime of the process.
    """
    return tempfile.mkdtemp(prefix="ato-llm-extract-")


def _cli_available() -> bool:
    return shutil.which("claude") is not None


def extract_candidates_llm(
    interaction: Interaction,
    model: str | None = None,
    timeout_s: float | None = None,
) -> list[MemoryCandidate]:
    """Run the LLM-assisted extractor against one interaction.

    Returns a list of ``MemoryCandidate`` objects, empty on any
    failure path. The caller is responsible for persistence.
    """
    return extract_candidates_llm_verbose(
        interaction,
        model=model,
        timeout_s=timeout_s,
    ).candidates


def extract_candidates_llm_verbose(
|
||||
interaction: Interaction,
|
||||
model: str | None = None,
|
||||
timeout_s: float | None = None,
|
||||
) -> LLMExtractionResult:
|
||||
"""Like ``extract_candidates_llm`` but also returns the raw
|
||||
subprocess output and any error encountered, for eval / debugging.
|
||||
"""
|
||||
if not _cli_available():
|
||||
return LLMExtractionResult(
|
||||
candidates=[],
|
||||
raw_output="",
|
||||
error="claude_cli_missing",
|
||||
)
|
||||
|
||||
response_text = (interaction.response or "").strip()
|
||||
if not response_text:
|
||||
return LLMExtractionResult(candidates=[], raw_output="", error="empty_response")
|
||||
|
||||
prompt_excerpt = (interaction.prompt or "")[:MAX_PROMPT_CHARS]
|
||||
response_excerpt = response_text[:MAX_RESPONSE_CHARS]
|
||||
user_message = (
|
||||
f"PROJECT HINT (may be empty): {interaction.project or ''}\n\n"
|
||||
f"USER PROMPT:\n{prompt_excerpt}\n\n"
|
||||
f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
|
||||
"Return the JSON array now."
|
||||
)
|
||||
|
||||
args = [
|
||||
"claude",
|
||||
"-p",
|
||||
"--model",
|
||||
model or DEFAULT_MODEL,
|
||||
"--append-system-prompt",
|
||||
_SYSTEM_PROMPT,
|
||||
"--no-session-persistence",
|
||||
"--disable-slash-commands",
|
||||
user_message,
|
||||
]
|
||||
|
||||
try:
|
||||
completed = subprocess.run(
|
||||
args,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=timeout_s or DEFAULT_TIMEOUT_S,
|
||||
cwd=_sandbox_cwd(),
|
||||
encoding="utf-8",
|
||||
errors="replace",
|
||||
)
|
||||
except subprocess.TimeoutExpired:
|
||||
log.error("llm_extractor_timeout", interaction_id=interaction.id)
|
||||
return LLMExtractionResult(candidates=[], raw_output="", error="timeout")
|
||||
except Exception as exc: # pragma: no cover - unexpected subprocess failure
|
||||
log.error("llm_extractor_subprocess_failed", error=str(exc))
|
||||
return LLMExtractionResult(candidates=[], raw_output="", error=f"subprocess_error: {exc}")
|
||||
|
||||
if completed.returncode != 0:
|
||||
log.error(
|
||||
"llm_extractor_nonzero_exit",
|
||||
interaction_id=interaction.id,
|
||||
returncode=completed.returncode,
|
||||
stderr_prefix=(completed.stderr or "")[:200],
|
||||
)
|
||||
return LLMExtractionResult(
|
||||
candidates=[],
|
||||
raw_output=completed.stdout or "",
|
||||
error=f"exit_{completed.returncode}",
|
||||
)
|
||||
|
||||
raw_output = (completed.stdout or "").strip()
|
||||
candidates = _parse_candidates(raw_output, interaction)
|
||||
log.info(
|
||||
"llm_extractor_done",
|
||||
interaction_id=interaction.id,
|
||||
candidate_count=len(candidates),
|
||||
model=model or DEFAULT_MODEL,
|
||||
)
|
||||
return LLMExtractionResult(candidates=candidates, raw_output=raw_output)
|
||||
|
||||
|
||||
def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryCandidate]:
|
||||
"""Parse the model's JSON output into MemoryCandidate objects.
|
||||
|
||||
Tolerates common model glitches: surrounding whitespace, stray
|
||||
markdown fences, leading/trailing prose. Silently drops malformed
|
||||
array elements rather than raising.
|
||||
"""
|
||||
text = raw_output.strip()
|
||||
if text.startswith("```"):
|
||||
text = text.strip("`")
|
||||
first_newline = text.find("\n")
|
||||
if first_newline >= 0:
|
||||
text = text[first_newline + 1 :]
|
||||
if text.endswith("```"):
|
||||
text = text[:-3]
|
||||
text = text.strip()
|
||||
|
||||
if not text or text == "[]":
|
||||
return []
|
||||
|
||||
if not text.lstrip().startswith("["):
|
||||
start = text.find("[")
|
||||
end = text.rfind("]")
|
||||
if start >= 0 and end > start:
|
||||
text = text[start : end + 1]
|
||||
|
||||
try:
|
||||
parsed = json.loads(text)
|
||||
except json.JSONDecodeError as exc:
|
||||
log.error("llm_extractor_parse_failed", error=str(exc), raw_prefix=raw_output[:120])
|
||||
return []
|
||||
|
||||
if not isinstance(parsed, list):
|
||||
return []
|
||||
|
||||
results: list[MemoryCandidate] = []
|
||||
for item in parsed:
|
||||
if not isinstance(item, dict):
|
||||
continue
|
||||
mem_type = str(item.get("type") or "").strip().lower()
|
||||
content = str(item.get("content") or "").strip()
|
||||
project = str(item.get("project") or "").strip()
|
||||
confidence_raw = item.get("confidence", 0.5)
|
||||
if mem_type not in MEMORY_TYPES:
|
||||
continue
|
||||
if not content:
|
||||
continue
|
||||
try:
|
||||
confidence = float(confidence_raw)
|
||||
except (TypeError, ValueError):
|
||||
confidence = 0.5
|
||||
confidence = max(0.0, min(1.0, confidence))
|
||||
results.append(
|
||||
MemoryCandidate(
|
||||
memory_type=mem_type,
|
||||
content=content[:1000],
|
||||
rule="llm_extraction",
|
||||
source_span=content[:200],
|
||||
project=project,
|
||||
confidence=confidence,
|
||||
source_interaction_id=interaction.id,
|
||||
extractor_version=LLM_EXTRACTOR_VERSION,
|
||||
)
|
||||
)
|
||||
return results
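
The fence-stripping and bracket-slicing recovery path in `_parse_candidates` can be exercised in isolation. The sketch below is a simplified stand-in (not the module's actual function) that reproduces just those tolerance rules so they are easy to test:

```python
import json


def extract_json_array(raw: str) -> list:
    """Best-effort recovery of a JSON array from model output.

    Simplified stand-in for the tolerance rules in _parse_candidates:
    strip markdown fences, then fall back to slicing from the first
    '[' to the last ']' when prose surrounds the array.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Drop surrounding backticks and the fence's language-tag line.
        text = text.strip("`")
        first_newline = text.find("\n")
        if first_newline >= 0:
            text = text[first_newline + 1 :]
        text = text.strip()
    if not text.lstrip().startswith("["):
        start, end = text.find("["), text.rfind("]")
        if start >= 0 and end > start:
            text = text[start : end + 1]
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return []
    return parsed if isinstance(parsed, list) else []


fenced = '```json\n[{"type": "knowledge", "content": "x", "confidence": 0.5}]\n```'
prosey = 'Sure! [{"type": "preference", "content": "y", "confidence": 0.5}] Hope that helps.'
```

Both `fenced` and `prosey` recover a one-element array; anything unparsable degrades to `[]`, matching the extractor's fail-open contract.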
@@ -51,6 +51,15 @@ _STOP_WORDS: frozenset[str] = frozenset({
 })
 _MATCH_THRESHOLD = 0.70
 
+# Long memories can't realistically hit 70% overlap through organic
+# paraphrase — a 40-token memory would need 28 stemmed tokens echoed
+# verbatim. Above this token count the matcher switches to an absolute
+# overlap floor plus a softer fraction floor so paragraph-length memories
+# still reinforce when the response genuinely uses them.
+_LONG_MEMORY_TOKEN_COUNT = 15
+_LONG_MODE_MIN_OVERLAP = 12
+_LONG_MODE_MIN_FRACTION = 0.35
 
 DEFAULT_CONFIDENCE_DELTA = 0.02
 
@@ -171,26 +180,47 @@ def _stem(word: str) -> str:
 def _tokenize(text: str) -> set[str]:
     """Split normalized text into a stemmed token set.
 
-    Strips punctuation, drops words shorter than 3 chars and stop words.
+    Strips punctuation, drops words shorter than 3 chars and stop
+    words. Hyphenated and slash-separated identifiers
+    (``polisher-control``, ``twyman-green``, ``2-projects/interferometer``)
+    produce both the full form AND each sub-token, so a query for
+    "polisher control" can match a memory that wrote
+    "polisher-control" without forcing callers to guess the exact
+    hyphenation.
     """
     tokens: set[str] = set()
     for raw in text.split():
         # Strip leading/trailing punctuation (commas, periods, quotes, etc.)
         word = raw.strip(".,;:!?\"'()[]{}-/")
-        if len(word) < 3:
+        if not word:
             continue
-        if word in _STOP_WORDS:
-            continue
-        tokens.add(_stem(word))
+        _add_token(tokens, word)
+        # Also add sub-tokens split on internal '-' or '/' so
+        # hyphenated identifiers match queries that don't hyphenate.
+        if "-" in word or "/" in word:
+            for sub in re.split(r"[-/]+", word):
+                _add_token(tokens, sub)
     return tokens
 
 
+def _add_token(tokens: set[str], word: str) -> None:
+    if len(word) < 3:
+        return
+    if word in _STOP_WORDS:
+        return
+    tokens.add(_stem(word))
+
+
 def _memory_matches(memory_content: str, normalized_response: str) -> bool:
     """Return True if enough of the memory's tokens appear in the response.
 
-    Uses token-overlap: tokenize both sides (lowercase, stem, drop stop
-    words), then check whether >= 70 % of the memory's content tokens
-    appear in the response token set.
+    Dual-mode token overlap:
+    - Short memories (<= _LONG_MEMORY_TOKEN_COUNT stems): require
+      >= 70 % of memory tokens echoed.
+    - Long memories (paragraphs): require an absolute floor of
+      _LONG_MODE_MIN_OVERLAP distinct stems echoed AND a softer
+      fraction of _LONG_MODE_MIN_FRACTION, so organic paraphrase
+      of a real project memory can reinforce without the response
+      quoting the paragraph verbatim.
     """
     if not memory_content:
         return False
@@ -202,4 +232,10 @@ def _memory_matches(memory_content: str, normalized_response: str) -> bool:
         return False
     response_tokens = _tokenize(normalized_response)
     overlap = memory_tokens & response_tokens
-    return len(overlap) / len(memory_tokens) >= _MATCH_THRESHOLD
+    fraction = len(overlap) / len(memory_tokens)
+    if len(memory_tokens) <= _LONG_MEMORY_TOKEN_COUNT:
+        return fraction >= _MATCH_THRESHOLD
+    return (
+        len(overlap) >= _LONG_MODE_MIN_OVERLAP
+        and fraction >= _LONG_MODE_MIN_FRACTION
+    )
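
Taken together, the hunks above define a tokenizer that emits hyphen/slash sub-tokens and a dual-mode overlap test. A self-contained sketch of that behaviour follows; it omits stemming and the stop-word list that the real `_tokenize` also applies, and the constant names are local stand-ins for the module's values:

```python
import re

MATCH_THRESHOLD = 0.70     # short-mode fraction floor (_MATCH_THRESHOLD)
LONG_TOKEN_COUNT = 15      # switch point to long mode (_LONG_MEMORY_TOKEN_COUNT)
LONG_MIN_OVERLAP = 12      # absolute stem floor (_LONG_MODE_MIN_OVERLAP)
LONG_MIN_FRACTION = 0.35   # softer fraction floor (_LONG_MODE_MIN_FRACTION)


def tokenize(text: str) -> set[str]:
    # Simplified stand-in for _tokenize: lowercase, strip edge punctuation,
    # keep words of 3+ chars, and emit hyphen/slash sub-tokens too.
    # (The real code also stems and drops stop words.)
    tokens: set[str] = set()
    for raw in text.lower().split():
        word = raw.strip(".,;:!?\"'()[]{}-/")
        if len(word) >= 3:
            tokens.add(word)
        if "-" in word or "/" in word:
            for sub in re.split(r"[-/]+", word):
                if len(sub) >= 3:
                    tokens.add(sub)
    return tokens


def matches(memory: str, response: str) -> bool:
    memory_tokens = tokenize(memory)
    if not memory_tokens:
        return False
    overlap = memory_tokens & tokenize(response)
    fraction = len(overlap) / len(memory_tokens)
    if len(memory_tokens) <= LONG_TOKEN_COUNT:
        return fraction >= MATCH_THRESHOLD  # short mode: 70% of tokens echoed
    # Long mode: absolute floor of distinct tokens plus a softer fraction.
    return len(overlap) >= LONG_MIN_OVERLAP and fraction >= LONG_MIN_FRACTION


# Sub-tokens let "polisher control" reinforce a memory that wrote
# "polisher-control", without matching the exact hyphenation.
print(matches("polisher-control calibration workflow",
              "the polisher control calibration workflow is frozen"))  # True
```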
@@ -344,6 +344,9 @@ def get_memories_for_context(
     memory_types: list[str] | None = None,
     project: str | None = None,
     budget: int = 500,
+    header: str = "--- AtoCore Memory ---",
+    footer: str = "--- End Memory ---",
+    query: str | None = None,
 ) -> tuple[str, int]:
     """Get formatted memories for context injection.
 
@@ -351,38 +354,81 @@ def get_memories_for_context(
 
     Budget allocation per Master Plan section 9:
     identity: 5%, preference: 5%, rest from retrieval budget
 
+    The caller can override ``header`` / ``footer`` to distinguish
+    multiple memory blocks in the same pack (e.g. identity/preference
+    vs project/knowledge memories).
+
+    When ``query`` is provided, candidates within each memory type
+    are ranked by lexical overlap against the query (stemmed token
+    intersection, ties broken by confidence). Without a query,
+    candidates fall through in the order ``get_memories`` returns
+    them — which is effectively "by confidence desc".
     """
     if memory_types is None:
        memory_types = ["identity", "preference"]
 
     if budget <= 0:
         return "", 0
 
-    header = "--- AtoCore Memory ---"
-    footer = "--- End Memory ---"
     wrapper_chars = len(header) + len(footer) + 2
     if budget <= wrapper_chars:
         return "", 0
 
     available = budget - wrapper_chars
     selected_entries: list[str] = []
     used = 0
 
-    for index, mtype in enumerate(memory_types):
-        type_budget = available if index == len(memory_types) - 1 else max(0, available // (len(memory_types) - index))
-        type_used = 0
+    # Pre-tokenize the query once. ``_rank_memories_for_query`` is a
+    # free function below that reuses the reinforcement tokenizer so
+    # lexical scoring here matches the reinforcement matcher.
+    query_tokens: set[str] | None = None
+    if query:
+        from atocore.memory.reinforcement import _normalize, _tokenize
+
+        query_tokens = _tokenize(_normalize(query))
+        if not query_tokens:
+            query_tokens = None
+
+    # Collect ALL candidates across the requested types into one
+    # pool, then rank globally before the budget walk. Ranking per
+    # type and walking types in order would starve later types when
+    # the first type's candidates filled the budget — even if a
+    # later-type candidate matched the query perfectly. Type order
+    # is preserved as a stable tiebreaker inside
+    # ``_rank_memories_for_query`` via Python's stable sort.
+    pool: list[Memory] = []
+    seen_ids: set[str] = set()
+    for mtype in memory_types:
         for mem in get_memories(
             memory_type=mtype,
             project=project,
             min_confidence=0.5,
-            limit=10,
+            limit=30,
         ):
-            entry = f"[{mem.memory_type}] {mem.content}"
-            entry_len = len(entry) + 1
-            if entry_len > type_budget - type_used:
+            if mem.id in seen_ids:
                 continue
-            selected_entries.append(entry)
-            type_used += entry_len
-        available -= type_used
+            seen_ids.add(mem.id)
+            pool.append(mem)
+
+    if query_tokens is not None:
+        pool = _rank_memories_for_query(pool, query_tokens)
+
+    # Per-entry cap prevents a single long memory from monopolizing
+    # the band. With 16 p06 memories competing for ~700 chars, an
+    # uncapped 530-char overview memory fills the entire budget before
+    # a query-relevant 150-char memory gets a slot. The cap ensures at
+    # least 2-3 entries fit regardless of individual memory length.
+    max_entry_chars = 250
+    for mem in pool:
+        content = mem.content
+        if len(content) > max_entry_chars:
+            content = content[:max_entry_chars - 3].rstrip() + "..."
+        entry = f"[{mem.memory_type}] {content}"
+        entry_len = len(entry) + 1
+        if entry_len > available - used:
+            continue
+        selected_entries.append(entry)
+        used += entry_len
 
     if not selected_entries:
         return "", 0
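
The capped budget walk above can be sketched in isolation. The `pack_entries` helper below is hypothetical (the real code works on `Memory` objects from `get_memories`); it only illustrates the cap-then-greedy-fill behaviour on plain tuples:

```python
def pack_entries(pool, available, max_entry_chars=250):
    """Greedy budget walk with a per-entry cap.

    ``pool`` holds (memory_type, content) pairs, already ranked
    best-first. An entry that no longer fits is skipped rather than
    ending the walk, so a later short entry can still use the budget.
    """
    selected = []
    used = 0
    for mtype, content in pool:
        if len(content) > max_entry_chars:
            content = content[: max_entry_chars - 3].rstrip() + "..."
        entry = f"[{mtype}] {content}"
        entry_len = len(entry) + 1  # +1 for the joining newline
        if entry_len > available - used:
            continue
        selected.append(entry)
        used += entry_len
    return selected


pool = [
    ("project", "x" * 530),                      # long overview memory
    ("project", "short, query-relevant fact"),   # would starve without the cap
]
packed = pack_entries(pool, available=320)
```

With the cap, the 530-char memory is truncated to 250 chars and the short fact still gets a slot; uncapped, the first entry alone would exhaust the 320-char budget.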
@@ -394,6 +440,28 @@ def get_memories_for_context(
     return text, len(text)
 
 
+def _rank_memories_for_query(
+    memories: list["Memory"],
+    query_tokens: set[str],
+) -> list["Memory"]:
+    """Rerank a memory list by lexical overlap with a pre-tokenized query.
+
+    Ordering key: (overlap_count DESC, confidence DESC). When a query
+    shares no tokens with a memory, overlap is zero and confidence
+    acts as the sole tiebreaker — which matches the pre-query
+    behaviour and keeps no-query calls stable.
+    """
+    from atocore.memory.reinforcement import _normalize, _tokenize
+
+    scored: list[tuple[int, float, Memory]] = []
+    for mem in memories:
+        mem_tokens = _tokenize(_normalize(mem.content))
+        overlap = len(mem_tokens & query_tokens) if mem_tokens else 0
+        scored.append((overlap, mem.confidence, mem))
+    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
+    return [mem for _, _, mem in scored]
+
+
 def _row_to_memory(row) -> Memory:
     """Convert a DB row to Memory dataclass."""
     keys = row.keys() if hasattr(row, "keys") else []

234	t420-openclaw/AGENTS.md	Normal file
@@ -0,0 +1,234 @@
# AGENTS.md - Your Workspace

This folder is home. Treat it that way.

## First Run

If `BOOTSTRAP.md` exists, that's your birth certificate. Follow it, figure out who you are, then delete it. You won't need it again.

## Every Session

Before doing anything else:
1. Read `SOUL.md` — this is who you are
2. Read `USER.md` — this is who you're helping
3. Read `MODEL-ROUTING.md` — follow the auto-routing policy for model selection
4. Read `memory/YYYY-MM-DD.md` (today + yesterday) for recent context
5. **If in MAIN SESSION** (direct chat with your human): Also read `MEMORY.md`

Don't ask permission. Just do it.

## Memory

You wake up fresh each session. These files are your continuity:
- **Daily notes:** `memory/YYYY-MM-DD.md` (create `memory/` if needed) — raw logs of what happened
- **Long-term:** `MEMORY.md` — your curated memories, like a human's long-term memory

Capture what matters. Decisions, context, things to remember. Skip the secrets unless asked to keep them.

### 🧠 MEMORY.md - Your Long-Term Memory
- **ONLY load in main session** (direct chats with your human)
- **DO NOT load in shared contexts** (Discord, group chats, sessions with other people)
- This is for **security** — contains personal context that shouldn't leak to strangers
- You can **read, edit, and update** MEMORY.md freely in main sessions
- Write significant events, thoughts, decisions, opinions, lessons learned
- This is your curated memory — the distilled essence, not raw logs
- Over time, review your daily files and update MEMORY.md with what's worth keeping

### 📝 Write It Down - No "Mental Notes"!
- **Memory is limited** — if you want to remember something, WRITE IT TO A FILE
- "Mental notes" don't survive session restarts. Files do.
- When someone says "remember this" → update `memory/YYYY-MM-DD.md` or relevant file
- When you learn a lesson → update AGENTS.md, TOOLS.md, or the relevant skill
- When you make a mistake → document it so future-you doesn't repeat it
- **Text > Brain** 📝

## Safety

- Don't exfiltrate private data. Ever.
- Don't run destructive commands without asking.
- `trash` > `rm` (recoverable beats gone forever)
- When in doubt, ask.

## External vs Internal

**Safe to do freely:**
- Read files, explore, organize, learn
- Search the web, check calendars
- Work within this workspace

**Ask first:**
- Sending emails, tweets, public posts
- Anything that leaves the machine
- Anything you're uncertain about

## Group Chats

You have access to your human's stuff. That doesn't mean you *share* their stuff. In groups, you're a participant — not their voice, not their proxy. Think before you speak.

### 💬 Know When to Speak!
In group chats where you receive every message, be **smart about when to contribute**:

**Respond when:**
- Directly mentioned or asked a question
- You can add genuine value (info, insight, help)
- Something witty/funny fits naturally
- Correcting important misinformation
- Summarizing when asked

**Stay silent (HEARTBEAT_OK) when:**
- It's just casual banter between humans
- Someone already answered the question
- Your response would just be "yeah" or "nice"
- The conversation is flowing fine without you
- Adding a message would interrupt the vibe

**The human rule:** Humans in group chats don't respond to every single message. Neither should you. Quality > quantity. If you wouldn't send it in a real group chat with friends, don't send it.

**Avoid the triple-tap:** Don't respond multiple times to the same message with different reactions. One thoughtful response beats three fragments.

Participate, don't dominate.

### 😊 React Like a Human!
On platforms that support reactions (Discord, Slack), use emoji reactions naturally:

**React when:**
- You appreciate something but don't need to reply (👍, ❤️, 🙌)
- Something made you laugh (😂, 💀)
- You find it interesting or thought-provoking (🤔, 💡)
- You want to acknowledge without interrupting the flow
- It's a simple yes/no or approval situation (✅, 👀)

**Why it matters:**
Reactions are lightweight social signals. Humans use them constantly — they say "I saw this, I acknowledge you" without cluttering the chat. You should too.

**Don't overdo it:** One reaction per message max. Pick the one that fits best.

## Tools

When a task is contextual and project-dependent, use the `atocore-context` skill to query Dalidou-hosted AtoCore for trusted project state, retrieval, context-building, registered-project refresh, or project-registration discovery when that will improve accuracy. Treat AtoCore as additive and fail-open; do not replace OpenClaw's own memory with it. Prefer `projects` and `refresh-project <id>` when a known project needs a clean source refresh, use `project-template` when proposing a new project registration, and use `propose-project ...` when you want a normalized preview before editing the registry manually.

### Organic AtoCore Routing

For normal project knowledge questions, use AtoCore by default without waiting for the human to ask for the helper explicitly.

Use AtoCore first when the prompt:
- mentions a registered project id or alias
- asks about architecture, constraints, status, requirements, vendors, planning, prior decisions, or current project truth
- would benefit from cross-source context instead of only the local repo

Preferred flow:
1. `auto-context "<prompt>" 3000` for most project knowledge questions
2. `project-state <project>` when the user is clearly asking for trusted current truth
3. `audit-query "<prompt>" 5 [project]` when broad prompts drift, archive/history noise appears, or retrieval quality is being evaluated
4. `refresh-project <id>` before answering if the user explicitly asked to refresh or ingest project changes
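
Concretely, that flow might look like the shell session below. The `python atocore.py` wrapper and the prompt strings are assumptions carried over from ATOCORE-OPERATIONS.md in this workspace; adjust to however the skill is actually invoked on your host.

```bash
python atocore.py auto-context "current p06 polisher architecture and constraints" 3000
python atocore.py project-state p06-polisher          # trusted current truth
python atocore.py audit-query "polisher system map shared contracts" 5 p06-polisher
python atocore.py refresh-project p06-polisher        # only on an explicit refresh request
```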

For AtoCore improvement work, prefer this sequence:
1. retrieval-quality pass
2. Wave 2 trusted-operational ingestion
3. AtoDrive clarification
4. restore and ops validation

Wave 2 trusted-operational truth should prioritize:
- current status
- current decisions
- requirements baseline
- milestone plan
- next actions

Do not ingest the whole PKM vault before the trusted-operational layer is in good shape. Treat AtoDrive as curated operational truth, not a generic dump.

Do not force AtoCore for purely local coding actions like fixing a function, editing one file, or running tests, unless broader project context is likely to matter.

If `auto-context` returns `no_project_match` or AtoCore is unavailable, continue normally with OpenClaw's own tools and memory.

Skills provide your tools. When you need one, check its `SKILL.md`. Keep local notes (camera names, SSH details, voice preferences) in `TOOLS.md`.

**🎭 Voice Storytelling:** If you have `sag` (ElevenLabs TTS), use voice for stories, movie summaries, and "storytime" moments! Way more engaging than walls of text. Surprise people with funny voices.

**📝 Platform Formatting:**
- **Discord/WhatsApp:** No markdown tables! Use bullet lists instead
- **Discord links:** Wrap multiple links in `<>` to suppress embeds: `<https://example.com>`
- **WhatsApp:** No headers — use **bold** or CAPS for emphasis

## 💓 Heartbeats - Be Proactive!

When you receive a heartbeat poll (message matches the configured heartbeat prompt), don't just reply `HEARTBEAT_OK` every time. Use heartbeats productively!

Default heartbeat prompt:
`Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK.`

You are free to edit `HEARTBEAT.md` with a short checklist or reminders. Keep it small to limit token burn.

### Heartbeat vs Cron: When to Use Each

**Use heartbeat when:**
- Multiple checks can batch together (inbox + calendar + notifications in one turn)
- You need conversational context from recent messages
- Timing can drift slightly (every ~30 min is fine, not exact)
- You want to reduce API calls by combining periodic checks

**Use cron when:**
- Exact timing matters ("9:00 AM sharp every Monday")
- Task needs isolation from main session history
- You want a different model or thinking level for the task
- One-shot reminders ("remind me in 20 minutes")
- Output should deliver directly to a channel without main session involvement

**Tip:** Batch similar periodic checks into `HEARTBEAT.md` instead of creating multiple cron jobs. Use cron for precise schedules and standalone tasks.

**Things to check (rotate through these, 2-4 times per day):**
- **Emails** - Any urgent unread messages?
- **Calendar** - Upcoming events in next 24-48h?
- **Mentions** - Twitter/social notifications?
- **Weather** - Relevant if your human might go out?

**Track your checks** in `memory/heartbeat-state.json`:
```json
{
  "lastChecks": {
    "email": 1703275200,
    "calendar": 1703260800,
    "weather": null
  }
}
```
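
One way to act on this state file during a heartbeat is a small due-check helper. The sketch below is illustrative only: the `due_checks` name and the intervals are not part of any existing tool.

```python
import json
from pathlib import Path

# Illustrative intervals in seconds; tune per check.
CHECK_INTERVAL_S = {"email": 4 * 3600, "calendar": 6 * 3600, "weather": 12 * 3600}


def due_checks(state_path: Path, now: float) -> list[str]:
    """Return the checks whose interval has elapsed.

    A missing file, unparsable JSON, or a null timestamp all count
    as "never checked", so that check comes due.
    """
    try:
        state = json.loads(state_path.read_text())
    except (OSError, json.JSONDecodeError):
        state = {}
    last = state.get("lastChecks", {})
    due = []
    for name, interval in CHECK_INTERVAL_S.items():
        ts = last.get(name)
        if ts is None or now - ts >= interval:
            due.append(name)
    return due
```

Call it with something like `due_checks(Path("memory/heartbeat-state.json"), time.time())` at the top of a heartbeat turn, then rewrite the timestamps for whatever you actually checked.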

**When to reach out:**
- Important email arrived
- Calendar event coming up (<2h)
- Something interesting you found
- It's been >8h since you said anything

**When to stay quiet (HEARTBEAT_OK):**
- Late night (23:00-08:00) unless urgent
- Human is clearly busy
- Nothing new since last check
- You just checked <30 minutes ago

**Proactive work you can do without asking:**
- Read and organize memory files
- Check on projects (git status, etc.)
- Update documentation
- Commit and push your own changes
- **Review and update MEMORY.md** (see below)

### 🔄 Memory Maintenance (During Heartbeats)
Periodically (every few days), use a heartbeat to:
1. Read through recent `memory/YYYY-MM-DD.md` files
2. Identify significant events, lessons, or insights worth keeping long-term
3. Update `MEMORY.md` with distilled learnings
4. Remove outdated info from MEMORY.md that's no longer relevant

Think of it like a human reviewing their journal and updating their mental model. Daily files are raw notes; MEMORY.md is curated wisdom.

The goal: Be helpful without being annoying. Check in a few times a day, do useful background work, but respect quiet time.

## Make It Yours

This is a starting point. Add your own conventions, style, and rules as you figure out what works.

## Orchestration Completion Protocol

After any orchestration chain completes (research → review → condensation):
1. Secretary MUST be the final agent tasked
2. Secretary produces the condensation file AND posts a distillate to Discord #reports
3. Manager should include in Secretary's task: "Post a distillate to Discord #reports summarizing this orchestration"

133	t420-openclaw/ATOCORE-OPERATIONS.md	Normal file
@@ -0,0 +1,133 @@
|
||||
# AtoCore Operations
|
||||
|
||||
This is the current operating playbook for making AtoCore more dependable and higher-signal.
|
||||
|
||||
## Order of Work
|
||||
|
||||
1. Retrieval-quality pass
|
||||
2. Wave 2 trusted-operational ingestion
|
||||
3. AtoDrive clarification
|
||||
4. Restore and ops validation
|
||||
|
||||
## 1. Retrieval-Quality Pass
|
||||
|
||||
Observed behavior from the live service:
|
||||
|
||||
- broad prompts like `gigabit` and `polisher` still surface archive/history noise
|
||||
- meaningful prompts like `mirror frame stiffness requirements and selected architecture` are much sharper
|
||||
- now that the corpus is large enough, ranking quality matters more than raw corpus presence
|
||||
|
||||
Use these commands first:
|
||||
|
||||
```bash
|
||||
python atocore.py audit-query "gigabit" 5
|
||||
python atocore.py audit-query "polisher" 5
|
||||
python atocore.py audit-query "mirror frame stiffness requirements and selected architecture" 5 p04-gigabit
|
||||
python atocore.py audit-query "interferometer error budget and vendor selection constraints" 5 p05-interferometer
|
||||
python atocore.py audit-query "polisher system map shared contracts and calibration workflow" 5 p06-polisher
|
||||
```
|
||||
|
||||
What to fix in the retrieval pass:
|
||||
|
||||
- reduce `_archive`, `pre-cleanup`, `pre-migration`, and `History` prominence
|
||||
- prefer current-status, decision, requirement, architecture-freeze, and milestone docs
|
||||
- prefer trusted project-state over freeform notes when both speak to current truth
|
||||
- keep broad prompts from matching stale or generic chunks too easily
|
||||
|
||||
Suggested acceptance bar:
|
||||
|
||||
- top 5 for active-project prompts contain at least one current-status or next-focus item
|
||||
- top 5 contain at least one decision or architecture-baseline item
|
||||
- top 5 contain at least one requirement or constraints item
|
||||
- broad single-word prompts no longer lead with archive/history chunks

## 2. Wave 2 Trusted-Operational Ingestion

Do not ingest the whole PKM vault next.

Wave 2 should ingest trusted operational truth for each active project:

- current status dashboard or status note
- current decisions / decision log
- requirements baseline
- architecture freeze / current baseline
- milestone plan
- next actions / near-term focus

Recommended helper flow:

```bash
python atocore.py project-state p04-gigabit
python atocore.py project-state p05-interferometer
python atocore.py project-state p06-polisher
python atocore.py project-state-set p04-gigabit status next_focus "Continue curated support and frame-context buildout." "Wave 2 status dashboard" 1.0
python atocore.py project-state-set p05-interferometer requirement key_constraints "Preserve current error-budget, thermal, and vendor-selection constraints as the working baseline." "Wave 2 requirements baseline" 1.0
python atocore.py project-state-set p06-polisher decision system_boundary "The suite remains a three-layer chain with explicit planning, translation, and execution boundaries." "Wave 2 decision log" 1.0
python atocore.py refresh-project p04-gigabit
python atocore.py refresh-project p05-interferometer
python atocore.py refresh-project p06-polisher
```

Use `project-state` for the most authoritative "current truth" fields, then refresh the registered project roots after the curated Wave 2 documents land.

## 3. AtoDrive Clarification

AtoDrive should become a trusted-operational source, not a generic corpus dump.

Good AtoDrive candidates:

- current dashboards
- current baselines
- approved architecture docs
- decision logs
- milestone and next-step views
- operational source-of-truth files that humans actively maintain

Avoid as default AtoDrive ingest:

- large generic archives
- duplicated exports
- stale snapshots when a newer baseline exists
- exploratory notes that are not designated current truth

Rule of thumb:

- if the file answers "what is true now?", it may belong in trusted-operational
- if the file mostly answers "what did we think at some point?", it belongs in the broader corpus, not Wave 2
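The rule of thumb above can be sketched as a tiny triage helper. This is an illustrative sketch only: the keyword lists are assumptions for the example, not AtoCore's actual ingest classifier, and real triage should keep a human in the loop.

```python
# Illustrative triage for the Wave 2 rule of thumb: route a file toward
# trusted-operational ingest, the broader corpus, or human review.
# The hint lists below are assumptions, not AtoCore's real logic.
CURRENT_TRUTH_HINTS = ("status", "dashboard", "decision", "baseline", "milestone", "next-steps")
HISTORY_HINTS = ("archive", "snapshot", "export", "draft", "exploratory")


def wave2_bucket(filename: str) -> str:
    name = filename.lower()
    # "what did we think at some point?" files go to the broader corpus first,
    # so a stale "status" snapshot is not mistaken for current truth.
    if any(hint in name for hint in HISTORY_HINTS):
        return "broader-corpus"
    if any(hint in name for hint in CURRENT_TRUTH_HINTS):
        return "trusted-operational"
    return "needs-human-review"
```

Anything that lands in `needs-human-review` should be decided explicitly rather than bulk-ingested.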

## 4. Restore and Ops Validation

Backups are not enough until restore has been tested.

Validate these explicitly:

- SQLite metadata restore
- Chroma restore or rebuild
- registry restore
- source-root refresh after restore
- health and stats consistency after recovery

Recommended restore drill:

1. Record current `health`, `stats`, and `projects` output.
2. Restore SQLite metadata and the project registry from backup.
3. Decide whether Chroma is restored from backup or rebuilt from source.
4. Run a project refresh for every active project.
5. Compare vector/doc counts and run the retrieval audits again.
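Step 5's count comparison can be automated as a small diff over the captured before/after `stats` payloads. The `vectors`/`documents` field names here are assumptions about the stats JSON shape; adjust them to the real schema.

```python
import json


def stats_drift(before_json: str, after_json: str,
                fields: tuple[str, ...] = ("vectors", "documents")) -> dict[str, int]:
    """Return per-field count drift between two captured `stats` payloads.

    A nonzero value after a restore drill flags data loss or duplication.
    Field names are assumptions about the stats schema, not a fixed contract.
    """
    before, after = json.loads(before_json), json.loads(after_json)
    return {field: after.get(field, 0) - before.get(field, 0) for field in fields}
```

Any nonzero drift should block declaring the restore drill successful until explained.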

Commands to capture the before/after baseline:

```bash
python atocore.py health
python atocore.py stats
python atocore.py projects
python atocore.py audit-query "gigabit" 5
python atocore.py audit-query "interferometer error budget and vendor selection constraints" 5 p05-interferometer
```

Recovery policy decision still needed:

- prefer Chroma backup restore for fast recovery when backup integrity is trusted
- prefer Chroma rebuild when backups are suspect, the schema changed, or ranking behavior drifts unexpectedly

The important part is to choose one policy on purpose and validate it, not to leave it implicit.
105
t420-openclaw/SKILL.md
Normal file
@@ -0,0 +1,105 @@
---
name: atocore-context
description: Use Dalidou-hosted AtoCore as a read-only external context service for project state, retrieval, and context-building without touching OpenClaw's own memory.
---

# AtoCore Context

Use this skill when you need trusted project context, retrieval help, or AtoCore health/status from the canonical Dalidou instance.

## Purpose

AtoCore is an additive external context service.

- It does not replace OpenClaw's own memory.
- It should be used for contextual work, not trivial prompts.
- It is read-only in this first integration batch.
- If AtoCore is unavailable, continue normally.

## Canonical Endpoint

Default base URL:

```bash
http://dalidou:8100
```

Override with:

```bash
ATOCORE_BASE_URL=http://host:port
```
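Resolving the override comes down to the standard shell default-expansion idiom. A minimal sketch of what the helper is expected to do, not the helper script's verbatim code:

```shell
# Use ATOCORE_BASE_URL when set, else the canonical Dalidou default.
BASE_URL="${ATOCORE_BASE_URL:-http://dalidou:8100}"
# Strip a trailing slash so path concatenation stays clean, mirroring the
# Python helper's rstrip("/").
BASE_URL="${BASE_URL%/}"
echo "$BASE_URL"
```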

## Safe Usage

Use AtoCore for:

- project-state checks
- automatic project detection for normal project questions
- retrieval-quality audits before declaring a project corpus "good enough"
- retrieval over ingested project/ecosystem docs
- context-building for complex project prompts
- verifying current AtoCore hosting and architecture state
- listing registered projects and refreshing a known project source set
- inspecting the project registration template before proposing a new project entry
- generating a proposal preview for a new project registration without writing it
- registering an approved project entry when explicitly requested
- updating an existing registered project when aliases or description need refinement

Do not use AtoCore for:

- automatic memory write-back
- replacing OpenClaw memory
- silent ingestion of broad new corpora without approval
- ingesting the whole PKM vault before trusted operational truth is staged
- mutating the registry automatically without human approval

## Commands

```bash
~/clawd/skills/atocore-context/scripts/atocore.sh health
~/clawd/skills/atocore-context/scripts/atocore.sh sources
~/clawd/skills/atocore-context/scripts/atocore.sh stats
~/clawd/skills/atocore-context/scripts/atocore.sh projects
~/clawd/skills/atocore-context/scripts/atocore.sh project-template
~/clawd/skills/atocore-context/scripts/atocore.sh detect-project "what's the interferometer error budget?"
~/clawd/skills/atocore-context/scripts/atocore.sh auto-context "what's the interferometer error budget?" 3000
~/clawd/skills/atocore-context/scripts/atocore.sh debug-context
~/clawd/skills/atocore-context/scripts/atocore.sh audit-query "gigabit" 5
~/clawd/skills/atocore-context/scripts/atocore.sh audit-query "mirror frame stiffness requirements and selected architecture" 5 p04-gigabit
~/clawd/skills/atocore-context/scripts/atocore.sh propose-project p07-example "p07,example-project" vault incoming/projects/p07-example "Example project" "Primary staged project docs"
~/clawd/skills/atocore-context/scripts/atocore.sh register-project p07-example "p07,example-project" vault incoming/projects/p07-example "Example project" "Primary staged project docs"
~/clawd/skills/atocore-context/scripts/atocore.sh update-project p05 "Curated staged docs for the P05 interferometer architecture, vendors, and error-budget project."
~/clawd/skills/atocore-context/scripts/atocore.sh refresh-project p05
~/clawd/skills/atocore-context/scripts/atocore.sh project-state atocore
~/clawd/skills/atocore-context/scripts/atocore.sh project-state-set p05-interferometer status next_focus "Freeze current error-budget baseline and vendor downselect." "Wave 2 status dashboard" 1.0
~/clawd/skills/atocore-context/scripts/atocore.sh project-state-invalidate p05-interferometer status next_focus
~/clawd/skills/atocore-context/scripts/atocore.sh query "What is AtoDrive?"
~/clawd/skills/atocore-context/scripts/atocore.sh context-build "Need current AtoCore architecture" atocore 3000
```

Direct Python entrypoint for non-Bash environments:

```bash
python ~/clawd/skills/atocore-context/scripts/atocore.py health
```

## Contract

- prefer AtoCore only when additional context is genuinely useful
- trust AtoCore as additive context, not as a hard runtime dependency
- fail open if the service errors or times out
- cite when information came from AtoCore rather than local OpenClaw memory
- for normal project knowledge questions, prefer `auto-context "<prompt>" 3000` before answering
- use `detect-project "<prompt>"` when you want to inspect project inference explicitly
- use `debug-context` right after `auto-context` or `context-build` when you want to inspect the exact last AtoCore context pack
- use `audit-query "<prompt>" 5 [project]` when retrieval quality is in question, especially for broad prompts
- prefer `projects` plus `refresh-project <id>` over long ad hoc ingest instructions when the project is already registered
- use `project-template` when preparing a new project registration proposal
- use `propose-project ...` to draft a normalized entry and review collisions first
- use `register-project ...` only after the proposal has been reviewed and approved
- use `update-project ...` when a registered project's description or aliases need refinement before refresh
- use `project-state-set` for trusted operational truth such as current status, current decisions, frozen requirements, milestone baselines, and next actions
- do Wave 2 before broad PKM expansion: status dashboards, decision logs, milestone views, current baseline docs, and next-step views
- treat AtoDrive as a curated trusted-operational source, not a generic dump of miscellaneous drive files
- validate restore posture explicitly; a backup is not trusted until restore or rebuild steps have been exercised successfully
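The fail-open rule in the contract reduces to a small wrapper pattern: any transport failure degrades to an explicit "unavailable" marker instead of raising, so the caller can continue without AtoCore. A sketch of the pattern, not the helper script's exact implementation:

```python
from typing import Any, Callable


def fail_open(call: Callable[[], Any]) -> Any:
    """Run an AtoCore call; on transport failure, return an explicit
    'unavailable' payload instead of raising, per the fail-open contract."""
    try:
        return call()
    except (OSError, TimeoutError):
        # The sentinel payload lets callers detect and log the degradation
        # while continuing normal OpenClaw behavior.
        return {"status": "unavailable", "source": "atocore", "fail_open": True}
```

The important property is that the degradation is visible in the payload, never silent.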
279
t420-openclaw/TOOLS.md
Normal file
@@ -0,0 +1,279 @@
# TOOLS.md - Local Notes

## AtoCore (External Context Service)

- **Canonical Host:** http://dalidou:8100
- **Role:** Read-only external context service for trusted project state, retrieval, context-building, registered project refresh, project registration discovery, and retrieval-quality auditing
- **Machine state lives on:** Dalidou (`/srv/storage/atocore/data/...`)
- **Rule:** Use AtoCore as additive context only; do not treat it as a replacement for OpenClaw memory
- **Helper script:** `/home/papa/clawd/skills/atocore-context/scripts/atocore.sh`
- **Python fallback:** `/home/papa/clawd/skills/atocore-context/scripts/atocore.py` for non-Bash environments
- **Key commands:** `projects`, `project-template`, `detect-project "<prompt>"`, `auto-context "<prompt>" [budget] [project]`, `debug-context`, `audit-query "<prompt>" [top_k] [project]`, `propose-project ...`, `register-project ...`, `update-project <id> "description" ["aliases"]`, `refresh-project <id>`, `project-state <id> [category]`, `project-state-set <project> <category> <key> <value> [source] [confidence]`, `project-state-invalidate <project> <category> <key>`, `context-build ...`
- **Fail-open rule:** If AtoCore is unavailable, continue normal OpenClaw behavior

### Organic Usage Rule

- For normal project knowledge questions, try `auto-context` first.
- For retrieval complaints or broad-prompt drift, run `audit-query` before changing ingestion scope.
- Use `project-state` when you want trusted current truth only.
- Use `project-state-set` for current status, current decisions, baseline requirements, milestone views, and next actions.
- Use `query` for quick probing/debugging.
- Use `context-build` when you already know the project and want the exact context pack.
- Use `debug-context` right after `auto-context` or `context-build` if you want to inspect the exact AtoCore supplement being fed into the workflow.
- Do Wave 2 trusted-operational ingestion before broad PKM expansion.
- Treat AtoDrive as a curated operational-truth source, not a generic bulk-ingest target.
- Keep purely local coding tasks local unless broader project context is likely to help.

## PKM / Obsidian Vault

- **Local Path:** `/home/papa/obsidian-vault/`
- **Name:** Antoine Brain Extension
- **Sync:** Syncthing (syncs from dalidou)
- **Access:** ✅ Direct local access — no SSH needed!

## ATODrive (Work Documents)

- **Local Path:** `/home/papa/ATODrive/`
- **Sync:** Syncthing (syncs from dalidou SeaDrive)
- **Access:** ✅ Direct local access

## Atomaste (Business/Templates)

- **Local Path:** `/home/papa/Atomaste/`
- **Sync:** Syncthing (syncs from dalidou SeaDrive)
- **Access:** ✅ Direct local access

## Atomaste Finance (Canonical Expense System)

- **Single home:** `/home/papa/Atomaste/03_Finances/Expenses/`
- **Rule:** If it is expense-related, it belongs under `Expenses/`, not `Documents/Receipts/`
- **Per-year structure:**
  - `YYYY/Inbox/` — unprocessed incoming receipts/screenshots
  - `YYYY/receipts/` — final home for processed raw receipt files
  - `YYYY/expenses_master.csv` — main structured expense table
  - `YYYY/reports/` — derived summaries, exports, tax packages
- **Workflow:** Inbox → `expenses_master.csv` → `receipts/`
- **Legacy path:** `/home/papa/Atomaste/03_Finances/Documents/Receipts/` is deprecated and should stay unused except for the migration note

## Odile Inc (Corporate)

- **Local Path:** `/home/papa/Odile Inc/`
- **Sync:** Syncthing (syncs from dalidou SeaDrive `My Libraries\Odile\Odile Inc`)
- **Access:** ✅ Direct local access
- **Entity:** Odile Bérubé O.D. Inc. (SPCC, optometrist)
- **Fiscal year end:** July 31
- **Structure:** `01_Finances/` (BankStatements, Expenses, Payroll, Revenue, Taxes), `02_Admin/`, `Inbox/`
- **Rule:** Corporate docs go here; personal docs go to `Impôts Odile/Dossier_Fiscal_YYYY/`

## Impôts Odile (Personal Tax)

- **Local Path:** `/home/papa/Impôts Odile/`
- **Sync:** Syncthing (syncs from dalidou SeaDrive)
- **Access:** ✅ Direct local access
- **Structure:** `Dossier_Fiscal_YYYY/` (9 sections: revenus, dépenses, crédits, feuillets, REER, dons, comptable, frais médicaux, budget)
- **Cron:** Monthly receipt processing (1st of month, 2 PM ET) scans mario@atomaste.ca for Odile's emails

## Git Repos (via Gitea)

- **Gitea URL:** http://100.80.199.40:3000
- **Auth:** Token in `~/.gitconfig` — **ALWAYS use auth for API calls** (private repos won't show without it)
- **API Auth Header:** `Authorization: token $(git config --get credential.http://100.80.199.40:3000.helper | bash | grep password | cut -d= -f2)`, or just read the token from the gitconfig directly
- **⚠️ LESSON:** Unauthenticated Gitea API calls miss private repos. Always authenticate.
- **Local Path:** `/home/papa/repos/`
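The auth rule above can be wrapped so every API call carries the token once. This is a hedged sketch: the `gitea.token` gitconfig key and the fallback value are assumptions for illustration, and the exact token location should be taken from the note above.

```shell
# Build the Authorization header once and reuse it for every Gitea API call.
# Reading the token straight from gitconfig is the simpler of the two options
# noted above; the "gitea.token" key is an assumption, adapt to the real layout.
TOKEN="$(git config --get gitea.token 2>/dev/null || echo "example-token")"
AUTH_HEADER="Authorization: token ${TOKEN}"
# Example authenticated call (private repos only appear with this header):
# curl -s -H "$AUTH_HEADER" http://100.80.199.40:3000/api/v1/repos/search
echo "$AUTH_HEADER"
```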

| Repo | Description | Path |
|------|-------------|------|
| NXOpen-MCP | NXOpen MCP Server (semantic search for NXOpen/pyNastran docs) | `/home/papa/repos/NXOpen-MCP/` |
| WEBtomaste | Atomaste website (push to Hostinger) | `/home/papa/repos/WEBtomaste/` |
| CODEtomaste | Code, scripts, dev work | `/home/papa/repos/CODEtomaste/` |
| Atomizer | Optimization framework | `/home/papa/repos/Atomizer/` |

**Workflow:** Clone → work → commit → push to Gitea

## Google Calendar (via gog)

- **CLI:** `gog` (Google Workspace CLI)
- **Account:** antoine.letarte@gmail.com
- **Scopes:** Calendar only (no Gmail, Drive, etc.)
- **Commands:**
  - `gog calendar events --max 10` — List upcoming events
  - `gog calendar calendars` — List calendars
  - `gog calendar create --summary "Meeting" --start "2026-01-28T10:00:00"` — Create event

### Vault Structure (PARA)

```
obsidian/
├── 0-Inbox/                 # Quick captures, process weekly
├── 1-Areas/                 # Ongoing responsibilities
│   ├── Personal/            # Finance, Health, Family, Home
│   └── Professional/        # Atomaste/, Engineering/
├── 2-Projects/              # Active work with deadlines
│   ├── P04-GigaBIT-M1/      # Current main project (StarSpec)
│   ├── Atomizer-AtomasteAI/
│   └── _Archive/            # Completed projects
├── 3-Resources/             # Reference material
│   ├── People/              # Clients, Suppliers, Colleagues
│   ├── Tools/               # Software, Hardware guides
│   └── Concepts/            # Technical concepts
├── 4-Calendar/              # Time-based notes
│   └── Logs/
│       ├── Daily Notes/         # TODAY only
│       ├── Daily Notes/Archive/ # Past notes
│       ├── Weekly Notes/
│       └── Meeting Notes/
├── Atlas/MAPS/              # Topic indexes (MOCs)
└── X/                       # Templates, Images, System files
```

### Key Commands (DOD Workflow)

- `/morning` - Prepare daily note, check calendar, process overnight transcripts
- `/eod` - Shutdown routine: compile metrics, draft carry-forward, prep tomorrow
- `/log [x]` - Add timestamped entry to Log section
- `/done [task]` - Mark task complete + log it
- `/block [task]` - Add blocker to Active Context
- `/idea [x]` - Add to Capture > Ideas
- `/status` - Today's progress summary
- `/tomorrow` - Draft tomorrow's plan
- `/push` - Commit CAD work to Gitea

### Daily Note Location

`/home/papa/obsidian-vault/4-Calendar/Logs/Daily Notes/YYYY-MM-DD.md`

### Transcript Inbox

`/home/papa/obsidian-vault/0-Inbox/Transcripts/` — subfolders: daily, ideas, instructions, journal, reviews, meetings, captures, notes

---

## Access Boundaries

See **SECURITY.md** for full details. Summary:

**I have access to:**

- `/home/papa/clawd/` (my workspace)
- `/home/papa/obsidian-vault/` (PKM via Syncthing)
- `/home/papa/ATODrive/` (work docs via Syncthing)
- `/home/papa/Atomaste/` (business/templates via Syncthing)

**I do NOT have access to:**

- Personal SeaDrive folders (Finance, Antoine, Adaline, Odile, Movies)
- Photos, email backups, Paperless, Home Assistant
- Direct dalidou access (SSHFS mount removed 2026-01-27)

**Restricted SSH access:**

- User `mario@dalidou` exists for on-demand access (no folder permissions by default)

---

## Atomaste Report System

Once the Atomaste folder is synced, templates will be at:
`/home/papa/Atomaste/Templates/Atomaste_Report_Standard/`

**Build command** (local, once synced):

```bash
cd /home/papa/Atomaste/Templates/Atomaste_Report_Standard
python3 scripts/build-report.py input.md -o output.pdf
```

*Pending: Syncthing setup for the Atomaste folder.*

---

## Web Hosting

- **Provider:** Hostinger
- **Domain:** atomaste.ca
- **Repo:** `webtomaste` on Gitea
- **Note:** I can't push to Gitea directly (no SSH access)

---

## Email

### Sending

- **Address:** mario@atomaste.ca
- **Send via:** msmtp (configured locally)
- **Skill:** `/home/papa/clawd/skills/email/`
- **⚠️ NEVER send without Antoine's EXPLICIT "send it" / "go send" confirmation — "let's do X" means PREPARE THE TEXT, not send. Always show the final draft and wait for an explicit send command. NO EXCEPTIONS. (Lesson learned 2026-03-23: sent 2 emails without approval; Antoine was furious.)**
- **Always use `send-email.sh` (HTML signature + Atomaste logo) — never raw msmtp**

### Reading (IMAP)

- **Script:** `python3 ~/clawd/scripts/check-email.py`
- **Credentials:** `~/.config/atomaste-mail/imap.conf` (chmod 600)
- **Server:** `imap.hostinger.com:993` (SSL)
- **Mailboxes:**
  - `mario@atomaste.ca` — also receives `antoine@atomaste.ca` forwards
  - `contact@atomaste.ca` — general Atomaste inbox
- **Commands:**
  - `python3 ~/clawd/scripts/check-email.py --unread` — unread from both
  - `python3 ~/clawd/scripts/check-email.py --account mario --max 10 --days 3`
  - `python3 ~/clawd/scripts/check-email.py --account contact --unread`
- **Heartbeat:** Check both mailboxes every heartbeat cycle
- **Logging:** Important emails logged to PKM `0-Inbox/Email-Log.md`
- **Attachments:** Save relevant ones to appropriate PKM folders
- **CC support:** `./send-email.sh "to@email.com" "Subject" "<p>Body</p>" --cc "cc@email.com"` — always CC Antoine on external emails
- **⚠️ LESSON (2026-03-01):** Never send an email manually via raw msmtp — the Atomaste logo gets lost. Always use `send-email.sh`. If a feature is missing (as CC was), fix the script first, then send once. Don't send twice.

---

## NXOpen MCP Server (Local)

- **Repo:** `/home/papa/repos/NXOpen-MCP/`
- **Venv:** `/home/papa/repos/NXOpen-MCP/.venv/`
- **Data:** `/home/papa/repos/NXOpen-MCP/data/` (classes.json, methods.json, functions.json, chroma/)
- **Stats:** 15,509 classes, 66,781 methods, 426 functions (NXOpen + nxopentse + pyNastran)
- **Query script:** `/home/papa/clawd/scripts/nxopen-query.sh`

### How to Use

The database layer is async. Use the venv Python:

```bash
# Search (semantic)
/home/papa/clawd/scripts/nxopen-query.sh search "create sketch on plane" 5

# Get class info
/home/papa/clawd/scripts/nxopen-query.sh class "SketchRectangleBuilder"

# Get method info
/home/papa/clawd/scripts/nxopen-query.sh method "CreateSketch"

# Get code examples (from nxopentse)
/home/papa/clawd/scripts/nxopen-query.sh examples "sketch" 5
```

### Direct Python (for complex queries)

```python
import asyncio
import sys

sys.path.insert(0, '/home/papa/repos/NXOpen-MCP/src')
from nxopen_mcp.database import NXOpenDatabase


async def main():
    db = NXOpenDatabase('/home/papa/repos/NXOpen-MCP/data')
    if hasattr(db, 'initialize'):
        await db.initialize()
    results = await db.search('your query', limit=10)
    # results are SearchResult objects with .title, .summary, .type, .namespace


asyncio.run(main())
```

### Sources

| Source | What | Stats |
|--------|------|-------|
| NXOpen API | Class/method signatures from .pyi stubs | 15,219 classes, 64,320 methods |
| nxopentse | Helper functions with working NXOpen code | 149 functions, 3 classes |
| pyNastran | BDF/OP2 classes for Nastran file manipulation | 287 classes, 277 functions |

---

*Add specific paths, voice preferences, camera names, etc. as I learn them.*

## Atomizer Repos (IMPORTANT)

- **Atomizer-V2** = ACTIVE working repo (Windows: `C:\Users\antoi\Atomizer-V2\`)
  - Gitea: `http://100.80.199.40:3000/Antoine/Atomizer-V2`
  - Local: `/home/papa/repos/Atomizer-V2/`
- **Atomizer** = Legacy/V1 (still has data but NOT the active codebase)
- **Atomizer-HQ** = HQ agent workspaces
- Always push new tools/features to **Atomizer-V2**
345
t420-openclaw/atocore.py
Normal file
@@ -0,0 +1,345 @@
#!/usr/bin/env python3
from __future__ import annotations

import json
import os
import re
import sys
import urllib.error
import urllib.parse
import urllib.request
from typing import Any


BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100").rstrip("/")
TIMEOUT = int(os.environ.get("ATOCORE_TIMEOUT_SECONDS", "30"))
REFRESH_TIMEOUT = int(os.environ.get("ATOCORE_REFRESH_TIMEOUT_SECONDS", "1800"))
FAIL_OPEN = os.environ.get("ATOCORE_FAIL_OPEN", "true").lower() == "true"


USAGE = """Usage:
atocore.py health
atocore.py sources
atocore.py stats
atocore.py projects
atocore.py project-template
atocore.py detect-project <prompt>
atocore.py auto-context <prompt> [budget] [project]
atocore.py debug-context
atocore.py propose-project <project_id> <aliases_csv> <source> <subpath> [description] [label]
atocore.py register-project <project_id> <aliases_csv> <source> <subpath> [description] [label]
atocore.py update-project <project> <description> [aliases_csv]
atocore.py refresh-project <project> [purge_deleted]
atocore.py project-state <project> [category]
atocore.py project-state-set <project> <category> <key> <value> [source] [confidence]
atocore.py project-state-invalidate <project> <category> <key>
atocore.py query <prompt> [top_k] [project]
atocore.py context-build <prompt> [project] [budget]
atocore.py audit-query <prompt> [top_k] [project]
atocore.py ingest-sources
"""


def print_json(payload: Any) -> None:
    print(json.dumps(payload, ensure_ascii=True))


def fail_open_payload() -> dict[str, Any]:
    return {"status": "unavailable", "source": "atocore", "fail_open": True}


def request(
    method: str,
    path: str,
    data: dict[str, Any] | None = None,
    timeout: int | None = None,
) -> Any:
    url = f"{BASE_URL}{path}"
    headers = {"Content-Type": "application/json"} if data is not None else {}
    payload = json.dumps(data).encode("utf-8") if data is not None else None
    req = urllib.request.Request(url, data=payload, headers=headers, method=method)
    try:
        with urllib.request.urlopen(req, timeout=timeout or TIMEOUT) as response:
            body = response.read().decode("utf-8")
    except urllib.error.HTTPError as exc:
        body = exc.read().decode("utf-8")
        if body:
            print(body)
        raise SystemExit(22) from exc
    except (urllib.error.URLError, TimeoutError, OSError):
        if FAIL_OPEN:
            print_json(fail_open_payload())
            raise SystemExit(0)
        raise

    if not body.strip():
        return {}
    return json.loads(body)


def parse_aliases(aliases_csv: str) -> list[str]:
    return [alias.strip() for alias in aliases_csv.split(",") if alias.strip()]


def project_payload(
    project_id: str,
    aliases_csv: str,
    source: str,
    subpath: str,
    description: str,
    label: str,
) -> dict[str, Any]:
    return {
        "project_id": project_id,
        "aliases": parse_aliases(aliases_csv),
        "description": description,
        "ingest_roots": [{"source": source, "subpath": subpath, "label": label}],
    }


def detect_project(prompt: str) -> dict[str, Any]:
    payload = request("GET", "/projects")
    prompt_lower = prompt.lower()
    best_project = None
    best_alias = None
    best_score = -1

    for project in payload.get("projects", []):
        candidates = [project.get("id", ""), *project.get("aliases", [])]
        for candidate in candidates:
            candidate = (candidate or "").strip()
            if not candidate:
                continue
            # Prefer a word-boundary match; fall back to a plain substring hit.
            pattern = rf"(?<![a-z0-9]){re.escape(candidate.lower())}(?![a-z0-9])"
            matched = re.search(pattern, prompt_lower) is not None
            if not matched and candidate.lower() not in prompt_lower:
                continue
            # Longer matched aliases win over shorter ones.
            score = len(candidate)
            if score > best_score:
                best_project = project.get("id")
                best_alias = candidate
                best_score = score

    return {"matched_project": best_project, "matched_alias": best_alias}


def bool_arg(raw: str) -> bool:
    return raw.lower() in {"1", "true", "yes", "y"}


def classify_result(result: dict[str, Any]) -> dict[str, Any]:
    source_file = (result.get("source_file") or "").lower()
    heading = (result.get("heading_path") or "").lower()
    title = (result.get("title") or "").lower()
    text = " ".join([source_file, heading, title])

    labels: list[str] = []
    if any(token in text for token in ["_archive", "/archive", "archive/", "pre-cleanup", "pre-migration", "history"]):
        labels.append("archive_or_history")
    if any(token in text for token in ["status", "dashboard", "current-state", "current state", "next-steps", "next steps"]):
        labels.append("current_status")
    if any(token in text for token in ["decision", "adr", "tradeoff", "selected architecture", "selection"]):
        labels.append("decision")
    if any(token in text for token in ["requirement", "spec", "constraints", "baseline", "cdr", "sow"]):
        labels.append("requirements")
    if any(token in text for token in ["roadmap", "milestone", "plan", "workflow", "calibration", "contract"]):
        labels.append("execution_plan")
    if not labels:
        labels.append("reference")

    noisy = "archive_or_history" in labels
    return {
        "score": result.get("score"),
        "title": result.get("title"),
        "heading_path": result.get("heading_path"),
        "source_file": result.get("source_file"),
        "labels": labels,
        "is_noise_risk": noisy,
    }


def audit_query(prompt: str, top_k: int, project: str | None) -> dict[str, Any]:
    response = request(
        "POST",
        "/query",
        {"prompt": prompt, "top_k": top_k, "project": project or None},
    )
    classifications = [classify_result(result) for result in response.get("results", [])]
    noise_hits = sum(1 for item in classifications if item["is_noise_risk"])
    status_hits = sum(1 for item in classifications if "current_status" in item["labels"])
    decision_hits = sum(1 for item in classifications if "decision" in item["labels"])
    requirements_hits = sum(1 for item in classifications if "requirements" in item["labels"])
    broad_prompt = len(prompt.split()) <= 2

    recommendations: list[str] = []
    if broad_prompt:
        recommendations.append("Prompt is broad; prefer a project-specific question with intent, artifact type, or constraint language.")
    if noise_hits:
        recommendations.append("Archive/history noise is present; prefer current-status, decision, requirements, and baseline docs in the next ingestion/ranking pass.")
    if status_hits == 0:
        recommendations.append("No current-status docs surfaced in the top results; Wave 2 should ingest or strengthen trusted operational truth.")
    if decision_hits == 0:
        recommendations.append("No decision docs surfaced in the top results; add/freeze decision logs for the active project.")
    if requirements_hits == 0:
        recommendations.append("No requirements/baseline docs surfaced in the top results; prioritize baseline and architecture freeze material.")
    if not recommendations:
        recommendations.append("Ranking looks healthy for this prompt.")

    return {
        "prompt": prompt,
        "project": project,
        "top_k": top_k,
        "broad_prompt": broad_prompt,
        "noise_hits": noise_hits,
        "current_status_hits": status_hits,
        "decision_hits": decision_hits,
        "requirements_hits": requirements_hits,
        "results": classifications,
        "recommendations": recommendations,
    }


def main(argv: list[str]) -> int:
    if len(argv) < 2:
        print(USAGE, end="")
        return 1

    cmd = argv[1]
    args = argv[2:]

    if cmd == "health":
        print_json(request("GET", "/health"))
        return 0
    if cmd == "sources":
        print_json(request("GET", "/sources"))
        return 0
    if cmd == "stats":
        print_json(request("GET", "/stats"))
        return 0
    if cmd == "projects":
        print_json(request("GET", "/projects"))
        return 0
    if cmd == "project-template":
        print_json(request("GET", "/projects/template"))
        return 0
    if cmd == "detect-project":
        if not args:
            print(USAGE, end="")
            return 1
        print_json(detect_project(args[0]))
        return 0
    if cmd == "auto-context":
        if not args:
            print(USAGE, end="")
            return 1
        prompt = args[0]
        budget = int(args[1]) if len(args) > 1 else 3000
        project = args[2] if len(args) > 2 else ""
        if not project:
            project = detect_project(prompt).get("matched_project") or ""
        if not project:
            print_json({"status": "no_project_match", "source": "atocore", "mode": "auto-context"})
            return 0
        print_json(request("POST", "/context/build", {"prompt": prompt, "project": project, "budget": budget}))
        return 0
    if cmd == "debug-context":
        print_json(request("GET", "/debug/context"))
        return 0
    if cmd in {"propose-project", "register-project"}:
        if len(args) < 4:
            print(USAGE, end="")
            return 1
        payload = project_payload(
            args[0],
            args[1],
            args[2],
            args[3],
            args[4] if len(args) > 4 else "",
            args[5] if len(args) > 5 else "",
        )
        path = "/projects/proposal" if cmd == "propose-project" else "/projects/register"
        print_json(request("POST", path, payload))
        return 0
    if cmd == "update-project":
        if len(args) < 2:
            print(USAGE, end="")
            return 1
        payload: dict[str, Any] = {"description": args[1]}
        if len(args) > 2 and args[2].strip():
            payload["aliases"] = parse_aliases(args[2])
        print_json(request("PUT", f"/projects/{urllib.parse.quote(args[0])}", payload))
        return 0
    if cmd == "refresh-project":
        if not args:
            print(USAGE, end="")
            return 1
        purge_deleted = bool_arg(args[1]) if len(args) > 1 else False
        path = f"/projects/{urllib.parse.quote(args[0])}/refresh?purge_deleted={str(purge_deleted).lower()}"
        print_json(request("POST", path, {}, timeout=REFRESH_TIMEOUT))
        return 0
    if cmd == "project-state":
        if not args:
            print(USAGE, end="")
            return 1
        project = urllib.parse.quote(args[0])
        suffix = f"?category={urllib.parse.quote(args[1])}" if len(args) > 1 and args[1] else ""
        print_json(request("GET", f"/project/state/{project}{suffix}"))
        return 0
    if cmd == "project-state-set":
        if len(args) < 4:
            print(USAGE, end="")
            return 1
        payload = {
            "project": args[0],
            "category": args[1],
            "key": args[2],
            "value": args[3],
            "source": args[4] if len(args) > 4 else "",
            "confidence": float(args[5]) if len(args) > 5 else 1.0,
        }
        print_json(request("POST", "/project/state", payload))
|
||||
return 0
|
||||
if cmd == "project-state-invalidate":
|
||||
if len(args) < 3:
|
||||
print(USAGE, end="")
|
||||
return 1
|
||||
payload = {"project": args[0], "category": args[1], "key": args[2]}
|
||||
print_json(request("DELETE", "/project/state", payload))
|
||||
return 0
|
||||
if cmd == "query":
|
||||
if not args:
|
||||
print(USAGE, end="")
|
||||
return 1
|
||||
prompt = args[0]
|
||||
top_k = int(args[1]) if len(args) > 1 else 5
|
||||
project = args[2] if len(args) > 2 else ""
|
||||
print_json(request("POST", "/query", {"prompt": prompt, "top_k": top_k, "project": project or None}))
|
||||
return 0
|
||||
if cmd == "context-build":
|
||||
if not args:
|
||||
print(USAGE, end="")
|
||||
return 1
|
||||
prompt = args[0]
|
||||
project = args[1] if len(args) > 1 else ""
|
||||
budget = int(args[2]) if len(args) > 2 else 3000
|
||||
print_json(request("POST", "/context/build", {"prompt": prompt, "project": project or None, "budget": budget}))
|
||||
return 0
|
||||
if cmd == "audit-query":
|
||||
if not args:
|
||||
print(USAGE, end="")
|
||||
return 1
|
||||
prompt = args[0]
|
||||
top_k = int(args[1]) if len(args) > 1 else 5
|
||||
project = args[2] if len(args) > 2 else ""
|
||||
print_json(audit_query(prompt, top_k, project or None))
|
||||
return 0
|
||||
if cmd == "ingest-sources":
|
||||
print_json(request("POST", "/ingest/sources", {}))
|
||||
return 0
|
||||
|
||||
print(USAGE, end="")
|
||||
return 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
raise SystemExit(main(sys.argv))
|
||||
t420-openclaw/atocore.sh (new file, 15 lines)
@@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

if command -v python3 >/dev/null 2>&1; then
    exec python3 "$SCRIPT_DIR/atocore.py" "$@"
fi

if command -v python >/dev/null 2>&1; then
    exec python "$SCRIPT_DIR/atocore.py" "$@"
fi

echo "Python is required to run atocore.sh" >&2
exit 1
@@ -183,7 +183,7 @@ class TestCapture:
         assert body["prompt"] == "Please explain how the backup system works in detail"
         assert body["client"] == "claude-code"
         assert body["session_id"] == "test-session-123"
-        assert body["reinforce"] is False
+        assert body["reinforce"] is True

     @mock.patch("capture_stop.urllib.request.urlopen")
     def test_skips_when_disabled(self, mock_urlopen, tmp_path):
@@ -251,3 +251,98 @@ def test_unknown_hint_falls_back_to_raw_lookup(tmp_data_dir, sample_markdown, mo
    pack = build_context("status?", project_hint="orphan-project", budget=2000)
    assert "Solo run" in pack.formatted_context


def test_project_memories_included_in_pack(tmp_data_dir, sample_markdown):
    """Active project-scoped memories for the target project should
    land in a dedicated '--- Project Memories ---' band so the
    Phase 9 reflection loop has a retrieval outlet."""
    from atocore.memory.service import create_memory

    init_db()
    init_project_state_schema()
    ingest_file(sample_markdown)

    mem = create_memory(
        memory_type="project",
        content="the mirror architecture is Option B conical back for p04-gigabit",
        project="p04-gigabit",
        confidence=0.9,
    )
    # A sibling memory for a different project must NOT leak into the pack.
    create_memory(
        memory_type="project",
        content="polisher suite splits into sim, post, control, contracts",
        project="p06-polisher",
        confidence=0.9,
    )

    pack = build_context(
        "remind me about the mirror architecture",
        project_hint="p04-gigabit",
        budget=3000,
    )
    assert "--- Project Memories ---" in pack.formatted_context
    assert "Option B conical back" in pack.formatted_context
    assert "polisher suite splits" not in pack.formatted_context
    assert pack.project_memory_chars > 0
    assert mem.project == "p04-gigabit"


def test_project_memories_absent_without_project_hint(tmp_data_dir, sample_markdown):
    """Without a project hint, project memories stay out of the pack —
    cross-project bleed would rot the signal."""
    from atocore.memory.service import create_memory

    init_db()
    init_project_state_schema()
    ingest_file(sample_markdown)

    create_memory(
        memory_type="project",
        content="scoped project knowledge that should not leak globally",
        project="p04-gigabit",
        confidence=0.9,
    )

    pack = build_context("tell me something", budget=3000)
    assert "--- Project Memories ---" not in pack.formatted_context
    assert pack.project_memory_chars == 0


def test_project_memories_query_relevance_ordering(tmp_data_dir, sample_markdown):
    """When the budget only fits one memory, query-relevance ordering
    should pick the one the query is actually about — even if another
    memory has higher confidence.

    Regression for the 2026-04-11 p05-vendor-signal harness failure:
    memory selection was fixed-order by confidence, so a lower-ranked
    vendor memory got starved out of the budget when a query was
    specifically about vendors.
    """
    from atocore.memory.service import create_memory

    init_db()
    init_project_state_schema()
    ingest_file(sample_markdown)

    create_memory(
        memory_type="project",
        content="the folded-beam interferometer uses a CGH stage and fold mirror",
        project="p05-interferometer",
        confidence=0.97,
    )
    create_memory(
        memory_type="knowledge",
        content="vendor signal: Zygo Verifire SV is the strongest value path for the interferometer",
        project="p05-interferometer",
        confidence=0.85,
    )

    pack = build_context(
        "what is the current vendor signal for the interferometer",
        project_hint="p05-interferometer",
        budget=1200,  # tight enough that only one project memory fits
    )
    assert "Zygo Verifire SV" in pack.formatted_context
    assert pack.project_memory_chars > 0
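The regression the last test pins down can be sketched independently of the real `build_context` internals: instead of filling the budget in fixed confidence order, score each candidate memory against the query and spend the budget in relevance order. This is a minimal sketch under assumed names; the scoring uses plain token overlap as a hypothetical stand-in for whatever the actual selector computes.

```python
def select_memories(query: str, memories: list[tuple[str, float]], budget: int) -> list[str]:
    """Pick memory contents in query-relevance order until the char budget runs out.

    memories: (content, confidence) pairs. Relevance here is token overlap with
    the query, with confidence only breaking ties; both the name and the scorer
    are illustrative, not the real implementation.
    """
    q_tokens = set(query.lower().split())

    def relevance(item: tuple[str, float]) -> tuple[int, float]:
        content, confidence = item
        overlap = len(q_tokens & set(content.lower().split()))
        return (overlap, confidence)

    selected: list[str] = []
    used = 0
    # Highest-relevance memory first, so a query-specific memory cannot be
    # starved out of a tight budget by a higher-confidence but off-topic one.
    for content, _conf in sorted(memories, key=relevance, reverse=True):
        if used + len(content) > budget:
            continue
        selected.append(content)
        used += len(content)
    return selected
```

With a budget that fits only one entry, the lower-confidence but on-topic memory wins, which is exactly the behavior the regression test asserts.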
tests/test_extractor_llm.py (new file, 158 lines)
@@ -0,0 +1,158 @@
"""Tests for the LLM-assisted extractor path.

Focused on the parser and failure-mode contracts — the actual network
call is exercised out of band by running
``python scripts/extractor_eval.py --mode llm`` against the frozen
labeled corpus with ``ANTHROPIC_API_KEY`` set. These tests only
exercise the pieces that don't need network.
"""

from __future__ import annotations

import os
from unittest.mock import patch

import pytest

from atocore.interactions.service import Interaction
from atocore.memory.extractor_llm import (
    LLM_EXTRACTOR_VERSION,
    _parse_candidates,
    extract_candidates_llm,
    extract_candidates_llm_verbose,
)
import atocore.memory.extractor_llm as extractor_llm


def _make_interaction(prompt: str = "p", response: str = "r") -> Interaction:
    return Interaction(
        id="test-id",
        prompt=prompt,
        response=response,
        response_summary="",
        project="",
        client="test",
        session_id="",
    )


def test_parser_handles_empty_array():
    result = _parse_candidates("[]", _make_interaction())
    assert result == []


def test_parser_handles_malformed_json():
    result = _parse_candidates("{ not valid json", _make_interaction())
    assert result == []


def test_parser_strips_markdown_fences():
    raw = "```json\n[{\"type\": \"knowledge\", \"content\": \"x is y\", \"project\": \"\", \"confidence\": 0.5}]\n```"
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].memory_type == "knowledge"
    assert result[0].content == "x is y"


def test_parser_strips_surrounding_prose():
    raw = "Here are the candidates:\n[{\"type\": \"project\", \"content\": \"foo\", \"project\": \"p04\", \"confidence\": 0.6}]\nThat's it."
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].memory_type == "project"
    assert result[0].project == "p04"


def test_parser_drops_invalid_memory_types():
    raw = '[{"type": "nonsense", "content": "x"}, {"type": "project", "content": "y"}]'
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].memory_type == "project"


def test_parser_drops_empty_content():
    raw = '[{"type": "knowledge", "content": " "}, {"type": "knowledge", "content": "real"}]'
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].content == "real"


def test_parser_clamps_confidence_to_unit_interval():
    raw = '[{"type": "knowledge", "content": "c1", "confidence": 2.5}, {"type": "knowledge", "content": "c2", "confidence": -0.4}]'
    result = _parse_candidates(raw, _make_interaction())
    assert result[0].confidence == 1.0
    assert result[1].confidence == 0.0


def test_parser_defaults_confidence_on_missing_field():
    raw = '[{"type": "knowledge", "content": "c1"}]'
    result = _parse_candidates(raw, _make_interaction())
    assert result[0].confidence == 0.5


def test_parser_tags_version_and_rule():
    raw = '[{"type": "project", "content": "c1"}]'
    result = _parse_candidates(raw, _make_interaction())
    assert result[0].rule == "llm_extraction"
    assert result[0].extractor_version == LLM_EXTRACTOR_VERSION
    assert result[0].source_interaction_id == "test-id"


def test_missing_cli_returns_empty(monkeypatch):
    """If ``claude`` is not on PATH the extractor returns empty, never raises."""
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: False)
    result = extract_candidates_llm_verbose(_make_interaction("p", "some real response"))
    assert result.candidates == []
    assert result.error == "claude_cli_missing"


def test_empty_response_returns_empty(monkeypatch):
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
    result = extract_candidates_llm_verbose(_make_interaction("p", ""))
    assert result.candidates == []
    assert result.error == "empty_response"


def test_subprocess_timeout_returns_empty(monkeypatch):
    """A subprocess timeout must not raise into the caller."""
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)

    import subprocess as _sp

    def _boom(*a, **kw):
        raise _sp.TimeoutExpired(cmd=a[0] if a else "claude", timeout=1)

    monkeypatch.setattr(extractor_llm.subprocess, "run", _boom)
    result = extract_candidates_llm_verbose(_make_interaction("p", "real response"))
    assert result.candidates == []
    assert result.error == "timeout"


def test_subprocess_nonzero_exit_returns_empty(monkeypatch):
    """A non-zero CLI exit (auth failure, etc.) must not raise."""
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)

    class _Completed:
        returncode = 1
        stdout = ""
        stderr = "auth failed"

    monkeypatch.setattr(extractor_llm.subprocess, "run", lambda *a, **kw: _Completed())
    result = extract_candidates_llm_verbose(_make_interaction("p", "real response"))
    assert result.candidates == []
    assert result.error == "exit_1"


def test_happy_path_parses_stdout(monkeypatch):
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)

    class _Completed:
        returncode = 0
        stdout = '[{"type": "project", "content": "p04 selected Option B", "project": "p04-gigabit", "confidence": 0.6}]'
        stderr = ""

    monkeypatch.setattr(extractor_llm.subprocess, "run", lambda *a, **kw: _Completed())
    result = extract_candidates_llm_verbose(_make_interaction("p", "r"))
    assert len(result.candidates) == 1
    assert result.candidates[0].memory_type == "project"
    assert result.candidates[0].project == "p04-gigabit"
    assert abs(result.candidates[0].confidence - 0.6) < 1e-9
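Taken together, the parser tests above pin a contract: tolerate fences and surrounding prose, reject invalid types and blank content, clamp confidence into [0, 1], and default it to 0.5. A minimal illustrative re-implementation of that contract might look like this; the function name, the `VALID_TYPES` set, and the dict output shape are assumptions for the sketch, not the real `_parse_candidates`.

```python
import json

VALID_TYPES = {"project", "knowledge"}  # assumed; the real module may accept more types


def parse_candidates_sketch(raw: str) -> list[dict]:
    """Illustrative parser honoring the tested contract."""
    # Locate the first JSON array anywhere in the output; this tolerates
    # ```json fences and chatty prose around the payload alike.
    start, end = raw.find("["), raw.rfind("]")
    if start == -1 or end == -1 or end < start:
        return []
    try:
        items = json.loads(raw[start : end + 1])
    except json.JSONDecodeError:
        return []
    out: list[dict] = []
    for item in items if isinstance(items, list) else []:
        if not isinstance(item, dict):
            continue
        if item.get("type") not in VALID_TYPES:
            continue  # drop invalid memory types
        content = str(item.get("content", "")).strip()
        if not content:
            continue  # drop empty content
        conf = max(0.0, min(1.0, float(item.get("confidence", 0.5))))
        out.append({"type": item["type"], "content": content,
                    "project": str(item.get("project", "")), "confidence": conf})
    return out
```

The key design point the tests enforce is that every failure mode degrades to an empty list rather than an exception, so a flaky LLM response can never break the extraction pipeline.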
@@ -476,6 +476,60 @@ def test_reinforce_matches_at_70_percent_threshold(tmp_data_dir):
    assert any(r.memory_id == mem.id for r in results)


def test_reinforce_long_memory_matches_on_absolute_overlap(tmp_data_dir):
    """A paragraph-length memory should reinforce when the response
    echoes a substantive subset of its distinctive tokens, even though
    the overlap fraction stays well under 70%."""
    init_db()
    mem = create_memory(
        memory_type="project",
        content=(
            "Interferometer architecture: a folded-beam configuration with a "
            "fixed horizontal interferometer, a forty-five degree fold mirror, "
            "a six-DOF CGH stage, and the mirror on its own tilting platform. "
            "The fold mirror redirects the beam while the CGH shapes the wavefront."
        ),
        project="p05-interferometer",
        confidence=0.5,
    )
    interaction = _make_interaction(
        project="p05-interferometer",
        response=(
            "For the interferometer we keep the folded-beam layout: horizontal "
            "interferometer, fold mirror at forty-five degrees, CGH stage with "
            "six DOF, and the mirror sitting on its tilting platform. The fold "
            "mirror redirects the beam and the CGH shapes the wavefront."
        ),
    )
    results = reinforce_from_interaction(interaction)
    assert any(r.memory_id == mem.id for r in results)
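The long-memory tests in this hunk pin a two-sided gate: a fractional threshold alone starves paragraph-length memories, while a pure absolute count would let generic responses reinforce anything. One hedged way to express such a rule follows; the function name, stopword list, and the 0.7 / 12 thresholds are illustrative stand-ins, not the actual reinforcement code.

```python
STOPWORDS = {"the", "a", "an", "and", "of", "with", "on", "its", "we", "for", "at", "in", "i", "to"}


def should_reinforce(memory_text: str, response_text: str) -> bool:
    """Hypothetical reinforcement gate: match on EITHER a high overlap
    fraction (short memories) OR a large absolute overlap of distinctive
    tokens (paragraph-length memories)."""
    mem = {t for t in memory_text.lower().split() if t not in STOPWORDS}
    resp = {t for t in response_text.lower().split() if t not in STOPWORDS}
    if not mem:
        return False
    overlap = mem & resp
    fraction = len(overlap) / len(mem)
    # Short memories must clear the fractional bar; long memories can instead
    # clear an absolute bar, so a 60% echo of a rich paragraph still counts
    # while a thin brush against a few generic terms never does.
    return fraction >= 0.7 or len(overlap) >= 12
```

Under this shape, a response echoing twelve distinctive tokens of a twenty-token memory reinforces (60% fraction, but large absolute overlap), while one sharing a single distinctive token does not.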
def test_reinforce_long_memory_rejects_thin_overlap(tmp_data_dir):
    """Long memory + a response that only brushes a few generic terms
    must NOT reinforce — otherwise the reflection loop rots."""
    init_db()
    mem = create_memory(
        memory_type="project",
        content=(
            "Polisher control system executes approved controller jobs, "
            "enforces state transitions and interlocks, supports pause "
            "resume and abort, and records auditable run logs while "
            "never reinterpreting metrology or inventing new strategies."
        ),
        project="p06-polisher",
        confidence=0.5,
    )
    interaction = _make_interaction(
        project="p06-polisher",
        response=(
            "I updated the polisher docs and fixed a typo in the run logs section."
        ),
    )
    results = reinforce_from_interaction(interaction)
    assert all(r.memory_id != mem.id for r in results)


def test_reinforce_rejects_below_70_percent(tmp_data_dir):
    """Only 6 of 10 content tokens present (60%) → should NOT match."""
    init_db()