Compare commits
16 Commits
b3253f35ee ... codex/audi
| SHA1 |
|------|
| 89c7964237 |
| 146f2e4a5e |
| 5c69f77b45 |
| 3921c5ffc7 |
| 93f796207f |
| b98a658831 |
| 06792d862e |
| 95daa5c040 |
| 3a7e8ccba4 |
| a29b5e22f2 |
| b309e7fd49 |
| 330ecfb6a6 |
| 7d8d599030 |
| d9dc55f841 |
| 81307cec47 |
| 59331e522d |
AGENTS.md
@@ -1,5 +1,13 @@

# AGENTS.md

## Session protocol (read first, every session)

**Before doing anything else, read `DEV-LEDGER.md` at the repo root.** It is the one-file source of truth for "what is currently true" — live SHA, active plan, open review findings, recent decisions. The narrative docs under `docs/` may lag; the ledger does not.

**Before ending a session, append a Session Log line to `DEV-LEDGER.md`** with what you did and which commit range it covers, and bump the Orientation section if anything there changed.

This rule applies equally to Claude, Codex, and any future agent working in this repo.

## Project role

This repository is AtoCore, the runtime and machine-memory layer of the Ato ecosystem.

CLAUDE.md (new file, 30 lines)
@@ -0,0 +1,30 @@

# CLAUDE.md — project instructions for AtoCore

## Session protocol

Before doing anything else in this repo, read `DEV-LEDGER.md` at the repo root. It is the shared operating memory between Claude, Codex, and the human operator — the live Dalidou SHA, active plan, open P1/P2 review findings, recent decisions, and session log. The narrative docs under `docs/` sometimes lag; the ledger does not.

Before ending a session, append a Session Log line to `DEV-LEDGER.md` covering:

- which commits you produced (SHA range)
- what changed at a high level
- any harness / test count deltas
- anything you overclaimed and later corrected

Bump the **Orientation** section if `live_sha`, `main_tip`, `test_count`, or `harness` changed.

`AGENTS.md` at the repo root carries the broader project principles (storage separation, deployment model, coding guidance). Read it when you need the "why" behind a constraint.

## Deploy workflow

```bash
git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/dalidou/deploy.sh"
```

The deploy script self-verifies via the `/health` build_sha — if it exits non-zero, do not assume the change is live.
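The self-check the deploy script performs reduces to one comparison: does the commit reported by `/health` match the commit you just pushed? A minimal sketch of that comparison, assuming `/health` returns a JSON body with a `build_sha` field (the exact response shape is an assumption here):

```python
import json

def is_live(expected_sha: str, health_body: str) -> bool:
    """Return True when /health reports the commit we just pushed.

    build_sha is typically a short prefix of the full SHA, so compare
    by prefix in either direction rather than by strict equality.
    """
    build_sha = json.loads(health_body).get("build_sha", "")
    if not build_sha:
        return False
    return expected_sha.startswith(build_sha) or build_sha.startswith(expected_sha)
```

A non-matching or missing `build_sha` means the old build is still serving; treat that the same way as a non-zero deploy exit.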

## Working model

- Claude builds; Codex audits. No parallel work on the same files.
- P1 review findings block further `main` commits until acknowledged in the ledger's **Open Review Findings** table.
- Codex branches must fork from `origin/main` (no orphan commits that require `--allow-unrelated-histories`).

DEV-LEDGER.md (new file, 184 lines)
@@ -0,0 +1,184 @@

# AtoCore Dev Ledger

> Shared operating memory between humans, Claude, and Codex.
> **Every session MUST read this file at start and append a Session Log entry before ending.**
> Section headers are stable - do not rename them. Trim Session Log and Recent Decisions to the last 20 entries at session end; older history lives in `git log` and `docs/`.

## Orientation

- **live_sha** (Dalidou `/health` build_sha): `5c69f77`
- **last_updated**: 2026-04-12 by Codex (audit branch `codex/audit-2026-04-12`)
- **main_tip**: `146f2e4`
- **test_count**: 278 passing
- **harness**: `15/18 PASS` (remaining failures are mixed: p06-firmware-interface exposes a lexical-ranking tie, p06-offline-design is a live-triage scoping miss, p06-tailscale still has retrieved-chunk bleed)
- **active_memories**: 36 (was 20 before the mini-phase; p06-polisher 2 -> 16, atocore 0 -> 5)
- **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron env `ATOCORE_BACKUP_RSYNC`, verified

## Active Plan

**Mini-phase**: Extractor improvement (eval-driven) + retrieval harness expansion.
**Duration**: 8 days, hard gates at each day boundary.
**Plan author**: Codex (2026-04-11). **Executor**: Claude. **Audit**: Codex.

### Preflight (before Day 1)

Stop if any of these fail:

- `git rev-parse HEAD` on `main` matches the expected branching tip
- Live `/health` on Dalidou reports the SHA you think is deployed
- `python scripts/retrieval_eval.py --json` still passes at the current baseline
- `batch-extract` over the known 42-capture slice reproduces the current low-yield baseline
- A frozen sample set exists for extractor labeling, so the target does not move mid-phase

Success: baseline eval output saved, baseline extract output saved, working branch created from `origin/main`.

### Day 1 - Labeled extractor eval set

Pick 30 real captures: 10 that should produce 0 candidates, 10 that should plausibly produce 1, and 10 that are ambiguous/hard. Store them as a stable artifact (interaction id, expected count, expected type, notes). Add a runner that scores extractor output against the labels.

Success: 30 labeled interactions in a stable artifact, one-command precision/recall output.
Fail-early: if labeling 30 takes more than a day because the concept is unclear, tighten the extraction target before touching code.
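The Day 1 runner, and the later Day 3/4 gates, all reduce to a handful of counts over the labeled set. A minimal sketch of that scoring arithmetic, using a hypothetical id-to-count shape (the real artifact format lives in `scripts/extractor_eval.py` and may differ):

```python
def score(labels: dict, predictions: dict) -> dict:
    """Score extractor output against a labeled set.

    labels / predictions map interaction id -> candidate count.
    A true positive is a labeled-positive interaction where the
    extractor produced at least one candidate; this is a coarse,
    count-level view, not span-level matching.
    """
    tp = sum(1 for i, n in labels.items() if n > 0 and predictions.get(i, 0) > 0)
    fn = sum(1 for i, n in labels.items() if n > 0 and predictions.get(i, 0) == 0)
    fp = sum(1 for i, n in labels.items() if n == 0 and predictions.get(i, 0) > 0)
    extracted = sum(predictions.get(i, 0) for i in labels)
    return {
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        # Day 4 gate input: candidate yield across the labeled captures
        "yield": extracted / len(labels) if labels else 0.0,
        # Day 3 gate input: FP share of extracted candidates (~20% ceiling)
        "fp_rate": fp / (tp + fp) if tp + fp else 0.0,
    }
```

On the Day 2 baseline (rule extractor, zero candidates everywhere) this scorer reports 0% recall and 0% yield, which is exactly the signal the session log later records.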

### Day 2 - Measure current extractor

Run the rule-based extractor on all 30. Record yield, TP, FP, FN. Bucket misses by class (conversational preference, decision summary, status/constraint, meta chatter).

Success: a short scorecard with counts by miss type, top 2 miss classes obvious.
Fail-early: if the labeled set shows fewer than 5 plausible positives total, the corpus is too weak - relabel before tuning.

### Day 3 - Smallest rule expansion for top miss class

Add 1-2 narrow, explainable rules for the worst miss class. Add unit tests from real paraphrase examples in the labeled set. Then rerun the eval.

Success: recall up on the labeled set, false positives do not materially rise, new tests cover the new cue class.
Fail-early: if one rule expansion raises FP above ~20% of extracted candidates, revert or narrow before adding more.

### Day 4 - Decision gate: more rules or LLM-assisted prototype

If rule expansion reaches a **meaningfully reviewable queue**, keep going with rules. Otherwise, prototype an LLM-assisted extraction mode behind a flag.

"Meaningfully reviewable queue" means:

- >= 15-25% candidate yield on the 30 labeled captures
- FP rate low enough that manual triage feels tolerable
- >= 2 real, non-synthetic candidates worth reviewing

Hard stop: if candidate yield is still under 10% after this point, stop rule tinkering and switch to an architecture review (LLM-assisted extraction OR a narrower extraction scope).

### Day 5 - Stabilize and document

Add the remaining focused rules or the flagged LLM-assisted path. Write down in-scope and out-of-scope utterance kinds.

Success: labeled eval green against the target threshold, extractor scope explainable in <= 5 bullets.

### Day 6 - Retrieval harness expansion (6 -> 15-20 fixtures)

Grow across p04/p05/p06. Include short ambiguous prompts, cross-project collision cases, expected project-state wins, expected project-memory wins, and 1-2 "should fail open / low confidence" cases.

Success: >= 15 fixtures, each active project has easy + medium + hard cases.
Fail-early: if fixtures are mostly obvious wins, add harder adversarial cases before claiming coverage.
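One way to keep the expanded fixture set honest is to make each case declare both what must appear in the pack and what must not (the cross-project bleed check). A sketch with a hypothetical fixture shape; the real schema in `scripts/retrieval_eval.py` may differ:

```python
# Hypothetical fixture shape for the retrieval harness.
FIXTURE = {
    "id": "p06-collision-01",
    "project": "p06-polisher",
    "prompt": "where does run telemetry end up?",  # short and ambiguous on purpose
    "expect_contains": ["/data/runs/"],            # must appear in the pack
    "expect_excludes": ["p04"],                    # cross-project bleed check
    "difficulty": "hard",
}

def check(fixture: dict, pack: str) -> bool:
    """Pass when every expected token is present and no excluded one is."""
    return (all(t in pack for t in fixture["expect_contains"])
            and not any(t in pack for t in fixture["expect_excludes"]))
```

The `expect_excludes` list is what turns an "obvious win" fixture into an adversarial one: a pack can contain the right answer and still fail by dragging in another project's material.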

### Day 7 - Regression pass and calibration

Run the harness on current code vs live Dalidou. Inspect failures (ranking, ingestion gap, project bleed, budget). Make at most ONE ranking/budget tweak, and only if the harness clearly justifies it. Do not mix harness expansion and ranking changes in a single commit unless they are tightly coupled.

Success: harness still passes or improves after the extractor work; any ranking tweak is justified by a concrete fixture delta.
Fail-early: if more than 20-25% of harness fixtures regress after extractor changes, separate concerns before merging.

### Day 8 - Merge and close

Clean commit sequence. Save before/after metrics (extractor scorecard, harness results). Update docs only with claims the metrics support.

Merge order: labeled corpus + runner -> extractor improvements + tests -> harness expansion -> any justified ranking tweak -> docs sync last.

Success: point to a before/after delta for both extraction and retrieval; docs do not overclaim.

### Hard Gates (stop/rethink points)

- Extractor yield < 10% after 30 labeled interactions -> stop, reconsider rule-only extraction
- FP rate > 20% on the labeled set -> narrow rules before adding more
- Harness expansion finds < 3 genuinely hard cases -> the harness is still too soft
- A ranking change improves one project but regresses another -> do not merge without an explicit tradeoff note

### Branching

One branch `codex/extractor-eval-loop` for Days 1-5, a second `codex/retrieval-harness-expansion` for Days 6-7. This keeps the extraction and retrieval judgments separately auditable.

## Review Protocol

- Codex records review findings in **Open Review Findings**.
- Claude must read **Open Review Findings** at session start, before coding.
- Codex owns the finding text. Claude may update operational fields only:
  - `status`
  - `owner`
  - `resolved_by`
- If Claude disagrees with a finding, do not rewrite it. Mark it `declined` and explain why in the **Session Log**.
- Any commit or session that addresses a finding should reference the finding id in the commit message or **Session Log**.
- `P1` findings block further commits in the affected area until they are at least acknowledged and explicitly tracked.
- Findings may be code-level, claim-level, or ops-level. If the implementation boundary changes, retarget the finding instead of silently closing it.

## Open Review Findings

| id | finder | severity | file:line | summary | status | owner | opened_at | resolved_by |
|----|--------|----------|-----------|---------|--------|-------|-----------|-------------|
| R1 | Codex | P1 | deploy/hooks/capture_stop.py:76-85 | Live Claude capture still omits `extract`, so "loop closed both sides" remains overstated in practice even though the API supports it | acknowledged | Claude | 2026-04-11 | |
| R2 | Codex | P1 | src/atocore/context/builder.py | Project memories excluded from pack | fixed | Claude | 2026-04-11 | 8ea53f4 |
| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | open | Claude | 2026-04-11 | |
| R4 | Codex | P2 | DEV-LEDGER.md:11 | Orientation `main_tip` was stale versus `HEAD` / `origin/main` | fixed | Codex | 2026-04-11 | 81307ce |
| R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | open | Claude | 2026-04-12 | |
| R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | open | Claude | 2026-04-12 | |
| R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | open | Claude | 2026-04-12 | |
| R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | open | Claude | 2026-04-12 | |
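
Two of the open findings are mechanical enough to sketch. For R6, a guard that falls back to the interaction's own project when the model returns a blank scope; for R7, a lexical scorer that breaks overlap-count ties by coverage (how specific the memory is to the query) instead of by confidence. Both are illustrative sketches against assumed shapes, not the repo's actual `extractor_llm.py` or `service.py` code:

```python
def resolve_scope(candidate_project: str, interaction_project: str) -> str:
    """R6 sketch: never trust a blank model-supplied project;
    fall back to the capture's own project scope."""
    return candidate_project.strip() or interaction_project

def rank_key(query_tokens: set, memory_tokens: set, confidence: float):
    """R7 sketch: overlap count alone lets a broad overview memory tie
    an exact one, after which confidence decides. Adding a coverage
    ratio breaks the tie toward the more specific memory first."""
    overlap = len(query_tokens & memory_tokens)
    coverage = overlap / len(memory_tokens) if memory_tokens else 0.0
    return (overlap, coverage, confidence)
```

With `rank_key`, a broad high-confidence memory and a narrow low-confidence one that share the same overlap count no longer resolve on confidence alone; the narrow memory's higher coverage wins the tie.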

## Recent Decisions

- **2026-04-12** Day 4 gate cleared: LLM-assisted extraction via `claude -p` (OAuth, no API key) is the path forward. The rule extractor stays as the default for structural cues. *Proposed by:* Claude. *Ratified by:* Antoine.
- **2026-04-12** First live triage: 16 promoted, 35 rejected from 51 LLM-extracted candidates (31% accept rate). Active memory count 20 -> 36. *Executed by:* Claude. *Ratified by:* Antoine.
- **2026-04-12** No API keys allowed in AtoCore — LLM-assisted features use OAuth via `claude -p` or equivalent CLI-authenticated paths. *Proposed by:* Antoine.
- **2026-04-12** Multi-model extraction direction: extraction/triage should be model-agnostic, with Codex/Gemini/Ollama as second-pass reviewers for robustness. *Proposed by:* Antoine.
- **2026-04-11** Adopt this ledger as shared operating memory between Claude and Codex. *Proposed by:* Antoine. *Ratified by:* Antoine.
- **2026-04-11** Accept Codex's 8-day mini-phase plan verbatim as the Active Plan. *Proposed by:* Codex. *Ratified by:* Antoine.
- **2026-04-11** Review findings live in `DEV-LEDGER.md`, with Codex owning finding text and Claude updating status fields only. *Proposed by:* Codex. *Ratified by:* Antoine.
- **2026-04-11** Project memories land in the pack under `--- Project Memories ---` at a 25% budget ratio, gated on a canonical project hint. *Proposed by:* Claude.
- **2026-04-11** Extraction stays off the capture hot path. Batch / manual only. *Proposed by:* Antoine.
- **2026-04-11** 4-step roadmap: extractor -> harness expansion -> Wave 2 ingestion -> OpenClaw finish. Steps 1+2 as one mini-phase. *Ratified by:* Antoine.
- **2026-04-11** Codex branches must fork from `main`, not be orphan commits. *Proposed by:* Claude. *Agreed by:* Codex.

## Session Log

- **2026-04-12 Codex (audit branch `codex/audit-2026-04-12`)** audited `c5bad99..146f2e4` against code, live Dalidou, and the 36 active memories. Confirmed: the `claude -p` invocation is not shell-injection-prone (`subprocess.run(args)` with no shell), off-host backup wiring matches the ledger, and R1 remains unresolved in practice. Added R5-R8. Corrected Orientation `main_tip` (`146f2e4`, not `5c69f77`) and tightened the harness note: p06-firmware-interface is a ranking-tie issue, p06-offline-design comes from a project-scope miss in live triage, and p06-tailscale is retrieved-chunk bleed rather than memory-band budget contention.
- **2026-04-12 Claude** `06792d8..5c69f77` Day 5-8 close. Documented extractor scope (5 in-scope, 6 out-of-scope categories). Expanded the harness from 6 to 18 fixtures (p04 +1, p05 +1, p06 +7, adversarial +2). A per-entry memory cap of 250 chars fixed 1 of 4 budget-contention failures. Final harness: 15/18 PASS. Mini-phase complete. Before/after: rule extractor 0% recall -> LLM 100%; harness 6/6 -> 15/18; active memories 20 -> 36.
- **2026-04-12 Claude** `330ecfb..06792d8` (merged eval-loop branch + triage). Days 1-4 of the mini-phase completed in one session. Day 2 baseline: rule extractor 0% recall, 5 distinct miss classes. Day 4 gate cleared: the LLM extractor (`claude -p` Haiku, OAuth) hit 100% recall, 2.55 yield/interaction. Refactored from the anthropic SDK to subprocess after the "no API key" rule. First live triage: 51 candidates -> 16 promoted, 35 rejected. Active memories 20 -> 36. p06-polisher went from 2 to 16 memories (firmware/telemetry architecture set). POST /memory now accepts a status field. Test count 264 -> 278.
- **2026-04-11 Claude** `claude/extractor-eval-loop @ 7d8d599` — Days 1+2 of the mini-phase. Froze a 64-interaction snapshot (`scripts/eval_data/interactions_snapshot_2026-04-11.json`) and labeled 20 by length-stratified random sample (5 positive, 15 zero; 7 total expected candidates). Built `scripts/extractor_eval.py` as a file-based eval runner. **Day 2 baseline: the rule extractor hit 0% yield / 0% recall / 0% precision on the labeled set; 5 false negatives across 5 distinct miss classes (recommendation_prose, architectural_change_summary, spec_update_announcement, layered_recommendation, alignment_assertion).** This is the Day 4 hard-stop signal arriving two days early — a single rule expansion cannot close a 5-way miss, and widening rules blindly will collapse precision. The Day 4 decision gate is escalated to Antoine for ratification before Day 3 touches any extractor code. No extractor code on main has changed.
- **2026-04-11 Codex (ledger audit)** fixed the stale `main_tip`, retargeted R1 from the API surface to the live Claude Stop hook, and formalized the review write protocol so Claude can consume findings without rewriting them.
- **2026-04-11 Claude** `b3253f3..59331e5` (1 commit). Wired the DEV-LEDGER, added the session protocol to AGENTS.md, created a project-local CLAUDE.md, deleted the stale `codex/port-atocore-ops-client` remote branch. No code changes, no redeploy needed.
- **2026-04-11 Claude** `c5bad99..b3253f3` (11 commits + 1 merge). Length-aware reinforcement, project memories in the pack, query-relevance memory ranking, a hyphenated-identifier tokenizer, retrieval eval harness seeded, off-host backup wired end-to-end, docs synced, codex integration-pass branch merged. Harness went 0 -> 6/6 on live Dalidou.
- **2026-04-11 Codex (async review)** identified 2 P1s against a stale checkout. R1 was fair (extraction not automated); R2 was outdated (project memories had already landed on main). Delivered the 8-day execution plan now in Active Plan.
- **2026-04-06 Antoine** created `codex/atocore-integration-pass` with the `t420-openclaw/` workspace (merged 2026-04-11).

## Working Rules

- Claude builds; Codex audits. No parallel work on the same files.
- Codex branches fork from `main`: `git fetch origin && git checkout -b codex/<topic> origin/main`.
- P1 findings block further main commits until acknowledged in Open Review Findings.
- Every session appends at least one Session Log line and bumps Orientation.
- Trim Session Log and Recent Decisions to the last 20 at session end.
- Docs in `docs/` may overclaim stale status; the ledger is the one-file source of truth for "what is true right now."

## Quick Commands

```bash
# Check live state
ssh papa@dalidou "curl -s http://localhost:8100/health"

# Run the retrieval harness
python scripts/retrieval_eval.py          # human-readable
python scripts/retrieval_eval.py --json   # machine-readable

# Deploy a new main tip
git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/dalidou/deploy.sh"

# Reflection-loop ops
python scripts/atocore_client.py batch-extract '' '' 200 false   # preview
python scripts/atocore_client.py batch-extract '' '' 200 true    # persist
python scripts/atocore_client.py triage
```

@@ -226,14 +226,53 @@ candidate was a synthetic test capture from earlier in the session

 - Capture → reinforce is working correctly on live data (length-aware
   matcher verified on live paraphrase of a p04 memory).

-Follow-up candidates (not yet scheduled):
+Follow-up candidates:

-1. Extractor rule expansion — add conversational-form rules so real
-   session text has a chance of surfacing candidates.
-2. LLM-assisted extractor as a separate rule family, guarded by
-   confidence and always landing in `status=candidate` (never active).
-3. Retrieval eval harness — diffable scorecard of
-   `formatted_context` across a fixed question set per active project.
+1. ~~Extractor rule expansion~~ — Day 2 baseline showed 0% recall
+   across 5 distinct miss classes; rule expansion cannot close a
+   5-way miss. Deprioritized.
+2. ~~LLM-assisted extractor~~ — DONE 2026-04-12. `extractor_llm.py`
+   shells out to `claude -p` (Haiku, OAuth, no API key). First live
+   run: 100% recall, 2.55 yield/interaction on a 20-interaction
+   labeled set. First triage: 51 candidates → 16 promoted, 35
+   rejected (31% accept rate). Active memories 20 → 36.
+3. ~~Retrieval eval harness~~ — DONE 2026-04-11 (scripts/retrieval_eval.py,
+   6/6 passing). Expansion to 15-20 fixtures is mini-phase Day 6.

## Extractor Scope — 2026-04-12

What the LLM-assisted extractor (`src/atocore/memory/extractor_llm.py`) extracts from conversational Claude Code captures:

**In scope:**

- Architectural commitments (e.g. "Z-axis is engage/retract, not continuous position")
- Ratified decisions with project scope (e.g. "USB SSD mandatory on RPi for telemetry storage")
- Durable engineering facts (e.g. "telemetry data rate ~29 MB/hour")
- Working rules and adaptation patterns (e.g. "extraction stays off the capture hot path")
- Interface invariants (e.g. "controller-job.v1 in, run-log.v1 out; no firmware change needed")

**Out of scope (intentionally rejected by triage):**

- Transient roadmap / plan steps that will be stale in a week
- Operational instructions ("run this command to deploy")
- Process rules that live in DEV-LEDGER.md / AGENTS.md, not in memory
- Implementation details that are too granular (individual field names when the parent concept is already captured)
- Already-fixed review findings (P1/P2 that no longer apply)
- Duplicates of existing active memories with wrong project tags

**Trust model:**

- Extraction stays off the capture hot path (batch / manual only)
- All candidates land as `status=candidate`, never auto-promoted
- A human or auto-triage reviews before promotion to active
- Future direction: multi-model extraction + triage (Codex/Gemini as second-pass reviewers for robustness against single-model bias)

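The trust model above is essentially a one-way status machine: extraction may only create candidates, and only a review step may change their status. A minimal sketch; the `rejected` status and the actor names are hypothetical, only `candidate`/`active` appear in the ledger itself:

```python
# Allowed (actor, new_status) transitions under the trust model.
TRANSITIONS = {
    ("extractor", "candidate"): True,  # batch extraction lands here
    ("review", "active"): True,        # promotion requires review
    ("review", "rejected"): True,      # hypothetical terminal status
}

def apply_status(actor: str, new_status: str) -> str:
    """Refuse any transition not in the table, e.g. extractor -> active."""
    if not TRANSITIONS.get((actor, new_status), False):
        raise PermissionError(f"{actor} may not set status={new_status}")
    return new_status
```

The point of the table shape is that auto-promotion is unrepresentable: there is no `("extractor", "active")` entry to accidentally enable.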
## Long-Run Goal

scripts/eval_data/candidate_queue_snapshot.jsonl (new file, 51 lines)
@@ -0,0 +1,51 @@
{"id": "0dd85386-cace-4f9a-9098-c6732f3c64fa", "type": "project", "project": "atocore", "confidence": 0.5, "content": "AtoCore roadmap: (1) extractor improvement, (2) harness expansion, (3) Wave 2 ingestion, (4) OpenClaw finish; steps 1+2 are current mini-phase"}
{"id": "8939b875-152c-4c90-8614-3cfdc64cd1d6", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "AtoCore is FastAPI (Python 3.12, SQLite + ChromaDB) on Dalidou home server (dalidou:8100), repo C:\\Users\\antoi\\ATOCore, data /srv/storage/atocore/, ingests Obsidian vault + Google Drive into vector memory system."}
{"id": "93e37d2a-b512-4a97-b230-e64ac913d087", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "Deploy AtoCore: git push origin main, then ssh papa@dalidou and run /srv/storage/atocore/app/deploy/dalidou/deploy.sh"}
{"id": "4b82fe01-4393-464a-b935-9ad5d112d3d8", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Do not add memory extraction to interaction capture hot path; keep extraction as separate batch/manual step. Reason: latency and queue noise before review rhythm is comfortable."}
{"id": "c873ec00-063e-488c-ad32-1233290a3feb", "type": "project", "project": "atocore", "confidence": 0.5, "content": "As of 2026-04-11, approved roadmap in order: observe reinforcement, batch extraction, candidate triage, off-Dalidou backup, retrieval quality review."}
{"id": "665cdd27-0057-4e73-82f5-5d4f47189b5d", "type": "project", "project": "atocore", "confidence": 0.5, "content": "AtoCore adopts DEV-LEDGER.md as shared operating memory with stable headers; updated at session boundaries"}
{"id": "5f89c51d-7e8b-4fb9-830d-a35bb649f9f7", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Codex branches for AtoCore fork from main (never orphan); use naming pattern codex/<topic>"}
{"id": "25ac367c-8bbe-4ba4-8d8e-d533db33f2d9", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "In AtoCore, Claude builds and Codex audits; never work in parallel on same files"}
{"id": "89446ebe-fd42-4177-80db-3657bc41d048", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "In AtoCore, P1-severity findings in DEV-LEDGER.md block further main commits until acknowledged"}
{"id": "1f077e98-f945-4480-96ab-110b0671ebc6", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Every AtoCore session appends to DEV-LEDGER.md Session Log and updates Orientation before ending"}
{"id": "89f60018-c23b-4b2f-80ca-e6f7d02c5cd3", "type": "preference", "project": "atocore", "confidence": 0.5, "content": "User prefers receiving standalone testing prompts they can paste into Claude Code on target deployments rather than having the assistant run tests directly."}
{"id": "2f69a6ed-6de2-4565-87df-1ea3e8c42963", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "USB SSD on RPi is mandatory for polishing telemetry storage; must be independent of network for data integrity during runs."}
{"id": "6bcaebde-9e45-4de5-a220-65d9c4cd451e", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Use Tailscale mesh for RPi remote access to provide SSH, file transfer, and NAT traversal without port forwarding."}
{"id": "82f17880-92da-485e-a24a-0599ab1836e7", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Auto-sync telemetry data via rsync over Tailscale after runs complete; fire-and-forget pattern with automatic retry on network interruption."}
{"id": "2dd36f74-db47-4c72-a185-fec025d07d4f", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Real-time telemetry monitoring should target 10 Hz downsampling; full 100 Hz streaming over network is not necessary."}
{"id": "7519d82b-8065-41f0-812e-9c1a3573d7b9", "type": "knowledge", "project": "p06-polisher", "confidence": 0.5, "content": "Polishing telemetry data rate is approximately 29 MB per hour (100 Hz × 20 channels × 4 bytes = 8 KB/s)."}
{"id": "78678162-5754-478b-b1fc-e25f22e0ee03", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Machine spec (shareable) + Atomaste spec (internal) separate concerns. Machine spec hides program generation as 'separate scope' to protect IP/business strategy."}
{"id": "6657b4ae-d4ec-4fec-a66f-2975cdb10d13", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Firmware interface contract is invariant: controller-job.v1 input, run-log.v1 + telemetry output. No firmware changes needed regardless of program generation implementation."}
{"id": "6d6f4fe9-73e5-449f-a802-6dc0a974f87b", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Atomaste sim spec documents forward/return paths, calibration model (Preston k), translation loss, and service/IP strategy—details hidden from shareable machine spec."}
{"id": "932f38df-58f3-49c2-9968-8d422dc54b42", "type": "project", "project": "", "confidence": 0.5, "content": "USB SSD mandatory for storage (not SD card); directory structure /data/runs/{id}/, /data/manual/{id}/; status.json for machine state"}
{"id": "2b3178e8-fe38-4338-b2b0-75a01da18cea", "type": "project", "project": "", "confidence": 0.5, "content": "RPi joins Tailscale mesh for remote access over SSH VPN; no public IP or port forwarding; fully offline operation"}
{"id": "254c394d-3f80-4b34-a891-9f1cbfec74d7", "type": "project", "project": "", "confidence": 0.5, "content": "Data synchronization via rsync over Tailscale, failure-tolerant and non-blocking; USB stick as manual fallback"}
{"id": "ee626650-1ee0-439c-85c9-6d32a876f239", "type": "project", "project": "", "confidence": 0.5, "content": "Machine design principle: works fully offline and independently; network connection is for remote access only"}
{"id": "34add99d-8d2e-4586-b002-fc7b7d22bcb3", "type": "project", "project": "", "confidence": 0.5, "content": "No cloud, no real-time streaming, no remote control features in design scope"}
{"id": "993e0afe-9910-4984-b608-f5e9de7c0453", "type": "project", "project": "atocore", "confidence": 0.5, "content": "P1: Reflection loop integration incomplete—extraction remains manual (POST /interactions/{id}/extract), not auto-triggered with reinforcement. Live capture won't auto-populate candidate review queue."}
{"id": "bdf488d7-9200-441e-afbf-5335020ea78b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "P1: Project memories excluded from context injection; build_context() requests [\"identity\", \"preference\"] only. Reinforcement signal doesn't reach assembled context packs."}
{"id": "188197af-a61d-4616-9e39-712aeaaadf61", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Current batch-extract rules produce only 1 candidate from 42 real captures. Extractor needs conversational-cue detection or LLM-assisted path to improve yield."}
{"id": "acffcaa4-5966-4ec1-a0b2-3b8dcebe75bd", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Next priority: extractor rule expansion (cheapest validation of reflection loop), then Wave 2 trusted operational ingestion (master-plan priority). Defer retrieval eval harness focus."}
|
||||||
|
{"id": "1b44a886-a5af-4426-bf10-a92baf3a6502", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "Alias canonicalization fix (resolve_project_name() boundary) is consistently applied across project state, memories, interactions, and context lookup. Code review approved directionally."}
|
||||||
|
{"id": "e8f4e704-367b-4759-b20c-da0ccf06cf7d", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Machine capabilities now define z_type: engage_retract and cam_type: mechanical_with_encoder instead of actuator-driven setpoints."}
|
||||||
|
{"id": "ab2b607c-52b1-405f-a874-c6078393c21c", "type": "knowledge", "project": "", "confidence": 0.5, "content": "Codex is an audit agent; communicate with it via markdown prompts with numbered steps; it updates findings via commits to codex/* branches or direct messages."}
|
||||||
|
{"id": "5a5fd29d-291f-4e22-88fe-825cf55f745a", "type": "preference", "project": "", "confidence": 0.5, "content": "Audit-first workflow recommended: have codex audit DEV-LEDGER.md and recent commits before execution; validates round-trip, catches errors early."}
|
||||||
|
{"id": "4c238106-017e-4283-99a1-639497b6ddde", "type": "knowledge", "project": "", "confidence": 0.5, "content": "DEV-LEDGER.md at repo root is the shared coordination document with Orientation, Active Plan, and Open Review Findings sections."}
|
||||||
|
{"id": "83aed988-4257-4220-b612-6c725d6cd95a", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Roadmap: Extractor improvement → Harness expansion → Wave 2 trusted operational ingestion → Finish OpenClaw integration (in that order)"}
|
||||||
|
{"id": "95d87d1a-5daa-414d-95ff-a344a62e0b6b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Phase 1 (Extractor): eval-driven loop—label captures, improve rules/add LLM mode, measure yield & FP, stop when queue reviewable (not coverage metrics)"}
|
||||||
|
{"id": "7aafb588-51b0-4536-a414-ebaaea924b98", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Phases 1 & 2 (Extractor + Harness) are a mini-phase; without harness, extractor improvements are blind edits"}
|
||||||
|
{"id": "aa50c51a-27d7-4db9-b7a3-7ca75dba2118", "type": "knowledge", "project": "", "confidence": 0.5, "content": "Dalidou stores Claude Code interactions via a Stop hook that fires after each turn and POSTs to http://dalidou:8100/interactions with client=claude-code parameter"}
|
||||||
|
{"id": "5951108b-3a5e-49d0-9308-dfab449664d3", "type": "adaptation", "project": "", "confidence": 0.5, "content": "Interaction capture system is passive and automatic; no manual action required, interactions accumulate automatically during normal Claude Code usage"}
|
||||||
|
{"id": "9d2cbbe9-cf2e-4aab-9cb8-c4951da70826", "type": "project", "project": "", "confidence": 0.5, "content": "Session Log/Ledger system tracks work state across sessions so future sessions immediately know what is true and what is next; phases marked by git SHAs."}
|
||||||
|
{"id": "db88eecf-e31a-4fee-b07d-0b51db7e315e", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore uses multi-model coordination: Claude and codex share DEV-LEDGER.md (current state / active plan / P1+P2 findings / recent decisions / commit log) read at session start, appended at session end"}
|
||||||
|
{"id": "8748f071-ff28-47a6-8504-65ca30a8336a", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore starts with manual-event-loop (/audit or /status prompts) using DEV-LEDGER.md before upgrading to automated git hooks/CI review"}
|
||||||
|
{"id": "f9210883-67a8-4dae-9f27-6b5ae7bd8a6b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore development involves coordinating between Claude and codex models with shared plan/review strategy and counter-validation to improve system quality"}
|
||||||
|
{"id": "85f008b9-2d6d-49ad-81a1-e254dac2a2ac", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Z-axis is a binary engage/retract mechanism (z_engaged bool), not continuous position control; confirmation timeout z_engage_timeout_s required."}
|
||||||
|
{"id": "0cc417ed-ac38-4231-9786-a9582ac6a60f", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Cam amplitude and offset are mechanically set by operator and read via encoders; no actuators control them, controller receives encoder telemetry only."}
|
||||||
|
{"id": "2e001aaf-0c5c-4547-9b96-ebc4172b258d", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Cam parameters in controller are expected_cam_amplitude_deg and expected_cam_offset_deg (read-only reference for verification), not command setpoints."}
|
||||||
|
{"id": "47778126-b0cf-41d9-9e21-f2418f53e792", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Manual mode UI displays cam encoder readings (cam_amplitude_deg, cam_offset_deg) as read-only for operator verification of mechanical setting."}
|
||||||
|
{"id": "410e4a70-ae12-4de2-8f31-071ffee3cad4", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Manual session log records cam_setting measured at session start; run-log segment actual block includes cam_amplitude_deg_mean and cam_offset_deg_mean."}
|
||||||
|
{"id": "e94f94f0-3538-40dd-aef2-0189eacc7eb7", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "AtoCore deployments to dalidou use the script /srv/storage/atocore/app/deploy/dalidou/deploy.sh instead of manual docker commands"}
|
||||||
|
{"id": "23fa6fdf-cfb9-4850-ad04-3ea56551c30a", "type": "project", "project": "", "confidence": 0.5, "content": "Retrieval/extraction evaluation follows 8-day mini-phase plan with hard gates to prevent scope drift. Preflight checks must validate git SHAs, baselines, and fixture stability before coding."}
|
||||||
|
{"id": "3e1fad28-031b-4670-a9d0-0af2e8ba1361", "type": "project", "project": "", "confidence": 0.5, "content": "Day 1: Create labeled extractor eval set from 30 captures (10 zero-candidate, 10 single-candidate, 10 ambiguous) with metadata; create scoring tool to measure precision/recall."}
|
||||||
|
{"id": "d49378a4-d03c-4730-be87-f0fcb2d199db", "type": "project", "project": "", "confidence": 0.5, "content": "Day 2: Measure current extractor against labeled set, recording yield, true/false positives, and false negatives by pattern."}
|
||||||
145
scripts/eval_data/extractor_labels_2026-04-11.json
Normal file
@@ -0,0 +1,145 @@
{
  "version": "0.1",
  "frozen_at": "2026-04-11",
  "snapshot_file": "scripts/eval_data/interactions_snapshot_2026-04-11.json",
  "labeled_count": 20,
  "plan_deviation": "Codex's plan called for 30 labeled interactions (10 zero / 10 plausible / 10 ambiguous). Actual corpus is heavily skewed toward instructional/status content; after reading 20 drawn by length-stratified random sample, the honest positive rate is ~25% (5/20). Labeling more would mostly add zeros; the Day 2 measurement is not bottlenecked on sample size.",
  "positive_count": 5,
  "labels": [
    {
      "id": "ab239158-d6ac-4c51-b6e4-dd4ccea384a2",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Instructional deploy guidance. No durable claim."
    },
    {
      "id": "da153f2a-b20a-4dee-8c72-431ebb71f08c",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "'Deploy still in progress.' Pure status."
    },
    {
      "id": "7d8371ee-c6d3-4dfe-a7b0-2d091f075c15",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Git command walkthrough. No durable claim."
    },
    {
      "id": "14bf3f90-e318-466e-81ac-d35522741ba5",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Ledger status update. Transient fact, not a durable memory candidate."
    },
    {
      "id": "8f855235-c38d-4c27-9f2b-8530ebe1a2d8",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Short-term recommendation ('merge to main and deploy'), not a standing decision."
    },
    {
      "id": "04a96eb5-cd00-4e9f-9252-b2cc919000a4",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Dev server config table. Operational detail, not a memory."
    },
    {
      "id": "79d606ed-8981-454a-83af-c25226b1b65c",
      "expected_count": 1,
      "expected_type": "adaptation",
      "expected_project": "",
      "expected_snippet": "shared DEV-LEDGER as operating memory",
      "miss_class": "recommendation_prose",
      "notes": "A recommendation that later became a ratified decision. Rule extractor would need a 'simplest version that could work today' / 'I'd start with' cue class."
    },
    {
      "id": "a6b0d279-c564-4bce-a703-e476f4a148ad",
      "expected_count": 2,
      "expected_type": "project",
      "expected_project": "p06-polisher",
      "expected_snippet": "z_engaged bool; cam amplitude set mechanically and read by encoders",
      "miss_class": "architectural_change_summary",
      "notes": "Two durable architectural facts about the polisher machine (Z-axis is engage/retract, cam is read-only). Extractor would need to recognize 'A is now B' / 'X removed, Y added' patterns."
    },
    {
      "id": "4e00e398-2e89-4653-8ee5-3f65c7f4d2d3",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Clarification question to user."
    },
    {
      "id": "a6a7816a-7590-4616-84f4-49d9054c2a91",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Instructional response offering two next moves."
    },
    {
      "id": "03527502-316a-4a3e-989c-00719392c7d1",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Troubleshooting a paste failure. Ephemeral."
    },
    {
      "id": "1fff59fc-545f-42df-9dd1-a0e6dec1b7ee",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Agreement + follow-up question. No durable claim."
    },
    {
      "id": "eb65dc18-0030-4720-ace7-f55af9df719d",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Explanation of how the capture hook works. Instructional."
    },
    {
      "id": "52c8c0f3-32fb-4b48-9065-73c778a08417",
      "expected_count": 1,
      "expected_type": "project",
      "expected_project": "p06-polisher",
      "expected_snippet": "USB SSD mandatory on RPi; Tailscale for remote access",
      "miss_class": "spec_update_announcement",
      "notes": "Concrete architectural commitments just added to the polisher spec. Phrased as '§17.1 Local Storage - USB SSD mandatory, not SD card.' The '§' section markers could be a new cue."
    },
    {
      "id": "32d40414-15af-47ee-944b-2cceae9574b8",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Session recap. Historical summary, not a durable memory."
    },
    {
      "id": "b6d2cdfc-37fb-459a-96bd-caefb9beaab4",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Deployment prompt for Dalidou. Operational, not a memory."
    },
    {
      "id": "ee03d823-931b-4d4e-9258-88b4ed5eeb07",
      "expected_count": 2,
      "expected_type": "knowledge",
      "expected_project": "p06-polisher",
      "expected_snippet": "USB SSD is non-negotiable for local storage; Tailscale mesh for SSH/file transfer",
      "miss_class": "layered_recommendation",
      "notes": "Layered infra recommendation with 'non-negotiable' / 'strongly recommended' strength markers. The 'non-negotiable' token could be a new cue class."
    },
    {
      "id": "dd234d9f-0d1c-47e8-b01c-eebcb568c7e7",
      "expected_count": 1,
      "expected_type": "project",
      "expected_project": "p06-polisher",
      "expected_snippet": "interface contract is identical regardless of who generates the programs; machine is a standalone box",
      "miss_class": "alignment_assertion",
      "notes": "Architectural invariant assertion. '**Alignment verified**' / 'nothing changes for X' style. Likely too subtle for rule matching without LLM assistance."
    },
    {
      "id": "1f95891a-cf37-400e-9d68-4fad8e04dcbb",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Huge session handoff prompt. Informational only."
    },
    {
      "id": "5580950f-d010-4544-be4b-b3071271a698",
      "expected_count": 0,
      "miss_class": "n/a",
      "notes": "Ledger schema sketch. Structural design proposal, later ratified — but the same idea was already captured as a ratified decision in the recent decisions section, so not worth re-extracting from this conversational form."
    }
  ]
}
518
scripts/eval_data/extractor_llm_baseline_2026-04-11.json
Normal file
@@ -0,0 +1,518 @@
{
  "summary": {
    "total": 20,
    "exact_match": 6,
    "positive_expected": 5,
    "total_expected_candidates": 7,
    "total_actual_candidates": 51,
    "yield_rate": 2.55,
    "recall": 1.0,
    "precision": 0.357,
    "false_positive_interactions": 9,
    "false_negative_interactions": 0,
    "miss_classes": {},
    "mode": "llm"
  },
  "results": [
    {
      "id": "ab239158-d6ac-4c51-b6e4-dd4ccea384a2",
      "expected_count": 0,
      "actual_count": 1,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Instructional deploy guidance. No durable claim.",
      "actual_candidates": [
        {
          "memory_type": "knowledge",
          "content": "AtoCore deployments to dalidou use the script /srv/storage/atocore/app/deploy/dalidou/deploy.sh instead of manual docker commands",
          "project": "atocore",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "da153f2a-b20a-4dee-8c72-431ebb71f08c",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "'Deploy still in progress.' Pure status.",
      "actual_candidates": []
    },
    {
      "id": "7d8371ee-c6d3-4dfe-a7b0-2d091f075c15",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "Git command walkthrough. No durable claim.",
      "actual_candidates": []
    },
    {
      "id": "14bf3f90-e318-466e-81ac-d35522741ba5",
      "expected_count": 0,
      "actual_count": 4,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Ledger status update. Transient fact, not a durable memory candidate.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "Retrieval/extraction evaluation follows 8-day mini-phase plan with hard gates to prevent scope drift. Preflight checks must validate git SHAs, baselines, and fixture stability before coding.",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Day 1: Create labeled extractor eval set from 30 captures (10 zero-candidate, 10 single-candidate, 10 ambiguous) with metadata; create scoring tool to measure precision/recall.",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Day 2: Measure current extractor against labeled set, recording yield, true/false positives, and false negatives by pattern.",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Session Log/Ledger system tracks work state across sessions so future sessions immediately know what is true and what is next; phases marked by git SHAs.",
          "project": "",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "8f855235-c38d-4c27-9f2b-8530ebe1a2d8",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "Short-term recommendation ('merge to main and deploy'), not a standing decision.",
      "actual_candidates": []
    },
    {
      "id": "04a96eb5-cd00-4e9f-9252-b2cc919000a4",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "Dev server config table. Operational detail, not a memory.",
      "actual_candidates": []
    },
    {
      "id": "79d606ed-8981-454a-83af-c25226b1b65c",
      "expected_count": 1,
      "actual_count": 3,
      "ok": false,
      "miss_class": "recommendation_prose",
      "notes": "A recommendation that later became a ratified decision. Rule extractor would need a 'simplest version that could work today' / 'I'd start with' cue class.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "atocore uses multi-model coordination: Claude and codex share DEV-LEDGER.md (current state / active plan / P1+P2 findings / recent decisions / commit log) read at session start, appended at session end",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "atocore starts with manual-event-loop (/audit or /status prompts) using DEV-LEDGER.md before upgrading to automated git hooks/CI review",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "atocore development involves coordinating between Claude and codex models with shared plan/review strategy and counter-validation to improve system quality",
          "project": "atocore",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "a6b0d279-c564-4bce-a703-e476f4a148ad",
      "expected_count": 2,
      "actual_count": 6,
      "ok": false,
      "miss_class": "architectural_change_summary",
      "notes": "Two durable architectural facts about the polisher machine (Z-axis is engage/retract, cam is read-only). Extractor would need to recognize 'A is now B' / 'X removed, Y added' patterns.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "Z-axis is a binary engage/retract mechanism (z_engaged bool), not continuous position control; confirmation timeout z_engage_timeout_s required.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Cam amplitude and offset are mechanically set by operator and read via encoders; no actuators control them, controller receives encoder telemetry only.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Cam parameters in controller are expected_cam_amplitude_deg and expected_cam_offset_deg (read-only reference for verification), not command setpoints.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Manual mode UI displays cam encoder readings (cam_amplitude_deg, cam_offset_deg) as read-only for operator verification of mechanical setting.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Manual session log records cam_setting measured at session start; run-log segment actual block includes cam_amplitude_deg_mean and cam_offset_deg_mean.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Machine capabilities now define z_type: engage_retract and cam_type: mechanical_with_encoder instead of actuator-driven setpoints.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "4e00e398-2e89-4653-8ee5-3f65c7f4d2d3",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "Clarification question to user.",
      "actual_candidates": []
    },
    {
      "id": "a6a7816a-7590-4616-84f4-49d9054c2a91",
      "expected_count": 0,
      "actual_count": 3,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Instructional response offering two next moves.",
      "actual_candidates": [
        {
          "memory_type": "knowledge",
          "content": "Codex is an audit agent; communicate with it via markdown prompts with numbered steps; it updates findings via commits to codex/* branches or direct messages.",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "preference",
          "content": "Audit-first workflow recommended: have codex audit DEV-LEDGER.md and recent commits before execution; validates round-trip, catches errors early.",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "knowledge",
          "content": "DEV-LEDGER.md at repo root is the shared coordination document with Orientation, Active Plan, and Open Review Findings sections.",
          "project": "",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "03527502-316a-4a3e-989c-00719392c7d1",
      "expected_count": 0,
      "actual_count": 0,
      "ok": true,
      "miss_class": "n/a",
      "notes": "Troubleshooting a paste failure. Ephemeral.",
      "actual_candidates": []
    },
    {
      "id": "1fff59fc-545f-42df-9dd1-a0e6dec1b7ee",
      "expected_count": 0,
      "actual_count": 3,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Agreement + follow-up question. No durable claim.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "Roadmap: Extractor improvement → Harness expansion → Wave 2 trusted operational ingestion → Finish OpenClaw integration (in that order)",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Phase 1 (Extractor): eval-driven loop—label captures, improve rules/add LLM mode, measure yield & FP, stop when queue reviewable (not coverage metrics)",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Phases 1 & 2 (Extractor + Harness) are a mini-phase; without harness, extractor improvements are blind edits",
          "project": "atocore",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "eb65dc18-0030-4720-ace7-f55af9df719d",
      "expected_count": 0,
      "actual_count": 2,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Explanation of how the capture hook works. Instructional.",
      "actual_candidates": [
        {
          "memory_type": "knowledge",
          "content": "Dalidou stores Claude Code interactions via a Stop hook that fires after each turn and POSTs to http://dalidou:8100/interactions with client=claude-code parameter",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "adaptation",
          "content": "Interaction capture system is passive and automatic; no manual action required, interactions accumulate automatically during normal Claude Code usage",
          "project": "",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "52c8c0f3-32fb-4b48-9065-73c778a08417",
      "expected_count": 1,
      "actual_count": 5,
      "ok": false,
      "miss_class": "spec_update_announcement",
      "notes": "Concrete architectural commitments just added to the polisher spec. Phrased as '§17.1 Local Storage - USB SSD mandatory, not SD card.' The '§' section markers could be a new cue.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "USB SSD mandatory for storage (not SD card); directory structure /data/runs/{id}/, /data/manual/{id}/; status.json for machine state",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "RPi joins Tailscale mesh for remote access over SSH VPN; no public IP or port forwarding; fully offline operation",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Data synchronization via rsync over Tailscale, failure-tolerant and non-blocking; USB stick as manual fallback",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Machine design principle: works fully offline and independently; network connection is for remote access only",
          "project": "",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "No cloud, no real-time streaming, no remote control features in design scope",
          "project": "",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "32d40414-15af-47ee-944b-2cceae9574b8",
      "expected_count": 0,
      "actual_count": 5,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Session recap. Historical summary, not a durable memory.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "P1: Reflection loop integration incomplete—extraction remains manual (POST /interactions/{id}/extract), not auto-triggered with reinforcement. Live capture won't auto-populate candidate review queue.",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "P1: Project memories excluded from context injection; build_context() requests [\"identity\", \"preference\"] only. Reinforcement signal doesn't reach assembled context packs.",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Current batch-extract rules produce only 1 candidate from 42 real captures. Extractor needs conversational-cue detection or LLM-assisted path to improve yield.",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Next priority: extractor rule expansion (cheapest validation of reflection loop), then Wave 2 trusted operational ingestion (master-plan priority). Defer retrieval eval harness focus.",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "knowledge",
          "content": "Alias canonicalization fix (resolve_project_name() boundary) is consistently applied across project state, memories, interactions, and context lookup. Code review approved directionally.",
          "project": "atocore",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "b6d2cdfc-37fb-459a-96bd-caefb9beaab4",
      "expected_count": 0,
      "actual_count": 1,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Deployment prompt for Dalidou. Operational, not a memory.",
      "actual_candidates": [
        {
          "memory_type": "preference",
          "content": "User prefers receiving standalone testing prompts they can paste into Claude Code on target deployments rather than having the assistant run tests directly.",
          "project": "atocore",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "ee03d823-931b-4d4e-9258-88b4ed5eeb07",
      "expected_count": 2,
      "actual_count": 5,
      "ok": false,
      "miss_class": "layered_recommendation",
      "notes": "Layered infra recommendation with 'non-negotiable' / 'strongly recommended' strength markers. The 'non-negotiable' token could be a new cue class.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "USB SSD on RPi is mandatory for polishing telemetry storage; must be independent of network for data integrity during runs.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Use Tailscale mesh for RPi remote access to provide SSH, file transfer, and NAT traversal without port forwarding.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Auto-sync telemetry data via rsync over Tailscale after runs complete; fire-and-forget pattern with automatic retry on network interruption.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Real-time telemetry monitoring should target 10 Hz downsampling; full 100 Hz streaming over network is not necessary.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "knowledge",
          "content": "Polishing telemetry data rate is approximately 29 MB per hour (100 Hz × 20 channels × 4 bytes = 8 KB/s).",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "dd234d9f-0d1c-47e8-b01c-eebcb568c7e7",
      "expected_count": 1,
      "actual_count": 3,
      "ok": false,
      "miss_class": "alignment_assertion",
      "notes": "Architectural invariant assertion. '**Alignment verified**' / 'nothing changes for X' style. Likely too subtle for rule matching without LLM assistance.",
      "actual_candidates": [
        {
          "memory_type": "project",
          "content": "Machine spec (shareable) + Atomaste spec (internal) separate concerns. Machine spec hides program generation as 'separate scope' to protect IP/business strategy.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Firmware interface contract is invariant: controller-job.v1 input, run-log.v1 + telemetry output. No firmware changes needed regardless of program generation implementation.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "project",
          "content": "Atomaste sim spec documents forward/return paths, calibration model (Preston k), translation loss, and service/IP strategy—details hidden from shareable machine spec.",
          "project": "p06-polisher",
          "rule": "llm_extraction"
        }
      ]
    },
    {
      "id": "1f95891a-cf37-400e-9d68-4fad8e04dcbb",
      "expected_count": 0,
      "actual_count": 4,
      "ok": false,
      "miss_class": "n/a",
      "notes": "Huge session handoff prompt. Informational only.",
      "actual_candidates": [
        {
          "memory_type": "knowledge",
          "content": "AtoCore is FastAPI (Python 3.12, SQLite + ChromaDB) on Dalidou home server (dalidou:8100), repo C:\\Users\\antoi\\ATOCore, data /srv/storage/atocore/, ingests Obsidian vault + Google Drive into vector memory system.",
          "project": "atocore",
          "rule": "llm_extraction"
        },
        {
          "memory_type": "knowledge",
          "content": "Deploy AtoCore: git push origin main, then ssh papa@dalidou and run /srv/storage/atocore/app/deploy/dalidou/deploy.sh",
          "project": "atocore",
          "rule": "llm_extraction"
        },
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "Do not add memory extraction to interaction capture hot path; keep extraction as separate batch/manual step. Reason: latency and queue noise before review rhythm is comfortable.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "As of 2026-04-11, approved roadmap in order: observe reinforcement, batch extraction, candidate triage, off-Dalidou backup, retrieval quality review.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "5580950f-d010-4544-be4b-b3071271a698",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 6,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Ledger schema sketch. Structural design proposal, later ratified — but the same idea was already captured as a ratified decision in the recent decisions section, so not worth re-extracting from this conversational form.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "AtoCore adopts DEV-LEDGER.md as shared operating memory with stable headers; updated at session boundaries",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "Codex branches for AtoCore fork from main (never orphan); use naming pattern codex/<topic>",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "In AtoCore, Claude builds and Codex audits; never work in parallel on same files",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "In AtoCore, P1-severity findings in DEV-LEDGER.md block further main commits until acknowledged",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "Every AtoCore session appends to DEV-LEDGER.md Session Log and updates Orientation before ending",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "AtoCore roadmap: (1) extractor improvement, (2) harness expansion, (3) Wave 2 ingestion, (4) OpenClaw finish; steps 1+2 are current mini-phase",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
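One labeled candidate above quotes the polishing telemetry rate as roughly 29 MB per hour (100 Hz × 20 channels × 4 bytes = 8 KB/s). A quick sanity check of that arithmetic, with the figures taken straight from the memory content:

```python
# Sanity-check the telemetry data-rate figure quoted in the labeled memory.
sample_rate_hz = 100     # samples per second
channels = 20            # telemetry channels
bytes_per_value = 4      # 4-byte values (e.g. float32)

bytes_per_second = sample_rate_hz * channels * bytes_per_value   # 8000 B/s = 8 KB/s
mb_per_hour = bytes_per_second * 3600 / 1_000_000                # decimal megabytes

print(f"{bytes_per_second} B/s -> {mb_per_hour:.1f} MB/h")       # 8000 B/s -> 28.8 MB/h
```

28.8 MB/h rounds to the "approximately 29 MB per hour" stored in the candidate, so the label is internally consistent.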
scripts/eval_data/interactions_snapshot_2026-04-11.json (new file, 1 line)
File diff suppressed because one or more lines are too long

scripts/eval_data/triage_verdict_2026-04-12.json (new file, 1 line)
@@ -0,0 +1 @@
{"promote": ["4b82fe01-4393-464a-b935-9ad5d112d3d8", "665cdd27-0057-4e73-82f5-5d4f47189b5d", "5f89c51d-7e8b-4fb9-830d-a35bb649f9f7", "25ac367c-8bbe-4ba4-8d8e-d533db33f2d9", "2f69a6ed-6de2-4565-87df-1ea3e8c42963", "6bcaebde-9e45-4de5-a220-65d9c4cd451e", "2dd36f74-db47-4c72-a185-fec025d07d4f", "7519d82b-8065-41f0-812e-9c1a3573d7b9", "78678162-5754-478b-b1fc-e25f22e0ee03", "6657b4ae-d4ec-4fec-a66f-2975cdb10d13", "ee626650-1ee0-439c-85c9-6d32a876f239", "1b44a886-a5af-4426-bf10-a92baf3a6502", "aa50c51a-27d7-4db9-b7a3-7ca75dba2118", "5951108b-3a5e-49d0-9308-dfab449664d3", "85f008b9-2d6d-49ad-81a1-e254dac2a2ac", "0cc417ed-ac38-4231-9786-a9582ac6a60f"], "reject": ["0dd85386-cace-4f9a-9098-c6732f3c64fa", "8939b875-152c-4c90-8614-3cfdc64cd1d6", "93e37d2a-b512-4a97-b230-e64ac913d087", "c873ec00-063e-488c-ad32-1233290a3feb", "89446ebe-fd42-4177-80db-3657bc41d048", "1f077e98-f945-4480-96ab-110b0671ebc6", "89f60018-c23b-4b2f-80ca-e6f7d02c5cd3", "82f17880-92da-485e-a24a-0599ab1836e7", "6d6f4fe9-73e5-449f-a802-6dc0a974f87b", "932f38df-58f3-49c2-9968-8d422dc54b42", "2b3178e8-fe38-4338-b2b0-75a01da18cea", "254c394d-3f80-4b34-a891-9f1cbfec74d7", "34add99d-8d2e-4586-b002-fc7b7d22bcb3", "993e0afe-9910-4984-b608-f5e9de7c0453", "bdf488d7-9200-441e-afbf-5335020ea78b", "188197af-a61d-4616-9e39-712aeaaadf61", "acffcaa4-5966-4ec1-a0b2-3b8dcebe75bd", "e8f4e704-367b-4759-b20c-da0ccf06cf7d", "ab2b607c-52b1-405f-a874-c6078393c21c", "5a5fd29d-291f-4e22-88fe-825cf55f745a", "4c238106-017e-4283-99a1-639497b6ddde", "83aed988-4257-4220-b612-6c725d6cd95a", "95d87d1a-5daa-414d-95ff-a344a62e0b6b", "7aafb588-51b0-4536-a414-ebaaea924b98", "9d2cbbe9-cf2e-4aab-9cb8-c4951da70826", "db88eecf-e31a-4fee-b07d-0b51db7e315e", "8748f071-ff28-47a6-8504-65ca30a8336a", "f9210883-67a8-4dae-9f27-6b5ae7bd8a6b", "2e001aaf-0c5c-4547-9b96-ebc4172b258d", "47778126-b0cf-41d9-9e21-f2418f53e792", "410e4a70-ae12-4de2-8f31-071ffee3cad4", "e94f94f0-3538-40dd-aef2-0189eacc7eb7", "23fa6fdf-cfb9-4850-ad04-3ea56551c30a", 
"3e1fad28-031b-4670-a9d0-0af2e8ba1361", "d49378a4-d03c-4730-be87-f0fcb2d199db"]}
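The verdict file above is a flat promote/reject split keyed by candidate UUID. A minimal sketch of consuming it (the helper name is hypothetical; the only shape assumption is the two keys visible above):

```python
import json
from pathlib import Path

def load_verdict(path: str) -> tuple[set[str], set[str]]:
    """Return (promote_ids, reject_ids) from a triage verdict JSON file."""
    doc = json.loads(Path(path).read_text(encoding="utf-8"))
    promote = set(doc.get("promote", []))
    reject = set(doc.get("reject", []))
    # A candidate must never appear in both lists.
    assert not (promote & reject), "verdict lists overlap"
    return promote, reject
```

Reading it as sets makes the disjointness check and later membership tests cheap when applying the verdict to stored candidates.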
scripts/extractor_eval.py (new file, 274 lines)
@@ -0,0 +1,274 @@
"""Extractor eval runner — scores the rule-based extractor against a
labeled interaction corpus.

Pulls full interaction content from a frozen snapshot, runs each through
``extract_candidates_from_interaction``, and compares the output to the
expected counts from a labels file. Produces a per-label scorecard plus
aggregate precision / recall / yield numbers.

This harness deliberately stays file-based: snapshot + labels + this
runner. No Dalidou HTTP dependency once the snapshot is frozen, so the
eval is reproducible run-to-run even as live captures drift.

Usage:

    python scripts/extractor_eval.py          # human report
    python scripts/extractor_eval.py --json   # machine-readable
    python scripts/extractor_eval.py \\
        --snapshot scripts/eval_data/interactions_snapshot_2026-04-11.json \\
        --labels scripts/eval_data/extractor_labels_2026-04-11.json
"""

from __future__ import annotations

import argparse
import io
import json
import sys
from dataclasses import dataclass, field
from pathlib import Path

# Force UTF-8 on stdout so real LLM output (arrows, em-dashes, CJK)
# doesn't crash the human report on Windows cp1252 consoles.
if hasattr(sys.stdout, "buffer"):
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8", errors="replace", line_buffering=True)

# Make src/ importable without requiring an install.
_REPO_ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(_REPO_ROOT / "src"))

from atocore.interactions.service import Interaction  # noqa: E402
from atocore.memory.extractor import extract_candidates_from_interaction  # noqa: E402
from atocore.memory.extractor_llm import extract_candidates_llm  # noqa: E402

DEFAULT_SNAPSHOT = _REPO_ROOT / "scripts" / "eval_data" / "interactions_snapshot_2026-04-11.json"
DEFAULT_LABELS = _REPO_ROOT / "scripts" / "eval_data" / "extractor_labels_2026-04-11.json"


@dataclass
class LabelResult:
    id: str
    expected_count: int
    actual_count: int
    ok: bool
    miss_class: str
    notes: str
    actual_candidates: list[dict] = field(default_factory=list)


def load_snapshot(path: Path) -> dict[str, dict]:
    data = json.loads(path.read_text(encoding="utf-8"))
    return {item["id"]: item for item in data.get("interactions", [])}


def load_labels(path: Path) -> dict:
    return json.loads(path.read_text(encoding="utf-8"))


def interaction_from_snapshot(snap: dict) -> Interaction:
    return Interaction(
        id=snap["id"],
        prompt=snap.get("prompt", "") or "",
        response=snap.get("response", "") or "",
        response_summary="",
        project=snap.get("project", "") or "",
        client=snap.get("client", "") or "",
        session_id=snap.get("session_id", "") or "",
        created_at=snap.get("created_at", "") or "",
    )


def score(snapshot: dict[str, dict], labels_doc: dict, mode: str = "rule") -> list[LabelResult]:
    results: list[LabelResult] = []
    for label in labels_doc["labels"]:
        iid = label["id"]
        snap = snapshot.get(iid)
        if snap is None:
            results.append(
                LabelResult(
                    id=iid,
                    expected_count=int(label.get("expected_count", 0)),
                    actual_count=-1,
                    ok=False,
                    miss_class="not_in_snapshot",
                    notes=label.get("notes", ""),
                )
            )
            continue
        interaction = interaction_from_snapshot(snap)
        if mode == "llm":
            candidates = extract_candidates_llm(interaction)
        else:
            candidates = extract_candidates_from_interaction(interaction)
        actual_count = len(candidates)
        expected_count = int(label.get("expected_count", 0))
        results.append(
            LabelResult(
                id=iid,
                expected_count=expected_count,
                actual_count=actual_count,
                ok=(actual_count == expected_count),
                miss_class=label.get("miss_class", "n/a"),
                notes=label.get("notes", ""),
                actual_candidates=[
                    {
                        "memory_type": c.memory_type,
                        "content": c.content,
                        "project": c.project,
                        "rule": c.rule,
                    }
                    for c in candidates
                ],
            )
        )
    return results


def aggregate(results: list[LabelResult]) -> dict:
    total = len(results)
    exact_match = sum(1 for r in results if r.ok)
    true_positive = sum(1 for r in results if r.expected_count > 0 and r.actual_count > 0)
    false_positive_interactions = sum(
        1 for r in results if r.expected_count == 0 and r.actual_count > 0
    )
    false_negative_interactions = sum(
        1 for r in results if r.expected_count > 0 and r.actual_count == 0
    )
    positive_expected = sum(1 for r in results if r.expected_count > 0)
    total_expected_candidates = sum(r.expected_count for r in results)
    total_actual_candidates = sum(max(r.actual_count, 0) for r in results)
    yield_rate = total_actual_candidates / total if total else 0.0
    # Recall over interactions that had at least one expected candidate:
    recall = true_positive / positive_expected if positive_expected else 0.0
    # Precision over interactions that produced any candidate:
    precision_denom = true_positive + false_positive_interactions
    precision = true_positive / precision_denom if precision_denom else 0.0
    # Miss-class breakdown
    miss_classes: dict[str, int] = {}
    for r in results:
        if r.expected_count > 0 and r.actual_count == 0:
            key = r.miss_class or "unlabeled"
            miss_classes[key] = miss_classes.get(key, 0) + 1
    return {
        "total": total,
        "exact_match": exact_match,
        "positive_expected": positive_expected,
        "total_expected_candidates": total_expected_candidates,
        "total_actual_candidates": total_actual_candidates,
        "yield_rate": round(yield_rate, 3),
        "recall": round(recall, 3),
        "precision": round(precision, 3),
        "false_positive_interactions": false_positive_interactions,
        "false_negative_interactions": false_negative_interactions,
        "miss_classes": miss_classes,
    }


def print_human(results: list[LabelResult], summary: dict) -> None:
    print("=== Extractor eval ===")
    print(
        f"labeled={summary['total']} "
        f"exact_match={summary['exact_match']} "
        f"positive_expected={summary['positive_expected']}"
    )
    print(
        f"yield={summary['yield_rate']} "
        f"recall={summary['recall']} "
        f"precision={summary['precision']}"
    )
    print(
        f"false_positives={summary['false_positive_interactions']} "
        f"false_negatives={summary['false_negative_interactions']}"
    )
    print()
    print("miss class breakdown (FN):")
    if summary["miss_classes"]:
        for k, v in sorted(summary["miss_classes"].items(), key=lambda kv: -kv[1]):
            print(f"  {v:3d} {k}")
    else:
        print("  (none)")
    print()
    print("per-interaction:")
    for r in results:
        marker = "OK  " if r.ok else "MISS"
        iid_short = r.id[:8]
        print(f"  {marker} {iid_short} expected={r.expected_count} actual={r.actual_count} class={r.miss_class}")
        if r.actual_candidates:
            for c in r.actual_candidates:
                preview = (c["content"] or "")[:80]
                print(f"      [{c['memory_type']}] {preview}")


def print_json(results: list[LabelResult], summary: dict) -> None:
    payload = {
        "summary": summary,
        "results": [
            {
                "id": r.id,
                "expected_count": r.expected_count,
                "actual_count": r.actual_count,
                "ok": r.ok,
                "miss_class": r.miss_class,
                "notes": r.notes,
                "actual_candidates": r.actual_candidates,
            }
            for r in results
        ],
    }
    json.dump(payload, sys.stdout, indent=2)
    sys.stdout.write("\n")


def main() -> int:
    parser = argparse.ArgumentParser(description="AtoCore extractor eval")
    parser.add_argument("--snapshot", type=Path, default=DEFAULT_SNAPSHOT)
    parser.add_argument("--labels", type=Path, default=DEFAULT_LABELS)
    parser.add_argument("--json", action="store_true", help="emit machine-readable JSON")
    parser.add_argument(
        "--output",
        type=Path,
        default=None,
        help="write JSON result to this file (bypasses log/stdout interleaving)",
    )
    parser.add_argument(
        "--mode",
        choices=["rule", "llm"],
        default="rule",
        help="which extractor to score (default: rule)",
    )
    args = parser.parse_args()

    snapshot = load_snapshot(args.snapshot)
    labels = load_labels(args.labels)
    results = score(snapshot, labels, mode=args.mode)
    summary = aggregate(results)
    summary["mode"] = args.mode

    if args.output is not None:
        payload = {
            "summary": summary,
            "results": [
                {
                    "id": r.id,
                    "expected_count": r.expected_count,
                    "actual_count": r.actual_count,
                    "ok": r.ok,
                    "miss_class": r.miss_class,
                    "notes": r.notes,
                    "actual_candidates": r.actual_candidates,
                }
                for r in results
            ],
        }
        args.output.write_text(json.dumps(payload, indent=2, ensure_ascii=False), encoding="utf-8")
        print(f"wrote {args.output} ({summary['mode']}: recall={summary['recall']} precision={summary['precision']})")
    elif args.json:
        print_json(results, summary)
    else:
        print_human(results, summary)

    return 0 if summary["false_negative_interactions"] == 0 and summary["false_positive_interactions"] == 0 else 1


if __name__ == "__main__":
    raise SystemExit(main())
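Note that the recall and precision in ``aggregate`` are interaction-level, not candidate-level: an interaction counts as a true positive if it expected at least one candidate and produced at least one, regardless of counts. A standalone toy illustration of those definitions (it does not import the runner):

```python
# Interaction-level recall/precision, mirroring the definitions in aggregate().
# Each pair is (expected_count, actual_count) for one labeled interaction.
pairs = [(2, 1), (1, 0), (0, 3), (0, 0)]

tp = sum(1 for e, a in pairs if e > 0 and a > 0)       # expected and produced something
fp = sum(1 for e, a in pairs if e == 0 and a > 0)      # produced where nothing expected
positive_expected = sum(1 for e, _ in pairs if e > 0)  # interactions with any expectation

recall = tp / positive_expected    # 1 of 2 expecting interactions produced output
precision = tp / (tp + fp)         # 1 of 2 producing interactions was wanted
print(recall, precision)           # 0.5 0.5
```

Under this scheme over-generation within a wanted interaction (first pair: expected 2, got 1) does not hurt precision; only the exact-match count in the scorecard catches it.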
scripts/persist_llm_candidates.py (new file, 89 lines)
@@ -0,0 +1,89 @@
"""Persist LLM-extracted candidates from a baseline JSON to Dalidou.

One-shot script: reads a saved extractor eval output file, filters to
candidates the LLM actually produced, and POSTs each to the Dalidou
memory API with ``status=candidate``. Deduplicates against already-
existing candidate content so the script is safe to re-run.

Usage:

    python scripts/persist_llm_candidates.py \\
        scripts/eval_data/extractor_llm_baseline_2026-04-11.json

Then triage via:

    python scripts/atocore_client.py triage
"""

from __future__ import annotations

import json
import os
import sys
import urllib.error
import urllib.request

BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100")
TIMEOUT = int(os.environ.get("ATOCORE_TIMEOUT_SECONDS", "10"))


def post_json(path: str, body: dict) -> dict:
    data = json.dumps(body).encode("utf-8")
    req = urllib.request.Request(
        url=f"{BASE_URL}{path}",
        method="POST",
        headers={"Content-Type": "application/json"},
        data=data,
    )
    with urllib.request.urlopen(req, timeout=TIMEOUT) as resp:
        return json.loads(resp.read().decode("utf-8"))


def main() -> int:
    if len(sys.argv) < 2:
        print(f"usage: {sys.argv[0]} <baseline_json>", file=sys.stderr)
        return 1

    with open(sys.argv[1], encoding="utf-8") as fh:
        data = json.load(fh)
    results = data.get("results", [])

    persisted = 0
    skipped = 0
    errors = 0

    for r in results:
        for c in r.get("actual_candidates", []):
            content = (c.get("content") or "").strip()
            if not content:
                continue
            mem_type = c.get("memory_type", "knowledge")
            project = c.get("project", "")
            confidence = c.get("confidence", 0.5)

            try:
                resp = post_json("/memory", {
                    "memory_type": mem_type,
                    "content": content,
                    "project": project,
                    "confidence": float(confidence),
                    "status": "candidate",
                })
                persisted += 1
                print(f"  + {resp.get('id', '?')[:8]} [{mem_type}] {content[:80]}")
            except urllib.error.HTTPError as exc:
                # The server answers 400 for duplicate candidate content,
                # which is what makes re-runs safe.
                if exc.code == 400:
                    skipped += 1
                else:
                    errors += 1
                    print(f"  ! error {exc.code}: {content[:60]}", file=sys.stderr)
            except Exception as exc:
                errors += 1
                print(f"  ! {exc}: {content[:60]}", file=sys.stderr)

    print(f"\npersisted={persisted} skipped={skipped} errors={errors}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
@@ -13,7 +13,7 @@
       "p06-polisher",
       "folded-beam"
     ],
-    "notes": "Canonical p04 decision — should surface both Trusted Project State (selected_mirror_architecture) and the project-memory band with the Option B memory"
+    "notes": "Canonical p04 decision — should surface both Trusted Project State and the project-memory band"
   },
   {
     "name": "p04-constraints",
@@ -27,7 +27,17 @@
     "expect_absent": [
       "polisher suite"
     ],
-    "notes": "Key constraints are in Trusted Project State (key_constraints) and in the mission-framing memory"
+    "notes": "Key constraints are in Trusted Project State and in the mission-framing memory"
+  },
+  {
+    "name": "p04-short-ambiguous",
+    "project": "p04-gigabit",
+    "prompt": "current status",
+    "expect_present": [
+      "--- Trusted Project State ---"
+    ],
+    "expect_absent": [],
+    "notes": "Short ambiguous prompt — at minimum project state should surface. Hard case: the prompt is generic enough that chunks may not rank well."
   },
   {
     "name": "p05-configuration",
@@ -42,7 +52,7 @@
       "conical back",
       "polisher suite"
     ],
-    "notes": "P05 architecture memory covers folded-beam + CGH. GigaBIT M1 is the mirror under test and legitimately appears in p05 source docs (the interferometer measures it), so we only flag genuinely p04-only decisions like the mirror architecture choice."
+    "notes": "P05 architecture memory covers folded-beam + CGH. GigaBIT M1 legitimately appears in p05 source docs."
   },
   {
     "name": "p05-vendor-signal",
@@ -57,6 +67,19 @@
     ],
     "notes": "Vendor memory mentions 4D as strongest technical candidate and Zygo Verifire SV as value path"
   },
+  {
+    "name": "p05-cgh-calibration",
+    "project": "p05-interferometer",
+    "prompt": "how does CGH calibration work for the interferometer",
+    "expect_present": [
+      "CGH"
+    ],
+    "expect_absent": [
+      "polisher-sim",
+      "polisher-post"
+    ],
+    "notes": "CGH is a core p05 concept. Should surface via chunks and possibly the architecture memory. Must not bleed p06 polisher-suite terms."
+  },
   {
     "name": "p06-suite-split",
     "project": "p06-polisher",
@@ -69,7 +92,7 @@
     "expect_absent": [
       "GigaBIT"
     ],
-    "notes": "The three-layer split is in multiple p06 memories; check all three names surface together"
+    "notes": "The three-layer split is in multiple p06 memories"
   },
   {
     "name": "p06-control-rule",
@@ -82,5 +105,121 @@
       "interferometer"
     ],
     "notes": "Control design rule memory mentions interlocks and state transitions"
+  },
+  {
+    "name": "p06-firmware-interface",
+    "project": "p06-polisher",
+    "prompt": "what is the firmware interface contract for the polisher machine",
+    "expect_present": [
+      "controller-job"
+    ],
+    "expect_absent": [
+      "interferometer",
+      "GigaBIT"
+    ],
+    "notes": "New p06 memory from the first triage: firmware interface contract is invariant controller-job.v1 in, run-log.v1 out"
+  },
+  {
+    "name": "p06-z-axis",
+    "project": "p06-polisher",
+    "prompt": "how does the polisher Z-axis work",
+    "expect_present": [
+      "engage"
+    ],
+    "expect_absent": [
+      "interferometer"
+    ],
+    "notes": "New p06 memory: Z-axis is binary engage/retract, not continuous position. The word 'engage' should appear."
+  },
+  {
+    "name": "p06-cam-mechanism",
+    "project": "p06-polisher",
+    "prompt": "how is cam amplitude controlled on the polisher",
+    "expect_present": [
+      "encoder"
+    ],
+    "expect_absent": [
+      "GigaBIT"
+    ],
+    "notes": "New p06 memory: cam set mechanically by operator, read by encoders. The word 'encoder' should appear."
+  },
+  {
+    "name": "p06-telemetry-rate",
+    "project": "p06-polisher",
+    "prompt": "what is the expected polishing telemetry data rate",
+    "expect_present": [
+      "29 MB"
+    ],
+    "expect_absent": [
+      "interferometer"
+    ],
+    "notes": "New p06 knowledge memory: approximately 29 MB per hour at 100 Hz"
+  },
+  {
+    "name": "p06-offline-design",
+    "project": "p06-polisher",
+    "prompt": "does the polisher machine need network to operate",
+    "expect_present": [
+      "offline"
+    ],
+    "expect_absent": [
+      "CGH"
+    ],
+    "notes": "New p06 memory: machine works fully offline and independently; network is for remote access only"
+  },
+  {
+    "name": "p06-short-ambiguous",
+    "project": "p06-polisher",
+    "prompt": "current status",
+    "expect_present": [
+      "--- Trusted Project State ---"
+    ],
+    "expect_absent": [],
+    "notes": "Short ambiguous prompt — project state should surface at minimum"
+  },
+  {
+    "name": "cross-project-no-bleed",
+    "project": "p04-gigabit",
+    "prompt": "what telemetry rate should we target",
+    "expect_present": [],
+    "expect_absent": [
+      "29 MB",
+      "polisher"
+    ],
+    "notes": "Adversarial: telemetry rate is a p06 fact. A p04 query for 'telemetry rate' must NOT surface p06 memories. Tests cross-project gating."
+  },
+  {
+    "name": "no-project-hint",
+    "project": "",
+    "prompt": "tell me about the current projects",
+    "expect_present": [],
+    "expect_absent": [
+      "--- Project Memories ---"
+    ],
+    "notes": "Without a project hint, project memories must not appear (cross-project bleed guard). Chunks may appear if any match."
+  },
+  {
+    "name": "p06-usb-ssd",
+    "project": "p06-polisher",
+    "prompt": "what storage solution is specified for the polisher RPi",
+    "expect_present": [
+      "USB SSD"
+    ],
+    "expect_absent": [
+      "interferometer"
+    ],
+    "notes": "New p06 memory from triage: USB SSD mandatory, not SD card"
+  },
+  {
+    "name": "p06-tailscale",
+    "project": "p06-polisher",
+    "prompt": "how do we access the polisher machine remotely",
+    "expect_present": [
+      "Tailscale"
+    ],
+    "expect_absent": [
+      "GigaBIT"
+    ],
+    "notes": "New p06 memory: Tailscale mesh for RPi remote access"
   }
 ]
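Each retrieval case above reduces to substring checks against the assembled context. A minimal sketch of how a runner might score one case (the function name and the way the context string is obtained are hypothetical; only the ``expect_present`` / ``expect_absent`` fields come from the fixture):

```python
def check_case(case: dict, context: str) -> list[str]:
    """Return a list of failure strings for one retrieval eval case.

    An empty list means the case passed: every expect_present needle
    appeared in the assembled context, and no expect_absent needle did.
    """
    failures = []
    for needle in case.get("expect_present", []):
        if needle not in context:
            failures.append(f"missing: {needle!r}")
    for needle in case.get("expect_absent", []):
        if needle in context:
            failures.append(f"bleed: {needle!r}")
    return failures
```

Substring matching is deliberately crude but matches how the fixtures are written: needles like "--- Trusted Project State ---" are band headers, and needles like "GigaBIT" are cross-project bleed markers.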
@@ -141,6 +141,7 @@ class MemoryCreateRequest(BaseModel):
     content: str
     project: str = ""
     confidence: float = 1.0
+    status: str = "active"


 class MemoryUpdateRequest(BaseModel):
@@ -344,6 +345,7 @@ def api_create_memory(req: MemoryCreateRequest) -> dict:
             content=req.content,
             project=req.project,
             confidence=req.confidence,
+            status=req.status,
         )
     except ValueError as e:
         raise HTTPException(status_code=400, detail=str(e))
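With the new ``status`` field on ``MemoryCreateRequest``, a client can persist a candidate directly instead of an active memory. A hedged sketch of building that request with the standard library (the ``/memory`` path and host come from the scripts above; the split into "build" and "send" is just to keep the example testable offline):

```python
import json
import urllib.request

def build_candidate_request(base_url: str, content: str, project: str) -> urllib.request.Request:
    """Build the POST that persists a memory with status=candidate.

    Sending it (urllib.request.urlopen) requires a live AtoCore server.
    """
    body = {
        "memory_type": "project",
        "content": content,
        "project": project,
        "confidence": 0.5,
        "status": "candidate",   # new field; the server-side default stays "active"
    }
    return urllib.request.Request(
        url=f"{base_url}/memory",
        method="POST",
        headers={"Content-Type": "application/json"},
        data=json.dumps(body).encode("utf-8"),
    )
```

Omitting ``status`` preserves the old behavior, since the Pydantic default is ``"active"``; only triage-bound writers need to set it.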
src/atocore/memory/extractor_llm.py (new file, 281 lines)
@@ -0,0 +1,281 @@
"""LLM-assisted candidate-memory extraction via the Claude Code CLI.

Day 4 of the 2026-04-11 mini-phase: the rule-based extractor hit 0%
recall against real conversational claude-code captures (Day 2 baseline
scorecard in ``scripts/eval_data/extractor_labels_2026-04-11.json``),
with false negatives spread across 5 distinct miss classes. A single
rule expansion cannot close that gap, so this module adds an optional
LLM-assisted mode that shells out to the ``claude -p`` (Claude Code
non-interactive) CLI with a focused extraction system prompt. That
path reuses the user's existing Claude.ai OAuth credentials — no API
key anywhere, per the 2026-04-11 decision.

Trust rules carried forward from the rule-based extractor:

- Candidates are NEVER auto-promoted. Caller persists with
  ``status="candidate"`` and a human reviews via the triage CLI.
- This path is additive. The rule-based extractor keeps working
  exactly as before; callers opt in by importing this module.
- Extraction stays off the capture hot path — this is batch / manual
  only, per the 2026-04-11 decision.
- Failure is silent. Missing CLI, non-zero exit, malformed JSON,
  timeout — all return an empty list and log an error. Never raises
  into the caller; the capture audit trail must not break on an
  optional side effect.

Configuration:

- Requires the ``claude`` CLI on PATH (``claude --version`` should work).
- ``ATOCORE_LLM_EXTRACTOR_MODEL`` overrides the model alias (default
  ``haiku``).
- ``ATOCORE_LLM_EXTRACTOR_TIMEOUT_S`` overrides the per-call timeout
  (default 90 seconds — the first invocation is slow because Node.js
  startup plus the OAuth check is non-trivial).

Implementation notes:

- We run ``claude -p`` with ``--model <alias>``,
  ``--append-system-prompt`` for the extraction instructions,
  ``--no-session-persistence`` so we don't pollute session history,
  and ``--disable-slash-commands`` so a stray ``/foo`` in an extracted
  response never triggers anything.
- The CLI is invoked from a temp working directory so it does not
  auto-discover ``CLAUDE.md`` / ``DEV-LEDGER.md`` / ``AGENTS.md``
  from the repo root. We want a bare extraction context, not the
  full project briefing. We can't use ``--bare`` because that
  forces API-key auth; the temp-cwd trick is the lightest way to
  keep OAuth auth while skipping project context loading.
"""

from __future__ import annotations

import json
import os
import shutil
import subprocess
import tempfile
from dataclasses import dataclass
from functools import lru_cache

from atocore.interactions.service import Interaction
from atocore.memory.extractor import MemoryCandidate
from atocore.memory.service import MEMORY_TYPES
from atocore.observability.logger import get_logger
|
|
||||||
|
log = get_logger("extractor_llm")
|
||||||
|
|
||||||
|
LLM_EXTRACTOR_VERSION = "llm-0.2.0"
|
||||||
|
DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "haiku")
|
||||||
|
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
|
||||||
|
MAX_RESPONSE_CHARS = 8000
|
||||||
|
MAX_PROMPT_CHARS = 2000
|
||||||
|
|
||||||
|
_SYSTEM_PROMPT = """You extract durable memory candidates from LLM conversation turns for a personal context engine called AtoCore.
|
||||||
|
|
||||||
|
Your job is to read one user prompt plus the assistant's response and decide which durable facts, decisions, preferences, architectural rules, or project invariants should be remembered across future sessions.
|
||||||
|
|
||||||
|
Rules:
|
||||||
|
|
||||||
|
1. Only surface durable claims. Skip transient status ("deploy is still running"), instructional guidance ("here is how to run the command"), troubleshooting tactics, ephemeral recommendations ("merge this PR now"), and session recaps.
|
||||||
|
2. A candidate is durable when a reader coming back in two weeks would still need to know it. Architectural choices, named rules, ratified decisions, invariants, procurement commitments, and project-level constraints qualify. Conversational fillers and step-by-step instructions do not.
|
||||||
|
3. Each candidate must stand alone. Rewrite the claim in one sentence under 200 characters with enough context that a reader without the conversation understands it.
|
||||||
|
4. Each candidate must have a type from this closed set: project, knowledge, preference, adaptation.
|
||||||
|
5. If the conversation is clearly scoped to a project (p04-gigabit, p05-interferometer, p06-polisher, atocore), set ``project`` to that id. Otherwise leave ``project`` empty.
|
||||||
|
6. If the response makes no durable claim, return an empty list. It is correct and expected to return [] on most conversational turns.
|
||||||
|
7. Confidence should be 0.5 by default so human review workload is honest. Raise to 0.6 only when the response states the claim in an unambiguous, committed form (e.g. "the decision is X", "the selected approach is Y", "X is non-negotiable").
|
||||||
|
8. Output must be a raw JSON array and nothing else. No prose before or after. No markdown fences. No explanations.
|
||||||
|
|
||||||
|
Each array element has exactly this shape:
|
||||||
|
|
||||||
|
{"type": "project|knowledge|preference|adaptation", "content": "...", "project": "...", "confidence": 0.5}
|
||||||
|
|
||||||
|
Return [] when there is nothing to extract."""
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class LLMExtractionResult:
|
||||||
|
candidates: list[MemoryCandidate]
|
||||||
|
raw_output: str
|
||||||
|
error: str = ""
|
||||||
|
|
||||||
|
|
||||||
|
@lru_cache(maxsize=1)
|
||||||
|
def _sandbox_cwd() -> str:
|
||||||
|
"""Return a stable temp directory for ``claude -p`` invocations.
|
||||||
|
|
||||||
|
We want the CLI to run from a directory that does NOT contain
|
||||||
|
``CLAUDE.md`` / ``DEV-LEDGER.md`` / ``AGENTS.md``, so every
|
||||||
|
extraction call starts with a clean context instead of the full
|
||||||
|
AtoCore project briefing. Cached so the directory persists for
|
||||||
|
the lifetime of the process.
|
||||||
|
"""
|
||||||
|
return tempfile.mkdtemp(prefix="ato-llm-extract-")
|
||||||
|
|
||||||
|
|
||||||
|
def _cli_available() -> bool:
|
||||||
|
return shutil.which("claude") is not None
|
||||||
|
|
||||||
|
|
||||||
|
def extract_candidates_llm(
|
||||||
|
interaction: Interaction,
|
||||||
|
model: str | None = None,
|
||||||
|
timeout_s: float | None = None,
|
||||||
|
) -> list[MemoryCandidate]:
|
||||||
|
"""Run the LLM-assisted extractor against one interaction.
|
||||||
|
|
||||||
|
Returns a list of ``MemoryCandidate`` objects, empty on any
|
||||||
|
failure path. The caller is responsible for persistence.
|
||||||
|
"""
|
||||||
|
return extract_candidates_llm_verbose(
|
||||||
|
interaction,
|
||||||
|
model=model,
|
||||||
|
timeout_s=timeout_s,
|
||||||
|
).candidates
|
||||||
|
|
||||||
|
|
||||||
|
def extract_candidates_llm_verbose(
|
||||||
|
interaction: Interaction,
|
||||||
|
model: str | None = None,
|
||||||
|
timeout_s: float | None = None,
|
||||||
|
) -> LLMExtractionResult:
|
||||||
|
"""Like ``extract_candidates_llm`` but also returns the raw
|
||||||
|
subprocess output and any error encountered, for eval / debugging.
|
||||||
|
"""
|
||||||
|
if not _cli_available():
|
||||||
|
return LLMExtractionResult(
|
||||||
|
candidates=[],
|
||||||
|
raw_output="",
|
||||||
|
error="claude_cli_missing",
|
||||||
|
)
|
||||||
|
|
||||||
|
response_text = (interaction.response or "").strip()
|
||||||
|
if not response_text:
|
||||||
|
return LLMExtractionResult(candidates=[], raw_output="", error="empty_response")
|
||||||
|
|
||||||
|
prompt_excerpt = (interaction.prompt or "")[:MAX_PROMPT_CHARS]
|
||||||
|
response_excerpt = response_text[:MAX_RESPONSE_CHARS]
|
||||||
|
user_message = (
|
||||||
|
f"PROJECT HINT (may be empty): {interaction.project or ''}\n\n"
|
||||||
|
f"USER PROMPT:\n{prompt_excerpt}\n\n"
|
||||||
|
f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
|
||||||
|
"Return the JSON array now."
|
||||||
|
)
|
||||||
|
|
||||||
|
args = [
|
||||||
|
"claude",
|
||||||
|
"-p",
|
||||||
|
"--model",
|
||||||
|
model or DEFAULT_MODEL,
|
||||||
|
"--append-system-prompt",
|
||||||
|
_SYSTEM_PROMPT,
|
||||||
|
"--no-session-persistence",
|
||||||
|
"--disable-slash-commands",
|
||||||
|
user_message,
|
||||||
|
]
|
||||||
|
|
||||||
|
try:
|
||||||
|
completed = subprocess.run(
|
||||||
|
args,
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
timeout=timeout_s or DEFAULT_TIMEOUT_S,
|
||||||
|
cwd=_sandbox_cwd(),
|
||||||
|
encoding="utf-8",
|
||||||
|
errors="replace",
|
||||||
|
)
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
log.error("llm_extractor_timeout", interaction_id=interaction.id)
|
||||||
|
return LLMExtractionResult(candidates=[], raw_output="", error="timeout")
|
||||||
|
except Exception as exc: # pragma: no cover - unexpected subprocess failure
|
||||||
|
log.error("llm_extractor_subprocess_failed", error=str(exc))
|
||||||
|
return LLMExtractionResult(candidates=[], raw_output="", error=f"subprocess_error: {exc}")
|
||||||
|
|
||||||
|
if completed.returncode != 0:
|
||||||
|
log.error(
|
||||||
|
"llm_extractor_nonzero_exit",
|
||||||
|
interaction_id=interaction.id,
|
||||||
|
returncode=completed.returncode,
|
||||||
|
stderr_prefix=(completed.stderr or "")[:200],
|
||||||
|
)
|
||||||
|
return LLMExtractionResult(
|
||||||
|
candidates=[],
|
||||||
|
raw_output=completed.stdout or "",
|
||||||
|
error=f"exit_{completed.returncode}",
|
||||||
|
)
|
||||||
|
|
||||||
|
raw_output = (completed.stdout or "").strip()
|
||||||
|
candidates = _parse_candidates(raw_output, interaction)
|
||||||
|
log.info(
|
||||||
|
"llm_extractor_done",
|
||||||
|
interaction_id=interaction.id,
|
||||||
|
candidate_count=len(candidates),
|
||||||
|
model=model or DEFAULT_MODEL,
|
||||||
|
)
|
||||||
|
return LLMExtractionResult(candidates=candidates, raw_output=raw_output)
|
||||||
|
|
||||||
|
|
||||||
|
def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryCandidate]:
|
||||||
|
"""Parse the model's JSON output into MemoryCandidate objects.
|
||||||
|
|
||||||
|
Tolerates common model glitches: surrounding whitespace, stray
|
||||||
|
markdown fences, leading/trailing prose. Silently drops malformed
|
||||||
|
array elements rather than raising.
|
||||||
|
"""
|
||||||
|
text = raw_output.strip()
|
||||||
|
if text.startswith("```"):
|
||||||
|
text = text.strip("`")
|
||||||
|
first_newline = text.find("\n")
|
||||||
|
if first_newline >= 0:
|
||||||
|
text = text[first_newline + 1 :]
|
||||||
|
if text.endswith("```"):
|
||||||
|
text = text[:-3]
|
||||||
|
text = text.strip()
|
||||||
|
|
||||||
|
if not text or text == "[]":
|
||||||
|
return []
|
||||||
|
|
||||||
|
if not text.lstrip().startswith("["):
|
||||||
|
start = text.find("[")
|
||||||
|
end = text.rfind("]")
|
||||||
|
if start >= 0 and end > start:
|
||||||
|
text = text[start : end + 1]
|
||||||
|
|
||||||
|
try:
|
||||||
|
parsed = json.loads(text)
|
||||||
|
except json.JSONDecodeError as exc:
|
||||||
|
log.error("llm_extractor_parse_failed", error=str(exc), raw_prefix=raw_output[:120])
|
||||||
|
return []
|
||||||
|
|
||||||
|
if not isinstance(parsed, list):
|
||||||
|
return []
|
||||||
|
|
||||||
|
results: list[MemoryCandidate] = []
|
||||||
|
for item in parsed:
|
||||||
|
if not isinstance(item, dict):
|
||||||
|
continue
|
||||||
|
mem_type = str(item.get("type") or "").strip().lower()
|
||||||
|
content = str(item.get("content") or "").strip()
|
||||||
|
project = str(item.get("project") or "").strip()
|
||||||
|
confidence_raw = item.get("confidence", 0.5)
|
||||||
|
if mem_type not in MEMORY_TYPES:
|
||||||
|
continue
|
||||||
|
if not content:
|
||||||
|
continue
|
||||||
|
try:
|
||||||
|
confidence = float(confidence_raw)
|
||||||
|
except (TypeError, ValueError):
|
||||||
|
confidence = 0.5
|
||||||
|
confidence = max(0.0, min(1.0, confidence))
|
||||||
|
results.append(
|
||||||
|
MemoryCandidate(
|
||||||
|
memory_type=mem_type,
|
||||||
|
content=content[:1000],
|
||||||
|
rule="llm_extraction",
|
||||||
|
source_span=content[:200],
|
||||||
|
project=project,
|
||||||
|
confidence=confidence,
|
||||||
|
source_interaction_id=interaction.id,
|
||||||
|
extractor_version=LLM_EXTRACTOR_VERSION,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
return results
|
||||||
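The salvage steps `_parse_candidates` applies before `json.loads` (strip markdown fences, then strip surrounding prose) can be exercised as a standalone sketch; this version drops the AtoCore-specific candidate validation and just returns the recovered list, or `[]` on anything unparseable:

```python
import json


def salvage_json_array(raw: str) -> list:
    """Sketch of the fence/prose salvage applied before JSON parsing."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`")            # drop fence backticks on both ends
        first_newline = text.find("\n")
        if first_newline >= 0:            # drop the "json" language-tag line
            text = text[first_newline + 1:]
        text = text.strip()
    if not text.lstrip().startswith("["):  # tolerate leading/trailing prose
        start, end = text.find("["), text.rfind("]")
        if start >= 0 and end > start:
            text = text[start:end + 1]
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return []
    return parsed if isinstance(parsed, list) else []
```

The design choice here is deliberate: a model that wraps its output in a fence or adds a "Here are the candidates:" preamble still yields usable candidates, while anything worse degrades to an empty list instead of an exception on the caller's audit path.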
```diff
@@ -413,8 +413,17 @@ def get_memories_for_context(
     if query_tokens is not None:
         pool = _rank_memories_for_query(pool, query_tokens)
+
+    # Per-entry cap prevents a single long memory from monopolizing
+    # the band. With 16 p06 memories competing for ~700 chars, an
+    # uncapped 530-char overview memory fills the entire budget before
+    # a query-relevant 150-char memory gets a slot. The cap ensures at
+    # least 2-3 entries fit regardless of individual memory length.
+    max_entry_chars = 250
     for mem in pool:
-        entry = f"[{mem.memory_type}] {mem.content}"
+        content = mem.content
+        if len(content) > max_entry_chars:
+            content = content[:max_entry_chars - 3].rstrip() + "..."
+        entry = f"[{mem.memory_type}] {content}"
         entry_len = len(entry) + 1
         if entry_len > available - used:
             continue
```
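The cap-and-pack logic in the hunk above can be sketched standalone. The function below (illustrative names, not AtoCore's actual helper) takes `(memory_type, content)` pairs and a character budget, truncates over-long entries to the cap, and skips anything that no longer fits:

```python
def pack_entries(memories: list[tuple[str, str]], available: int,
                 max_entry_chars: int = 250) -> list[str]:
    """Sketch of per-entry capping plus budget packing, as in the hunk above."""
    packed: list[str] = []
    used = 0
    for mem_type, content in memories:
        if len(content) > max_entry_chars:
            # Truncate with an ellipsis so one long memory can't eat the band.
            content = content[:max_entry_chars - 3].rstrip() + "..."
        entry = f"[{mem_type}] {content}"
        entry_len = len(entry) + 1  # +1 for the joining newline
        if entry_len > available - used:
            continue  # skip entries that don't fit; later, shorter ones may
        packed.append(entry)
        used += entry_len
    return packed
```

With a 400-char budget, a 600-char memory is capped to 250 chars and a short query-relevant memory still gets a slot afterward, which is exactly the failure the comment in the diff describes.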
tests/test_extractor_llm.py (new file, 158 lines)

```python
"""Tests for the LLM-assisted extractor path.

Focused on the parser and failure-mode contracts — the actual network
call is exercised out of band by running
``python scripts/extractor_eval.py --mode llm`` against the frozen
labeled corpus with ``ANTHROPIC_API_KEY`` set. These tests only
exercise the pieces that don't need network.
"""

from __future__ import annotations

import os
from unittest.mock import patch

import pytest

from atocore.interactions.service import Interaction
from atocore.memory.extractor_llm import (
    LLM_EXTRACTOR_VERSION,
    _parse_candidates,
    extract_candidates_llm,
    extract_candidates_llm_verbose,
)
import atocore.memory.extractor_llm as extractor_llm


def _make_interaction(prompt: str = "p", response: str = "r") -> Interaction:
    return Interaction(
        id="test-id",
        prompt=prompt,
        response=response,
        response_summary="",
        project="",
        client="test",
        session_id="",
    )


def test_parser_handles_empty_array():
    result = _parse_candidates("[]", _make_interaction())
    assert result == []


def test_parser_handles_malformed_json():
    result = _parse_candidates("{ not valid json", _make_interaction())
    assert result == []


def test_parser_strips_markdown_fences():
    raw = "```json\n[{\"type\": \"knowledge\", \"content\": \"x is y\", \"project\": \"\", \"confidence\": 0.5}]\n```"
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].memory_type == "knowledge"
    assert result[0].content == "x is y"


def test_parser_strips_surrounding_prose():
    raw = "Here are the candidates:\n[{\"type\": \"project\", \"content\": \"foo\", \"project\": \"p04\", \"confidence\": 0.6}]\nThat's it."
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].memory_type == "project"
    assert result[0].project == "p04"


def test_parser_drops_invalid_memory_types():
    raw = '[{"type": "nonsense", "content": "x"}, {"type": "project", "content": "y"}]'
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].memory_type == "project"


def test_parser_drops_empty_content():
    raw = '[{"type": "knowledge", "content": " "}, {"type": "knowledge", "content": "real"}]'
    result = _parse_candidates(raw, _make_interaction())
    assert len(result) == 1
    assert result[0].content == "real"


def test_parser_clamps_confidence_to_unit_interval():
    raw = '[{"type": "knowledge", "content": "c1", "confidence": 2.5}, {"type": "knowledge", "content": "c2", "confidence": -0.4}]'
    result = _parse_candidates(raw, _make_interaction())
    assert result[0].confidence == 1.0
    assert result[1].confidence == 0.0


def test_parser_defaults_confidence_on_missing_field():
    raw = '[{"type": "knowledge", "content": "c1"}]'
    result = _parse_candidates(raw, _make_interaction())
    assert result[0].confidence == 0.5


def test_parser_tags_version_and_rule():
    raw = '[{"type": "project", "content": "c1"}]'
    result = _parse_candidates(raw, _make_interaction())
    assert result[0].rule == "llm_extraction"
    assert result[0].extractor_version == LLM_EXTRACTOR_VERSION
    assert result[0].source_interaction_id == "test-id"


def test_missing_cli_returns_empty(monkeypatch):
    """If ``claude`` is not on PATH the extractor returns empty, never raises."""
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: False)
    result = extract_candidates_llm_verbose(_make_interaction("p", "some real response"))
    assert result.candidates == []
    assert result.error == "claude_cli_missing"


def test_empty_response_returns_empty(monkeypatch):
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
    result = extract_candidates_llm_verbose(_make_interaction("p", ""))
    assert result.candidates == []
    assert result.error == "empty_response"


def test_subprocess_timeout_returns_empty(monkeypatch):
    """A subprocess timeout must not raise into the caller."""
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)

    import subprocess as _sp

    def _boom(*a, **kw):
        raise _sp.TimeoutExpired(cmd=a[0] if a else "claude", timeout=1)

    monkeypatch.setattr(extractor_llm.subprocess, "run", _boom)
    result = extract_candidates_llm_verbose(_make_interaction("p", "real response"))
    assert result.candidates == []
    assert result.error == "timeout"


def test_subprocess_nonzero_exit_returns_empty(monkeypatch):
    """A non-zero CLI exit (auth failure, etc.) must not raise."""
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)

    class _Completed:
        returncode = 1
        stdout = ""
        stderr = "auth failed"

    monkeypatch.setattr(extractor_llm.subprocess, "run", lambda *a, **kw: _Completed())
    result = extract_candidates_llm_verbose(_make_interaction("p", "real response"))
    assert result.candidates == []
    assert result.error == "exit_1"


def test_happy_path_parses_stdout(monkeypatch):
    monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)

    class _Completed:
        returncode = 0
        stdout = '[{"type": "project", "content": "p04 selected Option B", "project": "p04-gigabit", "confidence": 0.6}]'
        stderr = ""

    monkeypatch.setattr(extractor_llm.subprocess, "run", lambda *a, **kw: _Completed())
    result = extract_candidates_llm_verbose(_make_interaction("p", "r"))
    assert len(result.candidates) == 1
    assert result.candidates[0].memory_type == "project"
    assert result.candidates[0].project == "p04-gigabit"
    assert abs(result.candidates[0].confidence - 0.6) < 1e-9
```