Compare commits
22 Commits
b3253f35ee
...
codex/open
| Author | SHA1 | Date | |
|---|---|---|---|
| dc5742b46a | |||
| 289735d51f | |||
| 2b79680167 | |||
| 39d73e91b4 | |||
| 7ddf0e38ee | |||
| b0fde3ee60 | |||
| 89c7964237 | |||
| 146f2e4a5e | |||
| 5c69f77b45 | |||
| 3921c5ffc7 | |||
| 93f796207f | |||
| b98a658831 | |||
| 06792d862e | |||
| 95daa5c040 | |||
| 3a7e8ccba4 | |||
| a29b5e22f2 | |||
| b309e7fd49 | |||
| 330ecfb6a6 | |||
| 7d8d599030 | |||
| d9dc55f841 | |||
| 81307cec47 | |||
| 59331e522d |
@@ -1,5 +1,13 @@
|
|||||||
# AGENTS.md
|
# AGENTS.md
|
||||||
|
|
||||||
|
## Session protocol (read first, every session)
|
||||||
|
|
||||||
|
**Before doing anything else, read `DEV-LEDGER.md` at the repo root.** It is the one-file source of truth for "what is currently true" — live SHA, active plan, open review findings, recent decisions. The narrative docs under `docs/` may lag; the ledger does not.
|
||||||
|
|
||||||
|
**Before ending a session, append a Session Log line to `DEV-LEDGER.md`** with what you did and which commit range it covers, and bump the Orientation section if anything there changed.
|
||||||
|
|
||||||
|
This rule applies equally to Claude, Codex, and any future agent working in this repo.
|
||||||
|
|
||||||
## Project role
|
## Project role
|
||||||
This repository is AtoCore, the runtime and machine-memory layer of the Ato ecosystem.
|
This repository is AtoCore, the runtime and machine-memory layer of the Ato ecosystem.
|
||||||
|
|
||||||
|
|||||||
30
CLAUDE.md
Normal file
30
CLAUDE.md
Normal file
@@ -0,0 +1,30 @@
|
|||||||
|
# CLAUDE.md — project instructions for AtoCore
|
||||||
|
|
||||||
|
## Session protocol
|
||||||
|
|
||||||
|
Before doing anything else in this repo, read `DEV-LEDGER.md` at the repo root. It is the shared operating memory between Claude, Codex, and the human operator — live Dalidou SHA, active plan, open P1/P2 review findings, recent decisions, and session log. The narrative docs under `docs/` sometimes lag; the ledger does not.
|
||||||
|
|
||||||
|
Before ending a session, append a Session Log line to `DEV-LEDGER.md` covering:
|
||||||
|
|
||||||
|
- which commits you produced (sha range)
|
||||||
|
- what changed at a high level
|
||||||
|
- any harness / test count deltas
|
||||||
|
- anything you overclaimed and later corrected
|
||||||
|
|
||||||
|
Bump the **Orientation** section if `live_sha`, `main_tip`, `test_count`, or `harness` changed.
|
||||||
|
|
||||||
|
`AGENTS.md` at the repo root carries the broader project principles (storage separation, deployment model, coding guidance). Read it when you need the "why" behind a constraint.
|
||||||
|
|
||||||
|
## Deploy workflow
|
||||||
|
|
||||||
|
```bash
|
||||||
|
git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/dalidou/deploy.sh"
|
||||||
|
```
|
||||||
|
|
||||||
|
The deploy script self-verifies via `/health` build_sha — if it exits non-zero, do not assume the change is live.
|
||||||
|
|
||||||
|
## Working model
|
||||||
|
|
||||||
|
- Claude builds; Codex audits. No parallel work on the same files.
|
||||||
|
- P1 review findings block further `main` commits until acknowledged in the ledger's **Open Review Findings** table.
|
||||||
|
- Codex branches must fork from `origin/main` (no orphan commits that require `--allow-unrelated-histories`).
|
||||||
191
DEV-LEDGER.md
Normal file
191
DEV-LEDGER.md
Normal file
@@ -0,0 +1,191 @@
|
|||||||
|
# AtoCore Dev Ledger
|
||||||
|
|
||||||
|
> Shared operating memory between humans, Claude, and Codex.
|
||||||
|
> **Every session MUST read this file at start and append a Session Log entry before ending.**
|
||||||
|
> Section headers are stable - do not rename them. Trim Session Log and Recent Decisions to the last 20 entries at session end; older history lives in `git log` and `docs/`.
|
||||||
|
|
||||||
|
## Orientation
|
||||||
|
|
||||||
|
- **live_sha** (Dalidou `/health` build_sha): `39d73e9`
|
||||||
|
- **last_updated**: 2026-04-12 by Claude (Wave 2 ingestion + R6 fix deployed)
|
||||||
|
- **main_tip**: `39d73e9`
|
||||||
|
- **test_count**: 280 passing
|
||||||
|
- **harness**: `16/18 PASS` (p06-firmware-interface = R7 ranking tie; p06-tailscale = chunk bleed)
|
||||||
|
- **active_memories**: 36 (p06-polisher 16, p05-interferometer 6, p04-gigabit 5, atocore 5, other 4)
|
||||||
|
- **project_state_entries**: p04=6, p05=7, p06=7 (Wave 2 added 8 new entries)
|
||||||
|
- **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron env `ATOCORE_BACKUP_RSYNC`, verified
|
||||||
|
|
||||||
|
## Active Plan
|
||||||
|
|
||||||
|
**Mini-phase**: Extractor improvement (eval-driven) + retrieval harness expansion.
|
||||||
|
**Duration**: 8 days, hard gates at each day boundary.
|
||||||
|
**Plan author**: Codex (2026-04-11). **Executor**: Claude. **Audit**: Codex.
|
||||||
|
|
||||||
|
### Preflight (before Day 1)
|
||||||
|
|
||||||
|
Stop if any of these fail:
|
||||||
|
|
||||||
|
- `git rev-parse HEAD` on `main` matches the expected branching tip
|
||||||
|
- Live `/health` on Dalidou reports the SHA you think is deployed
|
||||||
|
- `python scripts/retrieval_eval.py --json` still passes at the current baseline
|
||||||
|
- `batch-extract` over the known 42-capture slice reproduces the current low-yield baseline
|
||||||
|
- A frozen sample set exists for extractor labeling so the target does not move mid-phase
|
||||||
|
|
||||||
|
Success: baseline eval output saved, baseline extract output saved, working branch created from `origin/main`.
|
||||||
|
|
||||||
|
### Day 1 - Labeled extractor eval set
|
||||||
|
|
||||||
|
Pick 30 real captures: 10 that should produce 0 candidates, 10 that should plausibly produce 1, 10 ambiguous/hard. Store as a stable artifact (interaction id, expected count, expected type, notes). Add a runner that scores extractor output against labels.
|
||||||
|
|
||||||
|
Success: 30 labeled interactions in a stable artifact, one-command precision/recall output.
|
||||||
|
Fail-early: if labeling 30 takes more than a day because the concept is unclear, tighten the extraction target before touching code.
|
||||||
|
|
||||||
|
### Day 2 - Measure current extractor
|
||||||
|
|
||||||
|
Run the rule-based extractor on all 30. Record yield, TP, FP, FN. Bucket misses by class (conversational preference, decision summary, status/constraint, meta chatter).
|
||||||
|
|
||||||
|
Success: short scorecard with counts by miss type, top 2 miss classes obvious.
|
||||||
|
Fail-early: if the labeled set shows fewer than 5 plausible positives total, the corpus is too weak - relabel before tuning.
|
||||||
|
|
||||||
|
### Day 3 - Smallest rule expansion for top miss class
|
||||||
|
|
||||||
|
Add 1-2 narrow, explainable rules for the worst miss class. Add unit tests from real paraphrase examples in the labeled set. Then rerun eval.
|
||||||
|
|
||||||
|
Success: recall up on the labeled set, false positives do not materially rise, new tests cover the new cue class.
|
||||||
|
Fail-early: if one rule expansion raises FP above ~20% of extracted candidates, revert or narrow before adding more.
|
||||||
|
|
||||||
|
### Day 4 - Decision gate: more rules or LLM-assisted prototype
|
||||||
|
|
||||||
|
If rule expansion reaches a **meaningfully reviewable queue**, keep going with rules. Otherwise prototype an LLM-assisted extraction mode behind a flag.
|
||||||
|
|
||||||
|
"Meaningfully reviewable queue":
|
||||||
|
- >= 15-25% candidate yield on the 30 labeled captures
|
||||||
|
- FP rate low enough that manual triage feels tolerable
|
||||||
|
- >= 2 real non-synthetic candidates worth review
|
||||||
|
|
||||||
|
Hard stop: if candidate yield is still under 10% after this point, stop rule tinkering and switch to architecture review (LLM-assisted OR narrower extraction scope).
|
||||||
|
|
||||||
|
### Day 5 - Stabilize and document
|
||||||
|
|
||||||
|
Add remaining focused rules or the flagged LLM-assisted path. Write down in-scope and out-of-scope utterance kinds.
|
||||||
|
|
||||||
|
Success: labeled eval green against target threshold, extractor scope explainable in <= 5 bullets.
|
||||||
|
|
||||||
|
### Day 6 - Retrieval harness expansion (6 -> 15-20 fixtures)
|
||||||
|
|
||||||
|
Grow across p04/p05/p06. Include short ambiguous prompts, cross-project collision cases, expected project-state wins, expected project-memory wins, and 1-2 "should fail open / low confidence" cases.
|
||||||
|
|
||||||
|
Success: >= 15 fixtures, each active project has easy + medium + hard cases.
|
||||||
|
Fail-early: if fixtures are mostly obvious wins, add harder adversarial cases before claiming coverage.
|
||||||
|
|
||||||
|
### Day 7 - Regression pass and calibration
|
||||||
|
|
||||||
|
Run harness on current code vs live Dalidou. Inspect failures (ranking, ingestion gap, project bleed, budget). Make at most ONE ranking/budget tweak if the harness clearly justifies it. Do not mix harness expansion and ranking changes in a single commit unless tightly coupled.
|
||||||
|
|
||||||
|
Success: harness still passes or improves after extractor work; any ranking tweak is justified by a concrete fixture delta.
|
||||||
|
Fail-early: if > 20-25% of harness fixtures regress after extractor changes, separate concerns before merging.
|
||||||
|
|
||||||
|
### Day 8 - Merge and close
|
||||||
|
|
||||||
|
Clean commit sequence. Save before/after metrics (extractor scorecard, harness results). Update docs only with claims the metrics support.
|
||||||
|
|
||||||
|
Merge order: labeled corpus + runner -> extractor improvements + tests -> harness expansion -> any justified ranking tweak -> docs sync last.
|
||||||
|
|
||||||
|
Success: point to a before/after delta for both extraction and retrieval; docs do not overclaim.
|
||||||
|
|
||||||
|
### Hard Gates (stop/rethink points)
|
||||||
|
|
||||||
|
- Extractor yield < 10% after 30 labeled interactions -> stop, reconsider rule-only extraction
|
||||||
|
- FP rate > 20% on labeled set -> narrow rules before adding more
|
||||||
|
- Harness expansion finds < 3 genuinely hard cases -> harness still too soft
|
||||||
|
- Ranking change improves one project but regresses another -> do not merge without explicit tradeoff note
|
||||||
|
|
||||||
|
### Branching
|
||||||
|
|
||||||
|
One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-harness-expansion` for Day 6-7. Keeps extraction and retrieval judgments auditable.
|
||||||
|
|
||||||
|
## Review Protocol
|
||||||
|
|
||||||
|
- Codex records review findings in **Open Review Findings**.
|
||||||
|
- Claude must read **Open Review Findings** at session start before coding.
|
||||||
|
- Codex owns finding text. Claude may update operational fields only:
|
||||||
|
- `status`
|
||||||
|
- `owner`
|
||||||
|
- `resolved_by`
|
||||||
|
- If Claude disagrees with a finding, do not rewrite it. Mark it `declined` and explain why in the **Session Log**.
|
||||||
|
- Any commit or session that addresses a finding should reference the finding id in the commit message or **Session Log**.
|
||||||
|
- `P1` findings block further commits in the affected area until they are at least acknowledged and explicitly tracked.
|
||||||
|
- Findings may be code-level, claim-level, or ops-level. If the implementation boundary changes, retarget the finding instead of silently closing it.
|
||||||
|
|
||||||
|
## Open Review Findings
|
||||||
|
|
||||||
|
| id | finder | severity | file:line | summary | status | owner | opened_at | resolved_by |
|
||||||
|
|-----|--------|----------|------------------------------------|-------------------------------------------------------------------------|--------------|--------|------------|-------------|
|
||||||
|
| R1 | Codex | P1 | deploy/hooks/capture_stop.py:76-85 | Live Claude capture still omits `extract`, so "loop closed both sides" remains overstated in practice even though the API supports it | acknowledged | Claude | 2026-04-11 | |
|
||||||
|
| R2 | Codex | P1 | src/atocore/context/builder.py | Project memories excluded from pack | fixed | Claude | 2026-04-11 | 8ea53f4 |
|
||||||
|
| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | open | Claude | 2026-04-11 | |
|
||||||
|
| R4 | Codex | P2 | DEV-LEDGER.md:11 | Orientation `main_tip` was stale versus `HEAD` / `origin/main` | fixed | Codex | 2026-04-11 | 81307ce |
|
||||||
|
| R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | acknowledged | Claude | 2026-04-12 | |
|
||||||
|
| R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | fixed | Claude | 2026-04-12 | this commit |
|
||||||
|
| R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | open | Claude | 2026-04-12 | |
|
||||||
|
| R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | open | Claude | 2026-04-12 | |
|
||||||
|
|
||||||
|
## Recent Decisions
|
||||||
|
|
||||||
|
- **2026-04-12** Day 4 gate cleared: LLM-assisted extraction via `claude -p` (OAuth, no API key) is the path forward. Rule extractor stays as default for structural cues. *Proposed by:* Claude. *Ratified by:* Antoine.
|
||||||
|
- **2026-04-12** First live triage: 16 promoted, 35 rejected from 51 LLM-extracted candidates. 31% accept rate. Active memory count 20->36. *Executed by:* Claude. *Ratified by:* Antoine.
|
||||||
|
- **2026-04-12** No API keys allowed in AtoCore — LLM-assisted features use OAuth via `claude -p` or equivalent CLI-authenticated paths. *Proposed by:* Antoine.
|
||||||
|
- **2026-04-12** Multi-model extraction direction: extraction/triage should be model-agnostic, with Codex/Gemini/Ollama as second-pass reviewers for robustness. *Proposed by:* Antoine.
|
||||||
|
- **2026-04-11** Adopt this ledger as shared operating memory between Claude and Codex. *Proposed by:* Antoine. *Ratified by:* Antoine.
|
||||||
|
- **2026-04-11** Accept Codex's 8-day mini-phase plan verbatim as Active Plan. *Proposed by:* Codex. *Ratified by:* Antoine.
|
||||||
|
- **2026-04-11** Review findings live in `DEV-LEDGER.md` with Codex owning finding text and Claude updating status fields only. *Proposed by:* Codex. *Ratified by:* Antoine.
|
||||||
|
- **2026-04-11** Project memories land in the pack under `--- Project Memories ---` at 25% budget ratio, gated on canonical project hint. *Proposed by:* Claude.
|
||||||
|
- **2026-04-11** Extraction stays off the capture hot path. Batch / manual only. *Proposed by:* Antoine.
|
||||||
|
- **2026-04-11** 4-step roadmap: extractor -> harness expansion -> Wave 2 ingestion -> OpenClaw finish. Steps 1+2 as one mini-phase. *Ratified by:* Antoine.
|
||||||
|
- **2026-04-11** Codex branches must fork from `main`, not be orphan commits. *Proposed by:* Claude. *Agreed by:* Codex.
|
||||||
|
|
||||||
|
## Session Log
|
||||||
|
|
||||||
|
- **2026-04-23 Codex** Phase 2 policy/doc cleanup for the OpenClaw x AtoCore operating model. Normalized the 5 Phase 1 docs to clean ASCII, removed Screenpipe from V1 active scope, added `docs/openclaw-atocore-v1-proof-runbook.md`, and added a non-applied shared-client consolidation preview at `docs/openclaw-atocore-shared-client-consolidation-preview.md`. Also updated OpenClaw governance text in `/home/papa/clawd/AGENTS.md` and `/home/papa/clawd/skills/atocore-context/SKILL.md` so Discord-originated AtoCore actions are read-only by default and mutating actions require explicit current-thread/session approval. No code/runtime/schema changes, no deploy, no tests run.
|
||||||
|
|
||||||
|
- **2026-04-23 Codex** Phase 1 OpenClaw × AtoCore operating-model audit/design/doc pass only. Read AGENTS/CLAUDE/DEV-LEDGER plus requested integration docs, verified OpenClaw helper surface vs shared operator client, confirmed live fail-open read path, confirmed discrawl presence, and confirmed Screenpipe was not installed locally. Wrote 5 new docs: `docs/openclaw-atocore-audit-note.md`, `docs/openclaw-atocore-v1-architecture.md`, `docs/openclaw-atocore-write-policy-matrix.md`, `docs/openclaw-atocore-promotion-pipeline.md`, `docs/openclaw-atocore-nightly-screener-runbook.md`. No code/runtime/skill changes, no deploy, no tests run.
|
||||||
|
|
||||||
|
- **2026-04-12 Claude** Wave 2 trusted operational ingestion + codex audit response. Read 6 vault docs, created 8 new Trusted Project State entries (p04 +2, p05 +3, p06 +3). Fixed R6 (project fallback in LLM extractor) per codex audit. Fixed misscoped p06 offline memory on live Dalidou. Merged codex/audit-2026-04-12. Switched default LLM model from haiku to sonnet. Harness 15/18 -> 16/18. Tests 278 -> 280. main_tip 146f2e4 -> 39d73e9.
|
||||||
|
|
||||||
|
- **2026-04-12 Codex (audit branch `codex/audit-2026-04-12`)** audited `c5bad99..146f2e4` against code, live Dalidou, and the 36 active memories. Confirmed: `claude -p` invocation is not shell-injection-prone (`subprocess.run(args)` with no shell), off-host backup wiring matches the ledger, and R1 remains unresolved in practice. Added R5-R8. Corrected Orientation `main_tip` (`146f2e4`, not `5c69f77`) and tightened the harness note: p06-firmware-interface is a ranking-tie issue, p06-offline-design comes from a project-scope miss in live triage, and p06-tailscale is retrieved-chunk bleed rather than memory-band budget contention.
|
||||||
|
- **2026-04-12 Claude** `06792d8..5c69f77` Day 5-8 close. Documented extractor scope (5 in-scope, 6 out-of-scope categories). Expanded harness from 6 to 18 fixtures (p04 +1, p05 +1, p06 +7, adversarial +2). Per-entry memory cap at 250 chars fixed 1 of 4 budget-contention failures. Final harness: 15/18 PASS. Mini-phase complete. Before/after: rule extractor 0% recall -> LLM 100%; harness 6/6 -> 15/18; active memories 20 -> 36.
|
||||||
|
- **2026-04-12 Claude** `330ecfb..06792d8` (merged eval-loop branch + triage). Day 1-4 of the mini-phase completed in one session. Day 2 baseline: rule extractor 0% recall, 5 distinct miss classes. Day 4 gate cleared: LLM extractor (claude -p haiku, OAuth) hit 100% recall, 2.55 yield/interaction. Refactored from anthropic SDK to subprocess after "no API key" rule. First live triage: 51 candidates -> 16 promoted, 35 rejected. Active memories 20->36. p06-polisher went from 2 to 16 memories (firmware/telemetry architecture set). POST /memory now accepts status field. Test count 264->278.
|
||||||
|
- **2026-04-11 Claude** `claude/extractor-eval-loop @ 7d8d599` — Day 1+2 of the mini-phase. Froze a 64-interaction snapshot (`scripts/eval_data/interactions_snapshot_2026-04-11.json`) and labeled 20 by length-stratified random sample (5 positive, 15 zero; 7 total expected candidates). Built `scripts/extractor_eval.py` as a file-based eval runner. **Day 2 baseline: rule extractor hit 0% yield / 0% recall / 0% precision on the labeled set; 5 false negatives across 5 distinct miss classes (recommendation_prose, architectural_change_summary, spec_update_announcement, layered_recommendation, alignment_assertion).** This is the Day 4 hard-stop signal arriving two days early — a single rule expansion cannot close a 5-way miss, and widening rules blindly will collapse precision. The Day 4 decision gate is escalated to Antoine for ratification before Day 3 touches any extractor code. No extractor code on main has changed.
|
||||||
|
- **2026-04-11 Codex (ledger audit)** fixed stale `main_tip`, retargeted R1 from the API surface to the live Claude Stop hook, and formalized the review write protocol so Claude can consume findings without rewriting them.
|
||||||
|
- **2026-04-11 Claude** `b3253f3..59331e5` (1 commit). Wired the DEV-LEDGER, added session protocol to AGENTS.md, created project-local CLAUDE.md, deleted stale `codex/port-atocore-ops-client` remote branch. No code changes, no redeploy needed.
|
||||||
|
- **2026-04-11 Claude** `c5bad99..b3253f3` (11 commits + 1 merge). Length-aware reinforcement, project memories in pack, query-relevance memory ranking, hyphenated-identifier tokenizer, retrieval eval harness seeded, off-host backup wired end-to-end, docs synced, codex integration-pass branch merged. Harness went 0->6/6 on live Dalidou.
|
||||||
|
- **2026-04-11 Codex (async review)** identified 2 P1s against a stale checkout. R1 was fair (extraction not automated), R2 was outdated (project memories already landed on main). Delivered the 8-day execution plan now in Active Plan.
|
||||||
|
- **2026-04-06 Antoine** created `codex/atocore-integration-pass` with the `t420-openclaw/` workspace (merged 2026-04-11).
|
||||||
|
|
||||||
|
## Working Rules
|
||||||
|
|
||||||
|
- Claude builds; Codex audits. No parallel work on the same files.
|
||||||
|
- Codex branches fork from `main`: `git fetch origin && git checkout -b codex/<topic> origin/main`.
|
||||||
|
- P1 findings block further main commits until acknowledged in Open Review Findings.
|
||||||
|
- Every session appends at least one Session Log line and bumps Orientation.
|
||||||
|
- Trim Session Log and Recent Decisions to the last 20 at session end.
|
||||||
|
- Docs in `docs/` may overclaim stale status; the ledger is the one-file source of truth for "what is true right now."
|
||||||
|
|
||||||
|
## Quick Commands
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Check live state
|
||||||
|
ssh papa@dalidou "curl -s http://localhost:8100/health"
|
||||||
|
|
||||||
|
# Run the retrieval harness
|
||||||
|
python scripts/retrieval_eval.py # human-readable
|
||||||
|
python scripts/retrieval_eval.py --json # machine-readable
|
||||||
|
|
||||||
|
# Deploy a new main tip
|
||||||
|
git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/dalidou/deploy.sh"
|
||||||
|
|
||||||
|
# Reflection-loop ops
|
||||||
|
python scripts/atocore_client.py batch-extract '' '' 200 false # preview
|
||||||
|
python scripts/atocore_client.py batch-extract '' '' 200 true # persist
|
||||||
|
python scripts/atocore_client.py triage
|
||||||
|
```
|
||||||
@@ -226,14 +226,53 @@ candidate was a synthetic test capture from earlier in the session
|
|||||||
- Capture → reinforce is working correctly on live data (length-aware
|
- Capture → reinforce is working correctly on live data (length-aware
|
||||||
matcher verified on live paraphrase of a p04 memory).
|
matcher verified on live paraphrase of a p04 memory).
|
||||||
|
|
||||||
Follow-up candidates (not yet scheduled):
|
Follow-up candidates:
|
||||||
|
|
||||||
1. Extractor rule expansion — add conversational-form rules so real
|
1. ~~Extractor rule expansion~~ — Day 2 baseline showed 0% recall
|
||||||
session text has a chance of surfacing candidates.
|
across 5 distinct miss classes; rule expansion cannot close a
|
||||||
2. LLM-assisted extractor as a separate rule family, guarded by
|
5-way miss. Deprioritized.
|
||||||
confidence and always landing in `status=candidate` (never active).
|
2. ~~LLM-assisted extractor~~ — DONE 2026-04-12. `extractor_llm.py`
|
||||||
3. Retrieval eval harness — diffable scorecard of
|
shells out to `claude -p` (Haiku, OAuth, no API key). First live
|
||||||
`formatted_context` across a fixed question set per active project.
|
run: 100% recall, 2.55 yield/interaction on a 20-interaction
|
||||||
|
labeled set. First triage: 51 candidates → 16 promoted, 35
|
||||||
|
rejected (31% accept rate). Active memories 20 → 36.
|
||||||
|
3. ~~Retrieval eval harness~~ — DONE 2026-04-11 (scripts/retrieval_eval.py,
|
||||||
|
6/6 passing). Expansion to 15-20 fixtures is mini-phase Day 6.
|
||||||
|
|
||||||
|
## Extractor Scope — 2026-04-12
|
||||||
|
|
||||||
|
What the LLM-assisted extractor (`src/atocore/memory/extractor_llm.py`)
|
||||||
|
extracts from conversational Claude Code captures:
|
||||||
|
|
||||||
|
**In scope:**
|
||||||
|
|
||||||
|
- Architectural commitments (e.g. "Z-axis is engage/retract, not
|
||||||
|
continuous position")
|
||||||
|
- Ratified decisions with project scope (e.g. "USB SSD mandatory on
|
||||||
|
RPi for telemetry storage")
|
||||||
|
- Durable engineering facts (e.g. "telemetry data rate ~29 MB/hour")
|
||||||
|
- Working rules and adaptation patterns (e.g. "extraction stays off
|
||||||
|
the capture hot path")
|
||||||
|
- Interface invariants (e.g. "controller-job.v1 in, run-log.v1 out;
|
||||||
|
no firmware change needed")
|
||||||
|
|
||||||
|
**Out of scope (intentionally rejected by triage):**
|
||||||
|
|
||||||
|
- Transient roadmap / plan steps that will be stale in a week
|
||||||
|
- Operational instructions ("run this command to deploy")
|
||||||
|
- Process rules that live in DEV-LEDGER.md / AGENTS.md, not in memory
|
||||||
|
- Implementation details that are too granular (individual field names
|
||||||
|
when the parent concept is already captured)
|
||||||
|
- Already-fixed review findings (P1/P2 that no longer apply)
|
||||||
|
- Duplicates of existing active memories with wrong project tags
|
||||||
|
|
||||||
|
**Trust model:**
|
||||||
|
|
||||||
|
- Extraction stays off the capture hot path (batch / manual only)
|
||||||
|
- All candidates land as `status=candidate`, never auto-promoted
|
||||||
|
- Human or auto-triage reviews before promotion to active
|
||||||
|
- Future direction: multi-model extraction + triage (Codex/Gemini as
|
||||||
|
second-pass reviewers for robustness against single-model bias)
|
||||||
|
|
||||||
## Long-Run Goal
|
## Long-Run Goal
|
||||||
|
|
||||||
|
|||||||
317
docs/openclaw-atocore-audit-note.md
Normal file
317
docs/openclaw-atocore-audit-note.md
Normal file
@@ -0,0 +1,317 @@
|
|||||||
|
# OpenClaw x AtoCore V1 Audit Note
|
||||||
|
|
||||||
|
## Scope
|
||||||
|
|
||||||
|
This note is the Phase 1 audit for a safe OpenClaw x AtoCore operating model.
|
||||||
|
It covers only what was directly verified in `/home/papa/ATOCore` and `/home/papa/clawd` on 2026-04-23, plus explicit assumptions called out as assumptions.
|
||||||
|
|
||||||
|
This phase does not change code, runtime behavior, skills, helpers, or automation.
|
||||||
|
|
||||||
|
## Files requested and verified
|
||||||
|
|
||||||
|
The following requested AtoCore files were present and reviewed:
|
||||||
|
|
||||||
|
- `docs/openclaw-integration-contract.md`
|
||||||
|
- `docs/architecture/llm-client-integration.md`
|
||||||
|
- `docs/architecture/representation-authority.md`
|
||||||
|
- `docs/operating-model.md`
|
||||||
|
- `docs/current-state.md`
|
||||||
|
- `docs/master-plan-status.md`
|
||||||
|
- `docs/operations.md`
|
||||||
|
- `AGENTS.md`
|
||||||
|
- `CLAUDE.md`
|
||||||
|
- `DEV-LEDGER.md`
|
||||||
|
|
||||||
|
No requested files were missing.
|
||||||
|
|
||||||
|
## What was directly verified
|
||||||
|
|
||||||
|
### 1. OpenClaw instruction surface
|
||||||
|
|
||||||
|
In `/home/papa/clawd/AGENTS.md`, OpenClaw is currently instructed to:
|
||||||
|
|
||||||
|
- use the `atocore-context` skill for project-dependent work
|
||||||
|
- treat AtoCore as additive and fail-open
|
||||||
|
- prefer `auto-context` for project knowledge questions
|
||||||
|
- prefer `project-state` for trusted current truth
|
||||||
|
- use `refresh-project` if the human explicitly asked to refresh or ingest project changes
|
||||||
|
- use `discrawl` automatically when Antoine asks about prior Discord discussions
|
||||||
|
|
||||||
|
This is already close to the intended additive read path, but it also exposes mutating project operations in a general operator workflow.
|
||||||
|
|
||||||
|
### 2. OpenClaw helper skill surface
|
||||||
|
|
||||||
|
The current helper skill is:
|
||||||
|
|
||||||
|
- `/home/papa/clawd/skills/atocore-context/SKILL.md`
|
||||||
|
- `/home/papa/clawd/skills/atocore-context/scripts/atocore.sh`
|
||||||
|
|
||||||
|
The skill describes AtoCore as a read-only additive context service, but the helper script currently exposes the following commands:
|
||||||
|
|
||||||
|
- `health`
|
||||||
|
- `sources`
|
||||||
|
- `stats`
|
||||||
|
- `projects`
|
||||||
|
- `project-template`
|
||||||
|
- `detect-project`
|
||||||
|
- `auto-context`
|
||||||
|
- `debug-context`
|
||||||
|
- `propose-project`
|
||||||
|
- `register-project`
|
||||||
|
- `update-project`
|
||||||
|
- `refresh-project`
|
||||||
|
- `project-state`
|
||||||
|
- `query`
|
||||||
|
- `context-build`
|
||||||
|
- `ingest-sources`
|
||||||
|
|
||||||
|
That means the helper is not actually read-only. It can drive registry mutation and ingestion-related operations.
|
||||||
|
|
||||||
|
### 3. AtoCore shared operator client surface
|
||||||
|
|
||||||
|
The shared operator client in `/home/papa/ATOCore/scripts/atocore_client.py` exposes a broader surface than the OpenClaw helper, including:
|
||||||
|
|
||||||
|
- all of the project and context operations above
|
||||||
|
- `project-state-set`
|
||||||
|
- `project-state-invalidate`
|
||||||
|
- `capture`
|
||||||
|
- `extract`
|
||||||
|
- `reinforce-interaction`
|
||||||
|
- `list-interactions`
|
||||||
|
- `get-interaction`
|
||||||
|
- `queue`
|
||||||
|
- `promote`
|
||||||
|
- `reject`
|
||||||
|
- `batch-extract`
|
||||||
|
- `triage`
|
||||||
|
|
||||||
|
This matches the architectural intent in `docs/architecture/llm-client-integration.md`: a shared operator client should be the canonical reusable surface for multiple frontends.
|
||||||
|
|
||||||
|
### 4. Actual layering status today
|
||||||
|
|
||||||
|
The intended layering is documented in `docs/architecture/llm-client-integration.md` as:
|
||||||
|
|
||||||
|
- AtoCore HTTP API
|
||||||
|
- shared operator client
|
||||||
|
- thin per-agent frontends
|
||||||
|
|
||||||
|
But the current OpenClaw helper is still its own Bash implementation. It does not shell out to the shared operator client today.
|
||||||
|
|
||||||
|
So the shared-client pattern is documented, but not yet applied to OpenClaw.
|
||||||
|
|
||||||
|
### 5. AtoCore availability and fail-open behavior
|
||||||
|
|
||||||
|
The OpenClaw helper successfully reached the live AtoCore instance during this audit.
|
||||||
|
|
||||||
|
Verified live behavior:
|
||||||
|
|
||||||
|
- `health` worked
|
||||||
|
- `projects` worked
|
||||||
|
- the helper still has fail-open logic when network access fails
|
||||||
|
|
||||||
|
This part is consistent with the stated additive and fail-open stance.
|
||||||
|
|
||||||
|
### 6. Discrawl availability
|
||||||
|
|
||||||
|
The `discrawl` CLI is installed locally and available.
|
||||||
|
|
||||||
|
Verified during audit:
|
||||||
|
|
||||||
|
- binary present
|
||||||
|
- version `0.3.0`
|
||||||
|
- OpenClaw workspace instructions explicitly route project-history recall through `discrawl`
|
||||||
|
|
||||||
|
This supports the desired framing of Discord and Discrawl as an evidence stream.
|
||||||
|
|
||||||
|
### 7. Screenpipe status
|
||||||
|
|
||||||
|
`screenpipe` was not present as a local command in this environment during the audit.
|
||||||
|
|
||||||
|
For V1, Screenpipe is deferred and out of scope. No active Screenpipe input lane was verified or adopted in the final V1 policy.
|
||||||
|
|
||||||
|
## Current implementation shape
|
||||||
|
|
||||||
|
### What OpenClaw can do safely right now
|
||||||
|
|
||||||
|
The current safe, directly verified OpenClaw -> AtoCore path is:
|
||||||
|
|
||||||
|
- project detection
|
||||||
|
- context build
|
||||||
|
- query and retrieval
|
||||||
|
- project-state read
|
||||||
|
- service inspection
|
||||||
|
- fail-open fallback
|
||||||
|
|
||||||
|
That is the mature part of the integration.
|
||||||
|
|
||||||
|
### What OpenClaw can also do today, but should be treated as controlled operator actions
|
||||||
|
|
||||||
|
The current helper also exposes:
|
||||||
|
|
||||||
|
- project proposal preview
|
||||||
|
- project registration
|
||||||
|
- project update
|
||||||
|
- project refresh
|
||||||
|
- ingest-sources
|
||||||
|
|
||||||
|
These should not be treated as background or conversational automation. They are operator actions and need explicit approval policy.
|
||||||
|
|
||||||
|
### What exists in AtoCore but is not exposed through the OpenClaw helper
|
||||||
|
|
||||||
|
The shared operator client already supports:
|
||||||
|
|
||||||
|
- interaction capture
|
||||||
|
- candidate extraction
|
||||||
|
- queue review
|
||||||
|
- promote or reject
|
||||||
|
- trusted project-state write and invalidate
|
||||||
|
|
||||||
|
The current OpenClaw helper does not expose that surface.
|
||||||
|
|
||||||
|
This is important for V1 design: the write-capable lanes already exist in AtoCore, but they are not yet safely shaped for Discord-originated automation.
|
||||||
|
|
||||||
|
## Conflicts with the target V1 stance
|
||||||
|
|
||||||
|
The following conflicts are real and should be named explicitly.
|
||||||
|
|
||||||
|
### Conflict 1 - the OpenClaw helper is described as read-only, but it is not read-only
|
||||||
|
|
||||||
|
`SKILL.md` frames the integration as read-only additive context.
|
||||||
|
`atocore.sh` exposes mutating operations:
|
||||||
|
|
||||||
|
- `register-project`
|
||||||
|
- `update-project`
|
||||||
|
- `refresh-project`
|
||||||
|
- `ingest-sources`
|
||||||
|
|
||||||
|
That mismatch needs a policy fix in Phase 2. For Phase 1 it must be documented as a conflict.
|
||||||
|
|
||||||
|
### Conflict 2 - OpenClaw duplicates client logic instead of using the shared operator client
|
||||||
|
|
||||||
|
The architecture docs prefer a shared operator client reused across frontends.
|
||||||
|
The OpenClaw helper currently reimplements request logic and project detection in Bash.
|
||||||
|
|
||||||
|
That is a direct conflict with the preferred shared-client pattern.
|
||||||
|
|
||||||
|
### Conflict 3 - mutating project operations are too close to the conversational surface
|
||||||
|
|
||||||
|
The helper makes registry and ingestion operations reachable from the OpenClaw side without a dedicated Discord-specific approval gate.
|
||||||
|
|
||||||
|
Even if the human explicitly asks for a refresh, the current shape does not yet distinguish between:
|
||||||
|
|
||||||
|
- a direct trusted operator action in a controlled session
|
||||||
|
- a Discord-originated conversational path that should require an explicit human approval step before mutation
|
||||||
|
|
||||||
|
The Phase 2 V1 policy needs that distinction.
|
||||||
|
|
||||||
|
### Conflict 4 - current docs overstate or blur write capabilities
|
||||||
|
|
||||||
|
`docs/current-state.md` says OpenClaw can seed AtoCore through project-scoped memory entries and staged document ingestion.
|
||||||
|
That was not directly verified through the current OpenClaw helper surface in `/home/papa/clawd`.
|
||||||
|
|
||||||
|
The helper script does not expose:
|
||||||
|
|
||||||
|
- `capture`
|
||||||
|
- `extract`
|
||||||
|
- `promote`
|
||||||
|
- `reject`
|
||||||
|
- `project-state-set`
|
||||||
|
|
||||||
|
So there is at least a documentation and runtime-surface mismatch.
|
||||||
|
|
||||||
|
### Conflict 5 - there was no single OpenClaw-facing evidence lane description before this doc set
|
||||||
|
|
||||||
|
The target architecture needs a clean distinction between:
|
||||||
|
|
||||||
|
- raw evidence
|
||||||
|
- reviewable candidates
|
||||||
|
- active memories and entities
|
||||||
|
- trusted project_state
|
||||||
|
|
||||||
|
Today that distinction exists conceptually across several AtoCore docs, but before this Phase 1 doc set there was no single OpenClaw-facing operating model that told an operator exactly where Discord and Discrawl signals are allowed to land.
|
||||||
|
|
||||||
|
That is the main gap this doc set closes.
|
||||||
|
|
||||||
|
## What is already aligned with the target V1 stance
|
||||||
|
|
||||||
|
Several important pieces are already aligned.
|
||||||
|
|
||||||
|
### Aligned 1 - additive plus fail-open
|
||||||
|
|
||||||
|
Both AtoCore and OpenClaw docs consistently say AtoCore should be additive and fail-open from the OpenClaw side.
|
||||||
|
That is the right baseline and was verified live.
|
||||||
|
|
||||||
|
### Aligned 2 - project_state is already treated as special and curated
|
||||||
|
|
||||||
|
AtoCore architecture docs already treat `project_state` as the highest-trust curated layer.
|
||||||
|
This supports the rule that raw signals must not directly auto-write trusted project state.
|
||||||
|
|
||||||
|
### Aligned 3 - canonical-home thinking already exists
|
||||||
|
|
||||||
|
`docs/architecture/representation-authority.md` already establishes that each fact type needs one canonical home.
|
||||||
|
That is exactly the right foundation for the Discord and Discrawl design.
|
||||||
|
|
||||||
|
### Aligned 4 - reflection and candidate lifecycle already exists in AtoCore
|
||||||
|
|
||||||
|
The shared operator client and AtoCore docs already have a candidate workflow:
|
||||||
|
|
||||||
|
- capture
|
||||||
|
- extract
|
||||||
|
- queue
|
||||||
|
- promote or reject
|
||||||
|
|
||||||
|
That means V1 does not need to invent a new trust model. It needs to apply the existing one correctly to Discord and Discrawl signals.
|
||||||
|
|
||||||
|
## Recommended V1 operating interpretation
|
||||||
|
|
||||||
|
Until implementation work begins, the safest V1 operating interpretation is:
|
||||||
|
|
||||||
|
1. Discord and Discrawl are evidence sources, not truth sources.
|
||||||
|
2. OpenClaw is the orchestrator and operator, not canonical storage.
|
||||||
|
3. AtoCore memories may hold reviewed episodic, personal, and loose project signal.
|
||||||
|
4. Future AtoCore entities should hold reviewed structured decisions, requirements, and constraints.
|
||||||
|
5. `project_state` remains manual or tightly gated only.
|
||||||
|
6. Registry mutation, refresh, ingestion, and candidate promotion or rejection require explicit human approval on Discord-originated paths.
|
||||||
|
7. The shared operator client should become the only write-capable operator surface reused by OpenClaw and other frontends.
|
||||||
|
8. Screenpipe remains deferred and out of V1 scope.
|
||||||
|
|
||||||
|
## Assumption log
|
||||||
|
|
||||||
|
The following points were not directly verified and must stay labeled as assumptions.
|
||||||
|
|
||||||
|
1. Screenpipe integration shape is unverified and deferred.
|
||||||
|
- The `screenpipe` command was not present locally.
|
||||||
|
- No verified Screenpipe pipeline files were found in the inspected workspaces.
|
||||||
|
- V1 therefore excludes Screenpipe from active policy and runtime scope.
|
||||||
|
|
||||||
|
2. No direct Discord -> AtoCore auto-mutation path was verified in code.
|
||||||
|
- The OpenClaw workspace clearly contains read and query context behavior and a Discrawl retrieval rule.
|
||||||
|
- It does not clearly expose a verified Discord-triggered path that auto-calls `project-state-set`, `promote`, `reject`, or `register-project`.
|
||||||
|
- The risk is therefore policy and proximity of commands, not a proven live mutation bug.
|
||||||
|
|
||||||
|
3. OpenClaw runtime use of the shared operator client was not verified because it is not implemented yet.
|
||||||
|
- The shared client exists in the AtoCore repo.
|
||||||
|
- The OpenClaw helper is still its own Bash implementation.
|
||||||
|
|
||||||
|
4. A dedicated evidence store was not verified as a first-class AtoCore schema layer.
|
||||||
|
- Existing AtoCore surfaces clearly support interactions and candidate memories.
|
||||||
|
- This V1 model therefore uses evidence artifacts, interactions, and archive bundles as an architectural lane, without claiming a new implemented table already exists.
|
||||||
|
|
||||||
|
5. Future entities remain future.
|
||||||
|
- The entity layer is architected in AtoCore docs.
|
||||||
|
- This audit did not verify a production entity promotion flow being used by OpenClaw.
|
||||||
|
|
||||||
|
## Bottom line
|
||||||
|
|
||||||
|
The good news is that the trust foundations already exist.
|
||||||
|
|
||||||
|
The main conclusion is that the current system is closest to a safe V1 when interpreted this way:
|
||||||
|
|
||||||
|
- keep AtoCore additive and fail-open
|
||||||
|
- treat Discord and Discrawl as evidence only
|
||||||
|
- route reviewed signal into memory candidates first
|
||||||
|
- reserve `project_state` for explicit curation only
|
||||||
|
- move OpenClaw toward the shared operator client instead of maintaining a separate write-capable helper surface
|
||||||
|
- keep Screenpipe out of V1
|
||||||
|
|
||||||
|
That gives a coherent path to Phase 2 without pretending the current implementation is already there.
|
||||||
224
docs/openclaw-atocore-clawd-governance-review.patch
Normal file
224
docs/openclaw-atocore-clawd-governance-review.patch
Normal file
@@ -0,0 +1,224 @@
|
|||||||
|
commit 80bd99aaea1bcab2ea5ea732df2f749e84d84318
|
||||||
|
Author: Anto01 <antoine.letarte@gmail.com>
|
||||||
|
Date: Thu Apr 23 15:59:59 2026 +0000
|
||||||
|
|
||||||
|
Tighten OpenClaw AtoCore governance policy
|
||||||
|
|
||||||
|
diff --git a/AGENTS.md b/AGENTS.md
|
||||||
|
index 1da3385..ea4d103 100644
|
||||||
|
--- a/AGENTS.md
|
||||||
|
+++ b/AGENTS.md
|
||||||
|
@@ -105,7 +105,7 @@ Reactions are lightweight social signals. Humans use them constantly — they sa
|
||||||
|
|
||||||
|
## Tools
|
||||||
|
|
||||||
|
-When a task is contextual and project-dependent, use the `atocore-context` skill to query Dalidou-hosted AtoCore for trusted project state, retrieval, context-building, registered project refresh, or project registration discovery when that will improve accuracy. Treat AtoCore as additive and fail-open; do not replace OpenClaw's own memory with it. Prefer `projects` and `refresh-project <id>` when a known project needs a clean source refresh, and use `project-template` when proposing a new project registration, and `propose-project ...` when you want a normalized preview before editing the registry manually.
|
||||||
|
+When a task is contextual and project-dependent, use the `atocore-context` skill to query Dalidou-hosted AtoCore for trusted project-state reads, retrieval, and context-building when that will improve accuracy. Treat AtoCore as additive and fail-open; do not replace OpenClaw's own memory with it.
|
||||||
|
|
||||||
|
### Organic AtoCore Routing
|
||||||
|
|
||||||
|
@@ -116,14 +116,60 @@ Use AtoCore first when the prompt:
|
||||||
|
- asks about architecture, constraints, status, requirements, vendors, planning, prior decisions, or current project truth
|
||||||
|
- would benefit from cross-source context instead of only the local repo
|
||||||
|
|
||||||
|
-Preferred flow:
|
||||||
|
+Preferred read path:
|
||||||
|
1. `auto-context "<prompt>" 3000` for most project knowledge questions
|
||||||
|
2. `project-state <project>` when the user is clearly asking for trusted current truth
|
||||||
|
-3. `refresh-project <id>` before answering if the user explicitly asked to refresh or ingest project changes
|
||||||
|
+3. fall back to normal OpenClaw tools and memory if AtoCore returns `no_project_match` or is unavailable
|
||||||
|
|
||||||
|
Do not force AtoCore for purely local coding actions like fixing a function, editing one file, or running tests, unless broader project context is likely to matter.
|
||||||
|
|
||||||
|
-If `auto-context` returns `no_project_match` or AtoCore is unavailable, continue normally with OpenClaw's own tools and memory.
|
||||||
|
+### AtoCore Governance
|
||||||
|
+
|
||||||
|
+Default Discord posture for AtoCore is read-only and additive.
|
||||||
|
+
|
||||||
|
+Discord-originated or Discrawl-originated context may inform:
|
||||||
|
+- evidence collection
|
||||||
|
+- retrieval
|
||||||
|
+- context building
|
||||||
|
+- candidate review preparation
|
||||||
|
+
|
||||||
|
+It must not directly perform AtoCore mutating actions.
|
||||||
|
+
|
||||||
|
+Mutating AtoCore actions include:
|
||||||
|
+- `register-project`
|
||||||
|
+- `update-project`
|
||||||
|
+- `refresh-project`
|
||||||
|
+- `ingest-sources`
|
||||||
|
+- `project-state-set`
|
||||||
|
+- `project-state-invalidate`
|
||||||
|
+- `promote`
|
||||||
|
+- `reject`
|
||||||
|
+- any future trusted-state or review mutation
|
||||||
|
+
|
||||||
|
+These actions require explicit human approval for the specific action in the current thread or session.
|
||||||
|
+Do not infer approval from:
|
||||||
|
+- prior Discord discussion
|
||||||
|
+- Discrawl archive recall
|
||||||
|
+- screener output
|
||||||
|
+- vague intent like "we should probably refresh this"
|
||||||
|
+
|
||||||
|
+Hard rules:
|
||||||
|
+- no direct Discord -> `project_state`
|
||||||
|
+- no direct Discord -> register / update / refresh / ingest / promote / reject
|
||||||
|
+- no hidden mutation inside screening or review-prep flows
|
||||||
|
+- PKM notes are not the main operator instruction surface for AtoCore behavior
|
||||||
|
+
|
||||||
|
+### Discord Archive Retrieval (discrawl)
|
||||||
|
+
|
||||||
|
+When Antoine asks in natural language about prior project discussions, decisions, thread history, answers, or whether something was already discussed in Discord, use the local `discrawl` archive automatically.
|
||||||
|
+
|
||||||
|
+Rules:
|
||||||
|
+- Antoine should not need to remember or type `discrawl` commands.
|
||||||
|
+- Treat Discord history as a normal background retrieval source, like memory or project docs.
|
||||||
|
+- Use `discrawl` silently when it will materially improve recall or confidence.
|
||||||
|
+- Prefer this for prompts like "what did we decide", "did we discuss", "summarize the thread", "what were the open questions", or anything clearly anchored in prior Discord conversation.
|
||||||
|
+- If both AtoCore and Discord history are relevant, use both and synthesize.
|
||||||
|
+- If `discrawl` is stale or unavailable, say so briefly and continue with the best available context.
|
||||||
|
|
||||||
|
Skills provide your tools. When you need one, check its `SKILL.md`. Keep local notes (camera names, SSH details, voice preferences) in `TOOLS.md`.
|
||||||
|
|
||||||
|
diff --git a/skills/atocore-context/SKILL.md b/skills/atocore-context/SKILL.md
|
||||||
|
index e42a7b7..fa23207 100644
|
||||||
|
--- a/skills/atocore-context/SKILL.md
|
||||||
|
+++ b/skills/atocore-context/SKILL.md
|
||||||
|
@@ -1,12 +1,11 @@
|
||||||
|
---
|
||||||
|
name: atocore-context
|
||||||
|
-description: Use Dalidou-hosted AtoCore as a read-only external context service for project state, retrieval, and context-building without touching OpenClaw's own memory.
|
||||||
|
+description: Use Dalidou-hosted AtoCore as an additive external context service for project-state reads, retrieval, and context-building without replacing OpenClaw's own memory.
|
||||||
|
---
|
||||||
|
|
||||||
|
# AtoCore Context
|
||||||
|
|
||||||
|
-Use this skill when you need trusted project context, retrieval help, or AtoCore
|
||||||
|
-health/status from the canonical Dalidou instance.
|
||||||
|
+Use this skill when you need trusted project context, retrieval help, or AtoCore health and status from the canonical Dalidou instance.
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
@@ -14,7 +13,7 @@ AtoCore is an additive external context service.
|
||||||
|
|
||||||
|
- It does not replace OpenClaw's own memory.
|
||||||
|
- It should be used for contextual work, not trivial prompts.
|
||||||
|
-- It is read-only in this first integration batch.
|
||||||
|
+- The default posture is read-only and fail-open.
|
||||||
|
- If AtoCore is unavailable, continue normally.
|
||||||
|
|
||||||
|
## Canonical Endpoint
|
||||||
|
@@ -31,27 +30,22 @@ Override with:
|
||||||
|
ATOCORE_BASE_URL=http://host:port
|
||||||
|
```
|
||||||
|
|
||||||
|
-## Safe Usage
|
||||||
|
+## V1 scope
|
||||||
|
|
||||||
|
-Use AtoCore for:
|
||||||
|
-- project-state checks
|
||||||
|
+Use this skill in V1 for:
|
||||||
|
+
|
||||||
|
+- project-state reads
|
||||||
|
- automatic project detection for normal project questions
|
||||||
|
-- retrieval over ingested project/ecosystem docs
|
||||||
|
+- retrieval over ingested project and ecosystem docs
|
||||||
|
- context-building for complex project prompts
|
||||||
|
- verifying current AtoCore hosting and architecture state
|
||||||
|
-- listing registered projects and refreshing a known project source set
|
||||||
|
-- inspecting the project registration template before proposing a new project entry
|
||||||
|
-- generating a proposal preview for a new project registration without writing it
|
||||||
|
-- registering an approved project entry when explicitly requested
|
||||||
|
-- updating an existing registered project when aliases or description need refinement
|
||||||
|
+- inspecting project registrations and proposal previews when operator review is needed
|
||||||
|
|
||||||
|
-Do not use AtoCore for:
|
||||||
|
-- automatic memory write-back
|
||||||
|
-- replacing OpenClaw memory
|
||||||
|
-- silent ingestion of broad new corpora without approval
|
||||||
|
-- mutating the registry automatically without human approval
|
||||||
|
+Screenpipe is out of V1 scope. Do not treat it as an active input lane or dependency for this skill.
|
||||||
|
+
|
||||||
|
+## Read path commands
|
||||||
|
|
||||||
|
-## Commands
|
||||||
|
+These are the normal additive commands:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
~/clawd/skills/atocore-context/scripts/atocore.sh health
|
||||||
|
@@ -62,15 +56,56 @@ Do not use AtoCore for:
|
||||||
|
~/clawd/skills/atocore-context/scripts/atocore.sh detect-project "what's the interferometer error budget?"
|
||||||
|
~/clawd/skills/atocore-context/scripts/atocore.sh auto-context "what's the interferometer error budget?" 3000
|
||||||
|
~/clawd/skills/atocore-context/scripts/atocore.sh debug-context
|
||||||
|
-~/clawd/skills/atocore-context/scripts/atocore.sh propose-project p07-example "p07,example-project" vault incoming/projects/p07-example "Example project" "Primary staged project docs"
|
||||||
|
-~/clawd/skills/atocore-context/scripts/atocore.sh register-project p07-example "p07,example-project" vault incoming/projects/p07-example "Example project" "Primary staged project docs"
|
||||||
|
-~/clawd/skills/atocore-context/scripts/atocore.sh update-project p05 "Curated staged docs for the P05 interferometer architecture, vendors, and error-budget project."
|
||||||
|
-~/clawd/skills/atocore-context/scripts/atocore.sh refresh-project p05
|
||||||
|
~/clawd/skills/atocore-context/scripts/atocore.sh project-state atocore
|
||||||
|
~/clawd/skills/atocore-context/scripts/atocore.sh query "What is AtoDrive?"
|
||||||
|
~/clawd/skills/atocore-context/scripts/atocore.sh context-build "Need current AtoCore architecture" atocore 3000
|
||||||
|
```
|
||||||
|
|
||||||
|
+## Approved operator actions only
|
||||||
|
+
|
||||||
|
+The helper currently exposes some mutating commands, but they are not normal background behavior.
|
||||||
|
+Treat them as approved operator actions only:
|
||||||
|
+
|
||||||
|
+```bash
|
||||||
|
+~/clawd/skills/atocore-context/scripts/atocore.sh propose-project ...
|
||||||
|
+~/clawd/skills/atocore-context/scripts/atocore.sh register-project ...
|
||||||
|
+~/clawd/skills/atocore-context/scripts/atocore.sh update-project ...
|
||||||
|
+~/clawd/skills/atocore-context/scripts/atocore.sh refresh-project ...
|
||||||
|
+~/clawd/skills/atocore-context/scripts/atocore.sh ingest-sources
|
||||||
|
+```
|
||||||
|
+
|
||||||
|
+Do not use these from a Discord-originated path unless the human explicitly approves the specific action in the current thread or session.
|
||||||
|
+
|
||||||
|
+## Explicit approval rule
|
||||||
|
+
|
||||||
|
+Explicit approval means all of the following:
|
||||||
|
+
|
||||||
|
+- the human directly instructs the specific mutating action
|
||||||
|
+- the instruction is in the current thread or current session
|
||||||
|
+- the approval is for that specific action
|
||||||
|
+- the approval is not inferred from Discord evidence, Discrawl recall, screener output, or vague intent
|
||||||
|
+
|
||||||
|
+Examples of explicit approval:
|
||||||
|
+
|
||||||
|
+- "refresh p05 now"
|
||||||
|
+- "register this project"
|
||||||
|
+- "update the aliases"
|
||||||
|
+
|
||||||
|
+Non-examples:
|
||||||
|
+
|
||||||
|
+- "we should probably refresh this"
|
||||||
|
+- archived discussion suggesting a refresh
|
||||||
|
+- a screener note recommending promotion or ingestion
|
||||||
|
+
|
||||||
|
+## Do not use AtoCore for
|
||||||
|
+
|
||||||
|
+- automatic memory write-back
|
||||||
|
+- replacing OpenClaw memory
|
||||||
|
+- silent ingestion of broad new corpora without approval
|
||||||
|
+- automatic registry mutation
|
||||||
|
+- direct Discord-originated mutation of trusted or operator state
|
||||||
|
+- direct Discord-originated promote or reject actions
|
||||||
|
+
|
||||||
|
## Contract
|
||||||
|
|
||||||
|
- prefer AtoCore only when additional context is genuinely useful
|
||||||
|
@@ -79,10 +114,6 @@ Do not use AtoCore for:
|
||||||
|
- cite when information came from AtoCore rather than local OpenClaw memory
|
||||||
|
- for normal project knowledge questions, prefer `auto-context "<prompt>" 3000` before answering
|
||||||
|
- use `detect-project "<prompt>"` when you want to inspect project inference explicitly
|
||||||
|
-- use `debug-context` right after `auto-context` or `context-build` when you want
|
||||||
|
- to inspect the exact last AtoCore context pack
|
||||||
|
-- prefer `projects` plus `refresh-project <id>` over long ad hoc ingest instructions when the project is already registered
|
||||||
|
-- use `project-template` when preparing a new project registration proposal
|
||||||
|
-- use `propose-project ...` to draft a normalized entry and review collisions first
|
||||||
|
-- use `register-project ...` only after the proposal has been reviewed and approved
|
||||||
|
-- use `update-project ...` when a registered project's description or aliases need refinement before refresh
|
||||||
|
+- use `debug-context` right after `auto-context` or `context-build` when you want to inspect the exact last AtoCore context pack
|
||||||
|
+- use `project-template` and `propose-project ...` when preparing a reviewed registration proposal
|
||||||
|
+- use `register-project ...`, `update-project ...`, `refresh-project ...`, and `ingest-sources` only after explicit approval
|
||||||
354
docs/openclaw-atocore-nightly-screener-runbook.md
Normal file
354
docs/openclaw-atocore-nightly-screener-runbook.md
Normal file
@@ -0,0 +1,354 @@
|
|||||||
|
# OpenClaw x AtoCore Nightly Screener Runbook
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
The nightly screener is the V1 bridge between broad evidence capture and narrow trusted state.
|
||||||
|
|
||||||
|
Its job is to:
|
||||||
|
|
||||||
|
- gather raw evidence from approved V1 sources
|
||||||
|
- reduce noise
|
||||||
|
- produce reviewable candidate material
|
||||||
|
- prepare operator review work
|
||||||
|
- never silently create trusted truth
|
||||||
|
|
||||||
|
## Scope
|
||||||
|
|
||||||
|
The nightly screener is a screening and preparation job.
|
||||||
|
It is not a trusted-state writer.
|
||||||
|
It is not a registry operator.
|
||||||
|
It is not a hidden reviewer.
|
||||||
|
|
||||||
|
V1 active inputs are:
|
||||||
|
|
||||||
|
- Discord and Discrawl evidence
|
||||||
|
- OpenClaw interaction evidence
|
||||||
|
- PKM, repos, and KB references
|
||||||
|
- read-only AtoCore context for comparison and deduplication
|
||||||
|
|
||||||
|
## Explicit approval rule
|
||||||
|
|
||||||
|
If the screener output points at a mutating operator action, that action still requires:
|
||||||
|
|
||||||
|
- direct human instruction
|
||||||
|
- in the current thread or current session
|
||||||
|
- for that specific action
|
||||||
|
- with no inference from evidence or screener output alone
|
||||||
|
|
||||||
|
The screener may recommend review. It may not manufacture approval.
|
||||||
|
|
||||||
|
## Inputs
|
||||||
|
|
||||||
|
The screener may consume the following inputs when available.
|
||||||
|
|
||||||
|
### 1. Discord and Discrawl evidence
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- recent archived Discord messages
|
||||||
|
- thread excerpts relevant to known projects
|
||||||
|
- conversation clusters around decisions, requirements, constraints, or repeated questions
|
||||||
|
|
||||||
|
### 2. OpenClaw interaction evidence
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- captured interactions
|
||||||
|
- recent operator conversations relevant to projects
|
||||||
|
- already-logged evidence bundles
|
||||||
|
|
||||||
|
### 3. Read-only AtoCore context inputs
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- project registry lookup for project matching
|
||||||
|
- project_state read for comparison only
|
||||||
|
- memory or entity lookups for deduplication only
|
||||||
|
|
||||||
|
These reads may help the screener rank or classify candidates, but they must not be used as a write side effect.
|
||||||
|
|
||||||
|
### 4. Optional canonical-source references
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- PKM notes
|
||||||
|
- repo docs
|
||||||
|
- KB-export summaries
|
||||||
|
|
||||||
|
These may be consulted to decide whether a signal appears to duplicate or contradict already-canonical truth.
|
||||||
|
|
||||||
|
## Outputs
|
||||||
|
|
||||||
|
The screener should produce output in four buckets.
|
||||||
|
|
||||||
|
### 1. Nightly screener report
|
||||||
|
|
||||||
|
A compact report describing:
|
||||||
|
|
||||||
|
- inputs seen
|
||||||
|
- items skipped
|
||||||
|
- candidate counts
|
||||||
|
- project match confidence distribution
|
||||||
|
- failures or unavailable sources
|
||||||
|
- items requiring human review
|
||||||
|
|
||||||
|
### 2. Evidence bundle or manifest
|
||||||
|
|
||||||
|
A structured bundle of the source snippets that justified each candidate or unresolved item.
|
||||||
|
This is the reviewer's provenance package.
|
||||||
|
|
||||||
|
### 3. Candidate manifests
|
||||||
|
|
||||||
|
Separate candidate manifests for:
|
||||||
|
|
||||||
|
- memory candidates
|
||||||
|
- entity candidates later
|
||||||
|
- unresolved "needs canonical-source update first" items
|
||||||
|
|
||||||
|
### 4. Operator action queue
|
||||||
|
|
||||||
|
A short list of items needing explicit human action, such as:
|
||||||
|
|
||||||
|
- review these candidates
|
||||||
|
- decide whether to refresh project X
|
||||||
|
- decide whether to curate project_state
|
||||||
|
- decide whether a Discord-originated claim should first be reflected in PKM, repo, or KB
|
||||||
|
|
||||||
|
## Required non-output
|
||||||
|
|
||||||
|
The screener must not directly produce any of the following:
|
||||||
|
|
||||||
|
- active memories without review
|
||||||
|
- active entities without review
|
||||||
|
- project_state writes
|
||||||
|
- registry mutation
|
||||||
|
- refresh operations
|
||||||
|
- ingestion operations
|
||||||
|
- promote or reject decisions
|
||||||
|
|
||||||
|
## Nightly procedure
|
||||||
|
|
||||||
|
### Step 1 - load last-run checkpoint
|
||||||
|
|
||||||
|
Read the last successful screener checkpoint so the run knows:
|
||||||
|
|
||||||
|
- what time range to inspect
|
||||||
|
- what evidence was already processed
|
||||||
|
- which items were already dropped or bundled
|
||||||
|
|
||||||
|
If no checkpoint exists, use a conservative bounded time window and mark the run as bootstrap mode.
|
||||||
|
|
||||||
|
### Step 2 - gather evidence
|
||||||
|
|
||||||
|
Collect available evidence from each configured source.
|
||||||
|
|
||||||
|
Per-source rule:
|
||||||
|
|
||||||
|
- source unavailable -> note it, continue
|
||||||
|
- source empty -> note it, continue
|
||||||
|
- source noisy -> keep raw capture bounded and deduplicated
|
||||||
|
|
||||||
|
### Step 3 - normalize and deduplicate
|
||||||
|
|
||||||
|
For each collected item:
|
||||||
|
|
||||||
|
- normalize timestamps, source ids, and project hints
|
||||||
|
- remove exact duplicates
|
||||||
|
- group repeated or near-identical evidence when practical
|
||||||
|
- keep provenance pointers intact
|
||||||
|
|
||||||
|
The goal is to avoid flooding review with repeated copies of the same conversation.
|
||||||
|
|
||||||
|
### Step 4 - attempt project association
|
||||||
|
|
||||||
|
For each evidence item, try to associate it with:
|
||||||
|
|
||||||
|
- a registered project id, or
|
||||||
|
- `unassigned` if confidence is low
|
||||||
|
|
||||||
|
Rules:
|
||||||
|
|
||||||
|
- high confidence match -> attach project id
|
||||||
|
- low confidence match -> mark as uncertain
|
||||||
|
- no good match -> leave unassigned
|
||||||
|
|
||||||
|
Do not force a project assignment just to make the output tidier.
|
||||||
|
|
||||||
|
### Step 5 - classify signal type
|
||||||
|
|
||||||
|
Classify each normalized item into one of these buckets:
|
||||||
|
|
||||||
|
- noise / ignore
|
||||||
|
- evidence only
|
||||||
|
- memory candidate
|
||||||
|
- entity candidate
|
||||||
|
- needs canonical-source update first
|
||||||
|
- needs explicit operator decision
|
||||||
|
|
||||||
|
If the classification is uncertain, choose the lower-trust bucket.
|
||||||
|
|
||||||
|
### Step 6 - compare against higher-trust layers
|
||||||
|
|
||||||
|
For non-noise items, compare against the current higher-trust landscape.
|
||||||
|
|
||||||
|
Check for:
|
||||||
|
|
||||||
|
- already-active equivalent memory
|
||||||
|
- already-active equivalent entity later
|
||||||
|
- existing project_state answer
|
||||||
|
- obvious duplication of canonical source truth
|
||||||
|
- obvious contradiction with canonical source truth
|
||||||
|
|
||||||
|
This comparison is read-only.
|
||||||
|
It is used only to rank and annotate output.
|
||||||
|
|
||||||
|
### Step 7 - build candidate bundles
|
||||||
|
|
||||||
|
For each candidate:
|
||||||
|
|
||||||
|
- include the candidate text or shape
|
||||||
|
- include provenance snippets
|
||||||
|
- include source type
|
||||||
|
- include project association confidence
|
||||||
|
- include reason for candidate classification
|
||||||
|
- include conflict or duplicate notes if found
|
||||||
|
|
||||||
|
### Step 8 - build unresolved operator queue
|
||||||
|
|
||||||
|
Some items should not become candidates yet.
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- "This looks like current truth but should first be updated in PKM, repo, or KB."
|
||||||
|
- "This Discord-originated request asks for refresh or ingest."
|
||||||
|
- "This might be a decision, but confidence is too low."
|
||||||
|
|
||||||
|
These belong in a small operator queue, not in trusted state.
|
||||||
|
|
||||||
|
### Step 9 - persist report artifacts only
|
||||||
|
|
||||||
|
Persist only:
|
||||||
|
|
||||||
|
- screener report
|
||||||
|
- evidence manifests
|
||||||
|
- candidate manifests
|
||||||
|
- checkpoint metadata
|
||||||
|
|
||||||
|
If candidate persistence into AtoCore is enabled later, it still remains a candidate-only path and must not skip review.
|
||||||
|
|
||||||
|
### Step 10 - exit fail-open
|
||||||
|
|
||||||
|
If the screener could not reach AtoCore or some source system:
|
||||||
|
|
||||||
|
- write the failure or skip into the report
|
||||||
|
- keep the checkpoint conservative
|
||||||
|
- do not fake success
|
||||||
|
- do not silently mutate anything elsewhere
|
||||||
|
|
||||||
|
## Failure modes
|
||||||
|
|
||||||
|
### Failure mode 1 - AtoCore unavailable
|
||||||
|
|
||||||
|
Behavior:
|
||||||
|
|
||||||
|
- continue in fail-open mode if possible
|
||||||
|
- write a report that the run was evidence-only or degraded
|
||||||
|
- do not attempt write-side recovery actions
|
||||||
|
|
||||||
|
### Failure mode 2 - Discrawl unavailable or stale
|
||||||
|
|
||||||
|
Behavior:
|
||||||
|
|
||||||
|
- note Discord archive input unavailable or stale
|
||||||
|
- continue with other sources
|
||||||
|
- do not invent Discord evidence summaries
|
||||||
|
|
||||||
|
### Failure mode 3 - candidate explosion
|
||||||
|
|
||||||
|
Behavior:
|
||||||
|
|
||||||
|
- rank candidates
|
||||||
|
- keep only a bounded top set for review
|
||||||
|
- put the remainder into a dropped or deferred manifest
|
||||||
|
- do not overwhelm the reviewer queue
|
||||||
|
|
||||||
|
### Failure mode 4 - low-confidence project mapping
|
||||||
|
|
||||||
|
Behavior:
|
||||||
|
|
||||||
|
- leave items unassigned or uncertain
|
||||||
|
- do not force them into a project-specific truth lane
|
||||||
|
|
||||||
|
### Failure mode 5 - contradiction with trusted truth
|
||||||
|
|
||||||
|
Behavior:
|
||||||
|
|
||||||
|
- flag the contradiction in the report
|
||||||
|
- keep the evidence or candidate for review if useful
|
||||||
|
- do not overwrite project_state
|
||||||
|
|
||||||
|
### Failure mode 6 - direct operator-action request found in evidence
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- "register this project"
|
||||||
|
- "refresh this source"
|
||||||
|
- "promote this memory"
|
||||||
|
|
||||||
|
Behavior:
|
||||||
|
|
||||||
|
- place the item into the operator action queue
|
||||||
|
- require explicit human approval
|
||||||
|
- do not perform the mutation as part of the screener
|
||||||
|
|
||||||
|
## Review handoff format
|
||||||
|
|
||||||
|
Each screener run should hand off a compact review package containing:
|
||||||
|
|
||||||
|
1. a run summary
|
||||||
|
2. candidate counts by type and project
|
||||||
|
3. top candidates with provenance
|
||||||
|
4. unresolved items needing explicit operator choice
|
||||||
|
5. unavailable-source notes
|
||||||
|
6. checkpoint status
|
||||||
|
|
||||||
|
The handoff should be short enough for a human to review without reading the entire raw archive.
|
||||||
|
|
||||||
|
## Safety rules
|
||||||
|
|
||||||
|
The screener must obey these rules every night.
|
||||||
|
|
||||||
|
1. No direct project_state writes.
|
||||||
|
2. No direct registry mutation.
|
||||||
|
3. No direct refresh or ingest.
|
||||||
|
4. No direct promote or reject.
|
||||||
|
5. No treating Discord or Discrawl as trusted truth.
|
||||||
|
6. No hiding source uncertainty.
|
||||||
|
7. No inventing missing integrations.
|
||||||
|
8. No bringing deferred sources into V1 through policy drift or hidden dependency.
|
||||||
|
|
||||||
|
## Minimum useful run
|
||||||
|
|
||||||
|
A useful screener run can still succeed even if it only does this:
|
||||||
|
|
||||||
|
- gathers available Discord and OpenClaw evidence
|
||||||
|
- filters obvious noise
|
||||||
|
- produces a small candidate manifest
|
||||||
|
- notes unavailable archive inputs if any
|
||||||
|
- leaves trusted state untouched
|
||||||
|
|
||||||
|
That is still a correct V1 run.
|
||||||
|
|
||||||
|
## Deferred from V1
|
||||||
|
|
||||||
|
Screenpipe is deferred from V1. It is not an active input, not a required dependency, and not part of the runtime behavior of this V1 screener.
|
||||||
|
|
||||||
|
## Bottom line
|
||||||
|
|
||||||
|
The nightly screener is not the brain of the system.
|
||||||
|
It is the filter.
|
||||||
|
|
||||||
|
Its purpose is to make human review easier while preserving the trust hierarchy:
|
||||||
|
|
||||||
|
- broad capture in
|
||||||
|
- narrow reviewed truth out
|
||||||
|
- no hidden mutations in the middle
|
||||||
360
docs/openclaw-atocore-promotion-pipeline.md
Normal file
360
docs/openclaw-atocore-promotion-pipeline.md
Normal file
@@ -0,0 +1,360 @@
|
|||||||
|
# OpenClaw x AtoCore V1 Promotion Pipeline
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
This document defines the V1 promotion pipeline for signals coming from Discord, Discrawl, OpenClaw, PKM, and repos.
|
||||||
|
|
||||||
|
The rule is simple:
|
||||||
|
|
||||||
|
- raw capture is evidence
|
||||||
|
- screening turns evidence into candidate material
|
||||||
|
- review promotes candidates into canonical homes
|
||||||
|
- trusted state is curated explicitly, not inferred automatically
|
||||||
|
|
||||||
|
## V1 scope
|
||||||
|
|
||||||
|
V1 active inputs are:
|
||||||
|
|
||||||
|
- Discord live conversation
|
||||||
|
- Discrawl archive retrieval
|
||||||
|
- OpenClaw interaction logs and evidence bundles
|
||||||
|
- PKM notes
|
||||||
|
- repos, KB exports, and repo docs
|
||||||
|
|
||||||
|
Read-only AtoCore context may be consulted for comparison and deduplication.
|
||||||
|
|
||||||
|
## Explicit approval rule
|
||||||
|
|
||||||
|
When this pipeline refers to approval or review for a mutating action, it means:
|
||||||
|
|
||||||
|
- the human directly instructs the specific action
|
||||||
|
- the instruction is in the current thread or current session
|
||||||
|
- the approval is for that specific action
|
||||||
|
- the approval is not inferred from evidence, archives, or screener output
|
||||||
|
|
||||||
|
## Pipeline summary
|
||||||
|
|
||||||
|
```text
|
||||||
|
raw capture
|
||||||
|
-> evidence bundle
|
||||||
|
-> nightly screening
|
||||||
|
-> candidate queue
|
||||||
|
-> human review
|
||||||
|
-> canonical home
|
||||||
|
-> optional trusted-state curation
|
||||||
|
```
|
||||||
|
|
||||||
|
## Stage 0 - raw capture
|
||||||
|
|
||||||
|
### Inputs
|
||||||
|
|
||||||
|
Raw capture may come from:
|
||||||
|
|
||||||
|
- Discord live conversation
|
||||||
|
- Discrawl archive retrieval
|
||||||
|
- OpenClaw interaction logs
|
||||||
|
- PKM notes
|
||||||
|
- repos / KB exports / repo docs
|
||||||
|
|
||||||
|
### Rule at this stage
|
||||||
|
|
||||||
|
Nothing captured here is trusted truth yet.
|
||||||
|
Everything is either:
|
||||||
|
|
||||||
|
- raw evidence, or
|
||||||
|
- a pointer to an already-canonical source
|
||||||
|
|
||||||
|
## Stage 1 - evidence bundle
|
||||||
|
|
||||||
|
The first durable V1 destination for raw signals is the evidence lane.
|
||||||
|
|
||||||
|
Examples of evidence bundle forms:
|
||||||
|
|
||||||
|
- AtoCore interaction records
|
||||||
|
- Discrawl retrieval result sets
|
||||||
|
- nightly screener input bundles
|
||||||
|
- local archived artifacts or manifests
|
||||||
|
- optional source snapshots used only for review preparation
|
||||||
|
|
||||||
|
### What evidence is for
|
||||||
|
|
||||||
|
Evidence exists so the operator can later answer:
|
||||||
|
|
||||||
|
- what did we actually see?
|
||||||
|
- where did this claim come from?
|
||||||
|
- what context supported the candidate?
|
||||||
|
- what should the reviewer inspect before promoting anything?
|
||||||
|
|
||||||
|
### What evidence is not for
|
||||||
|
|
||||||
|
Evidence is not:
|
||||||
|
|
||||||
|
- active memory
|
||||||
|
- active entity
|
||||||
|
- trusted project_state
|
||||||
|
- registry truth
|
||||||
|
|
||||||
|
## Stage 2 - screening
|
||||||
|
|
||||||
|
The nightly screener or an explicit review flow reads evidence and classifies it.
|
||||||
|
|
||||||
|
### Screening outputs
|
||||||
|
|
||||||
|
Each observed signal should be classified into one of these lanes:
|
||||||
|
|
||||||
|
1. Ignore / noise
|
||||||
|
- chatter
|
||||||
|
- duplicate archive material
|
||||||
|
- ambiguous fragments
|
||||||
|
- low-signal scraps
|
||||||
|
|
||||||
|
2. Keep as evidence only
|
||||||
|
- useful context, but too ambiguous or too raw to promote
|
||||||
|
|
||||||
|
3. Memory candidate
|
||||||
|
- stable enough to review as episodic, personal, or loose project signal
|
||||||
|
|
||||||
|
4. Entity candidate
|
||||||
|
- structured enough to review as a future decision, requirement, constraint, or validation fact
|
||||||
|
|
||||||
|
5. Needs canonical-source update first
|
||||||
|
- appears to assert current trusted truth but should first be reflected in the real canonical home, such as PKM, repo, or KB tool
|
||||||
|
|
||||||
|
### Key screening rule
|
||||||
|
|
||||||
|
If the screener cannot confidently tell whether a signal is:
|
||||||
|
|
||||||
|
- raw evidence,
|
||||||
|
- a loose durable memory,
|
||||||
|
- or a structured project truth,
|
||||||
|
|
||||||
|
then it must pick the lower-trust lane.
|
||||||
|
|
||||||
|
In V1, uncertainty resolves downward.
|
||||||
|
|
||||||
|
## Stage 3 - candidate queue
|
||||||
|
|
||||||
|
Only screened outputs may enter the candidate queue.
|
||||||
|
|
||||||
|
### Memory-candidate lane
|
||||||
|
|
||||||
|
Use this lane for reviewed-signal candidates such as:
|
||||||
|
|
||||||
|
- preferences
|
||||||
|
- episodic facts
|
||||||
|
- identity facts
|
||||||
|
- loose stable project signal that is useful to remember but not yet a formal structured entity
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- "Antoine prefers operator summaries without extra ceremony."
|
||||||
|
- "The team discussed moving OpenClaw toward a shared operator client."
|
||||||
|
- "Discord history is useful as evidence but not as direct truth."
|
||||||
|
|
||||||
|
### Entity-candidate lane
|
||||||
|
|
||||||
|
Use this lane for future structured facts such as:
|
||||||
|
|
||||||
|
- decisions
|
||||||
|
- requirements
|
||||||
|
- constraints
|
||||||
|
- validation claims
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- "Decision: use the shared operator client instead of duplicated frontend logic."
|
||||||
|
- "Constraint: Discord-originated paths must not directly mutate project_state."
|
||||||
|
|
||||||
|
### What cannot enter directly from raw capture
|
||||||
|
|
||||||
|
The following must not be created directly from raw Discord or Discrawl evidence without a screening step:
|
||||||
|
|
||||||
|
- active memories
|
||||||
|
- active entities
|
||||||
|
- project_state entries
|
||||||
|
- registry mutations
|
||||||
|
- promote or reject decisions
|
||||||
|
|
||||||
|
## Stage 4 - human review
|
||||||
|
|
||||||
|
This is the load-bearing stage.
|
||||||
|
|
||||||
|
A human reviewer, mediated by OpenClaw and eventually using the shared operator client, decides whether the candidate:
|
||||||
|
|
||||||
|
- should be promoted
|
||||||
|
- should be rejected
|
||||||
|
- should stay pending
|
||||||
|
- should first be rewritten into the actual canonical source
|
||||||
|
- should become project_state only after stronger curation
|
||||||
|
|
||||||
|
### Review questions
|
||||||
|
|
||||||
|
For every candidate, the reviewer should ask:
|
||||||
|
|
||||||
|
1. Is this actually stable enough to preserve?
|
||||||
|
2. Is this fact ambiguous, historical, or current?
|
||||||
|
3. What is the one canonical home for this fact type?
|
||||||
|
4. Is memory the right home, or should this be an entity later?
|
||||||
|
5. Is project_state justified, or is this still only evidence or candidate material?
|
||||||
|
6. Does the source prove current truth, or only past conversation?
|
||||||
|
|
||||||
|
## Stage 5 - canonical promotion
|
||||||
|
|
||||||
|
After review, the signal can move into exactly one canonical home.
|
||||||
|
|
||||||
|
## Promotion rules by fact shape
|
||||||
|
|
||||||
|
### A. Personal, episodic, or loose project signal
|
||||||
|
|
||||||
|
Promotion destination:
|
||||||
|
|
||||||
|
- AtoCore memory
|
||||||
|
|
||||||
|
Use when the fact is durable and useful, but not a formal structured engineering record.
|
||||||
|
|
||||||
|
### B. Structured engineering fact
|
||||||
|
|
||||||
|
Promotion destination:
|
||||||
|
|
||||||
|
- future AtoCore entity
|
||||||
|
|
||||||
|
Use when the fact is really a:
|
||||||
|
|
||||||
|
- decision
|
||||||
|
- requirement
|
||||||
|
- constraint
|
||||||
|
- validation claim
|
||||||
|
|
||||||
|
### C. Current trusted project answer
|
||||||
|
|
||||||
|
Promotion destination:
|
||||||
|
|
||||||
|
- AtoCore project_state
|
||||||
|
|
||||||
|
But only after explicit curation.
|
||||||
|
|
||||||
|
A candidate does not become project_state just because it looks important.
|
||||||
|
The reviewer must decide that it now represents the trusted current answer.
|
||||||
|
|
||||||
|
### D. Human or tool source truth
|
||||||
|
|
||||||
|
Promotion destination:
|
||||||
|
|
||||||
|
- PKM / repo / KB tool of origin
|
||||||
|
|
||||||
|
If a Discord-originated signal claims current truth but the canonical home is not AtoCore memory or entity, the right move may be:
|
||||||
|
|
||||||
|
1. update the canonical source first
|
||||||
|
2. then optionally refresh or ingest, with explicit approval if the action is mutating
|
||||||
|
3. then optionally curate a project_state answer
|
||||||
|
|
||||||
|
This prevents Discord from becoming the hidden source of truth.
|
||||||
|
|
||||||
|
## Stage 6 - optional trusted-state curation
|
||||||
|
|
||||||
|
`project_state` is not the general destination for important facts.
|
||||||
|
It is the curated destination for current trusted project answers.
|
||||||
|
|
||||||
|
Examples that may justify explicit project_state curation:
|
||||||
|
|
||||||
|
- current selected architecture
|
||||||
|
- current next milestone
|
||||||
|
- current status summary
|
||||||
|
- current trusted decision outcome
|
||||||
|
|
||||||
|
Examples that usually do not justify immediate project_state curation:
|
||||||
|
|
||||||
|
- a raw Discord debate
|
||||||
|
- a speculative suggestion
|
||||||
|
- a historical conversation retrieved through Discrawl
|
||||||
|
|
||||||
|
## Discord-originated pipeline examples
|
||||||
|
|
||||||
|
### Example 1 - raw discussion about operator-client refactor
|
||||||
|
|
||||||
|
1. Discord message enters the evidence lane.
|
||||||
|
2. Nightly screener marks it as either evidence-only or decision candidate.
|
||||||
|
3. Human review checks whether it is an actual decision or just discussion.
|
||||||
|
4. If stable and approved, it becomes a memory or future entity.
|
||||||
|
5. It reaches project_state only if explicitly curated as the trusted current answer.
|
||||||
|
|
||||||
|
### Example 2 - Discord thread says "refresh this project now"
|
||||||
|
|
||||||
|
1. Discord message is evidence of operator intent.
|
||||||
|
2. It does not auto-trigger refresh.
|
||||||
|
3. OpenClaw asks for or recognizes explicit human approval.
|
||||||
|
4. Approved operator action invokes the shared operator client.
|
||||||
|
5. Refresh result may later influence candidates or trusted state, but the raw Discord message never performed the mutation by itself.
|
||||||
|
|
||||||
|
### Example 3 - archived thread says a requirement might be current
|
||||||
|
|
||||||
|
1. Discrawl retrieval enters the evidence lane.
|
||||||
|
2. Screener marks it as evidence-only or a requirement candidate.
|
||||||
|
3. Human review checks the canonical source alignment.
|
||||||
|
4. If accepted later, it becomes an entity candidate or active entity.
|
||||||
|
5. project_state remains a separate explicit curation step.
|
||||||
|
|
||||||
|
## Promotion invariants
|
||||||
|
|
||||||
|
The pipeline must preserve these invariants.
|
||||||
|
|
||||||
|
### Invariant 1 - raw evidence is not trusted truth
|
||||||
|
|
||||||
|
No raw Discord or Discrawl signal can directly become trusted project_state.
|
||||||
|
|
||||||
|
### Invariant 2 - unreviewed signals can at most become candidates
|
||||||
|
|
||||||
|
Automatic processing stops at evidence or candidate creation.
|
||||||
|
|
||||||
|
### Invariant 3 - each fact has one canonical home
|
||||||
|
|
||||||
|
A fact may be supported by many evidence items, but after review it belongs in one canonical place.
|
||||||
|
|
||||||
|
### Invariant 4 - operator mutations require explicit approval
|
||||||
|
|
||||||
|
Registry mutation, refresh, ingest, promote, reject, and project_state writes are operator actions.
|
||||||
|
|
||||||
|
### Invariant 5 - OpenClaw orchestrates; it does not become storage
|
||||||
|
|
||||||
|
OpenClaw should coordinate the pipeline, not silently become the canonical data layer.
|
||||||
|
|
||||||
|
## Decision table
|
||||||
|
|
||||||
|
| Observed signal type | Default pipeline outcome | Canonical destination if accepted |
|
||||||
|
|---|---|---|
|
||||||
|
| Ambiguous or raw conversation | Evidence only | none |
|
||||||
|
| Historical archive context | Evidence only or candidate | memory or entity only after review |
|
||||||
|
| Personal preference | Memory candidate | AtoCore memory |
|
||||||
|
| Episodic fact | Memory candidate | AtoCore memory |
|
||||||
|
| Loose stable project signal | Memory candidate | AtoCore memory |
|
||||||
|
| Structured decision / requirement / constraint | Entity candidate | future AtoCore entity |
|
||||||
|
| Claimed current trusted answer | Needs explicit curation | project_state, but only after review |
|
||||||
|
| Tool-origin engineering fact | Canonical source update first | repo / KB / PKM tool of origin |
|
||||||
|
|
||||||
|
## What the pipeline deliberately prevents
|
||||||
|
|
||||||
|
This V1 pipeline deliberately prevents these bad paths:
|
||||||
|
|
||||||
|
- Discord -> project_state directly
|
||||||
|
- Discrawl archive -> project_state directly
|
||||||
|
- Discord -> registry mutation directly
|
||||||
|
- Discord -> refresh or ingest directly without explicit approval
|
||||||
|
- raw chat -> promote or reject directly
|
||||||
|
- OpenClaw turning evidence into truth without a review gate
|
||||||
|
|
||||||
|
## Deferred from V1
|
||||||
|
|
||||||
|
Screenpipe is deferred from V1. It is not an active input lane in this pipeline and it is not a runtime dependency of this pipeline. If it is revisited later, it should be handled in a separate future design and not treated as an implicit part of this pipeline.
|
||||||
|
|
||||||
|
## Bottom line
|
||||||
|
|
||||||
|
The promotion pipeline is intentionally conservative.
|
||||||
|
|
||||||
|
Its job is not to maximize writes.
|
||||||
|
Its job is to preserve trust while still letting Discord, Discrawl, OpenClaw, PKM, and repos contribute useful signal.
|
||||||
|
|
||||||
|
That means the safe default path is:
|
||||||
|
|
||||||
|
- capture broadly
|
||||||
|
- trust narrowly
|
||||||
|
- promote deliberately
|
||||||
96
docs/openclaw-atocore-shared-client-consolidation-preview.md
Normal file
96
docs/openclaw-atocore-shared-client-consolidation-preview.md
Normal file
@@ -0,0 +1,96 @@
|
|||||||
|
# OpenClaw x AtoCore Shared-Client Consolidation Preview
|
||||||
|
|
||||||
|
## Status
|
||||||
|
|
||||||
|
Proposal only. Not applied.
|
||||||
|
|
||||||
|
## Why this exists
|
||||||
|
|
||||||
|
The current OpenClaw helper script duplicates AtoCore-calling logic that already exists in the shared operator client:
|
||||||
|
|
||||||
|
- request handling
|
||||||
|
- fail-open behavior
|
||||||
|
- project detection
|
||||||
|
- project lifecycle command surface
|
||||||
|
|
||||||
|
The preferred direction is to consolidate OpenClaw toward the shared operator client pattern documented in `docs/architecture/llm-client-integration.md`.
|
||||||
|
|
||||||
|
## Goal
|
||||||
|
|
||||||
|
Keep the OpenClaw skill and operator policy in OpenClaw, but stop maintaining a separate Bash implementation of the AtoCore client surface when the shared client already exists in `/home/papa/ATOCore/scripts/atocore_client.py`.
|
||||||
|
|
||||||
|
## Non-goals for this preview
|
||||||
|
|
||||||
|
- no implementation in this phase
|
||||||
|
- no runtime change in this phase
|
||||||
|
- no new helper command in this phase
|
||||||
|
- no change to approval policy in this preview
|
||||||
|
|
||||||
|
## Preview diff
|
||||||
|
|
||||||
|
This is a conceptual diff preview only.
|
||||||
|
It is not applied.
|
||||||
|
|
||||||
|
```diff
|
||||||
|
--- a/skills/atocore-context/scripts/atocore.sh
|
||||||
|
+++ b/skills/atocore-context/scripts/atocore.sh
|
||||||
|
@@
|
||||||
|
-#!/usr/bin/env bash
|
||||||
|
-set -euo pipefail
|
||||||
|
-
|
||||||
|
-BASE_URL="${ATOCORE_BASE_URL:-http://dalidou:8100}"
|
||||||
|
-TIMEOUT="${ATOCORE_TIMEOUT_SECONDS:-30}"
|
||||||
|
-REFRESH_TIMEOUT="${ATOCORE_REFRESH_TIMEOUT_SECONDS:-1800}"
|
||||||
|
-FAIL_OPEN="${ATOCORE_FAIL_OPEN:-true}"
|
||||||
|
-
|
||||||
|
-request() {
|
||||||
|
- # local curl-based request logic
|
||||||
|
-}
|
||||||
|
-
|
||||||
|
-detect_project() {
|
||||||
|
- # local project detection logic
|
||||||
|
-}
|
||||||
|
-
|
||||||
|
-case "$cmd" in
|
||||||
|
- health) request GET /health ;;
|
||||||
|
- projects) request GET /projects ;;
|
||||||
|
- auto-context) ... ;;
|
||||||
|
- register-project) ... ;;
|
||||||
|
- refresh-project) ... ;;
|
||||||
|
- ingest-sources) ... ;;
|
||||||
|
-esac
|
||||||
|
+#!/usr/bin/env bash
|
||||||
|
+set -euo pipefail
|
||||||
|
+
|
||||||
|
+CLIENT="${ATOCORE_SHARED_CLIENT:-/home/papa/ATOCore/scripts/atocore_client.py}"
|
||||||
|
+
|
||||||
|
+if [[ ! -f "$CLIENT" ]]; then
|
||||||
|
+ echo "Shared AtoCore client not found: $CLIENT" >&2
|
||||||
|
+ exit 1
|
||||||
|
+fi
|
||||||
|
+
|
||||||
|
+exec python3 "$CLIENT" "$@"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Recommended implementation shape later
|
||||||
|
|
||||||
|
If and when this is implemented, the safer shape is:
|
||||||
|
|
||||||
|
1. keep policy and approval guidance in OpenClaw instructions and skill text
|
||||||
|
2. delegate actual AtoCore client behavior to the shared operator client
|
||||||
|
3. avoid adding any new helper command unless explicitly approved
|
||||||
|
4. keep read-path and approved-operator-path distinctions in the OpenClaw guidance layer
|
||||||
|
|
||||||
|
## Risk notes
|
||||||
|
|
||||||
|
Potential follow-up concerns to handle before applying:
|
||||||
|
|
||||||
|
- path dependency on `/home/papa/ATOCore/scripts/atocore_client.py`
|
||||||
|
- what should happen if the AtoCore repo is unavailable from the OpenClaw machine
|
||||||
|
- whether a thin compatibility wrapper is needed for help text or argument normalization
|
||||||
|
- ensuring OpenClaw policy still blocks unapproved Discord-originated mutations even if the shared client exposes them
|
||||||
|
|
||||||
|
## Bottom line
|
||||||
|
|
||||||
|
The duplication is real and consolidation is still the right direction.
|
||||||
|
But in this phase it remains a proposal only.
|
||||||
362
docs/openclaw-atocore-v1-architecture.md
Normal file
362
docs/openclaw-atocore-v1-architecture.md
Normal file
@@ -0,0 +1,362 @@
|
|||||||
|
# OpenClaw x AtoCore V1 Architecture
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
This document defines the safe V1 operating model for how Discord, Discrawl, OpenClaw, PKM, repos, and AtoCore work together.
|
||||||
|
|
||||||
|
The goal is to let these systems contribute useful signal into AtoCore without turning AtoCore into a raw dump and without blurring trust boundaries.
|
||||||
|
|
||||||
|
## V1 scope
|
||||||
|
|
||||||
|
V1 active inputs are:
|
||||||
|
|
||||||
|
- Discord and Discrawl evidence
|
||||||
|
- OpenClaw interaction evidence
|
||||||
|
- PKM, repos, and KB sources
|
||||||
|
- read-only AtoCore context for comparison and deduplication
|
||||||
|
|
||||||
|
## Core stance
|
||||||
|
|
||||||
|
The V1 stance is simple:
|
||||||
|
|
||||||
|
- Discord and Discrawl are evidence streams.
|
||||||
|
- OpenClaw is the operator and orchestrator.
|
||||||
|
- PKM, repos, and KB tools remain the canonical human and tool truth.
|
||||||
|
- AtoCore memories hold reviewed episodic, personal, and loose project signal.
|
||||||
|
- AtoCore project_state holds the current trusted project answer, manually or tightly gated only.
|
||||||
|
- Future AtoCore entities hold reviewed structured decisions, requirements, constraints, and related facts.
|
||||||
|
|
||||||
|
## Architectural principles
|
||||||
|
|
||||||
|
1. AtoCore remains additive and fail-open from the OpenClaw side.
|
||||||
|
2. Every fact type has exactly one canonical home.
|
||||||
|
3. Raw evidence is not trusted truth.
|
||||||
|
4. Unreviewed signals become evidence or candidates, not active truth.
|
||||||
|
5. Discord-originated paths never directly mutate project_state, registry state, refresh state, ingestion state, or review decisions without explicit human approval.
|
||||||
|
6. OpenClaw is not canonical storage. It retrieves, compares, summarizes, requests approval, and performs approved operator actions.
|
||||||
|
7. The shared operator client is the canonical mutating operator surface. Frontends should reuse it instead of reimplementing AtoCore-calling logic.
|
||||||
|
|
||||||
|
## Explicit approval rule
|
||||||
|
|
||||||
|
In this V1 policy, explicit approval means all of the following:
|
||||||
|
|
||||||
|
- the human directly instructs the specific mutating action
|
||||||
|
- the instruction appears in the current thread or current session
|
||||||
|
- the approval is for that specific action, not vague intent
|
||||||
|
- the approval is not inferred from Discord evidence, Discrawl recall, screener output, or general discussion
|
||||||
|
|
||||||
|
Examples of explicit approval:
|
||||||
|
|
||||||
|
- "refresh p05 now"
|
||||||
|
- "register this project"
|
||||||
|
- "promote that candidate"
|
||||||
|
- "write this to project_state"
|
||||||
|
|
||||||
|
Examples that are not explicit approval:
|
||||||
|
|
||||||
|
- "we should probably refresh this sometime"
|
||||||
|
- "I think this is the current answer"
|
||||||
|
- archived discussion saying a mutation might be useful
|
||||||
|
- a screener report recommending a mutation
|
||||||
|
|
||||||
|
## System roles
|
||||||
|
|
||||||
|
### Discord
|
||||||
|
|
||||||
|
Discord is a live conversational source.
|
||||||
|
It contains fresh context, discussion, uncertainty, and project language grounded in real work.
|
||||||
|
It is not authoritative by itself.
|
||||||
|
|
||||||
|
Discord-originated material should be treated as:
|
||||||
|
|
||||||
|
- raw evidence
|
||||||
|
- candidate material after screening
|
||||||
|
- possible justification for a later human-reviewed promotion into a canonical home
|
||||||
|
|
||||||
|
Discord should never be treated as direct trusted project truth just because someone said it in chat.
|
||||||
|
|
||||||
|
### Discrawl
|
||||||
|
|
||||||
|
Discrawl is a retrieval and archive layer over Discord history.
|
||||||
|
It turns prior conversation into searchable evidence.
|
||||||
|
That is useful for recall, context building, and finding prior decisions or open questions.
|
||||||
|
|
||||||
|
Discrawl is still evidence, not authority.
|
||||||
|
A retrieved Discord thread may show what people thought or said. It does not by itself become trusted project_state.
|
||||||
|
|
||||||
|
### OpenClaw
|
||||||
|
|
||||||
|
OpenClaw is the orchestrator and operator.
|
||||||
|
It is where the human interacts, where approvals happen, and where cross-source reasoning happens.
|
||||||
|
|
||||||
|
OpenClaw's job is to:
|
||||||
|
|
||||||
|
- retrieve
|
||||||
|
- compare
|
||||||
|
- summarize
|
||||||
|
- ask for approval when mutation is requested
|
||||||
|
- call the shared operator client for approved writes
|
||||||
|
- fail open when AtoCore is unavailable
|
||||||
|
|
||||||
|
OpenClaw is not the canonical place where project facts live long-term.
|
||||||
|
|
||||||
|
### PKM
|
||||||
|
|
||||||
|
PKM is a canonical human-authored prose source.
|
||||||
|
It is where notes, thinking, and ongoing project writing live.
|
||||||
|
|
||||||
|
PKM is the canonical home for:
|
||||||
|
|
||||||
|
- project prose notes
|
||||||
|
- working notes
|
||||||
|
- long-form summaries
|
||||||
|
- journal-style project history
|
||||||
|
|
||||||
|
PKM is not the place where OpenClaw should be taught how to operate AtoCore. Operator instructions belong in repo docs and OpenClaw instructions and skills.
|
||||||
|
|
||||||
|
### Repos and KB tools
|
||||||
|
|
||||||
|
Repos and KB tools are canonical human and tool truth for code and structured engineering artifacts.
|
||||||
|
|
||||||
|
They are the canonical home for:
|
||||||
|
|
||||||
|
- source code
|
||||||
|
- repo design docs
|
||||||
|
- structured tool outputs
|
||||||
|
- KB-CAD and KB-FEM facts where those systems are the tool of origin
|
||||||
|
|
||||||
|
### AtoCore memories
|
||||||
|
|
||||||
|
AtoCore memories are for reviewed, durable machine-usable signal that is still loose enough to belong in memory rather than in a stricter structured layer.
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- episodic facts
|
||||||
|
- preferences
|
||||||
|
- identity facts
|
||||||
|
- reviewed loose project facts
|
||||||
|
|
||||||
|
AtoCore memories are not a place to dump raw Discord capture.
|
||||||
|
|
||||||
|
### AtoCore project_state
|
||||||
|
|
||||||
|
AtoCore project_state is the trusted current answer layer.
|
||||||
|
It is the place for questions like:
|
||||||
|
|
||||||
|
- what is the current selected architecture?
|
||||||
|
- what is the current next focus?
|
||||||
|
- what is the trusted status answer right now?
|
||||||
|
|
||||||
|
Because this layer answers current-truth questions, it must remain manually curated or tightly gated.
|
||||||
|
|
||||||
|
### Future AtoCore entities
|
||||||
|
|
||||||
|
Future entities are the canonical home for structured engineering facts that deserve stronger representation than freeform memory.
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- decisions
|
||||||
|
- requirements
|
||||||
|
- constraints
|
||||||
|
- validation claims
|
||||||
|
- structured relationships later
|
||||||
|
|
||||||
|
These should be promoted from evidence or candidates only after review.
|
||||||
|
|
||||||
|
## Logical flow
|
||||||
|
|
||||||
|
```text
|
||||||
|
Discord live chat --.
|
||||||
|
Discrawl archive ----+--> evidence bundle / interactions / screener input
|
||||||
|
OpenClaw evidence ---'
|
||||||
|
|
|
||||||
|
v
|
||||||
|
nightly screener
|
||||||
|
|
|
||||||
|
.--------+--------.
|
||||||
|
v v
|
||||||
|
memory candidates entity candidates (later)
|
||||||
|
| |
|
||||||
|
'--------+--------'
|
||||||
|
v
|
||||||
|
human review in OpenClaw
|
||||||
|
|
|
||||||
|
.-----------------+-----------------.
|
||||||
|
v v v
|
||||||
|
active memory active entity explicit curation
|
||||||
|
|
|
||||||
|
v
|
||||||
|
project_state
|
||||||
|
```
|
||||||
|
|
||||||
|
The load-bearing rule is that review happens before trust.
|
||||||
|
|
||||||
|
## Canonical-home table
|
||||||
|
|
||||||
|
Every named fact type below has exactly one canonical home.
|
||||||
|
|
||||||
|
| Fact type | Canonical home | Why |
|
||||||
|
|---|---|---|
|
||||||
|
| Raw Discord message | Discord / Discrawl archive | It is conversational evidence, not normalized truth |
|
||||||
|
| Archived Discord thread history | Discrawl archive | It is the retrieval form of Discord evidence |
|
||||||
|
| OpenClaw operator instructions | OpenClaw repo docs / skills / instructions | Operating behavior should live in code-adjacent instructions, not PKM |
|
||||||
|
| Project prose notes | PKM | Human-authored project prose belongs in PKM |
|
||||||
|
| Source code | Repo | Code truth lives in version control |
|
||||||
|
| Repo design or architecture doc | Repo | The documentation belongs with the code or system it describes |
|
||||||
|
| Structured KB-CAD / KB-FEM fact | KB tool of origin | Tool-managed structured engineering facts belong in their tool of origin |
|
||||||
|
| Personal identity fact | AtoCore memory (`identity`) | AtoCore memory is the durable machine-usable home |
|
||||||
|
| Preference fact | AtoCore memory (`preference`) | Same reason |
|
||||||
|
| Episodic fact | AtoCore memory (`episodic`) | It is durable recall, not project_state |
|
||||||
|
| Loose reviewed project signal | AtoCore memory (`project`) | Good fit for reviewed but not fully structured project signal |
|
||||||
|
| Engineering decision | Future AtoCore entity (`Decision`) | Decisions need structured lifecycle and supersession |
|
||||||
|
| Requirement | Future AtoCore entity (`Requirement`) | Requirements need structured management |
|
||||||
|
| Constraint | Future AtoCore entity (`Constraint`) | Constraints need structured management |
|
||||||
|
| Current trusted project answer | AtoCore `project_state` | This layer is explicitly for current trusted truth |
|
||||||
|
| Project registration metadata | AtoCore project registry | Registry state is its own canonical operator layer |
|
||||||
|
| Review action (promote / reject / invalidate) | AtoCore audit trail / operator action log | Review decisions are operator events, not source facts |
|
||||||
|
|
||||||
|
## What this means for Discord-originated facts
|
||||||
|
|
||||||
|
A Discord-originated signal can end in more than one place, but not directly.
|
||||||
|
|
||||||
|
### If the signal is conversational, ambiguous, or historical
|
||||||
|
|
||||||
|
It stays in the evidence lane:
|
||||||
|
|
||||||
|
- Discord
|
||||||
|
- Discrawl archive
|
||||||
|
- optional screener artifact
|
||||||
|
- optional candidate queue
|
||||||
|
|
||||||
|
It does not become trusted project_state.
|
||||||
|
|
||||||
|
### If the signal is a stable personal or episodic fact
|
||||||
|
|
||||||
|
It may be promoted to AtoCore memory after review.
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- "Antoine prefers concise operator summaries."
|
||||||
|
- "We decided in discussion to keep AtoCore additive."
|
||||||
|
|
||||||
|
These belong in reviewed memory, not in project_state.
|
||||||
|
|
||||||
|
### If the signal expresses a structured engineering fact
|
||||||
|
|
||||||
|
It may become an entity candidate and later an active entity.
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- a requirement
|
||||||
|
- a decision
|
||||||
|
- a constraint
|
||||||
|
|
||||||
|
Again, not directly from raw chat. The chat is evidence for the candidate.
|
||||||
|
|
||||||
|
### If the signal is the current trusted answer
|
||||||
|
|
||||||
|
It still should not jump directly from Discord into project_state.
|
||||||
|
Instead, a human should explicitly curate it into project_state after checking it against the right canonical home.
|
||||||
|
|
||||||
|
That canonical home may be:
|
||||||
|
|
||||||
|
- PKM for prose and project notes
|
||||||
|
- repo for code and design docs
|
||||||
|
- KB tools for structured engineering facts
|
||||||
|
- active entity if the engineering layer is the canonical home
|
||||||
|
|
||||||
|
## Approval boundaries
|
||||||
|
|
||||||
|
### Reads
|
||||||
|
|
||||||
|
The following may be invoked automatically when useful:
|
||||||
|
|
||||||
|
- `health`
|
||||||
|
- `projects`
|
||||||
|
- `detect-project`
|
||||||
|
- `auto-context`
|
||||||
|
- `query`
|
||||||
|
- `project-state` read
|
||||||
|
- Discrawl retrieval
|
||||||
|
|
||||||
|
These are additive and fail-open.
|
||||||
|
|
||||||
|
### Mutations requiring explicit human approval
|
||||||
|
|
||||||
|
The following are operator actions, not conversational automation:
|
||||||
|
|
||||||
|
- `register-project`
|
||||||
|
- `update-project`
|
||||||
|
- `refresh-project`
|
||||||
|
- `ingest-sources`
|
||||||
|
- `project-state-set`
|
||||||
|
- `project-state-invalidate`
|
||||||
|
- `capture` when used as a durable write outside conservative logging policy
|
||||||
|
- `extract` with persistence
|
||||||
|
- `promote`
|
||||||
|
- `reject`
|
||||||
|
- future entity promotion or rejection
|
||||||
|
|
||||||
|
For Discord-originated paths, approval must satisfy the explicit approval rule above.
|
||||||
|
|
||||||
|
## Shared operator client rule
|
||||||
|
|
||||||
|
The preferred V1 architecture is:
|
||||||
|
|
||||||
|
- AtoCore HTTP API as system interface
|
||||||
|
- shared operator client as reusable mutating surface
|
||||||
|
- OpenClaw as a thin frontend and operator around that client
|
||||||
|
|
||||||
|
That avoids duplicating:
|
||||||
|
|
||||||
|
- project detection logic
|
||||||
|
- request logic
|
||||||
|
- failure handling
|
||||||
|
- mutation surface behavior
|
||||||
|
- approval wrappers
|
||||||
|
|
||||||
|
OpenClaw should keep its own high-level operating instructions, but it should not keep growing a parallel AtoCore mutation implementation.
|
||||||
|
|
||||||
|
## V1 boundary summary
|
||||||
|
|
||||||
|
### Allowed automatic behavior
|
||||||
|
|
||||||
|
- read-only retrieval
|
||||||
|
- context build
|
||||||
|
- Discrawl recall
|
||||||
|
- evidence collection
|
||||||
|
- nightly screening into reviewable output
|
||||||
|
- fail-open fallback when AtoCore is unavailable
|
||||||
|
|
||||||
|
### Allowed only after explicit human review or approval
|
||||||
|
|
||||||
|
- candidate persistence from evidence
|
||||||
|
- candidate promotion or rejection
|
||||||
|
- project refresh or ingestion
|
||||||
|
- registry mutation
|
||||||
|
- trusted project_state writes
|
||||||
|
|
||||||
|
### Not allowed as automatic behavior
|
||||||
|
|
||||||
|
- direct Discord -> project_state writes
|
||||||
|
- direct Discord -> register / update / refresh / ingest / promote / reject
|
||||||
|
- hidden mutation inside the screener
|
||||||
|
- treating PKM as the main operator-instruction layer for AtoCore behavior
|
||||||
|
|
||||||
|
## Deferred from V1
|
||||||
|
|
||||||
|
Screenpipe is deferred.
|
||||||
|
It is not an active input lane in V1 and it must not become a runtime, skill, or policy dependency in V1.
|
||||||
|
If it is revisited later, it must be treated as a separate future design decision, not as an implicit V1 extension.
|
||||||
|
|
||||||
|
## Bottom line
|
||||||
|
|
||||||
|
The safe V1 architecture is not "everything can write into AtoCore."
|
||||||
|
It is a layered system where:
|
||||||
|
|
||||||
|
- evidence comes in broadly
|
||||||
|
- trust rises slowly
|
||||||
|
- canonical homes stay singular
|
||||||
|
- OpenClaw remains the operator
|
||||||
|
- AtoCore remains the additive machine-memory and trusted-state layer
|
||||||
|
- the shared operator client becomes the one reusable write-capable surface
|
||||||
207
docs/openclaw-atocore-v1-proof-runbook.md
Normal file
207
docs/openclaw-atocore-v1-proof-runbook.md
Normal file
@@ -0,0 +1,207 @@
|
|||||||
|
# OpenClaw x AtoCore V1 Proof Runbook
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
This is the concise proof and operator runbook for the final V1 policy.
|
||||||
|
It shows, in concrete paths, that:
|
||||||
|
|
||||||
|
- a Discord-originated signal cannot reach `project_state` without candidate or review gating
|
||||||
|
- Discord cannot directly execute `register-project`, `update-project`, `refresh-project`, `ingest-sources`, `promote`, or `reject` without explicit approval
|
||||||
|
|
||||||
|
## Explicit approval definition
|
||||||
|
|
||||||
|
For V1, explicit approval means:
|
||||||
|
|
||||||
|
- the human directly instructs the specific mutating action
|
||||||
|
- the instruction is in the current thread or current session
|
||||||
|
- the approval is for that exact action
|
||||||
|
- the approval is not inferred from evidence, archives, or screener output
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- "refresh p05 now"
|
||||||
|
- "register this project"
|
||||||
|
- "promote that candidate"
|
||||||
|
- "write this to project_state"
|
||||||
|
|
||||||
|
Non-examples:
|
||||||
|
|
||||||
|
- "this looks like the current answer"
|
||||||
|
- "we should probably refresh this"
|
||||||
|
- an old Discord thread saying a refresh might help
|
||||||
|
- a screener report recommending a mutation
|
||||||
|
|
||||||
|
## Proof 1 - Discord cannot directly reach project_state
|
||||||
|
|
||||||
|
Blocked path:
|
||||||
|
|
||||||
|
```text
|
||||||
|
Discord message
|
||||||
|
-> evidence
|
||||||
|
-> optional candidate
|
||||||
|
-> review
|
||||||
|
-> optional explicit curation
|
||||||
|
-> project_state
|
||||||
|
```
|
||||||
|
|
||||||
|
What is blocked:
|
||||||
|
|
||||||
|
- Discord -> project_state directly
|
||||||
|
- Discrawl archive -> project_state directly
|
||||||
|
- screener output -> project_state directly
|
||||||
|
|
||||||
|
What is allowed:
|
||||||
|
|
||||||
|
1. Discord message enters the evidence lane.
|
||||||
|
2. It may become a memory or entity candidate after screening.
|
||||||
|
3. A human reviews the candidate.
|
||||||
|
4. If the fact is truly the current trusted answer, the human may explicitly curate it into `project_state`.
|
||||||
|
|
||||||
|
Conclusion:
|
||||||
|
|
||||||
|
`project_state` is reachable only after review and explicit curation. There is no direct Discord-originated write path.
|
||||||
|
|
||||||
|
## Proof 2 - Discord cannot directly execute mutating operator actions
|
||||||
|
|
||||||
|
Blocked direct actions:
|
||||||
|
|
||||||
|
- `register-project`
|
||||||
|
- `update-project`
|
||||||
|
- `refresh-project`
|
||||||
|
- `ingest-sources`
|
||||||
|
- `promote`
|
||||||
|
- `reject`
|
||||||
|
- `project-state-set`
|
||||||
|
- `project-state-invalidate`
|
||||||
|
|
||||||
|
Blocked path:
|
||||||
|
|
||||||
|
```text
|
||||||
|
Discord message
|
||||||
|
-> evidence or operator request context
|
||||||
|
-X-> direct mutation
|
||||||
|
```
|
||||||
|
|
||||||
|
Allowed path:
|
||||||
|
|
||||||
|
```text
|
||||||
|
Discord message
|
||||||
|
-> OpenClaw recognizes requested operator action
|
||||||
|
-> explicit approval check
|
||||||
|
-> approved operator action
|
||||||
|
-> shared operator client or helper call
|
||||||
|
```
|
||||||
|
|
||||||
|
Conclusion:
|
||||||
|
|
||||||
|
Discord can request or justify a mutation, but it cannot perform it on its own.
|
||||||
|
|
||||||
|
## Proof 3 - Discrawl does not create approval
|
||||||
|
|
||||||
|
Discrawl is evidence retrieval.
|
||||||
|
It may surface:
|
||||||
|
|
||||||
|
- prior discussions
|
||||||
|
- earlier decisions
|
||||||
|
- unresolved questions
|
||||||
|
- prior suggestions to mutate state
|
||||||
|
|
||||||
|
It does not create approval for mutation.
|
||||||
|
|
||||||
|
Blocked path:
|
||||||
|
|
||||||
|
```text
|
||||||
|
Discrawl recall
|
||||||
|
-X-> refresh-project
|
||||||
|
-X-> promote
|
||||||
|
-X-> project_state write
|
||||||
|
```
|
||||||
|
|
||||||
|
Allowed path:
|
||||||
|
|
||||||
|
```text
|
||||||
|
Discrawl recall
|
||||||
|
-> evidence for human review
|
||||||
|
-> explicit approval in current thread/session if mutation is desired
|
||||||
|
-> approved operator action
|
||||||
|
```
|
||||||
|
|
||||||
|
Conclusion:
|
||||||
|
|
||||||
|
Archive recall informs review. It does not authorize writes.
|
||||||
|
|
||||||
|
## Proof 4 - Screener has no hidden mutation lane
|
||||||
|
|
||||||
|
The screener may:
|
||||||
|
|
||||||
|
- gather evidence
|
||||||
|
- classify evidence
|
||||||
|
- prepare candidates
|
||||||
|
- prepare operator queues
|
||||||
|
- report contradictions or missing context
|
||||||
|
|
||||||
|
The screener may not:
|
||||||
|
|
||||||
|
- write `project_state`
|
||||||
|
- mutate registry state
|
||||||
|
- refresh or ingest directly
|
||||||
|
- promote or reject directly
|
||||||
|
|
||||||
|
Blocked path:
|
||||||
|
|
||||||
|
```text
|
||||||
|
screener output
|
||||||
|
-X-> hidden mutation
|
||||||
|
```
|
||||||
|
|
||||||
|
Allowed path:
|
||||||
|
|
||||||
|
```text
|
||||||
|
screener output
|
||||||
|
-> review queue or operator queue
|
||||||
|
-> explicit approval if mutation is wanted
|
||||||
|
-> approved operator action
|
||||||
|
```
|
||||||
|
|
||||||
|
Conclusion:
|
||||||
|
|
||||||
|
The screener is a filter, not a hidden writer.
|
||||||
|
|
||||||
|
## Minimal operator decision table
|
||||||
|
|
||||||
|
| Situation | Allowed next step | Blocked next step |
|
||||||
|
|---|---|---|
|
||||||
|
| Discord says "this is the current answer" | evidence, then review, then possible explicit curation | direct `project_state` write |
|
||||||
|
| Discord says "refresh p05" without direct instruction | ask for explicit approval | direct `refresh-project` |
|
||||||
|
| Discord says "refresh p05 now" | approved operator action may run | none, if approval is explicit |
|
||||||
|
| Discrawl finds an old thread asking for registration | use as review context only | direct `register-project` |
|
||||||
|
| Screener recommends promotion | ask for explicit review decision | direct `promote` |
|
||||||
|
|
||||||
|
## Practical runbook
|
||||||
|
|
||||||
|
### Case A - current-truth claim from Discord
|
||||||
|
|
||||||
|
1. Treat the message as evidence.
|
||||||
|
2. Check the canonical home.
|
||||||
|
3. If needed, prepare a candidate or review note.
|
||||||
|
4. Do not write `project_state` unless the human explicitly approves that curation step.
|
||||||
|
|
||||||
|
### Case B - requested refresh from Discord
|
||||||
|
|
||||||
|
1. Determine whether the message is a direct instruction or only discussion.
|
||||||
|
2. If not explicit, ask for approval.
|
||||||
|
3. Only perform `refresh-project` after explicit approval in the current thread or session.
|
||||||
|
|
||||||
|
### Case C - candidate promotion request
|
||||||
|
|
||||||
|
1. Candidate exists or is proposed.
|
||||||
|
2. Review the evidence and the candidate text.
|
||||||
|
3. Only perform `promote` or `reject` after explicit review decision.
|
||||||
|
|
||||||
|
## Bottom line
|
||||||
|
|
||||||
|
The V1 rule is easy to test:
|
||||||
|
|
||||||
|
If the path starts from Discord or Discrawl and ends in trusted or operator state, there must be a visible approval or review step in the middle.
|
||||||
|
|
||||||
|
If that visible step is missing, the action is not allowed.
|
||||||
184
docs/openclaw-atocore-write-policy-matrix.md
Normal file
184
docs/openclaw-atocore-write-policy-matrix.md
Normal file
@@ -0,0 +1,184 @@
|
|||||||
|
# OpenClaw x AtoCore V1 Write-Policy Matrix
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
This matrix defines what each source is allowed to write to each target in V1.
|
||||||
|
|
||||||
|
Policy meanings:
|
||||||
|
|
||||||
|
- `auto-write` = allowed automatically without a human approval gate
|
||||||
|
- `candidate-only` = may create reviewable candidate material, but not active truth
|
||||||
|
- `human-review` = allowed only after explicit human review or explicit human approval
|
||||||
|
- `never-auto-write` = never allowed as an automatic write path
|
||||||
|
|
||||||
|
## Explicit approval rule
|
||||||
|
|
||||||
|
In this matrix, `human-review` is concrete, not vague.
|
||||||
|
For Discord-originated or Discrawl-originated paths it means:
|
||||||
|
|
||||||
|
- the human directly instructs the specific mutating action
|
||||||
|
- the instruction is in the current thread or current session
|
||||||
|
- the approval is for that specific action
|
||||||
|
- the approval is not inferred from evidence, archives, screener output, or general discussion
|
||||||
|
|
||||||
|
Examples of explicit approval:
|
||||||
|
|
||||||
|
- "refresh p05 now"
|
||||||
|
- "register this project"
|
||||||
|
- "promote this candidate"
|
||||||
|
- "write this to project_state"
|
||||||
|
|
||||||
|
Non-examples:
|
||||||
|
|
||||||
|
- "this looks important"
|
||||||
|
- "we should probably refresh this"
|
||||||
|
- archived discussion that once mentioned a similar mutation
|
||||||
|
- a screener note recommending promotion
|
||||||
|
|
||||||
|
## V1 scope note
|
||||||
|
|
||||||
|
V1 active inputs are:
|
||||||
|
|
||||||
|
- Discord and Discrawl
|
||||||
|
- OpenClaw interaction evidence
|
||||||
|
- PKM, repos, and KB sources
|
||||||
|
- read-only AtoCore context for comparison and deduplication
|
||||||
|
|
||||||
|
## Targets
|
||||||
|
|
||||||
|
The targets below are the only ones that matter for this policy.
|
||||||
|
|
||||||
|
- Evidence artifacts
|
||||||
|
- Memory candidates
|
||||||
|
- Active memories
|
||||||
|
- Entity candidates
|
||||||
|
- Active entities
|
||||||
|
- Trusted project_state
|
||||||
|
- Registry / refresh / ingest mutations
|
||||||
|
- Review actions
|
||||||
|
|
||||||
|
## Matrix
|
||||||
|
|
||||||
|
| Source | Target | Policy | Notes / gate |
|
||||||
|
|---|---|---|---|
|
||||||
|
| Discord live message | Evidence artifacts | auto-write | Safe evidence capture or archive only |
|
||||||
|
| Discord live message | Memory candidates | candidate-only | Only after screening or extraction; never direct active write |
|
||||||
|
| Discord live message | Active memories | human-review | Promote only after review of the candidate and evidence |
|
||||||
|
| Discord live message | Entity candidates | candidate-only | Only when structured signal is extracted from evidence |
|
||||||
|
| Discord live message | Active entities | human-review | Review required before promotion |
|
||||||
|
| Discord live message | Trusted project_state | human-review | Only via explicit curation; never directly from raw chat |
|
||||||
|
| Discord live message | Registry / refresh / ingest mutations | human-review | Requires explicit approval in the current thread or session |
|
||||||
|
| Discord live message | Review actions | human-review | Discord cannot silently promote or reject on its own |
|
||||||
|
| Discrawl archive result | Evidence artifacts | auto-write | Archive or search result is evidence by design |
|
||||||
|
| Discrawl archive result | Memory candidates | candidate-only | Extract reviewed signal from archived conversation |
|
||||||
|
| Discrawl archive result | Active memories | human-review | Promotion required |
|
||||||
|
| Discrawl archive result | Entity candidates | candidate-only | Archived discussion may justify candidate creation |
|
||||||
|
| Discrawl archive result | Active entities | human-review | Promotion required |
|
||||||
|
| Discrawl archive result | Trusted project_state | human-review | Must be explicitly curated; never inferred directly from archive |
|
||||||
|
| Discrawl archive result | Registry / refresh / ingest mutations | human-review | Archive recall cannot directly mutate operator state |
|
||||||
|
| Discrawl archive result | Review actions | human-review | Archive evidence informs review; it does not perform review |
|
||||||
|
| OpenClaw read/query flow | Evidence artifacts | auto-write | Conservative interaction or evidence logging is acceptable |
|
||||||
|
| OpenClaw read/query flow | Memory candidates | candidate-only | Only through explicit extraction path |
|
||||||
|
| OpenClaw read/query flow | Active memories | human-review | Requires operator review |
|
||||||
|
| OpenClaw read/query flow | Entity candidates | candidate-only | Future extraction path |
|
||||||
|
| OpenClaw read/query flow | Active entities | human-review | Requires operator review |
|
||||||
|
| OpenClaw read/query flow | Trusted project_state | never-auto-write | Read/query flow must stay additive |
|
||||||
|
| OpenClaw read/query flow | Registry / refresh / ingest mutations | never-auto-write | Read/query automation must not mutate operator state |
|
||||||
|
| OpenClaw read/query flow | Review actions | never-auto-write | Read automation cannot silently promote or reject |
|
||||||
|
| OpenClaw approved operator action | Evidence artifacts | auto-write | May create operator or audit artifacts |
|
||||||
|
| OpenClaw approved operator action | Memory candidates | human-review | Candidate persistence is itself an approved operator action |
|
||||||
|
| OpenClaw approved operator action | Active memories | human-review | Promotion allowed only through reviewed operator action |
|
||||||
|
| OpenClaw approved operator action | Entity candidates | human-review | Same rule for future entities |
|
||||||
|
| OpenClaw approved operator action | Active entities | human-review | Promotion allowed only through reviewed operator action |
|
||||||
|
| OpenClaw approved operator action | Trusted project_state | human-review | Allowed only as explicit curation |
|
||||||
|
| OpenClaw approved operator action | Registry / refresh / ingest mutations | human-review | Explicit approval required |
|
||||||
|
| OpenClaw approved operator action | Review actions | human-review | Explicit review required |
|
||||||
|
| PKM note | Evidence artifacts | human-review | Snapshotting into evidence is optional, not the primary path |
|
||||||
|
| PKM note | Memory candidates | candidate-only | Extraction from PKM is allowed into the candidate lane |
|
||||||
|
| PKM note | Active memories | human-review | Promotion required |
|
||||||
|
| PKM note | Entity candidates | candidate-only | Extract structured signal into the candidate lane |
|
||||||
|
| PKM note | Active entities | human-review | Promotion required |
|
||||||
|
| PKM note | Trusted project_state | human-review | Only via explicit curation of current truth |
|
||||||
|
| PKM note | Registry / refresh / ingest mutations | human-review | A human may choose to refresh based on PKM changes |
|
||||||
|
| PKM note | Review actions | human-review | PKM may support the decision, but not execute it automatically |
|
||||||
|
| Repo / KB source | Evidence artifacts | human-review | Optional audit or screener snapshot only |
|
||||||
|
| Repo / KB source | Memory candidates | candidate-only | Extract loose durable signal if useful |
|
||||||
|
| Repo / KB source | Active memories | human-review | Promotion required |
|
||||||
|
| Repo / KB source | Entity candidates | candidate-only | Strong future path for structured facts |
|
||||||
|
| Repo / KB source | Active entities | human-review | Promotion required |
|
||||||
|
| Repo / KB source | Trusted project_state | human-review | Explicit curation only |
|
||||||
|
| Repo / KB source | Registry / refresh / ingest mutations | human-review | A human may refresh or ingest based on source changes |
|
||||||
|
| Repo / KB source | Review actions | human-review | Source can justify review; it does not perform review |
|
||||||
|
| AtoCore active memory | Evidence artifacts | never-auto-write | Active memory is already above the evidence layer |
|
||||||
|
| AtoCore active memory | Memory candidates | never-auto-write | Do not recursively re-candidate active memory |
|
||||||
|
| AtoCore active memory | Active memories | never-auto-write | Already active |
|
||||||
|
| AtoCore active memory | Entity candidates | human-review | Graduation proposal only with review |
|
||||||
|
| AtoCore active memory | Active entities | human-review | Requires graduation plus promotion |
|
||||||
|
| AtoCore active memory | Trusted project_state | human-review | A human may explicitly curate current truth from memory |
|
||||||
|
| AtoCore active memory | Registry / refresh / ingest mutations | never-auto-write | Memory must not mutate registry or ingestion state |
|
||||||
|
| AtoCore active memory | Review actions | human-review | Human reviewer decides |
|
||||||
|
| AtoCore active entity | Evidence artifacts | never-auto-write | Already above the evidence layer |
|
||||||
|
| AtoCore active entity | Memory candidates | never-auto-write | Do not backflow structured truth into memory candidates automatically |
|
||||||
|
| AtoCore active entity | Active memories | never-auto-write | Canonical home is the entity, not a new memory |
|
||||||
|
| AtoCore active entity | Entity candidates | never-auto-write | Already active |
|
||||||
|
| AtoCore active entity | Active entities | never-auto-write | Already active |
|
||||||
|
| AtoCore active entity | Trusted project_state | human-review | Explicit curation may publish the current trusted answer |
|
||||||
|
| AtoCore active entity | Registry / refresh / ingest mutations | never-auto-write | Entities do not operate the registry |
|
||||||
|
| AtoCore active entity | Review actions | human-review | Human reviewer decides |
|
||||||
|
|
||||||
|
## Discord-originated trace examples
|
||||||
|
|
||||||
|
### Example 1 - conversational decision in Discord
|
||||||
|
|
||||||
|
Allowed path:
|
||||||
|
|
||||||
|
1. Discord live message -> Evidence artifacts (`auto-write`)
|
||||||
|
2. Evidence artifacts -> Memory candidates or Entity candidates (`candidate-only`)
|
||||||
|
3. Candidate -> Active memory or Active entity (`human-review`)
|
||||||
|
4. If it becomes the current trusted answer, a human may explicitly curate it into Trusted project_state (`human-review`)
|
||||||
|
|
||||||
|
There is no direct Discord -> project_state automatic path.
|
||||||
|
|
||||||
|
### Example 2 - archived Discord thread via Discrawl
|
||||||
|
|
||||||
|
Allowed path:
|
||||||
|
|
||||||
|
1. Discrawl result -> Evidence artifacts (`auto-write`)
|
||||||
|
2. Discrawl result -> Memory candidates or Entity candidates (`candidate-only`)
|
||||||
|
3. Human review decides promotion
|
||||||
|
4. Optional explicit curation into project_state later
|
||||||
|
|
||||||
|
Again, there is no direct archive -> trusted truth path.
|
||||||
|
|
||||||
|
### Example 3 - Discord request to refresh a project
|
||||||
|
|
||||||
|
Allowed path:
|
||||||
|
|
||||||
|
1. Discord message is evidence of requested operator intent
|
||||||
|
2. No mutation happens automatically
|
||||||
|
3. OpenClaw requires explicit approval in the current thread or session for `refresh-project`
|
||||||
|
4. Only then may OpenClaw perform the approved operator action
|
||||||
|
|
||||||
|
There is no direct Discord -> refresh path without explicit approval.
|
||||||
|
|
||||||
|
## V1 interpretation rules
|
||||||
|
|
||||||
|
1. Evidence can flow in broadly.
|
||||||
|
2. Truth can only rise through review.
|
||||||
|
3. project_state is the narrowest lane.
|
||||||
|
4. Registry and ingestion operations are operator actions, not evidence effects.
|
||||||
|
5. Discord-originated paths can inform operator actions, but they cannot silently execute them.
|
||||||
|
6. Deferred sources that are out of V1 scope have no automatic or manual role in this V1 matrix.
|
||||||
|
|
||||||
|
## Deferred from V1
|
||||||
|
|
||||||
|
Screenpipe is deferred and intentionally omitted from this V1 matrix.
|
||||||
|
|
||||||
|
## Bottom line
|
||||||
|
|
||||||
|
If a source is noisy, conversational, or archived, its maximum automatic privilege in V1 is:
|
||||||
|
|
||||||
|
- evidence capture, or
|
||||||
|
- candidate creation
|
||||||
|
|
||||||
|
Everything above that requires explicit human review or explicit human approval.
|
||||||
51
scripts/eval_data/candidate_queue_snapshot.jsonl
Normal file
51
scripts/eval_data/candidate_queue_snapshot.jsonl
Normal file
@@ -0,0 +1,51 @@
|
|||||||
|
{"id": "0dd85386-cace-4f9a-9098-c6732f3c64fa", "type": "project", "project": "atocore", "confidence": 0.5, "content": "AtoCore roadmap: (1) extractor improvement, (2) harness expansion, (3) Wave 2 ingestion, (4) OpenClaw finish; steps 1+2 are current mini-phase"}
|
||||||
|
{"id": "8939b875-152c-4c90-8614-3cfdc64cd1d6", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "AtoCore is FastAPI (Python 3.12, SQLite + ChromaDB) on Dalidou home server (dalidou:8100), repo C:\\Users\\antoi\\ATOCore, data /srv/storage/atocore/, ingests Obsidian vault + Google Drive into vector memory system."}
|
||||||
|
{"id": "93e37d2a-b512-4a97-b230-e64ac913d087", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "Deploy AtoCore: git push origin main, then ssh papa@dalidou and run /srv/storage/atocore/app/deploy/dalidou/deploy.sh"}
|
||||||
|
{"id": "4b82fe01-4393-464a-b935-9ad5d112d3d8", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Do not add memory extraction to interaction capture hot path; keep extraction as separate batch/manual step. Reason: latency and queue noise before review rhythm is comfortable."}
|
||||||
|
{"id": "c873ec00-063e-488c-ad32-1233290a3feb", "type": "project", "project": "atocore", "confidence": 0.5, "content": "As of 2026-04-11, approved roadmap in order: observe reinforcement, batch extraction, candidate triage, off-Dalidou backup, retrieval quality review."}
|
||||||
|
{"id": "665cdd27-0057-4e73-82f5-5d4f47189b5d", "type": "project", "project": "atocore", "confidence": 0.5, "content": "AtoCore adopts DEV-LEDGER.md as shared operating memory with stable headers; updated at session boundaries"}
|
||||||
|
{"id": "5f89c51d-7e8b-4fb9-830d-a35bb649f9f7", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Codex branches for AtoCore fork from main (never orphan); use naming pattern codex/<topic>"}
|
||||||
|
{"id": "25ac367c-8bbe-4ba4-8d8e-d533db33f2d9", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "In AtoCore, Claude builds and Codex audits; never work in parallel on same files"}
|
||||||
|
{"id": "89446ebe-fd42-4177-80db-3657bc41d048", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "In AtoCore, P1-severity findings in DEV-LEDGER.md block further main commits until acknowledged"}
|
||||||
|
{"id": "1f077e98-f945-4480-96ab-110b0671ebc6", "type": "adaptation", "project": "atocore", "confidence": 0.5, "content": "Every AtoCore session appends to DEV-LEDGER.md Session Log and updates Orientation before ending"}
|
||||||
|
{"id": "89f60018-c23b-4b2f-80ca-e6f7d02c5cd3", "type": "preference", "project": "atocore", "confidence": 0.5, "content": "User prefers receiving standalone testing prompts they can paste into Claude Code on target deployments rather than having the assistant run tests directly."}
|
||||||
|
{"id": "2f69a6ed-6de2-4565-87df-1ea3e8c42963", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "USB SSD on RPi is mandatory for polishing telemetry storage; must be independent of network for data integrity during runs."}
|
||||||
|
{"id": "6bcaebde-9e45-4de5-a220-65d9c4cd451e", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Use Tailscale mesh for RPi remote access to provide SSH, file transfer, and NAT traversal without port forwarding."}
|
||||||
|
{"id": "82f17880-92da-485e-a24a-0599ab1836e7", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Auto-sync telemetry data via rsync over Tailscale after runs complete; fire-and-forget pattern with automatic retry on network interruption."}
|
||||||
|
{"id": "2dd36f74-db47-4c72-a185-fec025d07d4f", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Real-time telemetry monitoring should target 10 Hz downsampling; full 100 Hz streaming over network is not necessary."}
|
||||||
|
{"id": "7519d82b-8065-41f0-812e-9c1a3573d7b9", "type": "knowledge", "project": "p06-polisher", "confidence": 0.5, "content": "Polishing telemetry data rate is approximately 29 MB per hour (100 Hz × 20 channels × 4 bytes = 8 KB/s)."}
|
||||||
|
{"id": "78678162-5754-478b-b1fc-e25f22e0ee03", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Machine spec (shareable) + Atomaste spec (internal) separate concerns. Machine spec hides program generation as 'separate scope' to protect IP/business strategy."}
|
||||||
|
{"id": "6657b4ae-d4ec-4fec-a66f-2975cdb10d13", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Firmware interface contract is invariant: controller-job.v1 input, run-log.v1 + telemetry output. No firmware changes needed regardless of program generation implementation."}
|
||||||
|
{"id": "6d6f4fe9-73e5-449f-a802-6dc0a974f87b", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Atomaste sim spec documents forward/return paths, calibration model (Preston k), translation loss, and service/IP strategy—details hidden from shareable machine spec."}
|
||||||
|
{"id": "932f38df-58f3-49c2-9968-8d422dc54b42", "type": "project", "project": "", "confidence": 0.5, "content": "USB SSD mandatory for storage (not SD card); directory structure /data/runs/{id}/, /data/manual/{id}/; status.json for machine state"}
|
||||||
|
{"id": "2b3178e8-fe38-4338-b2b0-75a01da18cea", "type": "project", "project": "", "confidence": 0.5, "content": "RPi joins Tailscale mesh for remote access over SSH VPN; no public IP or port forwarding; fully offline operation"}
|
||||||
|
{"id": "254c394d-3f80-4b34-a891-9f1cbfec74d7", "type": "project", "project": "", "confidence": 0.5, "content": "Data synchronization via rsync over Tailscale, failure-tolerant and non-blocking; USB stick as manual fallback"}
|
||||||
|
{"id": "ee626650-1ee0-439c-85c9-6d32a876f239", "type": "project", "project": "", "confidence": 0.5, "content": "Machine design principle: works fully offline and independently; network connection is for remote access only"}
|
||||||
|
{"id": "34add99d-8d2e-4586-b002-fc7b7d22bcb3", "type": "project", "project": "", "confidence": 0.5, "content": "No cloud, no real-time streaming, no remote control features in design scope"}
|
||||||
|
{"id": "993e0afe-9910-4984-b608-f5e9de7c0453", "type": "project", "project": "atocore", "confidence": 0.5, "content": "P1: Reflection loop integration incomplete—extraction remains manual (POST /interactions/{id}/extract), not auto-triggered with reinforcement. Live capture won't auto-populate candidate review queue."}
|
||||||
|
{"id": "bdf488d7-9200-441e-afbf-5335020ea78b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "P1: Project memories excluded from context injection; build_context() requests [\"identity\", \"preference\"] only. Reinforcement signal doesn't reach assembled context packs."}
|
||||||
|
{"id": "188197af-a61d-4616-9e39-712aeaaadf61", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Current batch-extract rules produce only 1 candidate from 42 real captures. Extractor needs conversational-cue detection or LLM-assisted path to improve yield."}
|
||||||
|
{"id": "acffcaa4-5966-4ec1-a0b2-3b8dcebe75bd", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Next priority: extractor rule expansion (cheapest validation of reflection loop), then Wave 2 trusted operational ingestion (master-plan priority). Defer retrieval eval harness focus."}
|
||||||
|
{"id": "1b44a886-a5af-4426-bf10-a92baf3a6502", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "Alias canonicalization fix (resolve_project_name() boundary) is consistently applied across project state, memories, interactions, and context lookup. Code review approved directionally."}
|
||||||
|
{"id": "e8f4e704-367b-4759-b20c-da0ccf06cf7d", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Machine capabilities now define z_type: engage_retract and cam_type: mechanical_with_encoder instead of actuator-driven setpoints."}
|
||||||
|
{"id": "ab2b607c-52b1-405f-a874-c6078393c21c", "type": "knowledge", "project": "", "confidence": 0.5, "content": "Codex is an audit agent; communicate with it via markdown prompts with numbered steps; it updates findings via commits to codex/* branches or direct messages."}
|
||||||
|
{"id": "5a5fd29d-291f-4e22-88fe-825cf55f745a", "type": "preference", "project": "", "confidence": 0.5, "content": "Audit-first workflow recommended: have codex audit DEV-LEDGER.md and recent commits before execution; validates round-trip, catches errors early."}
|
||||||
|
{"id": "4c238106-017e-4283-99a1-639497b6ddde", "type": "knowledge", "project": "", "confidence": 0.5, "content": "DEV-LEDGER.md at repo root is the shared coordination document with Orientation, Active Plan, and Open Review Findings sections."}
|
||||||
|
{"id": "83aed988-4257-4220-b612-6c725d6cd95a", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Roadmap: Extractor improvement → Harness expansion → Wave 2 trusted operational ingestion → Finish OpenClaw integration (in that order)"}
|
||||||
|
{"id": "95d87d1a-5daa-414d-95ff-a344a62e0b6b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Phase 1 (Extractor): eval-driven loop—label captures, improve rules/add LLM mode, measure yield & FP, stop when queue reviewable (not coverage metrics)"}
|
||||||
|
{"id": "7aafb588-51b0-4536-a414-ebaaea924b98", "type": "project", "project": "atocore", "confidence": 0.5, "content": "Phases 1 & 2 (Extractor + Harness) are a mini-phase; without harness, extractor improvements are blind edits"}
|
||||||
|
{"id": "aa50c51a-27d7-4db9-b7a3-7ca75dba2118", "type": "knowledge", "project": "", "confidence": 0.5, "content": "Dalidou stores Claude Code interactions via a Stop hook that fires after each turn and POSTs to http://dalidou:8100/interactions with client=claude-code parameter"}
|
||||||
|
{"id": "5951108b-3a5e-49d0-9308-dfab449664d3", "type": "adaptation", "project": "", "confidence": 0.5, "content": "Interaction capture system is passive and automatic; no manual action required, interactions accumulate automatically during normal Claude Code usage"}
|
||||||
|
{"id": "9d2cbbe9-cf2e-4aab-9cb8-c4951da70826", "type": "project", "project": "", "confidence": 0.5, "content": "Session Log/Ledger system tracks work state across sessions so future sessions immediately know what is true and what is next; phases marked by git SHAs."}
|
||||||
|
{"id": "db88eecf-e31a-4fee-b07d-0b51db7e315e", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore uses multi-model coordination: Claude and codex share DEV-LEDGER.md (current state / active plan / P1+P2 findings / recent decisions / commit log) read at session start, appended at session end"}
|
||||||
|
{"id": "8748f071-ff28-47a6-8504-65ca30a8336a", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore starts with manual-event-loop (/audit or /status prompts) using DEV-LEDGER.md before upgrading to automated git hooks/CI review"}
|
||||||
|
{"id": "f9210883-67a8-4dae-9f27-6b5ae7bd8a6b", "type": "project", "project": "atocore", "confidence": 0.5, "content": "atocore development involves coordinating between Claude and codex models with shared plan/review strategy and counter-validation to improve system quality"}
|
||||||
|
{"id": "85f008b9-2d6d-49ad-81a1-e254dac2a2ac", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Z-axis is a binary engage/retract mechanism (z_engaged bool), not continuous position control; confirmation timeout z_engage_timeout_s required."}
|
||||||
|
{"id": "0cc417ed-ac38-4231-9786-a9582ac6a60f", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Cam amplitude and offset are mechanically set by operator and read via encoders; no actuators control them, controller receives encoder telemetry only."}
|
||||||
|
{"id": "2e001aaf-0c5c-4547-9b96-ebc4172b258d", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Cam parameters in controller are expected_cam_amplitude_deg and expected_cam_offset_deg (read-only reference for verification), not command setpoints."}
|
||||||
|
{"id": "47778126-b0cf-41d9-9e21-f2418f53e792", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Manual mode UI displays cam encoder readings (cam_amplitude_deg, cam_offset_deg) as read-only for operator verification of mechanical setting."}
|
||||||
|
{"id": "410e4a70-ae12-4de2-8f31-071ffee3cad4", "type": "project", "project": "p06-polisher", "confidence": 0.5, "content": "Manual session log records cam_setting measured at session start; run-log segment actual block includes cam_amplitude_deg_mean and cam_offset_deg_mean."}
|
||||||
|
{"id": "e94f94f0-3538-40dd-aef2-0189eacc7eb7", "type": "knowledge", "project": "atocore", "confidence": 0.5, "content": "AtoCore deployments to dalidou use the script /srv/storage/atocore/app/deploy/dalidou/deploy.sh instead of manual docker commands"}
|
||||||
|
{"id": "23fa6fdf-cfb9-4850-ad04-3ea56551c30a", "type": "project", "project": "", "confidence": 0.5, "content": "Retrieval/extraction evaluation follows 8-day mini-phase plan with hard gates to prevent scope drift. Preflight checks must validate git SHAs, baselines, and fixture stability before coding."}
|
||||||
|
{"id": "3e1fad28-031b-4670-a9d0-0af2e8ba1361", "type": "project", "project": "", "confidence": 0.5, "content": "Day 1: Create labeled extractor eval set from 30 captures (10 zero-candidate, 10 single-candidate, 10 ambiguous) with metadata; create scoring tool to measure precision/recall."}
|
||||||
|
{"id": "d49378a4-d03c-4730-be87-f0fcb2d199db", "type": "project", "project": "", "confidence": 0.5, "content": "Day 2: Measure current extractor against labeled set, recording yield, true/false positives, and false negatives by pattern."}
|
||||||
145
scripts/eval_data/extractor_labels_2026-04-11.json
Normal file
145
scripts/eval_data/extractor_labels_2026-04-11.json
Normal file
@@ -0,0 +1,145 @@
|
|||||||
|
{
|
||||||
|
"version": "0.1",
|
||||||
|
"frozen_at": "2026-04-11",
|
||||||
|
"snapshot_file": "scripts/eval_data/interactions_snapshot_2026-04-11.json",
|
||||||
|
"labeled_count": 20,
|
||||||
|
"plan_deviation": "Codex's plan called for 30 labeled interactions (10 zero / 10 plausible / 10 ambiguous). Actual corpus is heavily skewed toward instructional/status content; after reading 20 drawn by length-stratified random sample, the honest positive rate is ~25% (5/20). Labeling more would mostly add zeros; the Day 2 measurement is not bottlenecked on sample size.",
|
||||||
|
"positive_count": 5,
|
||||||
|
"labels": [
|
||||||
|
{
|
||||||
|
"id": "ab239158-d6ac-4c51-b6e4-dd4ccea384a2",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Instructional deploy guidance. No durable claim."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "da153f2a-b20a-4dee-8c72-431ebb71f08c",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "'Deploy still in progress.' Pure status."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "7d8371ee-c6d3-4dfe-a7b0-2d091f075c15",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Git command walkthrough. No durable claim."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "14bf3f90-e318-466e-81ac-d35522741ba5",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Ledger status update. Transient fact, not a durable memory candidate."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "8f855235-c38d-4c27-9f2b-8530ebe1a2d8",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Short-term recommendation ('merge to main and deploy'), not a standing decision."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "04a96eb5-cd00-4e9f-9252-b2cc919000a4",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Dev server config table. Operational detail, not a memory."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "79d606ed-8981-454a-83af-c25226b1b65c",
|
||||||
|
"expected_count": 1,
|
||||||
|
"expected_type": "adaptation",
|
||||||
|
"expected_project": "",
|
||||||
|
"expected_snippet": "shared DEV-LEDGER as operating memory",
|
||||||
|
"miss_class": "recommendation_prose",
|
||||||
|
"notes": "A recommendation that later became a ratified decision. Rule extractor would need a 'simplest version that could work today' / 'I'd start with' cue class."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "a6b0d279-c564-4bce-a703-e476f4a148ad",
|
||||||
|
"expected_count": 2,
|
||||||
|
"expected_type": "project",
|
||||||
|
"expected_project": "p06-polisher",
|
||||||
|
"expected_snippet": "z_engaged bool; cam amplitude set mechanically and read by encoders",
|
||||||
|
"miss_class": "architectural_change_summary",
|
||||||
|
"notes": "Two durable architectural facts about the polisher machine (Z-axis is engage/retract, cam is read-only). Extractor would need to recognize 'A is now B' / 'X removed, Y added' patterns."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "4e00e398-2e89-4653-8ee5-3f65c7f4d2d3",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Clarification question to user."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "a6a7816a-7590-4616-84f4-49d9054c2a91",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Instructional response offering two next moves."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "03527502-316a-4a3e-989c-00719392c7d1",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Troubleshooting a paste failure. Ephemeral."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "1fff59fc-545f-42df-9dd1-a0e6dec1b7ee",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Agreement + follow-up question. No durable claim."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "eb65dc18-0030-4720-ace7-f55af9df719d",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Explanation of how the capture hook works. Instructional."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "52c8c0f3-32fb-4b48-9065-73c778a08417",
|
||||||
|
"expected_count": 1,
|
||||||
|
"expected_type": "project",
|
||||||
|
"expected_project": "p06-polisher",
|
||||||
|
"expected_snippet": "USB SSD mandatory on RPi; Tailscale for remote access",
|
||||||
|
"miss_class": "spec_update_announcement",
|
||||||
|
"notes": "Concrete architectural commitments just added to the polisher spec. Phrased as '§17.1 Local Storage - USB SSD mandatory, not SD card.' The '§' section markers could be a new cue."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "32d40414-15af-47ee-944b-2cceae9574b8",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Session recap. Historical summary, not a durable memory."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "b6d2cdfc-37fb-459a-96bd-caefb9beaab4",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Deployment prompt for Dalidou. Operational, not a memory."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "ee03d823-931b-4d4e-9258-88b4ed5eeb07",
|
||||||
|
"expected_count": 2,
|
||||||
|
"expected_type": "knowledge",
|
||||||
|
"expected_project": "p06-polisher",
|
||||||
|
"expected_snippet": "USB SSD is non-negotiable for local storage; Tailscale mesh for SSH/file transfer",
|
||||||
|
"miss_class": "layered_recommendation",
|
||||||
|
"notes": "Layered infra recommendation with 'non-negotiable' / 'strongly recommended' strength markers. The 'non-negotiable' token could be a new cue class."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "dd234d9f-0d1c-47e8-b01c-eebcb568c7e7",
|
||||||
|
"expected_count": 1,
|
||||||
|
"expected_type": "project",
|
||||||
|
"expected_project": "p06-polisher",
|
||||||
|
"expected_snippet": "interface contract is identical regardless of who generates the programs; machine is a standalone box",
|
||||||
|
"miss_class": "alignment_assertion",
|
||||||
|
"notes": "Architectural invariant assertion. '**Alignment verified**' / 'nothing changes for X' style. Likely too subtle for rule matching without LLM assistance."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "1f95891a-cf37-400e-9d68-4fad8e04dcbb",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Huge session handoff prompt. Informational only."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "5580950f-d010-4544-be4b-b3071271a698",
|
||||||
|
"expected_count": 0,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Ledger schema sketch. Structural design proposal, later ratified — but the same idea was already captured as a ratified decision in the recent decisions section, so not worth re-extracting from this conversational form."
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
518
scripts/eval_data/extractor_llm_baseline_2026-04-11.json
Normal file
518
scripts/eval_data/extractor_llm_baseline_2026-04-11.json
Normal file
@@ -0,0 +1,518 @@
|
|||||||
|
{
|
||||||
|
"summary": {
|
||||||
|
"total": 20,
|
||||||
|
"exact_match": 6,
|
||||||
|
"positive_expected": 5,
|
||||||
|
"total_expected_candidates": 7,
|
||||||
|
"total_actual_candidates": 51,
|
||||||
|
"yield_rate": 2.55,
|
||||||
|
"recall": 1.0,
|
||||||
|
"precision": 0.357,
|
||||||
|
"false_positive_interactions": 9,
|
||||||
|
"false_negative_interactions": 0,
|
||||||
|
"miss_classes": {},
|
||||||
|
"mode": "llm"
|
||||||
|
},
|
||||||
|
"results": [
|
||||||
|
{
|
||||||
|
"id": "ab239158-d6ac-4c51-b6e4-dd4ccea384a2",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 1,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Instructional deploy guidance. No durable claim.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "knowledge",
|
||||||
|
"content": "AtoCore deployments to dalidou use the script /srv/storage/atocore/app/deploy/dalidou/deploy.sh instead of manual docker commands",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "da153f2a-b20a-4dee-8c72-431ebb71f08c",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 0,
|
||||||
|
"ok": true,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "'Deploy still in progress.' Pure status.",
|
||||||
|
"actual_candidates": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "7d8371ee-c6d3-4dfe-a7b0-2d091f075c15",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 0,
|
||||||
|
"ok": true,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Git command walkthrough. No durable claim.",
|
||||||
|
"actual_candidates": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "14bf3f90-e318-466e-81ac-d35522741ba5",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 4,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Ledger status update. Transient fact, not a durable memory candidate.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Retrieval/extraction evaluation follows 8-day mini-phase plan with hard gates to prevent scope drift. Preflight checks must validate git SHAs, baselines, and fixture stability before coding.",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Day 1: Create labeled extractor eval set from 30 captures (10 zero-candidate, 10 single-candidate, 10 ambiguous) with metadata; create scoring tool to measure precision/recall.",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Day 2: Measure current extractor against labeled set, recording yield, true/false positives, and false negatives by pattern.",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Session Log/Ledger system tracks work state across sessions so future sessions immediately know what is true and what is next; phases marked by git SHAs.",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "8f855235-c38d-4c27-9f2b-8530ebe1a2d8",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 0,
|
||||||
|
"ok": true,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Short-term recommendation ('merge to main and deploy'), not a standing decision.",
|
||||||
|
"actual_candidates": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "04a96eb5-cd00-4e9f-9252-b2cc919000a4",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 0,
|
||||||
|
"ok": true,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Dev server config table. Operational detail, not a memory.",
|
||||||
|
"actual_candidates": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "79d606ed-8981-454a-83af-c25226b1b65c",
|
||||||
|
"expected_count": 1,
|
||||||
|
"actual_count": 3,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "recommendation_prose",
|
||||||
|
"notes": "A recommendation that later became a ratified decision. Rule extractor would need a 'simplest version that could work today' / 'I'd start with' cue class.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "atocore uses multi-model coordination: Claude and codex share DEV-LEDGER.md (current state / active plan / P1+P2 findings / recent decisions / commit log) read at session start, appended at session end",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "atocore starts with manual-event-loop (/audit or /status prompts) using DEV-LEDGER.md before upgrading to automated git hooks/CI review",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "atocore development involves coordinating between Claude and codex models with shared plan/review strategy and counter-validation to improve system quality",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "a6b0d279-c564-4bce-a703-e476f4a148ad",
|
||||||
|
"expected_count": 2,
|
||||||
|
"actual_count": 6,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "architectural_change_summary",
|
||||||
|
"notes": "Two durable architectural facts about the polisher machine (Z-axis is engage/retract, cam is read-only). Extractor would need to recognize 'A is now B' / 'X removed, Y added' patterns.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Z-axis is a binary engage/retract mechanism (z_engaged bool), not continuous position control; confirmation timeout z_engage_timeout_s required.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Cam amplitude and offset are mechanically set by operator and read via encoders; no actuators control them, controller receives encoder telemetry only.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Cam parameters in controller are expected_cam_amplitude_deg and expected_cam_offset_deg (read-only reference for verification), not command setpoints.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Manual mode UI displays cam encoder readings (cam_amplitude_deg, cam_offset_deg) as read-only for operator verification of mechanical setting.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Manual session log records cam_setting measured at session start; run-log segment actual block includes cam_amplitude_deg_mean and cam_offset_deg_mean.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Machine capabilities now define z_type: engage_retract and cam_type: mechanical_with_encoder instead of actuator-driven setpoints.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "4e00e398-2e89-4653-8ee5-3f65c7f4d2d3",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 0,
|
||||||
|
"ok": true,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Clarification question to user.",
|
||||||
|
"actual_candidates": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "a6a7816a-7590-4616-84f4-49d9054c2a91",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 3,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Instructional response offering two next moves.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "knowledge",
|
||||||
|
"content": "Codex is an audit agent; communicate with it via markdown prompts with numbered steps; it updates findings via commits to codex/* branches or direct messages.",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "preference",
|
||||||
|
"content": "Audit-first workflow recommended: have codex audit DEV-LEDGER.md and recent commits before execution; validates round-trip, catches errors early.",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "knowledge",
|
||||||
|
"content": "DEV-LEDGER.md at repo root is the shared coordination document with Orientation, Active Plan, and Open Review Findings sections.",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "03527502-316a-4a3e-989c-00719392c7d1",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 0,
|
||||||
|
"ok": true,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Troubleshooting a paste failure. Ephemeral.",
|
||||||
|
"actual_candidates": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "1fff59fc-545f-42df-9dd1-a0e6dec1b7ee",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 3,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Agreement + follow-up question. No durable claim.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Roadmap: Extractor improvement → Harness expansion → Wave 2 trusted operational ingestion → Finish OpenClaw integration (in that order)",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Phase 1 (Extractor): eval-driven loop—label captures, improve rules/add LLM mode, measure yield & FP, stop when queue reviewable (not coverage metrics)",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Phases 1 & 2 (Extractor + Harness) are a mini-phase; without harness, extractor improvements are blind edits",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "eb65dc18-0030-4720-ace7-f55af9df719d",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 2,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Explanation of how the capture hook works. Instructional.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "knowledge",
|
||||||
|
"content": "Dalidou stores Claude Code interactions via a Stop hook that fires after each turn and POSTs to http://dalidou:8100/interactions with client=claude-code parameter",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "Interaction capture system is passive and automatic; no manual action required, interactions accumulate automatically during normal Claude Code usage",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "52c8c0f3-32fb-4b48-9065-73c778a08417",
|
||||||
|
"expected_count": 1,
|
||||||
|
"actual_count": 5,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "spec_update_announcement",
|
||||||
|
"notes": "Concrete architectural commitments just added to the polisher spec. Phrased as '§17.1 Local Storage - USB SSD mandatory, not SD card.' The '§' section markers could be a new cue.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "USB SSD mandatory for storage (not SD card); directory structure /data/runs/{id}/, /data/manual/{id}/; status.json for machine state",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "RPi joins Tailscale mesh for remote access over SSH VPN; no public IP or port forwarding; fully offline operation",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Data synchronization via rsync over Tailscale, failure-tolerant and non-blocking; USB stick as manual fallback",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Machine design principle: works fully offline and independently; network connection is for remote access only",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "No cloud, no real-time streaming, no remote control features in design scope",
|
||||||
|
"project": "",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "32d40414-15af-47ee-944b-2cceae9574b8",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 5,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Session recap. Historical summary, not a durable memory.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "P1: Reflection loop integration incomplete—extraction remains manual (POST /interactions/{id}/extract), not auto-triggered with reinforcement. Live capture won't auto-populate candidate review queue.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "P1: Project memories excluded from context injection; build_context() requests [\"identity\", \"preference\"] only. Reinforcement signal doesn't reach assembled context packs.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Current batch-extract rules produce only 1 candidate from 42 real captures. Extractor needs conversational-cue detection or LLM-assisted path to improve yield.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Next priority: extractor rule expansion (cheapest validation of reflection loop), then Wave 2 trusted operational ingestion (master-plan priority). Defer retrieval eval harness focus.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "knowledge",
|
||||||
|
"content": "Alias canonicalization fix (resolve_project_name() boundary) is consistently applied across project state, memories, interactions, and context lookup. Code review approved directionally.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "b6d2cdfc-37fb-459a-96bd-caefb9beaab4",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 1,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Deployment prompt for Dalidou. Operational, not a memory.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "preference",
|
||||||
|
"content": "User prefers receiving standalone testing prompts they can paste into Claude Code on target deployments rather than having the assistant run tests directly.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "ee03d823-931b-4d4e-9258-88b4ed5eeb07",
|
||||||
|
"expected_count": 2,
|
||||||
|
"actual_count": 5,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "layered_recommendation",
|
||||||
|
"notes": "Layered infra recommendation with 'non-negotiable' / 'strongly recommended' strength markers. The 'non-negotiable' token could be a new cue class.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "USB SSD on RPi is mandatory for polishing telemetry storage; must be independent of network for data integrity during runs.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Use Tailscale mesh for RPi remote access to provide SSH, file transfer, and NAT traversal without port forwarding.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Auto-sync telemetry data via rsync over Tailscale after runs complete; fire-and-forget pattern with automatic retry on network interruption.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Real-time telemetry monitoring should target 10 Hz downsampling; full 100 Hz streaming over network is not necessary.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "knowledge",
|
||||||
|
"content": "Polishing telemetry data rate is approximately 29 MB per hour (100 Hz × 20 channels × 4 bytes = 8 KB/s).",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "dd234d9f-0d1c-47e8-b01c-eebcb568c7e7",
|
||||||
|
"expected_count": 1,
|
||||||
|
"actual_count": 3,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "alignment_assertion",
|
||||||
|
"notes": "Architectural invariant assertion. '**Alignment verified**' / 'nothing changes for X' style. Likely too subtle for rule matching without LLM assistance.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Machine spec (shareable) + Atomaste spec (internal) separate concerns. Machine spec hides program generation as 'separate scope' to protect IP/business strategy.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Firmware interface contract is invariant: controller-job.v1 input, run-log.v1 + telemetry output. No firmware changes needed regardless of program generation implementation.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "Atomaste sim spec documents forward/return paths, calibration model (Preston k), translation loss, and service/IP strategy—details hidden from shareable machine spec.",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "1f95891a-cf37-400e-9d68-4fad8e04dcbb",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 4,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Huge session handoff prompt. Informational only.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "knowledge",
|
||||||
|
"content": "AtoCore is FastAPI (Python 3.12, SQLite + ChromaDB) on Dalidou home server (dalidou:8100), repo C:\\Users\\antoi\\ATOCore, data /srv/storage/atocore/, ingests Obsidian vault + Google Drive into vector memory system.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "knowledge",
|
||||||
|
"content": "Deploy AtoCore: git push origin main, then ssh papa@dalidou and run /srv/storage/atocore/app/deploy/dalidou/deploy.sh",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "Do not add memory extraction to interaction capture hot path; keep extraction as separate batch/manual step. Reason: latency and queue noise before review rhythm is comfortable.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "As of 2026-04-11, approved roadmap in order: observe reinforcement, batch extraction, candidate triage, off-Dalidou backup, retrieval quality review.",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"id": "5580950f-d010-4544-be4b-b3071271a698",
|
||||||
|
"expected_count": 0,
|
||||||
|
"actual_count": 6,
|
||||||
|
"ok": false,
|
||||||
|
"miss_class": "n/a",
|
||||||
|
"notes": "Ledger schema sketch. Structural design proposal, later ratified — but the same idea was already captured as a ratified decision in the recent decisions section, so not worth re-extracting from this conversational form.",
|
||||||
|
"actual_candidates": [
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "AtoCore adopts DEV-LEDGER.md as shared operating memory with stable headers; updated at session boundaries",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "Codex branches for AtoCore fork from main (never orphan); use naming pattern codex/<topic>",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "In AtoCore, Claude builds and Codex audits; never work in parallel on same files",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "In AtoCore, P1-severity findings in DEV-LEDGER.md block further main commits until acknowledged",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "adaptation",
|
||||||
|
"content": "Every AtoCore session appends to DEV-LEDGER.md Session Log and updates Orientation before ending",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"memory_type": "project",
|
||||||
|
"content": "AtoCore roadmap: (1) extractor improvement, (2) harness expansion, (3) Wave 2 ingestion, (4) OpenClaw finish; steps 1+2 are current mini-phase",
|
||||||
|
"project": "atocore",
|
||||||
|
"rule": "llm_extraction"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
1
scripts/eval_data/interactions_snapshot_2026-04-11.json
Normal file
1
scripts/eval_data/interactions_snapshot_2026-04-11.json
Normal file
File diff suppressed because one or more lines are too long
1
scripts/eval_data/triage_verdict_2026-04-12.json
Normal file
1
scripts/eval_data/triage_verdict_2026-04-12.json
Normal file
@@ -0,0 +1 @@
|
|||||||
|
{"promote": ["4b82fe01-4393-464a-b935-9ad5d112d3d8", "665cdd27-0057-4e73-82f5-5d4f47189b5d", "5f89c51d-7e8b-4fb9-830d-a35bb649f9f7", "25ac367c-8bbe-4ba4-8d8e-d533db33f2d9", "2f69a6ed-6de2-4565-87df-1ea3e8c42963", "6bcaebde-9e45-4de5-a220-65d9c4cd451e", "2dd36f74-db47-4c72-a185-fec025d07d4f", "7519d82b-8065-41f0-812e-9c1a3573d7b9", "78678162-5754-478b-b1fc-e25f22e0ee03", "6657b4ae-d4ec-4fec-a66f-2975cdb10d13", "ee626650-1ee0-439c-85c9-6d32a876f239", "1b44a886-a5af-4426-bf10-a92baf3a6502", "aa50c51a-27d7-4db9-b7a3-7ca75dba2118", "5951108b-3a5e-49d0-9308-dfab449664d3", "85f008b9-2d6d-49ad-81a1-e254dac2a2ac", "0cc417ed-ac38-4231-9786-a9582ac6a60f"], "reject": ["0dd85386-cace-4f9a-9098-c6732f3c64fa", "8939b875-152c-4c90-8614-3cfdc64cd1d6", "93e37d2a-b512-4a97-b230-e64ac913d087", "c873ec00-063e-488c-ad32-1233290a3feb", "89446ebe-fd42-4177-80db-3657bc41d048", "1f077e98-f945-4480-96ab-110b0671ebc6", "89f60018-c23b-4b2f-80ca-e6f7d02c5cd3", "82f17880-92da-485e-a24a-0599ab1836e7", "6d6f4fe9-73e5-449f-a802-6dc0a974f87b", "932f38df-58f3-49c2-9968-8d422dc54b42", "2b3178e8-fe38-4338-b2b0-75a01da18cea", "254c394d-3f80-4b34-a891-9f1cbfec74d7", "34add99d-8d2e-4586-b002-fc7b7d22bcb3", "993e0afe-9910-4984-b608-f5e9de7c0453", "bdf488d7-9200-441e-afbf-5335020ea78b", "188197af-a61d-4616-9e39-712aeaaadf61", "acffcaa4-5966-4ec1-a0b2-3b8dcebe75bd", "e8f4e704-367b-4759-b20c-da0ccf06cf7d", "ab2b607c-52b1-405f-a874-c6078393c21c", "5a5fd29d-291f-4e22-88fe-825cf55f745a", "4c238106-017e-4283-99a1-639497b6ddde", "83aed988-4257-4220-b612-6c725d6cd95a", "95d87d1a-5daa-414d-95ff-a344a62e0b6b", "7aafb588-51b0-4536-a414-ebaaea924b98", "9d2cbbe9-cf2e-4aab-9cb8-c4951da70826", "db88eecf-e31a-4fee-b07d-0b51db7e315e", "8748f071-ff28-47a6-8504-65ca30a8336a", "f9210883-67a8-4dae-9f27-6b5ae7bd8a6b", "2e001aaf-0c5c-4547-9b96-ebc4172b258d", "47778126-b0cf-41d9-9e21-f2418f53e792", "410e4a70-ae12-4de2-8f31-071ffee3cad4", "e94f94f0-3538-40dd-aef2-0189eacc7eb7", "23fa6fdf-cfb9-4850-ad04-3ea56551c30a", "3e1fad28-031b-4670-a9d0-0af2e8ba1361", "d49378a4-d03c-4730-be87-f0fcb2d199db"]}
|
||||||
274
scripts/extractor_eval.py
Normal file
274
scripts/extractor_eval.py
Normal file
@@ -0,0 +1,274 @@
|
|||||||
|
"""Extractor eval runner — scores the rule-based extractor against a
|
||||||
|
labeled interaction corpus.
|
||||||
|
|
||||||
|
Pulls full interaction content from a frozen snapshot, runs each through
|
||||||
|
``extract_candidates_from_interaction``, and compares the output to the
|
||||||
|
expected counts from a labels file. Produces a per-label scorecard plus
|
||||||
|
aggregate precision / recall / yield numbers.
|
||||||
|
|
||||||
|
This harness deliberately stays file-based: snapshot + labels + this
|
||||||
|
runner. No Dalidou HTTP dependency once the snapshot is frozen, so the
|
||||||
|
eval is reproducible run-to-run even as live captures drift.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
|
||||||
|
python scripts/extractor_eval.py # human report
|
||||||
|
python scripts/extractor_eval.py --json # machine-readable
|
||||||
|
python scripts/extractor_eval.py \\
|
||||||
|
--snapshot scripts/eval_data/interactions_snapshot_2026-04-11.json \\
|
||||||
|
--labels scripts/eval_data/extractor_labels_2026-04-11.json
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import io
|
||||||
|
import json
|
||||||
|
import sys
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
# Force UTF-8 on stdout so real LLM output (arrows, em-dashes, CJK)
|
||||||
|
# doesn't crash the human report on Windows cp1252 consoles.
|
||||||
|
if hasattr(sys.stdout, "buffer"):
|
||||||
|
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8", errors="replace", line_buffering=True)
|
||||||
|
|
||||||
|
# Make src/ importable without requiring an install.
|
||||||
|
_REPO_ROOT = Path(__file__).resolve().parent.parent
|
||||||
|
sys.path.insert(0, str(_REPO_ROOT / "src"))
|
||||||
|
|
||||||
|
from atocore.interactions.service import Interaction # noqa: E402
|
||||||
|
from atocore.memory.extractor import extract_candidates_from_interaction # noqa: E402
|
||||||
|
from atocore.memory.extractor_llm import extract_candidates_llm # noqa: E402
|
||||||
|
|
||||||
|
DEFAULT_SNAPSHOT = _REPO_ROOT / "scripts" / "eval_data" / "interactions_snapshot_2026-04-11.json"
|
||||||
|
DEFAULT_LABELS = _REPO_ROOT / "scripts" / "eval_data" / "extractor_labels_2026-04-11.json"
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class LabelResult:
|
||||||
|
id: str
|
||||||
|
expected_count: int
|
||||||
|
actual_count: int
|
||||||
|
ok: bool
|
||||||
|
miss_class: str
|
||||||
|
notes: str
|
||||||
|
actual_candidates: list[dict] = field(default_factory=list)
|
||||||
|
|
||||||
|
|
||||||
|
def load_snapshot(path: Path) -> dict[str, dict]:
|
||||||
|
data = json.loads(path.read_text(encoding="utf-8"))
|
||||||
|
return {item["id"]: item for item in data.get("interactions", [])}
|
||||||
|
|
||||||
|
|
||||||
|
def load_labels(path: Path) -> dict:
|
||||||
|
return json.loads(path.read_text(encoding="utf-8"))
|
||||||
|
|
||||||
|
|
||||||
|
def interaction_from_snapshot(snap: dict) -> Interaction:
|
||||||
|
return Interaction(
|
||||||
|
id=snap["id"],
|
||||||
|
prompt=snap.get("prompt", "") or "",
|
||||||
|
response=snap.get("response", "") or "",
|
||||||
|
response_summary="",
|
||||||
|
project=snap.get("project", "") or "",
|
||||||
|
client=snap.get("client", "") or "",
|
||||||
|
session_id=snap.get("session_id", "") or "",
|
||||||
|
created_at=snap.get("created_at", "") or "",
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def score(snapshot: dict[str, dict], labels_doc: dict, mode: str = "rule") -> list[LabelResult]:
|
||||||
|
results: list[LabelResult] = []
|
||||||
|
for label in labels_doc["labels"]:
|
||||||
|
iid = label["id"]
|
||||||
|
snap = snapshot.get(iid)
|
||||||
|
if snap is None:
|
||||||
|
results.append(
|
||||||
|
LabelResult(
|
||||||
|
id=iid,
|
||||||
|
expected_count=int(label.get("expected_count", 0)),
|
||||||
|
actual_count=-1,
|
||||||
|
ok=False,
|
||||||
|
miss_class="not_in_snapshot",
|
||||||
|
notes=label.get("notes", ""),
|
||||||
|
)
|
||||||
|
)
|
||||||
|
continue
|
||||||
|
interaction = interaction_from_snapshot(snap)
|
||||||
|
if mode == "llm":
|
||||||
|
candidates = extract_candidates_llm(interaction)
|
||||||
|
else:
|
||||||
|
candidates = extract_candidates_from_interaction(interaction)
|
||||||
|
actual_count = len(candidates)
|
||||||
|
expected_count = int(label.get("expected_count", 0))
|
||||||
|
results.append(
|
||||||
|
LabelResult(
|
||||||
|
id=iid,
|
||||||
|
expected_count=expected_count,
|
||||||
|
actual_count=actual_count,
|
||||||
|
ok=(actual_count == expected_count),
|
||||||
|
miss_class=label.get("miss_class", "n/a"),
|
||||||
|
notes=label.get("notes", ""),
|
||||||
|
actual_candidates=[
|
||||||
|
{
|
||||||
|
"memory_type": c.memory_type,
|
||||||
|
"content": c.content,
|
||||||
|
"project": c.project,
|
||||||
|
"rule": c.rule,
|
||||||
|
}
|
||||||
|
for c in candidates
|
||||||
|
],
|
||||||
|
)
|
||||||
|
)
|
||||||
|
return results
|
||||||
|
|
||||||
|
|
||||||
|
def aggregate(results: list[LabelResult]) -> dict:
|
||||||
|
total = len(results)
|
||||||
|
exact_match = sum(1 for r in results if r.ok)
|
||||||
|
true_positive = sum(1 for r in results if r.expected_count > 0 and r.actual_count > 0)
|
||||||
|
false_positive_interactions = sum(
|
||||||
|
1 for r in results if r.expected_count == 0 and r.actual_count > 0
|
||||||
|
)
|
||||||
|
false_negative_interactions = sum(
|
||||||
|
1 for r in results if r.expected_count > 0 and r.actual_count == 0
|
||||||
|
)
|
||||||
|
positive_expected = sum(1 for r in results if r.expected_count > 0)
|
||||||
|
total_expected_candidates = sum(r.expected_count for r in results)
|
||||||
|
total_actual_candidates = sum(max(r.actual_count, 0) for r in results)
|
||||||
|
yield_rate = total_actual_candidates / total if total else 0.0
|
||||||
|
# Recall over interaction count that had at least one expected candidate:
|
||||||
|
recall = true_positive / positive_expected if positive_expected else 0.0
|
||||||
|
# Precision over interaction count that produced any candidate:
|
||||||
|
precision_denom = true_positive + false_positive_interactions
|
||||||
|
precision = true_positive / precision_denom if precision_denom else 0.0
|
||||||
|
# Miss class breakdown
|
||||||
|
miss_classes: dict[str, int] = {}
|
||||||
|
for r in results:
|
||||||
|
if r.expected_count > 0 and r.actual_count == 0:
|
||||||
|
key = r.miss_class or "unlabeled"
|
||||||
|
miss_classes[key] = miss_classes.get(key, 0) + 1
|
||||||
|
return {
|
||||||
|
"total": total,
|
||||||
|
"exact_match": exact_match,
|
||||||
|
"positive_expected": positive_expected,
|
||||||
|
"total_expected_candidates": total_expected_candidates,
|
||||||
|
"total_actual_candidates": total_actual_candidates,
|
||||||
|
"yield_rate": round(yield_rate, 3),
|
||||||
|
"recall": round(recall, 3),
|
||||||
|
"precision": round(precision, 3),
|
||||||
|
"false_positive_interactions": false_positive_interactions,
|
||||||
|
"false_negative_interactions": false_negative_interactions,
|
||||||
|
"miss_classes": miss_classes,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def print_human(results: list[LabelResult], summary: dict) -> None:
|
||||||
|
print("=== Extractor eval ===")
|
||||||
|
print(
|
||||||
|
f"labeled={summary['total']} "
|
||||||
|
f"exact_match={summary['exact_match']} "
|
||||||
|
f"positive_expected={summary['positive_expected']}"
|
||||||
|
)
|
||||||
|
print(
|
||||||
|
f"yield={summary['yield_rate']} "
|
||||||
|
f"recall={summary['recall']} "
|
||||||
|
f"precision={summary['precision']}"
|
||||||
|
)
|
||||||
|
print(
|
||||||
|
f"false_positives={summary['false_positive_interactions']} "
|
||||||
|
f"false_negatives={summary['false_negative_interactions']}"
|
||||||
|
)
|
||||||
|
print()
|
||||||
|
print("miss class breakdown (FN):")
|
||||||
|
if summary["miss_classes"]:
|
||||||
|
for k, v in sorted(summary["miss_classes"].items(), key=lambda kv: -kv[1]):
|
||||||
|
print(f" {v:3d} {k}")
|
||||||
|
else:
|
||||||
|
print(" (none)")
|
||||||
|
print()
|
||||||
|
print("per-interaction:")
|
||||||
|
for r in results:
|
||||||
|
marker = "OK " if r.ok else "MISS"
|
||||||
|
iid_short = r.id[:8]
|
||||||
|
print(f" {marker} {iid_short} expected={r.expected_count} actual={r.actual_count} class={r.miss_class}")
|
||||||
|
if r.actual_candidates:
|
||||||
|
for c in r.actual_candidates:
|
||||||
|
preview = (c["content"] or "")[:80]
|
||||||
|
print(f" [{c['memory_type']}] {preview}")
|
||||||
|
|
||||||
|
|
||||||
|
def print_json(results: list[LabelResult], summary: dict) -> None:
|
||||||
|
payload = {
|
||||||
|
"summary": summary,
|
||||||
|
"results": [
|
||||||
|
{
|
||||||
|
"id": r.id,
|
||||||
|
"expected_count": r.expected_count,
|
||||||
|
"actual_count": r.actual_count,
|
||||||
|
"ok": r.ok,
|
||||||
|
"miss_class": r.miss_class,
|
||||||
|
"notes": r.notes,
|
||||||
|
"actual_candidates": r.actual_candidates,
|
||||||
|
}
|
||||||
|
for r in results
|
||||||
|
],
|
||||||
|
}
|
||||||
|
json.dump(payload, sys.stdout, indent=2)
|
||||||
|
sys.stdout.write("\n")
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> int:
|
||||||
|
parser = argparse.ArgumentParser(description="AtoCore extractor eval")
|
||||||
|
parser.add_argument("--snapshot", type=Path, default=DEFAULT_SNAPSHOT)
|
||||||
|
parser.add_argument("--labels", type=Path, default=DEFAULT_LABELS)
|
||||||
|
parser.add_argument("--json", action="store_true", help="emit machine-readable JSON")
|
||||||
|
parser.add_argument(
|
||||||
|
"--output",
|
||||||
|
type=Path,
|
||||||
|
default=None,
|
||||||
|
help="write JSON result to this file (bypasses log/stdout interleaving)",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--mode",
|
||||||
|
choices=["rule", "llm"],
|
||||||
|
default="rule",
|
||||||
|
help="which extractor to score (default: rule)",
|
||||||
|
)
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
snapshot = load_snapshot(args.snapshot)
|
||||||
|
labels = load_labels(args.labels)
|
||||||
|
results = score(snapshot, labels, mode=args.mode)
|
||||||
|
summary = aggregate(results)
|
||||||
|
summary["mode"] = args.mode
|
||||||
|
|
||||||
|
if args.output is not None:
|
||||||
|
payload = {
|
||||||
|
"summary": summary,
|
||||||
|
"results": [
|
||||||
|
{
|
||||||
|
"id": r.id,
|
||||||
|
"expected_count": r.expected_count,
|
||||||
|
"actual_count": r.actual_count,
|
||||||
|
"ok": r.ok,
|
||||||
|
"miss_class": r.miss_class,
|
||||||
|
"notes": r.notes,
|
||||||
|
"actual_candidates": r.actual_candidates,
|
||||||
|
}
|
||||||
|
for r in results
|
||||||
|
],
|
||||||
|
}
|
||||||
|
args.output.write_text(json.dumps(payload, indent=2, ensure_ascii=False), encoding="utf-8")
|
||||||
|
print(f"wrote {args.output} ({summary['mode']}: recall={summary['recall']} precision={summary['precision']})")
|
||||||
|
elif args.json:
|
||||||
|
print_json(results, summary)
|
||||||
|
else:
|
||||||
|
print_human(results, summary)
|
||||||
|
|
||||||
|
return 0 if summary["false_negative_interactions"] == 0 and summary["false_positive_interactions"] == 0 else 1
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
raise SystemExit(main())
|
||||||
89
scripts/persist_llm_candidates.py
Normal file
89
scripts/persist_llm_candidates.py
Normal file
@@ -0,0 +1,89 @@
|
|||||||
|
"""Persist LLM-extracted candidates from a baseline JSON to Dalidou.
|
||||||
|
|
||||||
|
One-shot script: reads a saved extractor eval output file, filters to
|
||||||
|
candidates the LLM actually produced, and POSTs each to the Dalidou
|
||||||
|
memory API with ``status=candidate``. Deduplicates against already-
|
||||||
|
existing candidate content so the script is safe to re-run.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
|
||||||
|
python scripts/persist_llm_candidates.py \\
|
||||||
|
scripts/eval_data/extractor_llm_baseline_2026-04-11.json
|
||||||
|
|
||||||
|
Then triage via:
|
||||||
|
|
||||||
|
python scripts/atocore_client.py triage
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import urllib.error
|
||||||
|
import urllib.parse
|
||||||
|
import urllib.request
|
||||||
|
|
||||||
|
BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100")
|
||||||
|
TIMEOUT = int(os.environ.get("ATOCORE_TIMEOUT_SECONDS", "10"))
|
||||||
|
|
||||||
|
|
||||||
|
def post_json(path: str, body: dict) -> dict:
|
||||||
|
data = json.dumps(body).encode("utf-8")
|
||||||
|
req = urllib.request.Request(
|
||||||
|
url=f"{BASE_URL}{path}",
|
||||||
|
method="POST",
|
||||||
|
headers={"Content-Type": "application/json"},
|
||||||
|
data=data,
|
||||||
|
)
|
||||||
|
with urllib.request.urlopen(req, timeout=TIMEOUT) as resp:
|
||||||
|
return json.loads(resp.read().decode("utf-8"))
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> int:
|
||||||
|
if len(sys.argv) < 2:
|
||||||
|
print(f"usage: {sys.argv[0]} <baseline_json>", file=sys.stderr)
|
||||||
|
return 1
|
||||||
|
|
||||||
|
data = json.loads(open(sys.argv[1], encoding="utf-8").read())
|
||||||
|
results = data.get("results", [])
|
||||||
|
|
||||||
|
persisted = 0
|
||||||
|
skipped = 0
|
||||||
|
errors = 0
|
||||||
|
|
||||||
|
for r in results:
|
||||||
|
for c in r.get("actual_candidates", []):
|
||||||
|
content = (c.get("content") or "").strip()
|
||||||
|
if not content:
|
||||||
|
continue
|
||||||
|
mem_type = c.get("memory_type", "knowledge")
|
||||||
|
project = c.get("project", "")
|
||||||
|
confidence = c.get("confidence", 0.5)
|
||||||
|
|
||||||
|
try:
|
||||||
|
resp = post_json("/memory", {
|
||||||
|
"memory_type": mem_type,
|
||||||
|
"content": content,
|
||||||
|
"project": project,
|
||||||
|
"confidence": float(confidence),
|
||||||
|
"status": "candidate",
|
||||||
|
})
|
||||||
|
persisted += 1
|
||||||
|
print(f" + {resp.get('id','?')[:8]} [{mem_type}] {content[:80]}")
|
||||||
|
except urllib.error.HTTPError as exc:
|
||||||
|
if exc.code == 400:
|
||||||
|
skipped += 1
|
||||||
|
else:
|
||||||
|
errors += 1
|
||||||
|
print(f" ! error {exc.code}: {content[:60]}", file=sys.stderr)
|
||||||
|
except Exception as exc:
|
||||||
|
errors += 1
|
||||||
|
print(f" ! {exc}: {content[:60]}", file=sys.stderr)
|
||||||
|
|
||||||
|
print(f"\npersisted={persisted} skipped={skipped} errors={errors}")
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
raise SystemExit(main())
|
||||||
@@ -13,7 +13,7 @@
|
|||||||
"p06-polisher",
|
"p06-polisher",
|
||||||
"folded-beam"
|
"folded-beam"
|
||||||
],
|
],
|
||||||
"notes": "Canonical p04 decision — should surface both Trusted Project State (selected_mirror_architecture) and the project-memory band with the Option B memory"
|
"notes": "Canonical p04 decision — should surface both Trusted Project State and the project-memory band"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"name": "p04-constraints",
|
"name": "p04-constraints",
|
||||||
@@ -27,7 +27,17 @@
|
|||||||
"expect_absent": [
|
"expect_absent": [
|
||||||
"polisher suite"
|
"polisher suite"
|
||||||
],
|
],
|
||||||
"notes": "Key constraints are in Trusted Project State (key_constraints) and in the mission-framing memory"
|
"notes": "Key constraints are in Trusted Project State and in the mission-framing memory"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p04-short-ambiguous",
|
||||||
|
"project": "p04-gigabit",
|
||||||
|
"prompt": "current status",
|
||||||
|
"expect_present": [
|
||||||
|
"--- Trusted Project State ---"
|
||||||
|
],
|
||||||
|
"expect_absent": [],
|
||||||
|
"notes": "Short ambiguous prompt — at minimum project state should surface. Hard case: the prompt is generic enough that chunks may not rank well."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"name": "p05-configuration",
|
"name": "p05-configuration",
|
||||||
@@ -42,7 +52,7 @@
|
|||||||
"conical back",
|
"conical back",
|
||||||
"polisher suite"
|
"polisher suite"
|
||||||
],
|
],
|
||||||
"notes": "P05 architecture memory covers folded-beam + CGH. GigaBIT M1 is the mirror under test and legitimately appears in p05 source docs (the interferometer measures it), so we only flag genuinely p04-only decisions like the mirror architecture choice."
|
"notes": "P05 architecture memory covers folded-beam + CGH. GigaBIT M1 legitimately appears in p05 source docs."
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"name": "p05-vendor-signal",
|
"name": "p05-vendor-signal",
|
||||||
@@ -57,6 +67,19 @@
|
|||||||
],
|
],
|
||||||
"notes": "Vendor memory mentions 4D as strongest technical candidate and Zygo Verifire SV as value path"
|
"notes": "Vendor memory mentions 4D as strongest technical candidate and Zygo Verifire SV as value path"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"name": "p05-cgh-calibration",
|
||||||
|
"project": "p05-interferometer",
|
||||||
|
"prompt": "how does CGH calibration work for the interferometer",
|
||||||
|
"expect_present": [
|
||||||
|
"CGH"
|
||||||
|
],
|
||||||
|
"expect_absent": [
|
||||||
|
"polisher-sim",
|
||||||
|
"polisher-post"
|
||||||
|
],
|
||||||
|
"notes": "CGH is a core p05 concept. Should surface via chunks and possibly the architecture memory. Must not bleed p06 polisher-suite terms."
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"name": "p06-suite-split",
|
"name": "p06-suite-split",
|
||||||
"project": "p06-polisher",
|
"project": "p06-polisher",
|
||||||
@@ -69,7 +92,7 @@
|
|||||||
"expect_absent": [
|
"expect_absent": [
|
||||||
"GigaBIT"
|
"GigaBIT"
|
||||||
],
|
],
|
||||||
"notes": "The three-layer split is in multiple p06 memories; check all three names surface together"
|
"notes": "The three-layer split is in multiple p06 memories"
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"name": "p06-control-rule",
|
"name": "p06-control-rule",
|
||||||
@@ -82,5 +105,121 @@
|
|||||||
"interferometer"
|
"interferometer"
|
||||||
],
|
],
|
||||||
"notes": "Control design rule memory mentions interlocks and state transitions"
|
"notes": "Control design rule memory mentions interlocks and state transitions"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p06-firmware-interface",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"prompt": "what is the firmware interface contract for the polisher machine",
|
||||||
|
"expect_present": [
|
||||||
|
"controller-job"
|
||||||
|
],
|
||||||
|
"expect_absent": [
|
||||||
|
"interferometer",
|
||||||
|
"GigaBIT"
|
||||||
|
],
|
||||||
|
"notes": "New p06 memory from the first triage: firmware interface contract is invariant controller-job.v1 in, run-log.v1 out"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p06-z-axis",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"prompt": "how does the polisher Z-axis work",
|
||||||
|
"expect_present": [
|
||||||
|
"engage"
|
||||||
|
],
|
||||||
|
"expect_absent": [
|
||||||
|
"interferometer"
|
||||||
|
],
|
||||||
|
"notes": "New p06 memory: Z-axis is binary engage/retract, not continuous position. The word 'engage' should appear."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p06-cam-mechanism",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"prompt": "how is cam amplitude controlled on the polisher",
|
||||||
|
"expect_present": [
|
||||||
|
"encoder"
|
||||||
|
],
|
||||||
|
"expect_absent": [
|
||||||
|
"GigaBIT"
|
||||||
|
],
|
||||||
|
"notes": "New p06 memory: cam set mechanically by operator, read by encoders. The word 'encoder' should appear."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p06-telemetry-rate",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"prompt": "what is the expected polishing telemetry data rate",
|
||||||
|
"expect_present": [
|
||||||
|
"29 MB"
|
||||||
|
],
|
||||||
|
"expect_absent": [
|
||||||
|
"interferometer"
|
||||||
|
],
|
||||||
|
"notes": "New p06 knowledge memory: approximately 29 MB per hour at 100 Hz"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p06-offline-design",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"prompt": "does the polisher machine need network to operate",
|
||||||
|
"expect_present": [
|
||||||
|
"offline"
|
||||||
|
],
|
||||||
|
"expect_absent": [
|
||||||
|
"CGH"
|
||||||
|
],
|
||||||
|
"notes": "New p06 memory: machine works fully offline and independently; network is for remote access only"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p06-short-ambiguous",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"prompt": "current status",
|
||||||
|
"expect_present": [
|
||||||
|
"--- Trusted Project State ---"
|
||||||
|
],
|
||||||
|
"expect_absent": [],
|
||||||
|
"notes": "Short ambiguous prompt — project state should surface at minimum"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "cross-project-no-bleed",
|
||||||
|
"project": "p04-gigabit",
|
||||||
|
"prompt": "what telemetry rate should we target",
|
||||||
|
"expect_present": [],
|
||||||
|
"expect_absent": [
|
||||||
|
"29 MB",
|
||||||
|
"polisher"
|
||||||
|
],
|
||||||
|
"notes": "Adversarial: telemetry rate is a p06 fact. A p04 query for 'telemetry rate' must NOT surface p06 memories. Tests cross-project gating."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "no-project-hint",
|
||||||
|
"project": "",
|
||||||
|
"prompt": "tell me about the current projects",
|
||||||
|
"expect_present": [],
|
||||||
|
"expect_absent": [
|
||||||
|
"--- Project Memories ---"
|
||||||
|
],
|
||||||
|
"notes": "Without a project hint, project memories must not appear (cross-project bleed guard). Chunks may appear if any match."
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p06-usb-ssd",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"prompt": "what storage solution is specified for the polisher RPi",
|
||||||
|
"expect_present": [
|
||||||
|
"USB SSD"
|
||||||
|
],
|
||||||
|
"expect_absent": [
|
||||||
|
"interferometer"
|
||||||
|
],
|
||||||
|
"notes": "New p06 memory from triage: USB SSD mandatory, not SD card"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "p06-tailscale",
|
||||||
|
"project": "p06-polisher",
|
||||||
|
"prompt": "how do we access the polisher machine remotely",
|
||||||
|
"expect_present": [
|
||||||
|
"Tailscale"
|
||||||
|
],
|
||||||
|
"expect_absent": [
|
||||||
|
"GigaBIT"
|
||||||
|
],
|
||||||
|
"notes": "New p06 memory: Tailscale mesh for RPi remote access"
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -141,6 +141,7 @@ class MemoryCreateRequest(BaseModel):
|
|||||||
content: str
|
content: str
|
||||||
project: str = ""
|
project: str = ""
|
||||||
confidence: float = 1.0
|
confidence: float = 1.0
|
||||||
|
status: str = "active"
|
||||||
|
|
||||||
|
|
||||||
class MemoryUpdateRequest(BaseModel):
|
class MemoryUpdateRequest(BaseModel):
|
||||||
@@ -344,6 +345,7 @@ def api_create_memory(req: MemoryCreateRequest) -> dict:
|
|||||||
content=req.content,
|
content=req.content,
|
||||||
project=req.project,
|
project=req.project,
|
||||||
confidence=req.confidence,
|
confidence=req.confidence,
|
||||||
|
status=req.status,
|
||||||
)
|
)
|
||||||
except ValueError as e:
|
except ValueError as e:
|
||||||
raise HTTPException(status_code=400, detail=str(e))
|
raise HTTPException(status_code=400, detail=str(e))
|
||||||
|
|||||||
283
src/atocore/memory/extractor_llm.py
Normal file
283
src/atocore/memory/extractor_llm.py
Normal file
@@ -0,0 +1,283 @@
|
|||||||
|
"""LLM-assisted candidate-memory extraction via the Claude Code CLI.
|
||||||
|
|
||||||
|
Day 4 of the 2026-04-11 mini-phase: the rule-based extractor hit 0%
|
||||||
|
recall against real conversational claude-code captures (Day 2 baseline
|
||||||
|
scorecard in ``scripts/eval_data/extractor_labels_2026-04-11.json``),
|
||||||
|
with false negatives spread across 5 distinct miss classes. A single
|
||||||
|
rule expansion cannot close that gap, so this module adds an optional
|
||||||
|
LLM-assisted mode that shells out to the ``claude -p`` (Claude Code
|
||||||
|
non-interactive) CLI with a focused extraction system prompt. That
|
||||||
|
path reuses the user's existing Claude.ai OAuth credentials — no API
|
||||||
|
key anywhere, per the 2026-04-11 decision.
|
||||||
|
|
||||||
|
Trust rules carried forward from the rule-based extractor:
|
||||||
|
|
||||||
|
- Candidates are NEVER auto-promoted. Caller persists with
|
||||||
|
``status="candidate"`` and a human reviews via the triage CLI.
|
||||||
|
- This path is additive. The rule-based extractor keeps working
|
||||||
|
exactly as before; callers opt in by importing this module.
|
||||||
|
- Extraction stays off the capture hot path — this is batch / manual
|
||||||
|
only, per the 2026-04-11 decision.
|
||||||
|
- Failure is silent. Missing CLI, non-zero exit, malformed JSON,
|
||||||
|
timeout — all return an empty list and log an error. Never raises
|
||||||
|
into the caller; the capture audit trail must not break on an
|
||||||
|
optional side effect.
|
||||||
|
|
||||||
|
Configuration:
|
||||||
|
|
||||||
|
- Requires the ``claude`` CLI on PATH (``claude --version`` should work).
|
||||||
|
- ``ATOCORE_LLM_EXTRACTOR_MODEL`` overrides the model alias (default
|
||||||
|
``sonnet``).
|
||||||
|
- ``ATOCORE_LLM_EXTRACTOR_TIMEOUT_S`` overrides the per-call timeout
|
||||||
|
(default 90 seconds — first invocation is slow because Node.js
|
||||||
|
startup plus OAuth check is non-trivial).
|
||||||
|
|
||||||
|
Implementation notes:
|
||||||
|
|
||||||
|
- We run ``claude -p`` with ``--model <alias>``,
|
||||||
|
``--append-system-prompt`` for the extraction instructions,
|
||||||
|
``--no-session-persistence`` so we don't pollute session history,
|
||||||
|
and ``--disable-slash-commands`` so stray ``/foo`` in an extracted
|
||||||
|
response never triggers something.
|
||||||
|
- The CLI is invoked from a temp working directory so it does not
|
||||||
|
auto-discover ``CLAUDE.md`` / ``DEV-LEDGER.md`` / ``AGENTS.md``
|
||||||
|
from the repo root. We want a bare extraction context, not the
|
||||||
|
full project briefing. We can't use ``--bare`` because that
|
||||||
|
forces API-key auth; the temp-cwd trick is the lightest way to
|
||||||
|
keep OAuth auth while skipping project context loading.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import shutil
|
||||||
|
import subprocess
|
||||||
|
import tempfile
|
||||||
|
from dataclasses import dataclass
|
||||||
|
from functools import lru_cache
|
||||||
|
|
||||||
|
from atocore.interactions.service import Interaction
|
||||||
|
from atocore.memory.extractor import MemoryCandidate
|
||||||
|
from atocore.memory.service import MEMORY_TYPES
|
||||||
|
from atocore.observability.logger import get_logger
|
||||||
|
|
||||||
|
log = get_logger("extractor_llm")
|
||||||
|
|
||||||
|
LLM_EXTRACTOR_VERSION = "llm-0.2.0"
|
||||||
|
DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
|
||||||
|
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
|
||||||
|
MAX_RESPONSE_CHARS = 8000
|
||||||
|
MAX_PROMPT_CHARS = 2000
|
||||||
|
|
||||||
|
_SYSTEM_PROMPT = """You extract durable memory candidates from LLM conversation turns for a personal context engine called AtoCore.
|
||||||
|
|
||||||
|
Your job is to read one user prompt plus the assistant's response and decide which durable facts, decisions, preferences, architectural rules, or project invariants should be remembered across future sessions.
|
||||||
|
|
||||||
|
Rules:
|
||||||
|
|
||||||
|
1. Only surface durable claims. Skip transient status ("deploy is still running"), instructional guidance ("here is how to run the command"), troubleshooting tactics, ephemeral recommendations ("merge this PR now"), and session recaps.
|
||||||
|
2. A candidate is durable when a reader coming back in two weeks would still need to know it. Architectural choices, named rules, ratified decisions, invariants, procurement commitments, and project-level constraints qualify. Conversational fillers and step-by-step instructions do not.
|
||||||
|
3. Each candidate must stand alone. Rewrite the claim in one sentence under 200 characters with enough context that a reader without the conversation understands it.
|
||||||
|
4. Each candidate must have a type from this closed set: project, knowledge, preference, adaptation.
|
||||||
|
5. If the conversation is clearly scoped to a project (p04-gigabit, p05-interferometer, p06-polisher, atocore), set ``project`` to that id. Otherwise leave ``project`` empty.
|
||||||
|
6. If the response makes no durable claim, return an empty list. It is correct and expected to return [] on most conversational turns.
|
||||||
|
7. Confidence should be 0.5 by default so human review workload is honest. Raise to 0.6 only when the response states the claim in an unambiguous, committed form (e.g. "the decision is X", "the selected approach is Y", "X is non-negotiable").
|
||||||
|
8. Output must be a raw JSON array and nothing else. No prose before or after. No markdown fences. No explanations.
|
||||||
|
|
||||||
|
Each array element has exactly this shape:
|
||||||
|
|
||||||
|
{"type": "project|knowledge|preference|adaptation", "content": "...", "project": "...", "confidence": 0.5}
|
||||||
|
|
||||||
|
Return [] when there is nothing to extract."""
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class LLMExtractionResult:
|
||||||
|
candidates: list[MemoryCandidate]
|
||||||
|
raw_output: str
|
||||||
|
error: str = ""
|
||||||
|
|
||||||
|
|
||||||
|
@lru_cache(maxsize=1)
|
||||||
|
def _sandbox_cwd() -> str:
|
||||||
|
"""Return a stable temp directory for ``claude -p`` invocations.
|
||||||
|
|
||||||
|
We want the CLI to run from a directory that does NOT contain
|
||||||
|
``CLAUDE.md`` / ``DEV-LEDGER.md`` / ``AGENTS.md``, so every
|
||||||
|
extraction call starts with a clean context instead of the full
|
||||||
|
AtoCore project briefing. Cached so the directory persists for
|
||||||
|
the lifetime of the process.
|
||||||
|
"""
|
||||||
|
return tempfile.mkdtemp(prefix="ato-llm-extract-")
|
||||||
|
|
||||||
|
|
||||||
|
def _cli_available() -> bool:
|
||||||
|
return shutil.which("claude") is not None
|
||||||
|
|
||||||
|
|
||||||
|
def extract_candidates_llm(
|
||||||
|
interaction: Interaction,
|
||||||
|
model: str | None = None,
|
||||||
|
timeout_s: float | None = None,
|
||||||
|
) -> list[MemoryCandidate]:
|
||||||
|
"""Run the LLM-assisted extractor against one interaction.
|
||||||
|
|
||||||
|
Returns a list of ``MemoryCandidate`` objects, empty on any
|
||||||
|
failure path. The caller is responsible for persistence.
|
||||||
|
"""
|
||||||
|
return extract_candidates_llm_verbose(
|
||||||
|
interaction,
|
||||||
|
model=model,
|
||||||
|
timeout_s=timeout_s,
|
||||||
|
).candidates
|
||||||
|
|
||||||
|
|
||||||
|
def extract_candidates_llm_verbose(
|
||||||
|
interaction: Interaction,
|
||||||
|
model: str | None = None,
|
||||||
|
timeout_s: float | None = None,
|
||||||
|
) -> LLMExtractionResult:
|
||||||
|
"""Like ``extract_candidates_llm`` but also returns the raw
|
||||||
|
subprocess output and any error encountered, for eval / debugging.
|
||||||
|
"""
|
||||||
|
if not _cli_available():
|
||||||
|
return LLMExtractionResult(
|
||||||
|
candidates=[],
|
||||||
|
raw_output="",
|
||||||
|
error="claude_cli_missing",
|
||||||
|
)
|
||||||
|
|
||||||
|
response_text = (interaction.response or "").strip()
|
||||||
|
if not response_text:
|
||||||
|
return LLMExtractionResult(candidates=[], raw_output="", error="empty_response")
|
||||||
|
|
||||||
|
prompt_excerpt = (interaction.prompt or "")[:MAX_PROMPT_CHARS]
|
||||||
|
response_excerpt = response_text[:MAX_RESPONSE_CHARS]
|
||||||
|
user_message = (
|
||||||
|
f"PROJECT HINT (may be empty): {interaction.project or ''}\n\n"
|
||||||
|
f"USER PROMPT:\n{prompt_excerpt}\n\n"
|
||||||
|
f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
|
||||||
|
"Return the JSON array now."
|
||||||
|
)
|
||||||
|
|
||||||
|
args = [
|
||||||
|
"claude",
|
||||||
|
"-p",
|
||||||
|
"--model",
|
||||||
|
model or DEFAULT_MODEL,
|
||||||
|
"--append-system-prompt",
|
||||||
|
_SYSTEM_PROMPT,
|
||||||
|
"--no-session-persistence",
|
||||||
|
"--disable-slash-commands",
|
||||||
|
user_message,
|
||||||
|
]
|
||||||
|
|
||||||
|
try:
|
||||||
|
completed = subprocess.run(
|
||||||
|
args,
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
timeout=timeout_s or DEFAULT_TIMEOUT_S,
|
||||||
|
cwd=_sandbox_cwd(),
|
||||||
|
encoding="utf-8",
|
||||||
|
errors="replace",
|
||||||
|
)
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
log.error("llm_extractor_timeout", interaction_id=interaction.id)
|
||||||
|
return LLMExtractionResult(candidates=[], raw_output="", error="timeout")
|
||||||
|
except Exception as exc: # pragma: no cover - unexpected subprocess failure
|
||||||
|
log.error("llm_extractor_subprocess_failed", error=str(exc))
|
||||||
|
return LLMExtractionResult(candidates=[], raw_output="", error=f"subprocess_error: {exc}")
|
||||||
|
|
||||||
|
if completed.returncode != 0:
|
||||||
|
log.error(
|
||||||
|
"llm_extractor_nonzero_exit",
|
||||||
|
interaction_id=interaction.id,
|
||||||
|
returncode=completed.returncode,
|
||||||
|
stderr_prefix=(completed.stderr or "")[:200],
|
||||||
|
)
|
||||||
|
return LLMExtractionResult(
|
||||||
|
candidates=[],
|
||||||
|
raw_output=completed.stdout or "",
|
||||||
|
error=f"exit_{completed.returncode}",
|
||||||
|
)
|
||||||
|
|
||||||
|
raw_output = (completed.stdout or "").strip()
|
||||||
|
candidates = _parse_candidates(raw_output, interaction)
|
||||||
|
log.info(
|
||||||
|
"llm_extractor_done",
|
||||||
|
interaction_id=interaction.id,
|
||||||
|
candidate_count=len(candidates),
|
||||||
|
model=model or DEFAULT_MODEL,
|
||||||
|
)
|
||||||
|
return LLMExtractionResult(candidates=candidates, raw_output=raw_output)
|
||||||
|
|
||||||
|
|
||||||
|
def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryCandidate]:
|
||||||
|
"""Parse the model's JSON output into MemoryCandidate objects.
|
||||||
|
|
||||||
|
Tolerates common model glitches: surrounding whitespace, stray
|
||||||
|
markdown fences, leading/trailing prose. Silently drops malformed
|
||||||
|
array elements rather than raising.
|
||||||
|
"""
|
||||||
|
text = raw_output.strip()
|
||||||
|
if text.startswith("```"):
|
||||||
|
text = text.strip("`")
|
||||||
|
first_newline = text.find("\n")
|
||||||
|
if first_newline >= 0:
|
||||||
|
text = text[first_newline + 1 :]
|
||||||
|
if text.endswith("```"):
|
||||||
|
text = text[:-3]
|
||||||
|
text = text.strip()
|
||||||
|
|
||||||
|
if not text or text == "[]":
|
||||||
|
return []
|
||||||
|
|
||||||
|
if not text.lstrip().startswith("["):
|
||||||
|
start = text.find("[")
|
||||||
|
end = text.rfind("]")
|
||||||
|
if start >= 0 and end > start:
|
||||||
|
text = text[start : end + 1]
|
||||||
|
|
||||||
|
try:
|
||||||
|
parsed = json.loads(text)
|
||||||
|
except json.JSONDecodeError as exc:
|
||||||
|
log.error("llm_extractor_parse_failed", error=str(exc), raw_prefix=raw_output[:120])
|
||||||
|
return []
|
||||||
|
|
||||||
|
if not isinstance(parsed, list):
|
||||||
|
return []
|
||||||
|
|
||||||
|
results: list[MemoryCandidate] = []
|
||||||
|
for item in parsed:
|
||||||
|
if not isinstance(item, dict):
|
||||||
|
continue
|
||||||
|
mem_type = str(item.get("type") or "").strip().lower()
|
||||||
|
content = str(item.get("content") or "").strip()
|
||||||
|
project = str(item.get("project") or "").strip()
|
||||||
|
if not project and interaction.project:
|
||||||
|
project = interaction.project
|
||||||
|
confidence_raw = item.get("confidence", 0.5)
|
||||||
|
if mem_type not in MEMORY_TYPES:
|
||||||
|
continue
|
||||||
|
if not content:
|
||||||
|
continue
|
||||||
|
try:
|
||||||
|
confidence = float(confidence_raw)
|
||||||
|
except (TypeError, ValueError):
|
||||||
|
confidence = 0.5
|
||||||
|
confidence = max(0.0, min(1.0, confidence))
|
||||||
|
results.append(
|
||||||
|
MemoryCandidate(
|
||||||
|
memory_type=mem_type,
|
||||||
|
content=content[:1000],
|
||||||
|
rule="llm_extraction",
|
||||||
|
source_span=content[:200],
|
||||||
|
project=project,
|
||||||
|
confidence=confidence,
|
||||||
|
source_interaction_id=interaction.id,
|
||||||
|
extractor_version=LLM_EXTRACTOR_VERSION,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
return results
|
||||||
@@ -413,8 +413,17 @@ def get_memories_for_context(
|
|||||||
if query_tokens is not None:
|
if query_tokens is not None:
|
||||||
pool = _rank_memories_for_query(pool, query_tokens)
|
pool = _rank_memories_for_query(pool, query_tokens)
|
||||||
|
|
||||||
|
# Per-entry cap prevents a single long memory from monopolizing
|
||||||
|
# the band. With 16 p06 memories competing for ~700 chars, an
|
||||||
|
# uncapped 530-char overview memory fills the entire budget before
|
||||||
|
# a query-relevant 150-char memory gets a slot. The cap ensures at
|
||||||
|
# least 2-3 entries fit regardless of individual memory length.
|
||||||
|
max_entry_chars = 250
|
||||||
for mem in pool:
|
for mem in pool:
|
||||||
entry = f"[{mem.memory_type}] {mem.content}"
|
content = mem.content
|
||||||
|
if len(content) > max_entry_chars:
|
||||||
|
content = content[:max_entry_chars - 3].rstrip() + "..."
|
||||||
|
entry = f"[{mem.memory_type}] {content}"
|
||||||
entry_len = len(entry) + 1
|
entry_len = len(entry) + 1
|
||||||
if entry_len > available - used:
|
if entry_len > available - used:
|
||||||
continue
|
continue
|
||||||
|
|||||||
177
tests/test_extractor_llm.py
Normal file
177
tests/test_extractor_llm.py
Normal file
@@ -0,0 +1,177 @@
|
|||||||
|
"""Tests for the LLM-assisted extractor path.
|
||||||
|
|
||||||
|
Focused on the parser and failure-mode contracts — the actual network
|
||||||
|
call is exercised out of band by running
|
||||||
|
``python scripts/extractor_eval.py --mode llm`` against the frozen
|
||||||
|
labeled corpus with ``ANTHROPIC_API_KEY`` set. These tests only
|
||||||
|
exercise the pieces that don't need network.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import os
|
||||||
|
from unittest.mock import patch
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
|
||||||
|
from atocore.interactions.service import Interaction
|
||||||
|
from atocore.memory.extractor_llm import (
|
||||||
|
LLM_EXTRACTOR_VERSION,
|
||||||
|
_parse_candidates,
|
||||||
|
extract_candidates_llm,
|
||||||
|
extract_candidates_llm_verbose,
|
||||||
|
)
|
||||||
|
import atocore.memory.extractor_llm as extractor_llm
|
||||||
|
|
||||||
|
|
||||||
|
def _make_interaction(prompt: str = "p", response: str = "r") -> Interaction:
|
||||||
|
return Interaction(
|
||||||
|
id="test-id",
|
||||||
|
prompt=prompt,
|
||||||
|
response=response,
|
||||||
|
response_summary="",
|
||||||
|
project="",
|
||||||
|
client="test",
|
||||||
|
session_id="",
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_handles_empty_array():
|
||||||
|
result = _parse_candidates("[]", _make_interaction())
|
||||||
|
assert result == []
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_handles_malformed_json():
|
||||||
|
result = _parse_candidates("{ not valid json", _make_interaction())
|
||||||
|
assert result == []
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_strips_markdown_fences():
|
||||||
|
raw = "```json\n[{\"type\": \"knowledge\", \"content\": \"x is y\", \"project\": \"\", \"confidence\": 0.5}]\n```"
|
||||||
|
result = _parse_candidates(raw, _make_interaction())
|
||||||
|
assert len(result) == 1
|
||||||
|
assert result[0].memory_type == "knowledge"
|
||||||
|
assert result[0].content == "x is y"
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_strips_surrounding_prose():
|
||||||
|
raw = "Here are the candidates:\n[{\"type\": \"project\", \"content\": \"foo\", \"project\": \"p04\", \"confidence\": 0.6}]\nThat's it."
|
||||||
|
result = _parse_candidates(raw, _make_interaction())
|
||||||
|
assert len(result) == 1
|
||||||
|
assert result[0].memory_type == "project"
|
||||||
|
assert result[0].project == "p04"
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_drops_invalid_memory_types():
|
||||||
|
raw = '[{"type": "nonsense", "content": "x"}, {"type": "project", "content": "y"}]'
|
||||||
|
result = _parse_candidates(raw, _make_interaction())
|
||||||
|
assert len(result) == 1
|
||||||
|
assert result[0].memory_type == "project"
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_drops_empty_content():
|
||||||
|
raw = '[{"type": "knowledge", "content": " "}, {"type": "knowledge", "content": "real"}]'
|
||||||
|
result = _parse_candidates(raw, _make_interaction())
|
||||||
|
assert len(result) == 1
|
||||||
|
assert result[0].content == "real"
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_clamps_confidence_to_unit_interval():
|
||||||
|
raw = '[{"type": "knowledge", "content": "c1", "confidence": 2.5}, {"type": "knowledge", "content": "c2", "confidence": -0.4}]'
|
||||||
|
result = _parse_candidates(raw, _make_interaction())
|
||||||
|
assert result[0].confidence == 1.0
|
||||||
|
assert result[1].confidence == 0.0
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_defaults_confidence_on_missing_field():
|
||||||
|
raw = '[{"type": "knowledge", "content": "c1"}]'
|
||||||
|
result = _parse_candidates(raw, _make_interaction())
|
||||||
|
assert result[0].confidence == 0.5
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_tags_version_and_rule():
|
||||||
|
raw = '[{"type": "project", "content": "c1"}]'
|
||||||
|
result = _parse_candidates(raw, _make_interaction())
|
||||||
|
assert result[0].rule == "llm_extraction"
|
||||||
|
assert result[0].extractor_version == LLM_EXTRACTOR_VERSION
|
||||||
|
assert result[0].source_interaction_id == "test-id"
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_falls_back_to_interaction_project():
|
||||||
|
"""R6: when the model returns empty project but the interaction
|
||||||
|
has one, the candidate should inherit the interaction's project."""
|
||||||
|
raw = '[{"type": "project", "content": "machine works offline"}]'
|
||||||
|
interaction = _make_interaction()
|
||||||
|
interaction.project = "p06-polisher"
|
||||||
|
result = _parse_candidates(raw, interaction)
|
||||||
|
assert result[0].project == "p06-polisher"
|
||||||
|
|
||||||
|
|
||||||
|
def test_parser_keeps_model_project_when_provided():
|
||||||
|
"""Model-supplied project takes precedence over interaction."""
|
||||||
|
raw = '[{"type": "project", "content": "x", "project": "p04-gigabit"}]'
|
||||||
|
interaction = _make_interaction()
|
||||||
|
interaction.project = "p06-polisher"
|
||||||
|
result = _parse_candidates(raw, interaction)
|
||||||
|
assert result[0].project == "p04-gigabit"
|
||||||
|
|
||||||
|
|
||||||
|
def test_missing_cli_returns_empty(monkeypatch):
|
||||||
|
"""If ``claude`` is not on PATH the extractor returns empty, never raises."""
|
||||||
|
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: False)
|
||||||
|
result = extract_candidates_llm_verbose(_make_interaction("p", "some real response"))
|
||||||
|
assert result.candidates == []
|
||||||
|
assert result.error == "claude_cli_missing"
|
||||||
|
|
||||||
|
|
||||||
|
def test_empty_response_returns_empty(monkeypatch):
|
||||||
|
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||||
|
result = extract_candidates_llm_verbose(_make_interaction("p", ""))
|
||||||
|
assert result.candidates == []
|
||||||
|
assert result.error == "empty_response"
|
||||||
|
|
||||||
|
|
||||||
|
def test_subprocess_timeout_returns_empty(monkeypatch):
|
||||||
|
"""A subprocess timeout must not raise into the caller."""
|
||||||
|
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||||
|
|
||||||
|
import subprocess as _sp
|
||||||
|
|
||||||
|
def _boom(*a, **kw):
|
||||||
|
raise _sp.TimeoutExpired(cmd=a[0] if a else "claude", timeout=1)
|
||||||
|
|
||||||
|
monkeypatch.setattr(extractor_llm.subprocess, "run", _boom)
|
||||||
|
result = extract_candidates_llm_verbose(_make_interaction("p", "real response"))
|
||||||
|
assert result.candidates == []
|
||||||
|
assert result.error == "timeout"
|
||||||
|
|
||||||
|
|
||||||
|
def test_subprocess_nonzero_exit_returns_empty(monkeypatch):
|
||||||
|
"""A non-zero CLI exit (auth failure, etc.) must not raise."""
|
||||||
|
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||||
|
|
||||||
|
class _Completed:
|
||||||
|
returncode = 1
|
||||||
|
stdout = ""
|
||||||
|
stderr = "auth failed"
|
||||||
|
|
||||||
|
monkeypatch.setattr(extractor_llm.subprocess, "run", lambda *a, **kw: _Completed())
|
||||||
|
result = extract_candidates_llm_verbose(_make_interaction("p", "real response"))
|
||||||
|
assert result.candidates == []
|
||||||
|
assert result.error == "exit_1"
|
||||||
|
|
||||||
|
|
||||||
|
def test_happy_path_parses_stdout(monkeypatch):
|
||||||
|
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||||
|
|
||||||
|
class _Completed:
|
||||||
|
returncode = 0
|
||||||
|
stdout = '[{"type": "project", "content": "p04 selected Option B", "project": "p04-gigabit", "confidence": 0.6}]'
|
||||||
|
stderr = ""
|
||||||
|
|
||||||
|
monkeypatch.setattr(extractor_llm.subprocess, "run", lambda *a, **kw: _Completed())
|
||||||
|
result = extract_candidates_llm_verbose(_make_interaction("p", "r"))
|
||||||
|
assert len(result.candidates) == 1
|
||||||
|
assert result.candidates[0].memory_type == "project"
|
||||||
|
assert result.candidates[0].project == "p04-gigabit"
|
||||||
|
assert abs(result.candidates[0].confidence - 0.6) < 1e-9
|
||||||
Reference in New Issue
Block a user