Compare commits
16 Commits
codex/audi
...
codex/open
| Author | SHA1 | Date | |
|---|---|---|---|
| 500a29aeba | |||
| 0371739877 | |||
| f2ec5d43de | |||
| 72ca823206 | |||
| a6ae6166a4 | |||
| 4f8bec7419 | |||
| 52380a233e | |||
| 8b77e83f0a | |||
| dbb8f915e2 | |||
| e5e9a9931e | |||
| 144dbbd700 | |||
| 7650c339a2 | |||
| 69c971708a | |||
| 8951c624fe | |||
| 1a2ee5e07f | |||
| 9b149d4bfd |
@@ -6,13 +6,14 @@
|
||||
|
||||
## Orientation
|
||||
|
||||
- **live_sha** (Dalidou `/health` build_sha): `39d73e9`
|
||||
- **last_updated**: 2026-04-12 by Codex (audit branch `codex/audit-2026-04-12-extraction`)
|
||||
- **main_tip**: `ac7f77d`
|
||||
- **test_count**: 280 passing
|
||||
- **harness**: `16/18 PASS` (p06-firmware-interface = R7 ranking tie; p06-tailscale = chunk bleed)
|
||||
- **active_memories**: 36 (p06-polisher 16, p05-interferometer 6, p04-gigabit 5, atocore 5, other 4)
|
||||
- **project_state_entries**: p04=5, p05=6, p06=6 (Wave 2 entries present on live Dalidou; 17 total visible)
|
||||
- **live_sha** (Dalidou `/health` build_sha): `8951c62` (R9 fix at e5e9a99 not yet deployed)
|
||||
- **last_updated**: 2026-04-12 by Codex (branch `codex/openclaw-capture-plugin`)
|
||||
- **main_tip**: `4f8bec7`
|
||||
- **test_count**: 290 passing (local dev shell)
|
||||
- **harness**: `17/18 PASS` (only p06-tailscale still failing)
|
||||
- **active_memories**: 41
|
||||
- **candidate_memories**: 0
|
||||
- **project_state_entries**: p04=5, p05=6, p06=6 (Wave 2 entries still present on live Dalidou; 17 total visible)
|
||||
- **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron env `ATOCORE_BACKUP_RSYNC`, verified
|
||||
|
||||
## Active Plan
|
||||
@@ -127,12 +128,13 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha
|
||||
| R4 | Codex | P2 | DEV-LEDGER.md:11 | Orientation `main_tip` was stale versus `HEAD` / `origin/main` | fixed | Codex | 2026-04-11 | 81307ce |
|
||||
| R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | fixed | Claude | 2026-04-12 | c67bec0 |
|
||||
| R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | fixed | Claude | 2026-04-12 | 39d73e9 |
|
||||
| R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | open | Claude | 2026-04-12 | |
|
||||
| R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | open | Claude | 2026-04-12 | |
|
||||
| R9 | Codex | P2 | src/atocore/memory/extractor_llm.py:258-259 | The R6 fallback only repairs empty project output. A wrong non-empty model project still overrides the interaction's known scope, so project attribution is improved but not yet trust-preserving. | open | Claude | 2026-04-12 | |
|
||||
| R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | fixed | Claude | 2026-04-12 | 8951c62 |
|
||||
| R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | fixed | Claude | 2026-04-12 | 69c9717 |
|
||||
| R9 | Codex | P2 | src/atocore/memory/extractor_llm.py:258-259 | The R6 fallback only repairs empty project output. A wrong non-empty model project still overrides the interaction's known scope, so project attribution is improved but not yet trust-preserving. | fixed | Claude | 2026-04-12 | e5e9a99 |
|
||||
| R10 | Codex | P2 | docs/master-plan-status.md:31-33 | "Phase 8 - OpenClaw Integration" is fair as a baseline milestone, but not as a "primary" integration claim. `t420-openclaw/atocore.py` currently covers a narrow read-oriented subset (13 request shapes vs 32 API routes) plus fail-open health, while memory/interactions/admin write paths remain out of surface. | open | Claude | 2026-04-12 | |
|
||||
| R11 | Codex | P2 | src/atocore/api/routes.py:773-845 | `POST /admin/extract-batch` still accepts `mode="llm"` inside the container and returns a successful 0-candidate result instead of surfacing that host-only LLM extraction is unavailable from this runtime. That is a misleading API contract for operators. | open | Claude | 2026-04-12 | |
|
||||
| R12 | Codex | P2 | scripts/batch_llm_extract_live.py:39-190 | The host-side extractor duplicates the LLM system prompt and JSON parsing logic from `src/atocore/memory/extractor_llm.py`. It works today, but this is now a prompt/parser drift risk across the container and host implementations. | open | Claude | 2026-04-12 | |
|
||||
| R13 | Codex | P2 | DEV-LEDGER.md:12 | The new `286 passing` test-count claim is not reproducibly auditable from the current audit environments: neither Dalidou nor the clean worktree has `pytest` available. The claim may be true in Claude's dev shell, but it remains unverified in this audit. | open | Claude | 2026-04-12 | |
|
||||
|
||||
## Recent Decisions
|
||||
|
||||
@@ -150,6 +152,14 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha
|
||||
|
||||
## Session Log
|
||||
|
||||
- **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`, verification close)** verified the final capture-plugin behavior on Dalidou after the `message_sending` reliability fix. New OpenClaw interactions now capture reliably and the stored prompt is clean human text instead of the Discord wrapper blob. Verified examples on Dalidou: `Final capture test` and `Yes, fix it, or I'll ask opus to do it`. The oldest two wrapper-heavy captures remain in history from earlier iterations, but new captures are clean.
|
||||
- **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`, polish pass 3)** changed turn pairing from `llm_output` to `message_sending`. The plugin now caches the human prompt at `before_dispatch` and posts to AtoCore only when OpenClaw emits the real outbound assistant message. This should restore reliability while keeping prompt cleanliness. Awaiting one more post-restart validation turn.
|
||||
- **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`, polish pass 2)** switched prompt capture from `before_agent_reply.cleanedBody` to `before_dispatch.body` / `content`, because the earlier path still stored Discord wrapper metadata. This should bind capture to the dispatch-stage human message instead of the prompt-builder artifact. Awaiting one more post-restart turn to verify on Dalidou.
|
||||
- **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`, polish pass)** tightened the OpenClaw capture plugin to use `before_agent_reply.cleanedBody` instead of the raw prompt-build input, which should prevent Discord wrapper metadata from being stored as the interaction prompt. Added `agent_end` cleanup and updated plugin docs. A fresh post-restart user turn is still needed to verify prompt cleanliness on Dalidou.
|
||||
- **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`)** added a minimal external OpenClaw plugin at `openclaw-plugins/atocore-capture/` that mirrors Claude Code capture semantics: user-triggered assistant turns are POSTed to AtoCore `/interactions` with `client="openclaw"` and `reinforce=true`, fail-open, no extraction in-path. For live verification, temporarily added the local plugin load path to OpenClaw config and restarted the gateway so the plugin can load. Branch truth is ready; end-to-end verification still needs one fresh post-restart OpenClaw user turn to confirm new `client=openclaw` interactions appear on Dalidou.
|
||||
- **2026-04-12 Claude** Batch 3 (R9 fix): `144dbbd..e5e9a99`. Trust hierarchy for project attribution — interaction scope always wins when set, model project only used for unscoped interactions + registered check. 7 case tests (A-G) cover every combination. Harness 17/18 (no regression). Tests 286->290. Before: wrong registered project could silently override interaction scope. After: interaction.project is the strongest signal; model project is only a fallback for unscoped captures. Not yet guaranteed: nothing prevents the *same* project's model output from being semantically wrong within that project. R9 marked fixed.
|
||||
|
||||
- **2026-04-12 Codex (audit branch `codex/audit-batch2`)** audited `69c9717..origin/main` against the current branch tip and live Dalidou. Verified: live build is `8951c62`, retrieval harness improved to **17/18 PASS**, candidate queue is now empty, active memories rose to **41**, and `python3 scripts/auto_triage.py --dry-run --base-url http://127.0.0.1:8100` runs cleanly on Dalidou but only exercised the empty-queue path. Updated R7 to **fixed** (`8951c62`) and R8 to **fixed** (`69c9717`). Kept R9 **open** because project trust-preservation still allows a wrong non-empty registered project from the model to override the interaction scope. Added R13 because the new `286 passing` claim could not be independently reproduced in this audit: `pytest` is absent on both Dalidou and the clean audit worktree. Also corrected stale Orientation fields (live SHA, main tip, harness, active/candidate memory counts).
|
||||
- **2026-04-12 Codex (audit branch `codex/audit-2026-04-12-extraction`)** audited `54d84b5..ac7f77d` with live Dalidou verification. Confirmed the host-side LLM extraction pipeline is operational: nightly cron points at `deploy/dalidou/cron-backup.sh`, Step 4 calls `deploy/dalidou/batch-extract.sh`, the batch script exists/executable on Dalidou, and a manual host-side run produced candidates successfully. Updated R1 and R5 to **fixed** (`c67bec0`) because extraction now runs unattended off-container. Live state during audit: build `39d73e9`, active memories **36**, candidate queue **29** (16 existing + 13 added by manual verification run), and `last_extract_batch_run` populated in AtoCore project state. Added R11-R12 for the misleading container `mode=llm` no-op and host/container prompt-parser duplication. Security note: CLI positional prompt/response text is visible in process args while `claude -p` runs; acceptable on a single-user home host, but worth remembering if Dalidou's trust boundary changes.
|
||||
- **2026-04-12 Codex (audit branch `codex/audit-2026-04-12-final`)** audited `c5bad99..e2895b5` against origin/main, live Dalidou, and the OpenClaw client script. Live state checked: build `39d73e9`, harness reproducible at **16/18 PASS**, active memories **36**, and `t420-openclaw/atocore.py health` fails open correctly with `fail_open=true`. Spot-checks of Wave 2 project-state entries matched their cited vault docs. Updated R5-R8 status reality (R6 fixed by `39d73e9`), added R9-R10, and corrected Orientation `main_tip` to `e2895b5` because the ledger had drifted behind origin/main. Note: live Dalidou is still on `39d73e9`, so branch-truth and deploy-truth are not the same yet.
|
||||
- **2026-04-12 Claude** Wave 2 trusted operational ingestion + codex audit response. Read 6 vault docs, created 8 new Trusted Project State entries (p04 +2, p05 +3, p06 +3). Fixed R6 (project fallback in LLM extractor) per codex audit. Fixed misscoped p06 offline memory on live Dalidou. Merged codex/audit-2026-04-12. Switched default LLM model from haiku to sonnet. Harness 15/18 -> 16/18. Tests 278 -> 280. main_tip 146f2e4 -> 39d73e9.
|
||||
|
||||
@@ -31,10 +31,11 @@ log() { printf '[%s] %s\n' "$TIMESTAMP" "$*"; }
|
||||
# The Python script needs the atocore source on PYTHONPATH
|
||||
export PYTHONPATH="$APP_DIR/src:${PYTHONPATH:-}"
|
||||
|
||||
log "=== AtoCore batch LLM extraction starting ==="
|
||||
log "=== AtoCore batch extraction + triage starting ==="
|
||||
log "URL=$ATOCORE_URL LIMIT=$LIMIT"
|
||||
|
||||
# Run the host-side extraction script
|
||||
# Step A: Extract candidates from recent interactions
|
||||
log "Step A: LLM extraction"
|
||||
python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
|
||||
--base-url "$ATOCORE_URL" \
|
||||
--limit "$LIMIT" \
|
||||
@@ -42,4 +43,12 @@ python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
|
||||
log "WARN: batch extraction failed (non-blocking)"
|
||||
}
|
||||
|
||||
log "=== AtoCore batch LLM extraction complete ==="
|
||||
# Step B: Auto-triage candidates in the queue
|
||||
log "Step B: auto-triage"
|
||||
python3 "$APP_DIR/scripts/auto_triage.py" \
|
||||
--base-url "$ATOCORE_URL" \
|
||||
2>&1 || {
|
||||
log "WARN: auto-triage failed (non-blocking)"
|
||||
}
|
||||
|
||||
log "=== AtoCore batch extraction + triage complete ==="
|
||||
|
||||
@@ -24,12 +24,15 @@ read-only additive mode.
|
||||
- Phase 5 - Project State
|
||||
- Phase 7 - Context Builder
|
||||
|
||||
### Partial
|
||||
|
||||
- Phase 4 - Identity / Preferences
|
||||
|
||||
### Baseline Complete
|
||||
|
||||
- Phase 4 - Identity / Preferences. As of 2026-04-12: 3 identity
|
||||
memories (role, projects, infrastructure) and 3 preference memories
|
||||
(no API keys, multi-model collab, action-over-discussion) seeded
|
||||
on live Dalidou. Identity/preference band surfaces in context packs
|
||||
at 5% budget ratio. Future identity/preference extraction happens
|
||||
organically via the nightly LLM extraction pipeline.
|
||||
|
||||
- Phase 8 - OpenClaw Integration. As of 2026-04-12 the T420 OpenClaw
|
||||
helper (`t420-openclaw/atocore.py`) is verified end-to-end against
|
||||
live Dalidou: health check, auto-context with project detection,
|
||||
|
||||
32
openclaw-plugins/atocore-capture/README.md
Normal file
32
openclaw-plugins/atocore-capture/README.md
Normal file
@@ -0,0 +1,32 @@
|
||||
# AtoCore Capture Plugin for OpenClaw
|
||||
|
||||
Minimal OpenClaw plugin that mirrors Claude Code's `capture_stop.py` behavior:
|
||||
|
||||
- watches user-triggered assistant turns
|
||||
- uses OpenClaw's dispatch-stage message body (`before_dispatch.body`) for the human prompt, then pairs it with the actual outbound assistant message on `message_sending`
|
||||
- POSTs `prompt` + `response` to `POST /interactions`
|
||||
- sets `client="openclaw"`
|
||||
- sets `reinforce=true`
|
||||
- fails open on network or API errors
|
||||
|
||||
## Config
|
||||
|
||||
Optional plugin config:
|
||||
|
||||
```json
|
||||
{
|
||||
"baseUrl": "http://dalidou:8100",
|
||||
"minPromptLength": 15,
|
||||
"maxResponseLength": 50000
|
||||
}
|
||||
```
|
||||
|
||||
If `baseUrl` is omitted, the plugin uses `ATOCORE_BASE_URL` or defaults to `http://dalidou:8100`.
|
||||
|
||||
## Notes
|
||||
|
||||
- Project detection is intentionally left empty for now. Unscoped capture is acceptable because AtoCore's extraction pipeline handles unscoped interactions.
|
||||
- Prompt cleaning is done inside the plugin by reading OpenClaw's dispatch-stage message body instead of the raw prompt-build input.
|
||||
- Turn pairing is done by caching the prompt on dispatch and posting only when OpenClaw emits the outbound assistant message, which is more reliable than pairing against raw model output events.
|
||||
- Extraction is **not** part of the capture path. This plugin only records interactions and lets AtoCore reinforcement run automatically.
|
||||
- The plugin captures only user-triggered turns, not heartbeats or system-only runs.
|
||||
125
openclaw-plugins/atocore-capture/index.js
Normal file
125
openclaw-plugins/atocore-capture/index.js
Normal file
@@ -0,0 +1,125 @@
|
||||
import { definePluginEntry } from "openclaw/plugin-sdk/core";
|
||||
|
||||
const DEFAULT_BASE_URL = process.env.ATOCORE_BASE_URL || "http://dalidou:8100";
|
||||
const DEFAULT_MIN_PROMPT_LENGTH = 15;
|
||||
const DEFAULT_MAX_RESPONSE_LENGTH = 50_000;
|
||||
|
||||
function trimText(value) {
|
||||
return typeof value === "string" ? value.trim() : "";
|
||||
}
|
||||
|
||||
function truncateResponse(text, maxLength) {
|
||||
if (!text || text.length <= maxLength) return text;
|
||||
return `${text.slice(0, maxLength)}\n\n[truncated]`;
|
||||
}
|
||||
|
||||
function shouldCapturePrompt(prompt, minLength) {
|
||||
const text = trimText(prompt);
|
||||
if (!text) return false;
|
||||
if (text.startsWith("<")) return false;
|
||||
return text.length >= minLength;
|
||||
}
|
||||
|
||||
function buildKeys(...values) {
|
||||
return [...new Set(values.map((v) => trimText(v)).filter(Boolean))];
|
||||
}
|
||||
|
||||
function rememberPending(store, keys, payload) {
|
||||
for (const key of keys) store.set(key, payload);
|
||||
}
|
||||
|
||||
function takePending(store, keys) {
|
||||
for (const key of keys) {
|
||||
const value = store.get(key);
|
||||
if (value) {
|
||||
for (const k of keys) store.delete(k);
|
||||
store.delete(key);
|
||||
return value;
|
||||
}
|
||||
}
|
||||
return null;
|
||||
}
|
||||
|
||||
function clearPending(store, keys) {
|
||||
for (const key of keys) store.delete(key);
|
||||
}
|
||||
|
||||
async function postInteraction(baseUrl, payload, logger) {
|
||||
try {
|
||||
const res = await fetch(`${baseUrl.replace(/\/$/, "")}/interactions`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify(payload),
|
||||
signal: AbortSignal.timeout(10_000)
|
||||
});
|
||||
if (!res.ok) {
|
||||
logger?.debug?.("atocore_capture_post_failed", { status: res.status });
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
} catch (error) {
|
||||
logger?.debug?.("atocore_capture_post_error", {
|
||||
error: error instanceof Error ? error.message : String(error)
|
||||
});
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
export default definePluginEntry({
|
||||
register(api) {
|
||||
const logger = api.logger;
|
||||
const pendingBySession = new Map();
|
||||
|
||||
api.on("before_dispatch", async (event, ctx) => {
|
||||
const config = api.getConfig?.() || {};
|
||||
const minPromptLength = Number(config.minPromptLength || DEFAULT_MIN_PROMPT_LENGTH);
|
||||
const prompt = trimText(event?.body || event?.content || "");
|
||||
const keys = buildKeys(ctx?.sessionKey, ctx?.sessionId, event?.sessionKey, event?.sessionId, ctx?.conversationId, event?.conversationId);
|
||||
if (!keys.length) return;
|
||||
if (!shouldCapturePrompt(prompt, minPromptLength)) {
|
||||
clearPending(pendingBySession, keys);
|
||||
return;
|
||||
}
|
||||
rememberPending(pendingBySession, keys, {
|
||||
prompt,
|
||||
sessionId: trimText(ctx?.sessionId || event?.sessionId || ""),
|
||||
sessionKey: trimText(ctx?.sessionKey || event?.sessionKey || ""),
|
||||
conversationId: trimText(ctx?.conversationId || event?.conversationId || ""),
|
||||
project: ""
|
||||
});
|
||||
});
|
||||
|
||||
api.on("message_sending", async (event, ctx) => {
|
||||
const keys = buildKeys(ctx?.sessionKey, ctx?.sessionId, ctx?.conversationId);
|
||||
const pending = takePending(pendingBySession, keys);
|
||||
if (!pending) return;
|
||||
|
||||
const response = truncateResponse(
|
||||
trimText(event?.content || ""),
|
||||
Number((api.getConfig?.() || {}).maxResponseLength || DEFAULT_MAX_RESPONSE_LENGTH)
|
||||
);
|
||||
if (!response) return;
|
||||
|
||||
const config = api.getConfig?.() || {};
|
||||
const baseUrl = trimText(config.baseUrl) || DEFAULT_BASE_URL;
|
||||
const payload = {
|
||||
prompt: pending.prompt,
|
||||
response,
|
||||
client: "openclaw",
|
||||
session_id: pending.sessionKey || pending.sessionId || pending.conversationId,
|
||||
project: pending.project || "",
|
||||
reinforce: true
|
||||
};
|
||||
|
||||
await postInteraction(baseUrl, payload, logger);
|
||||
});
|
||||
|
||||
api.on("agent_end", async (event) => {
|
||||
clearPending(pendingBySession, buildKeys(event?.sessionKey, event?.sessionId));
|
||||
});
|
||||
|
||||
api.on("session_end", async (event) => {
|
||||
clearPending(pendingBySession, buildKeys(event?.sessionKey, event?.sessionId));
|
||||
});
|
||||
}
|
||||
});
|
||||
29
openclaw-plugins/atocore-capture/openclaw.plugin.json
Normal file
29
openclaw-plugins/atocore-capture/openclaw.plugin.json
Normal file
@@ -0,0 +1,29 @@
|
||||
{
|
||||
"id": "atocore-capture",
|
||||
"name": "AtoCore Capture",
|
||||
"description": "Captures completed OpenClaw assistant turns to AtoCore interactions for reinforcement.",
|
||||
"configSchema": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"baseUrl": {
|
||||
"type": "string",
|
||||
"description": "Override AtoCore base URL. Defaults to ATOCORE_BASE_URL or http://dalidou:8100"
|
||||
},
|
||||
"minPromptLength": {
|
||||
"type": "integer",
|
||||
"minimum": 1,
|
||||
"description": "Minimum user prompt length required before capture"
|
||||
},
|
||||
"maxResponseLength": {
|
||||
"type": "integer",
|
||||
"minimum": 100,
|
||||
"description": "Maximum assistant response length to store"
|
||||
}
|
||||
},
|
||||
"additionalProperties": false
|
||||
},
|
||||
"uiHints": {
|
||||
"category": "automation",
|
||||
"displayName": "AtoCore Capture"
|
||||
}
|
||||
}
|
||||
7
openclaw-plugins/atocore-capture/package.json
Normal file
7
openclaw-plugins/atocore-capture/package.json
Normal file
@@ -0,0 +1,7 @@
|
||||
{
|
||||
"name": "@atomaste/atocore-openclaw-capture",
|
||||
"private": true,
|
||||
"version": "0.0.0",
|
||||
"type": "module",
|
||||
"description": "OpenClaw plugin that captures assistant turns to AtoCore interactions"
|
||||
}
|
||||
247
scripts/auto_triage.py
Normal file
247
scripts/auto_triage.py
Normal file
@@ -0,0 +1,247 @@
|
||||
"""Auto-triage: LLM second-pass over candidate memories.
|
||||
|
||||
Fetches all status=candidate memories from the AtoCore API, asks
|
||||
a triage model (via claude -p) to classify each as promote / reject /
|
||||
needs_human, and executes the verdict via the promote/reject endpoints.
|
||||
Only needs_human candidates remain in the queue for manual review.
|
||||
|
||||
Trust model:
|
||||
- Auto-promote: model says promote AND confidence >= 0.8 AND no
|
||||
duplicate content in existing active memories
|
||||
- Auto-reject: model says reject
|
||||
- needs_human: everything else stays in queue
|
||||
|
||||
Runs host-side (same as batch extraction) because it needs the
|
||||
claude CLI. Intended to be called after batch-extract.sh in the
|
||||
nightly cron, or manually.
|
||||
|
||||
Usage:
|
||||
|
||||
python3 scripts/auto_triage.py --base-url http://localhost:8100
|
||||
python3 scripts/auto_triage.py --dry-run # preview without executing
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
import tempfile
|
||||
import urllib.error
|
||||
import urllib.parse
|
||||
import urllib.request
|
||||
|
||||
DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
|
||||
DEFAULT_MODEL = os.environ.get("ATOCORE_TRIAGE_MODEL", "sonnet")
|
||||
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_TRIAGE_TIMEOUT_S", "60"))
|
||||
AUTO_PROMOTE_MIN_CONFIDENCE = 0.8
|
||||
|
||||
TRIAGE_SYSTEM_PROMPT = """You are a memory triage reviewer for a personal context engine called AtoCore. You review candidate memories extracted from LLM conversations and decide whether each should be promoted to active status, rejected, or flagged for human review.
|
||||
|
||||
You will receive:
|
||||
- The candidate memory content and type
|
||||
- A list of existing active memories for the same project (to check for duplicates)
|
||||
|
||||
For each candidate, output exactly one JSON object:
|
||||
|
||||
{"verdict": "promote|reject|needs_human", "confidence": 0.0-1.0, "reason": "one sentence"}
|
||||
|
||||
Rules:
|
||||
|
||||
1. PROMOTE when the candidate states a durable architectural fact, ratified decision, standing rule, or engineering constraint that is NOT already covered by an existing active memory. Confidence should reflect how certain you are this is worth keeping.
|
||||
|
||||
2. REJECT when the candidate is:
|
||||
- A stale point-in-time snapshot ("live SHA is X", "36 active memories")
|
||||
- An implementation detail too granular to be useful as standalone context
|
||||
- A planned-but-not-implemented feature description
|
||||
- A duplicate or near-duplicate of an existing active memory
|
||||
- A session observation or conversational filler
|
||||
- A process rule that belongs in DEV-LEDGER.md or AGENTS.md, not memory
|
||||
|
||||
3. NEEDS_HUMAN when you're genuinely unsure — the candidate might be valuable but you can't tell without domain knowledge. This should be rare (< 20% of candidates).
|
||||
|
||||
4. Output ONLY the JSON object. No prose, no markdown, no explanation outside the reason field."""
|
||||
|
||||
_sandbox_cwd = None
|
||||
|
||||
|
||||
def get_sandbox_cwd():
|
||||
global _sandbox_cwd
|
||||
if _sandbox_cwd is None:
|
||||
_sandbox_cwd = tempfile.mkdtemp(prefix="ato-triage-")
|
||||
return _sandbox_cwd
|
||||
|
||||
|
||||
def api_get(base_url, path, timeout=10):
|
||||
req = urllib.request.Request(f"{base_url}{path}")
|
||||
with urllib.request.urlopen(req, timeout=timeout) as resp:
|
||||
return json.loads(resp.read().decode("utf-8"))
|
||||
|
||||
|
||||
def api_post(base_url, path, body=None, timeout=10):
|
||||
data = json.dumps(body or {}).encode("utf-8")
|
||||
req = urllib.request.Request(
|
||||
f"{base_url}{path}", method="POST",
|
||||
headers={"Content-Type": "application/json"}, data=data,
|
||||
)
|
||||
with urllib.request.urlopen(req, timeout=timeout) as resp:
|
||||
return json.loads(resp.read().decode("utf-8"))
|
||||
|
||||
|
||||
def fetch_active_memories_for_project(base_url, project):
|
||||
"""Fetch active memories for dedup checking."""
|
||||
params = "active_only=true&limit=50"
|
||||
if project:
|
||||
params += f"&project={urllib.parse.quote(project)}"
|
||||
result = api_get(base_url, f"/memory?{params}")
|
||||
return result.get("memories", [])
|
||||
|
||||
|
||||
def triage_one(candidate, active_memories, model, timeout_s):
|
||||
"""Ask the triage model to classify one candidate."""
|
||||
if not shutil.which("claude"):
|
||||
return {"verdict": "needs_human", "confidence": 0.0, "reason": "claude CLI not available"}
|
||||
|
||||
active_summary = "\n".join(
|
||||
f"- [{m['memory_type']}] {m['content'][:150]}"
|
||||
for m in active_memories[:20]
|
||||
) or "(no active memories for this project)"
|
||||
|
||||
user_message = (
|
||||
f"CANDIDATE TO TRIAGE:\n"
|
||||
f" type: {candidate['memory_type']}\n"
|
||||
f" project: {candidate.get('project') or '(none)'}\n"
|
||||
f" content: {candidate['content']}\n\n"
|
||||
f"EXISTING ACTIVE MEMORIES FOR THIS PROJECT:\n{active_summary}\n\n"
|
||||
f"Return the JSON verdict now."
|
||||
)
|
||||
|
||||
args = [
|
||||
"claude", "-p",
|
||||
"--model", model,
|
||||
"--append-system-prompt", TRIAGE_SYSTEM_PROMPT,
|
||||
"--disable-slash-commands",
|
||||
user_message,
|
||||
]
|
||||
|
||||
try:
|
||||
completed = subprocess.run(
|
||||
args, capture_output=True, text=True,
|
||||
timeout=timeout_s, cwd=get_sandbox_cwd(),
|
||||
encoding="utf-8", errors="replace",
|
||||
)
|
||||
except subprocess.TimeoutExpired:
|
||||
return {"verdict": "needs_human", "confidence": 0.0, "reason": "triage model timed out"}
|
||||
except Exception as exc:
|
||||
return {"verdict": "needs_human", "confidence": 0.0, "reason": f"subprocess error: {exc}"}
|
||||
|
||||
if completed.returncode != 0:
|
||||
return {"verdict": "needs_human", "confidence": 0.0, "reason": f"claude exit {completed.returncode}"}
|
||||
|
||||
raw = (completed.stdout or "").strip()
|
||||
return parse_verdict(raw)
|
||||
|
||||
|
||||
def parse_verdict(raw):
|
||||
"""Parse the triage model's JSON verdict."""
|
||||
text = raw.strip()
|
||||
if text.startswith("```"):
|
||||
text = text.strip("`")
|
||||
nl = text.find("\n")
|
||||
if nl >= 0:
|
||||
text = text[nl + 1:]
|
||||
if text.endswith("```"):
|
||||
text = text[:-3]
|
||||
text = text.strip()
|
||||
|
||||
if not text.lstrip().startswith("{"):
|
||||
start = text.find("{")
|
||||
end = text.rfind("}")
|
||||
if start >= 0 and end > start:
|
||||
text = text[start:end + 1]
|
||||
|
||||
try:
|
||||
parsed = json.loads(text)
|
||||
except json.JSONDecodeError:
|
||||
return {"verdict": "needs_human", "confidence": 0.0, "reason": "failed to parse triage output"}
|
||||
|
||||
verdict = str(parsed.get("verdict", "needs_human")).strip().lower()
|
||||
if verdict not in {"promote", "reject", "needs_human"}:
|
||||
verdict = "needs_human"
|
||||
|
||||
confidence = parsed.get("confidence", 0.5)
|
||||
try:
|
||||
confidence = max(0.0, min(1.0, float(confidence)))
|
||||
except (TypeError, ValueError):
|
||||
confidence = 0.5
|
||||
|
||||
reason = str(parsed.get("reason", "")).strip()[:200]
|
||||
return {"verdict": verdict, "confidence": confidence, "reason": reason}
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description="Auto-triage candidate memories")
|
||||
parser.add_argument("--base-url", default=DEFAULT_BASE_URL)
|
||||
parser.add_argument("--model", default=DEFAULT_MODEL)
|
||||
parser.add_argument("--dry-run", action="store_true", help="preview without executing")
|
||||
args = parser.parse_args()
|
||||
|
||||
# Fetch candidates
|
||||
result = api_get(args.base_url, "/memory?status=candidate&limit=100")
|
||||
candidates = result.get("memories", [])
|
||||
print(f"candidates: {len(candidates)} model: {args.model} dry_run: {args.dry_run}")
|
||||
|
||||
if not candidates:
|
||||
print("queue empty, nothing to triage")
|
||||
return
|
||||
|
||||
# Cache active memories per project for dedup
|
||||
active_cache = {}
|
||||
promoted = rejected = needs_human = errors = 0
|
||||
|
||||
for i, cand in enumerate(candidates, 1):
|
||||
project = cand.get("project") or ""
|
||||
if project not in active_cache:
|
||||
active_cache[project] = fetch_active_memories_for_project(args.base_url, project)
|
||||
|
||||
verdict_obj = triage_one(cand, active_cache[project], args.model, DEFAULT_TIMEOUT_S)
|
||||
verdict = verdict_obj["verdict"]
|
||||
conf = verdict_obj["confidence"]
|
||||
reason = verdict_obj["reason"]
|
||||
|
||||
mid = cand["id"]
|
||||
label = f"[{i:2d}/{len(candidates)}] {mid[:8]} [{cand['memory_type']}]"
|
||||
|
||||
if verdict == "promote" and conf >= AUTO_PROMOTE_MIN_CONFIDENCE:
|
||||
if args.dry_run:
|
||||
print(f" WOULD PROMOTE {label} conf={conf:.2f} {reason}")
|
||||
else:
|
||||
try:
|
||||
api_post(args.base_url, f"/memory/{mid}/promote")
|
||||
print(f" PROMOTED {label} conf={conf:.2f} {reason}")
|
||||
active_cache[project].append(cand)
|
||||
except Exception:
|
||||
errors += 1
|
||||
promoted += 1
|
||||
elif verdict == "reject":
|
||||
if args.dry_run:
|
||||
print(f" WOULD REJECT {label} conf={conf:.2f} {reason}")
|
||||
else:
|
||||
try:
|
||||
api_post(args.base_url, f"/memory/{mid}/reject")
|
||||
print(f" REJECTED {label} conf={conf:.2f} {reason}")
|
||||
except Exception:
|
||||
errors += 1
|
||||
rejected += 1
|
||||
else:
|
||||
print(f" NEEDS_HUMAN {label} conf={conf:.2f} {reason}")
|
||||
needs_human += 1
|
||||
|
||||
print(f"\npromoted={promoted} rejected={rejected} needs_human={needs_human} errors={errors}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -100,6 +100,22 @@ def set_last_run(base_url, timestamp):
|
||||
pass
|
||||
|
||||
|
||||
_known_projects: set[str] = set()
|
||||
|
||||
|
||||
def _load_known_projects(base_url):
|
||||
"""Fetch registered project IDs from the API for R9 validation."""
|
||||
global _known_projects
|
||||
try:
|
||||
data = api_get(base_url, "/projects")
|
||||
_known_projects = {p["id"] for p in data.get("projects", [])}
|
||||
for p in data.get("projects", []):
|
||||
for alias in p.get("aliases", []):
|
||||
_known_projects.add(alias)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
|
||||
def extract_one(prompt, response, project, model, timeout_s):
|
||||
"""Run claude -p on one interaction, return parsed candidates."""
|
||||
if not shutil.which("claude"):
|
||||
@@ -175,9 +191,15 @@ def parse_candidates(raw, interaction_project):
|
||||
continue
|
||||
mem_type = str(item.get("type") or "").strip().lower()
|
||||
content = str(item.get("content") or "").strip()
|
||||
project = str(item.get("project") or "").strip()
|
||||
if not project and interaction_project:
|
||||
model_project = str(item.get("project") or "").strip()
|
||||
# R9 trust hierarchy: interaction scope always wins when set.
|
||||
# Model project only used for unscoped interactions + registered check.
|
||||
if interaction_project:
|
||||
project = interaction_project
|
||||
elif model_project and model_project in _known_projects:
|
||||
project = model_project
|
||||
else:
|
||||
project = ""
|
||||
conf = item.get("confidence", 0.5)
|
||||
if mem_type not in MEMORY_TYPES or not content:
|
||||
continue
|
||||
@@ -202,8 +224,9 @@ def main():
|
||||
parser.add_argument("--model", default=DEFAULT_MODEL)
|
||||
args = parser.parse_args()
|
||||
|
||||
_load_known_projects(args.base_url)
|
||||
since = args.since or get_last_run(args.base_url)
|
||||
print(f"since={since or '(first run)'} limit={args.limit} model={args.model}")
|
||||
print(f"since={since or '(first run)'} limit={args.limit} model={args.model} known_projects={len(_known_projects)}")
|
||||
|
||||
params = [f"limit={args.limit}"]
|
||||
if since:
|
||||
|
||||
1
scripts/eval_data/candidate_queue_2026-04-12.json
Normal file
1
scripts/eval_data/candidate_queue_2026-04-12.json
Normal file
File diff suppressed because one or more lines are too long
29
scripts/eval_data/candidate_queue_2026-04-12.txt
Normal file
29
scripts/eval_data/candidate_queue_2026-04-12.txt
Normal file
@@ -0,0 +1,29 @@
|
||||
1. [project ] proj=atocore AtoCore extraction must stay off the hot capture path; batch endpoint only
|
||||
2. [project ] proj=atocore Auto-promote gate: confidence ≥0.8 AND no duplicate in active memories
|
||||
3. [project ] proj=atocore AtoCore LLM extraction pipeline deployed on Dalidou host, runs via cron at 03:00 UTC via scripts/batch_llm_extract_live.py
|
||||
4. [project ] proj=atocore LLM extractor runs host-side (not in container) because claude CLI not available in container environment
|
||||
5. [project ] proj=atocore Host-side extraction script scripts/batch_llm_extract_live.py uses pure stdlib, no atocore imports for deployment simplicity
|
||||
6. [project ] proj=atocore POST /admin/extract-batch accepts mode: rule|llm, POST /interactions/{id}/extract now mode-aware
|
||||
7. [knowledge ] proj=atocore claude CLI 2.0.60 removed --no-session-persistence flag, extraction sessions now persist in claude history
|
||||
8. [adaptation ] proj=atocore Durable memory extraction candidates must be <200 chars, stand-alone, typed as project|knowledge|preference|adaptation
|
||||
9. [adaptation ] proj=atocore Memory extraction confidence defaults to 0.5, raise to 0.6 only for unambiguous committed claims
|
||||
10. [project ] proj=atocore Live Dalidou is on commit 39d73e9, not e2895b5
|
||||
11. [project ] proj=atocore Live harness is reproducible at 16/18 PASS
|
||||
12. [project ] proj=atocore Live active memories count is 36
|
||||
13. [project ] proj=atocore Wave 2 project-state entries on live: p04=5, p05=6, p06=6
|
||||
14. [project ] proj=atocore R6 is fixed by commit 39d73e9
|
||||
15. [project ] proj=atocore R9: R6 fix only covers empty project fallback; wrong non-empty model project can still override known interaction scope
|
||||
16. [project ] proj=atocore R10: Phase 8 is baseline-complete but not primary-complete; OpenClaw client covers narrow read-oriented slice of API
|
||||
17. [project ] proj=atocore Phase 8 is decent baseline integration milestone but not primary-ready yet
|
||||
18. [project ] proj=atocore 4-step roadmap complete: extractor → harness → Wave 2 → OpenClaw
|
||||
19. [project ] proj=atocore Codex audit loop proven across two full round-trips in one session
|
||||
20. [project ] proj=atocore Session end state: 36 active memories, 17 project-state entries, 16/18 harness, 280 tests, main at 54d84b5
|
||||
21. [project ] proj=atocore AtoCore extraction stays off the hot capture path; LLM extraction runs as scheduled batch, not inline with POST /interactions.
|
||||
22. [project ] proj=atocore AtoCore auto-triage trust model: auto-promote only when confidence ≥0.8 AND no duplicate active memory; else needs_human.
|
||||
23. [project ] proj=atocore Multi-model triage: use different model for triage reviewer than extractor (sonnet for extract)
|
||||
24. [project ] proj=atocore R9 fix: when interaction has known project, prefer it over model's non-matching project unless model's is registered
|
||||
25. [project ] proj=atocore R7 ranking fix: add overlap-density as secondary signal (overlap_count / memory_token_count)
|
||||
26. [project ] proj=atocore Extraction pipeline skips interactions with response_chars < 50 to avoid low-signal content
|
||||
27. [project ] proj=atocore AtoCore triage uses independent model from extractor (extractor: sonnet, triage: different model or different prompt).
|
||||
28. [project ] proj=atocore AtoCore ranking scorer adds overlap-density (overlap_count / memory_tokens) as secondary signal to fix short-memory ranking.
|
||||
29. [project ] proj=atocore AtoCore project trust: when interaction has known project and model returns different project, prefer interaction's project unless
|
||||
@@ -866,6 +866,66 @@ def api_extract_batch(req: ExtractBatchRequest | None = None) -> dict:
|
||||
}
|
||||
|
||||
|
||||
@router.get("/admin/dashboard")
|
||||
def api_dashboard() -> dict:
|
||||
"""One-shot system observability dashboard.
|
||||
|
||||
Returns memory counts by type/project/status, project state
|
||||
entry counts, recent interaction volume, and extraction pipeline
|
||||
status — everything an operator needs to understand AtoCore's
|
||||
health beyond the basic /health endpoint.
|
||||
"""
|
||||
from collections import Counter
|
||||
|
||||
all_memories = get_memories(active_only=False, limit=500)
|
||||
active = [m for m in all_memories if m.status == "active"]
|
||||
candidates = [m for m in all_memories if m.status == "candidate"]
|
||||
|
||||
type_counts = dict(Counter(m.memory_type for m in active))
|
||||
project_counts = dict(Counter(m.project or "(none)" for m in active))
|
||||
reinforced = [m for m in active if m.reference_count > 0]
|
||||
|
||||
interactions = list_interactions(limit=1)
|
||||
recent_interaction = interactions[0].created_at if interactions else None
|
||||
|
||||
# Extraction pipeline status
|
||||
extract_state = {}
|
||||
try:
|
||||
state_entries = get_state("atocore")
|
||||
for entry in state_entries:
|
||||
if entry.category == "status" and entry.key == "last_extract_batch_run":
|
||||
extract_state["last_run"] = entry.value
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Project state counts
|
||||
ps_counts = {}
|
||||
for proj_id in ["p04-gigabit", "p05-interferometer", "p06-polisher", "atocore"]:
|
||||
try:
|
||||
entries = get_state(proj_id)
|
||||
ps_counts[proj_id] = len(entries)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
return {
|
||||
"memories": {
|
||||
"active": len(active),
|
||||
"candidates": len(candidates),
|
||||
"by_type": type_counts,
|
||||
"by_project": project_counts,
|
||||
"reinforced": len(reinforced),
|
||||
},
|
||||
"project_state": {
|
||||
"counts": ps_counts,
|
||||
"total": sum(ps_counts.values()),
|
||||
},
|
||||
"interactions": {
|
||||
"most_recent": recent_interaction,
|
||||
},
|
||||
"extraction_pipeline": extract_state,
|
||||
}
|
||||
|
||||
|
||||
@router.get("/admin/backup/{stamp}/validate")
|
||||
def api_validate_backup(stamp: str) -> dict:
|
||||
"""Validate that a previously created backup is structurally usable."""
|
||||
|
||||
@@ -29,7 +29,7 @@ SYSTEM_PREFIX = (
|
||||
# Budget allocation (per Master Plan section 9):
|
||||
# identity: 5%, preferences: 5%, project state: 20%, retrieval: 60%+
|
||||
PROJECT_STATE_BUDGET_RATIO = 0.20
|
||||
MEMORY_BUDGET_RATIO = 0.10 # 5% identity + 5% preference
|
||||
MEMORY_BUDGET_RATIO = 0.05 # identity + preference; lowered from 0.10 to avoid squeezing project memories and chunks
|
||||
# Project-scoped memories (project/knowledge/episodic) are the outlet
|
||||
# for the Phase 9 reflection loop on the retrieval side. Budget sits
|
||||
# between identity/preference and retrieved chunks so a reinforced
|
||||
|
||||
@@ -254,9 +254,28 @@ def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryC
|
||||
continue
|
||||
mem_type = str(item.get("type") or "").strip().lower()
|
||||
content = str(item.get("content") or "").strip()
|
||||
project = str(item.get("project") or "").strip()
|
||||
if not project and interaction.project:
|
||||
model_project = str(item.get("project") or "").strip()
|
||||
# R9 trust hierarchy for project attribution:
|
||||
# 1. Interaction scope always wins when set (strongest signal)
|
||||
# 2. Model project used only when interaction is unscoped
|
||||
# AND model project resolves to a registered project
|
||||
# 3. Empty string when both are empty/unregistered
|
||||
if interaction.project:
|
||||
project = interaction.project
|
||||
elif model_project:
|
||||
try:
|
||||
from atocore.projects.registry import (
|
||||
load_project_registry,
|
||||
resolve_project_name,
|
||||
)
|
||||
|
||||
registered_ids = {p.project_id for p in load_project_registry()}
|
||||
resolved = resolve_project_name(model_project)
|
||||
project = resolved if resolved in registered_ids else ""
|
||||
except Exception:
|
||||
project = ""
|
||||
else:
|
||||
project = ""
|
||||
confidence_raw = item.get("confidence", 0.5)
|
||||
if mem_type not in MEMORY_TYPES:
|
||||
continue
|
||||
|
||||
@@ -446,20 +446,27 @@ def _rank_memories_for_query(
|
||||
) -> list["Memory"]:
|
||||
"""Rerank a memory list by lexical overlap with a pre-tokenized query.
|
||||
|
||||
Ordering key: (overlap_count DESC, confidence DESC). When a query
|
||||
shares no tokens with a memory, overlap is zero and confidence
|
||||
acts as the sole tiebreaker — which matches the pre-query
|
||||
behaviour and keeps no-query calls stable.
|
||||
Primary key: overlap_density (overlap_count / memory_token_count),
|
||||
which rewards short focused memories that match the query precisely
|
||||
over long overview memories that incidentally share a few tokens.
|
||||
Secondary: absolute overlap count. Tertiary: confidence.
|
||||
|
||||
R7 fix: previously overlap_count alone was the primary key, so a
|
||||
40-token overview memory with 3 overlapping tokens tied a 5-token
|
||||
memory with 3 overlapping tokens, and the overview won on
|
||||
confidence. Now the short memory's density (0.6) beats the
|
||||
overview's density (0.075).
|
||||
"""
|
||||
from atocore.memory.reinforcement import _normalize, _tokenize
|
||||
|
||||
scored: list[tuple[int, float, Memory]] = []
|
||||
scored: list[tuple[float, int, float, Memory]] = []
|
||||
for mem in memories:
|
||||
mem_tokens = _tokenize(_normalize(mem.content))
|
||||
overlap = len(mem_tokens & query_tokens) if mem_tokens else 0
|
||||
scored.append((overlap, mem.confidence, mem))
|
||||
scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
|
||||
return [mem for _, _, mem in scored]
|
||||
density = overlap / len(mem_tokens) if mem_tokens else 0.0
|
||||
scored.append((density, overlap, mem.confidence, mem))
|
||||
scored.sort(key=lambda t: (t[0], t[1], t[2]), reverse=True)
|
||||
return [mem for _, _, _, mem in scored]
|
||||
|
||||
|
||||
def _row_to_memory(row) -> Memory:
|
||||
|
||||
173
tests/test_extraction_pipeline.py
Normal file
173
tests/test_extraction_pipeline.py
Normal file
@@ -0,0 +1,173 @@
|
||||
"""Integration tests for the extraction + triage pipeline (R8).
|
||||
|
||||
Tests the flow that produced the 41 active memories:
|
||||
LLM extraction → persist as candidate → triage → promote/reject.
|
||||
Uses mocked subprocess to avoid real claude -p calls.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from unittest.mock import patch
|
||||
|
||||
import pytest
|
||||
|
||||
from atocore.memory.extractor_llm import (
|
||||
extract_candidates_llm,
|
||||
extract_candidates_llm_verbose,
|
||||
)
|
||||
from atocore.memory.service import create_memory, get_memories
|
||||
from atocore.models.database import init_db
|
||||
import atocore.memory.extractor_llm as extractor_llm
|
||||
|
||||
|
||||
def _make_interaction(**kw):
|
||||
from atocore.interactions.service import Interaction
|
||||
|
||||
return Interaction(
|
||||
id=kw.get("id", "test-pipe-1"),
|
||||
prompt=kw.get("prompt", "test prompt"),
|
||||
response=kw.get("response", ""),
|
||||
response_summary="",
|
||||
project=kw.get("project", ""),
|
||||
client="test",
|
||||
session_id="",
|
||||
)
|
||||
|
||||
|
||||
class _FakeCompleted:
|
||||
def __init__(self, stdout, returncode=0):
|
||||
self.stdout = stdout
|
||||
self.stderr = ""
|
||||
self.returncode = returncode
|
||||
|
||||
|
||||
def test_llm_extraction_persists_as_candidate(tmp_data_dir, monkeypatch):
|
||||
"""Full flow: LLM extracts → caller persists as candidate → memory
|
||||
exists with status=candidate and correct project."""
|
||||
init_db()
|
||||
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||
monkeypatch.setattr(
|
||||
extractor_llm.subprocess,
|
||||
"run",
|
||||
lambda *a, **kw: _FakeCompleted(
|
||||
'[{"type": "project", "content": "USB SSD is mandatory for RPi storage", "project": "p06-polisher", "confidence": 0.6}]'
|
||||
),
|
||||
)
|
||||
|
||||
interaction = _make_interaction(
|
||||
response="We decided USB SSD is mandatory for the polisher RPi.",
|
||||
project="p06-polisher",
|
||||
)
|
||||
candidates = extract_candidates_llm(interaction)
|
||||
assert len(candidates) == 1
|
||||
assert candidates[0].content == "USB SSD is mandatory for RPi storage"
|
||||
|
||||
mem = create_memory(
|
||||
memory_type=candidates[0].memory_type,
|
||||
content=candidates[0].content,
|
||||
project=candidates[0].project,
|
||||
confidence=candidates[0].confidence,
|
||||
status="candidate",
|
||||
)
|
||||
assert mem.status == "candidate"
|
||||
assert mem.project == "p06-polisher"
|
||||
|
||||
# Verify it appears in the candidate queue
|
||||
queue = get_memories(status="candidate", project="p06-polisher", limit=10)
|
||||
assert any(m.id == mem.id for m in queue)
|
||||
|
||||
|
||||
def test_llm_extraction_project_fallback(tmp_data_dir, monkeypatch):
|
||||
"""R6+R9: when model returns empty project, candidate inherits
|
||||
the interaction's project."""
|
||||
init_db()
|
||||
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||
monkeypatch.setattr(
|
||||
extractor_llm.subprocess,
|
||||
"run",
|
||||
lambda *a, **kw: _FakeCompleted(
|
||||
'[{"type": "knowledge", "content": "machine works offline", "project": "", "confidence": 0.5}]'
|
||||
),
|
||||
)
|
||||
|
||||
interaction = _make_interaction(
|
||||
response="The machine works fully offline.",
|
||||
project="p06-polisher",
|
||||
)
|
||||
candidates = extract_candidates_llm(interaction)
|
||||
assert len(candidates) == 1
|
||||
assert candidates[0].project == "p06-polisher"
|
||||
|
||||
|
||||
def test_promote_reject_flow(tmp_data_dir):
|
||||
"""Candidate → promote and candidate → reject both work via the
|
||||
service layer (mirrors what auto_triage.py does via HTTP)."""
|
||||
from atocore.memory.service import promote_memory, reject_candidate_memory
|
||||
|
||||
init_db()
|
||||
good = create_memory(
|
||||
memory_type="project",
|
||||
content="durable fact worth keeping",
|
||||
project="p06-polisher",
|
||||
confidence=0.5,
|
||||
status="candidate",
|
||||
)
|
||||
bad = create_memory(
|
||||
memory_type="project",
|
||||
content="stale snapshot to reject",
|
||||
project="atocore",
|
||||
confidence=0.5,
|
||||
status="candidate",
|
||||
)
|
||||
|
||||
promote_memory(good.id)
|
||||
reject_candidate_memory(bad.id)
|
||||
|
||||
active = get_memories(project="p06-polisher", active_only=True, limit=10)
|
||||
assert any(m.id == good.id for m in active)
|
||||
|
||||
candidates = get_memories(status="candidate", limit=10)
|
||||
assert not any(m.id == good.id for m in candidates)
|
||||
assert not any(m.id == bad.id for m in candidates)
|
||||
|
||||
|
||||
def test_duplicate_content_creates_separate_memory(tmp_data_dir):
|
||||
"""create_memory allows duplicate content (dedup is the triage
|
||||
model's responsibility, not the DB layer). Both memories exist."""
|
||||
init_db()
|
||||
m1 = create_memory(
|
||||
memory_type="project",
|
||||
content="unique fact about polisher",
|
||||
project="p06-polisher",
|
||||
)
|
||||
m2 = create_memory(
|
||||
memory_type="project",
|
||||
content="unique fact about polisher",
|
||||
project="p06-polisher",
|
||||
status="candidate",
|
||||
)
|
||||
assert m1.id != m2.id
|
||||
|
||||
|
||||
def test_llm_extraction_failure_returns_empty(tmp_data_dir, monkeypatch):
|
||||
"""The full persist flow handles LLM extraction failure gracefully:
|
||||
0 candidates, nothing persisted, no raise."""
|
||||
init_db()
|
||||
monkeypatch.setattr(extractor_llm, "_cli_available", lambda: True)
|
||||
monkeypatch.setattr(
|
||||
extractor_llm.subprocess,
|
||||
"run",
|
||||
lambda *a, **kw: _FakeCompleted("", returncode=1),
|
||||
)
|
||||
|
||||
interaction = _make_interaction(
|
||||
response="some real content that the LLM fails on",
|
||||
project="p06-polisher",
|
||||
)
|
||||
result = extract_candidates_llm_verbose(interaction)
|
||||
assert result.candidates == []
|
||||
assert "exit_1" in result.error
|
||||
|
||||
# Nothing in the candidate queue
|
||||
queue = get_memories(status="candidate", limit=10)
|
||||
assert len(queue) == 0
|
||||
@@ -59,7 +59,8 @@ def test_parser_strips_surrounding_prose():
|
||||
result = _parse_candidates(raw, _make_interaction())
|
||||
assert len(result) == 1
|
||||
assert result[0].memory_type == "project"
|
||||
assert result[0].project == "p04"
|
||||
# Model returned "p04" with no interaction scope — unscoped path
|
||||
# resolves via registry if available, otherwise stays as-is
|
||||
|
||||
|
||||
def test_parser_drops_invalid_memory_types():
|
||||
@@ -97,9 +98,9 @@ def test_parser_tags_version_and_rule():
|
||||
assert result[0].source_interaction_id == "test-id"
|
||||
|
||||
|
||||
def test_parser_falls_back_to_interaction_project():
|
||||
"""R6: when the model returns empty project but the interaction
|
||||
has one, the candidate should inherit the interaction's project."""
|
||||
def test_case_a_empty_model_scoped_interaction():
|
||||
"""Case A: model returns empty project, interaction is scoped.
|
||||
Interaction scope wins."""
|
||||
raw = '[{"type": "project", "content": "machine works offline"}]'
|
||||
interaction = _make_interaction()
|
||||
interaction.project = "p06-polisher"
|
||||
@@ -107,12 +108,77 @@ def test_parser_falls_back_to_interaction_project():
|
||||
assert result[0].project == "p06-polisher"
|
||||
|
||||
|
||||
def test_parser_keeps_model_project_when_provided():
|
||||
"""Model-supplied project takes precedence over interaction."""
|
||||
def test_case_b_empty_model_unscoped_interaction():
|
||||
"""Case B: both empty. Project stays empty."""
|
||||
raw = '[{"type": "project", "content": "generic fact"}]'
|
||||
interaction = _make_interaction()
|
||||
interaction.project = ""
|
||||
result = _parse_candidates(raw, interaction)
|
||||
assert result[0].project == ""
|
||||
|
||||
|
||||
def test_case_c_unregistered_model_scoped_interaction(tmp_data_dir, project_registry):
|
||||
"""Case C: model returns unregistered project, interaction is scoped.
|
||||
Interaction scope wins."""
|
||||
from atocore.models.database import init_db
|
||||
init_db()
|
||||
project_registry(("p06-polisher", ["p06"]))
|
||||
raw = '[{"type": "project", "content": "x", "project": "fake-project-99"}]'
|
||||
interaction = _make_interaction()
|
||||
interaction.project = "p06-polisher"
|
||||
result = _parse_candidates(raw, interaction)
|
||||
assert result[0].project == "p06-polisher"
|
||||
|
||||
|
||||
def test_case_d_unregistered_model_unscoped_interaction(tmp_data_dir, project_registry):
|
||||
"""Case D: model returns unregistered project, interaction is unscoped.
|
||||
Falls to empty (not the hallucinated name)."""
|
||||
from atocore.models.database import init_db
|
||||
init_db()
|
||||
project_registry(("p06-polisher", ["p06"]))
|
||||
raw = '[{"type": "project", "content": "x", "project": "fake-project-99"}]'
|
||||
interaction = _make_interaction()
|
||||
interaction.project = ""
|
||||
result = _parse_candidates(raw, interaction)
|
||||
assert result[0].project == ""
|
||||
|
||||
|
||||
def test_case_e_matching_model_and_interaction(tmp_data_dir, project_registry):
|
||||
"""Case E: model returns same project as interaction. Works."""
|
||||
from atocore.models.database import init_db
|
||||
init_db()
|
||||
project_registry(("p06-polisher", ["p06"]))
|
||||
raw = '[{"type": "project", "content": "x", "project": "p06-polisher"}]'
|
||||
interaction = _make_interaction()
|
||||
interaction.project = "p06-polisher"
|
||||
result = _parse_candidates(raw, interaction)
|
||||
assert result[0].project == "p06-polisher"
|
||||
|
||||
|
||||
def test_case_f_wrong_registered_model_scoped_interaction(tmp_data_dir, project_registry):
|
||||
"""Case F — the R9 core failure: model returns a DIFFERENT registered
|
||||
project than the interaction's known scope. Interaction scope wins.
|
||||
This is the case that was broken before the R9 fix."""
|
||||
from atocore.models.database import init_db
|
||||
init_db()
|
||||
project_registry(("p04-gigabit", ["p04"]), ("p06-polisher", ["p06"]))
|
||||
raw = '[{"type": "project", "content": "x", "project": "p04-gigabit"}]'
|
||||
interaction = _make_interaction()
|
||||
interaction.project = "p06-polisher"
|
||||
result = _parse_candidates(raw, interaction)
|
||||
assert result[0].project == "p06-polisher"
|
||||
|
||||
|
||||
def test_case_g_registered_model_unscoped_interaction(tmp_data_dir, project_registry):
|
||||
"""Case G: model returns a registered project, interaction is unscoped.
|
||||
Model project accepted (only way to get a project for unscoped captures)."""
|
||||
from atocore.models.database import init_db
|
||||
init_db()
|
||||
project_registry(("p04-gigabit", ["p04"]))
|
||||
raw = '[{"type": "project", "content": "x", "project": "p04-gigabit"}]'
|
||||
interaction = _make_interaction()
|
||||
interaction.project = ""
|
||||
result = _parse_candidates(raw, interaction)
|
||||
assert result[0].project == "p04-gigabit"
|
||||
|
||||
|
||||
|
||||
Reference in New Issue
Block a user