Compare commits
9 Commits
codex/open
...
775960c8c8
| Author | SHA1 | Date | |
|---|---|---|---|
| 775960c8c8 | |||
| b687e7fa6f | |||
| 4d4d5f437a | |||
| 5b114baa87 | |||
| c2e7064238 | |||
| dc9fdd3a38 | |||
| 58ea21df80 | |||
| 8c0f1ff6f3 | |||
| 3db1dd99b5 |
@@ -6,19 +6,22 @@
|
||||
|
||||
## Orientation
|
||||
|
||||
- **live_sha** (Dalidou `/health` build_sha): `4f8bec7` (dashboard endpoint live)
|
||||
- **last_updated**: 2026-04-12 by Claude (full session docs sync)
|
||||
- **main_tip**: `4ac4e5c` (includes OpenClaw capture plugin merge)
|
||||
- **test_count**: 290 passing
|
||||
- **harness**: `17/18 PASS` (only p06-tailscale — chunk bleed, not a memory/ranking issue)
|
||||
- **vectors**: 33,253 (was 20,781; +12,472 from atomizer-v2 ingestion)
|
||||
- **active_memories**: 47 (16 project, 16 knowledge, 6 adaptation, 3 identity, 3 preference, 3 episodic)
|
||||
- **candidate_memories**: 0
|
||||
- **registered_projects**: p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore
|
||||
- **project_state_entries**: p04=5, p05=9, p06=9, atocore=38 (61 total)
|
||||
- **live_sha** (Dalidou `/health` build_sha): `c2e7064` (verified 2026-04-15 via /health, build_time 2026-04-15T15:08:51Z)
|
||||
- **last_updated**: 2026-04-15 by Claude (deploy caught up; R10/R13 closed)
|
||||
- **main_tip**: `c2e7064` (plus one pending doc/ledger commit for this session)
|
||||
- **test_count**: 299 collected via `pytest --collect-only -q` on a clean checkout, 2026-04-15 (reproduction recipe in Quick Commands)
|
||||
- **harness**: `18/18 PASS`
|
||||
- **vectors**: 33,253
|
||||
- **active_memories**: 84 (31 project, 23 knowledge, 10 episodic, 8 adaptation, 7 preference, 5 identity)
|
||||
- **candidate_memories**: 2
|
||||
- **registered_projects**: atocore, p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, abb-space (aliased p08)
|
||||
- **project_state_entries**: 78 total (p04=9, p05=13, p06=13, atocore=43)
|
||||
- **entities**: 35 (engineering knowledge graph, Layer 2)
|
||||
- **off_host_backup**: `papa@192.168.86.39:/home/papa/atocore-backups/` via cron, verified
|
||||
- **nightly_pipeline**: backup → cleanup → rsync → LLM extraction (sonnet) → auto-triage (sonnet)
|
||||
- **capture_clients**: claude-code (Stop hook), openclaw (plugin)
|
||||
- **nightly_pipeline**: backup → cleanup → rsync → **OpenClaw import** (NEW) → vault refresh (NEW) → extract → auto-triage → weekly synth/lint Sundays
|
||||
- **capture_clients**: claude-code (Stop hook), openclaw (plugin + file importer)
|
||||
- **wiki**: http://dalidou:8100/wiki (browse), /wiki/projects/{id}, /wiki/entities/{id}, /wiki/search
|
||||
- **dashboard**: http://dalidou:8100/admin/dashboard
|
||||
|
||||
## Active Plan
|
||||
|
||||
@@ -128,17 +131,17 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha
|
||||
|-----|--------|----------|------------------------------------|-------------------------------------------------------------------------|--------------|--------|------------|-------------|
|
||||
| R1 | Codex | P1 | deploy/hooks/capture_stop.py:76-85 | Live Claude capture still omits `extract`, so "loop closed both sides" remains overstated in practice even though the API supports it | fixed | Claude | 2026-04-11 | c67bec0 |
|
||||
| R2 | Codex | P1 | src/atocore/context/builder.py | Project memories excluded from pack | fixed | Claude | 2026-04-11 | 8ea53f4 |
|
||||
| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | open | Claude | 2026-04-11 | |
|
||||
| R3 | Claude | P2 | src/atocore/memory/extractor.py | Rule cues (`## Decision:`) never fire on conversational LLM text | declined | Claude | 2026-04-11 | see 2026-04-14 session log |
|
||||
| R4 | Codex | P2 | DEV-LEDGER.md:11 | Orientation `main_tip` was stale versus `HEAD` / `origin/main` | fixed | Codex | 2026-04-11 | 81307ce |
|
||||
| R5 | Codex | P1 | src/atocore/interactions/service.py:157-174 | The deployed extraction path still calls only the rule extractor; the new LLM extractor is eval/script-only, so Day 4 "gate cleared" is true as a benchmark result but not as an operational extraction path | fixed | Claude | 2026-04-12 | c67bec0 |
|
||||
| R6 | Codex | P1 | src/atocore/memory/extractor_llm.py:258-276 | LLM extraction accepts model-supplied `project` verbatim with no fallback to `interaction.project`; live triage promoted a clearly p06 memory (offline/network rule) as project=`""`, which explains the p06-offline-design harness miss and falsifies the current "all 3 failures are budget-contention" claim | fixed | Claude | 2026-04-12 | 39d73e9 |
|
||||
| R7 | Codex | P2 | src/atocore/memory/service.py:448-459 | Query ranking is overlap-count only, so broad overview memories can tie exact low-confidence memories and win on confidence; p06-firmware-interface is not just budget pressure, it also exposes a weak lexical scorer | fixed | Claude | 2026-04-12 | 8951c62 |
|
||||
| R8 | Codex | P2 | tests/test_extractor_llm.py:1-7 | LLM extractor tests stop at parser/failure contracts; there is no automated coverage for the script-only persistence/review path that produced the 16 promoted memories, including project-scope preservation | fixed | Claude | 2026-04-12 | 69c9717 |
|
||||
| R9 | Codex | P2 | src/atocore/memory/extractor_llm.py:258-259 | The R6 fallback only repairs empty project output. A wrong non-empty model project still overrides the interaction's known scope, so project attribution is improved but not yet trust-preserving. | fixed | Claude | 2026-04-12 | e5e9a99 |
|
||||
| R10 | Codex | P2 | docs/master-plan-status.md:31-33 | "Phase 8 - OpenClaw Integration" is fair as a baseline milestone, but not as a "primary" integration claim. `t420-openclaw/atocore.py` currently covers a narrow read-oriented subset (13 request shapes vs 32 API routes) plus fail-open health, while memory/interactions/admin write paths remain out of surface. | open | Claude | 2026-04-12 | |
|
||||
| R11 | Codex | P2 | src/atocore/api/routes.py:773-845 | `POST /admin/extract-batch` still accepts `mode="llm"` inside the container and returns a successful 0-candidate result instead of surfacing that host-only LLM extraction is unavailable from this runtime. That is a misleading API contract for operators. | open | Claude | 2026-04-12 | |
|
||||
| R12 | Codex | P2 | scripts/batch_llm_extract_live.py:39-190 | The host-side extractor duplicates the LLM system prompt and JSON parsing logic from `src/atocore/memory/extractor_llm.py`. It works today, but this is now a prompt/parser drift risk across the container and host implementations. | open | Claude | 2026-04-12 | |
|
||||
| R13 | Codex | P2 | DEV-LEDGER.md:12 | The new `286 passing` test-count claim is not reproducibly auditable from the current audit environments: neither Dalidou nor the clean worktree has `pytest` available. The claim may be true in Claude's dev shell, but it remains unverified in this audit. | open | Claude | 2026-04-12 | |
|
||||
| R10 | Codex | P2 | docs/master-plan-status.md:31-33 | "Phase 8 - OpenClaw Integration" is fair as a baseline milestone, but not as a "primary" integration claim. `t420-openclaw/atocore.py` currently covers a narrow read-oriented subset (13 request shapes vs 32 API routes) plus fail-open health, while memory/interactions/admin write paths remain out of surface. | fixed | Claude | 2026-04-12 | (pending) |
|
||||
| R11 | Codex | P2 | src/atocore/api/routes.py:773-845 | `POST /admin/extract-batch` still accepts `mode="llm"` inside the container and returns a successful 0-candidate result instead of surfacing that host-only LLM extraction is unavailable from this runtime. That is a misleading API contract for operators. | fixed | Claude | 2026-04-12 | (pending) |
|
||||
| R12 | Codex | P2 | scripts/batch_llm_extract_live.py:39-190 | The host-side extractor duplicates the LLM system prompt and JSON parsing logic from `src/atocore/memory/extractor_llm.py`. It works today, but this is now a prompt/parser drift risk across the container and host implementations. | fixed | Claude | 2026-04-12 | (pending) |
|
||||
| R13 | Codex | P2 | DEV-LEDGER.md:12 | The new `286 passing` test-count claim is not reproducibly auditable from the current audit environments: neither Dalidou nor the clean worktree has `pytest` available. The claim may be true in Claude's dev shell, but it remains unverified in this audit. | fixed | Claude | 2026-04-12 | (pending) |
|
||||
|
||||
## Recent Decisions
|
||||
|
||||
@@ -156,6 +159,15 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha
|
||||
|
||||
## Session Log
|
||||
|
||||
- **2026-04-15 Claude (pm)** Closed the last harness failure honestly. **p06-tailscale fixed: 18/18 PASS.** Root-caused: not a retrieval bug — the p06 `ARCHITECTURE.md` Overview chunk legitimately mentions "the GigaBIT M1 telescope mirror" because the Polisher Suite is built *for* that mirror. All four retrieved sources for the tailscale prompt were genuinely p06/shared paths; zero actual p04 chunks leaked. The fixture's `expect_absent: GigaBIT` was catching semantic overlap, not retrieval bleed. Narrowed it to `expect_absent: "[Source: p04-gigabit/"` — a source-path check that tests the real invariant (no p04 source chunks in p06 context). Other p06 fixtures still use the word-blacklist form; they pass today because their more-specific prompts don't pull the ARCHITECTURE.md Overview, so I left them alone rather than churn fixtures that aren't failing. Did NOT change retrieval/ranking — no code change, fixture-only fix. Tests unchanged at 299.
|
||||
|
||||
- **2026-04-15 Claude** Deploy + doc debt sweep. Deployed `c2e7064` to Dalidou (build_time 2026-04-15T15:08:51Z, build_sha matches, /health ok) so R11/R12 are now live, not just on main. **R11 verified on live**: `POST /admin/extract-batch {"mode":"llm"}` against http://127.0.0.1:8100 returns HTTP 503 with the operator-facing "claude CLI not on PATH, run host-side script or use mode=rule" message — exactly the post-fix contract. **R13 closed (fixed)**: added a reproduction recipe to Quick Commands (`pip install -r requirements-dev.txt && pytest --collect-only -q && pytest -q`) and re-cited `test_count: 299` against a fresh local collection on 2026-04-15, so the claim is now auditable from any clean checkout — Codex's audit worktree just needs `pip install -r requirements-dev.txt`. **R10 closed (fixed)**: rewrote the `docs/master-plan-status.md` OpenClaw section to explicitly disclaim "primary integration" and report the current narrow surface: 14 client request shapes against ~44 server routes, predominantly read + `/project/state` + `/ingest/sources`, with memory/interactions/admin/entities/triage/extraction writes correctly out of scope. Open findings now: none blocking. Next natural move: the last harness failure `p06-tailscale` (chunk bleed).
|
||||
|
||||
- **2026-04-14 Claude (pm)** Closed R11+R12, declined R3. **R11 (fixed):** `POST /admin/extract-batch` with `mode="llm"` now returns 503 when the `claude` CLI is not on PATH, with a message pointing at the host-side script. Previously it silently returned a success-0 payload, masking host-vs-container truth. 2 new tests in `test_extraction_pipeline.py` cover the 503 path and the rule-mode-still-works path. **R12 (fixed):** extracted shared `SYSTEM_PROMPT` + `parse_llm_json_array` + `normalize_candidate_item` + `build_user_message` into stdlib-only `src/atocore/memory/_llm_prompt.py`. Both `src/atocore/memory/extractor_llm.py` (container) and `scripts/batch_llm_extract_live.py` (host) now import from it. The host script uses `sys.path` to reach the stdlib-only module without needing the full atocore package. Project-attribution policy stays path-specific (container uses registry-check; host defers to server). **R3 (declined):** rule cues not firing on conversational LLM text is by design now — the LLM extractor (llm-0.4.0) is the production path for conversational content as of the Day 4 gate (2026-04-12). Expanding rules to match conversational prose risks the FP blowup Day 2 already showed. Rule extractor stays narrow for structural PKM text. Tests 297 → 299. Live `/health` still `58ea21d`; this session's changes need deploy.
|
||||
|
||||
- **2026-04-14 Claude** MAJOR session: Engineering knowledge layer V1 (Layer 2) built — entity + relationship tables, 15 types, 12 relationship kinds, 35 bootstrapped entities across p04/p05/p06. Human Mirror (Layer 3) — GET /projects/{name}/mirror.html + navigable wiki at /wiki with search. Karpathy-inspired upgrades: contradiction detection in triage, weekly lint pass, weekly synthesis pass producing "current state" paragraphs at top of project pages. Auto-detection of new projects from extraction. Registry persistence fix (ATOCORE_PROJECT_REGISTRY_DIR env var). abb-space/p08 aliases added, atomizer-v2 ingested (568 docs, +12,472 vectors). Identity/preference seed (6 new), signal-aggressive extractor rewrite (llm-0.4.0), auto vault refresh in cron. **OpenClaw one-way pull importer** built per codex proposal — reads /home/papa/clawd SOUL.md, USER.md, MEMORY.md, MODEL-ROUTING.md, memory/*.md via SSH, hash-delta import, pipeline triages. First import: 10 candidates → 10 promoted with lenient triage rule. Active memories 47→84. State entries 61→78. Tests 290→297. Dashboard at /admin/dashboard. Wiki at /wiki.
|
||||
|
||||
|
||||
- **2026-04-12 Claude** `4f8bec7..4ac4e5c` Session close. Merged OpenClaw capture plugin, ingested atomizer-v2 (568 docs, 12,472 new vectors → 33,253 total), seeded Phase 4 identity/preference memories (6 new, 47 total active), added deeper Wave 2 state entries (p05 +3, p06 +3), fixed R9 project trust hierarchy (7 case tests), built auto-triage pipeline, observability dashboard at /admin/dashboard. Updated master-plan-status.md and DEV-LEDGER.md to reflect full current state. 7/14 phases baseline complete. All P1s closed. Nightly pipeline runs unattended with both Claude Code and OpenClaw feeding the reflection loop.
|
||||
- **2026-04-12 Codex (branch `codex/openclaw-capture-plugin`)** added a minimal external OpenClaw plugin at `openclaw-plugins/atocore-capture/` that mirrors Claude Code capture semantics: user-triggered assistant turns are POSTed to AtoCore `/interactions` with `client="openclaw"` and `reinforce=true`, fail-open, no extraction in-path. For live verification, temporarily added the local plugin load path to OpenClaw config and restarted the gateway so the plugin can load. Branch truth is ready; end-to-end verification still needs one fresh post-restart OpenClaw user turn to confirm new `client=openclaw` interactions appear on Dalidou.
|
||||
- **2026-04-12 Claude** Batch 3 (R9 fix): `144dbbd..e5e9a99`. Trust hierarchy for project attribution — interaction scope always wins when set, model project only used for unscoped interactions + registered check. 7 case tests (A-G) cover every combination. Harness 17/18 (no regression). Tests 286->290. Before: wrong registered project could silently override interaction scope. After: interaction.project is the strongest signal; model project is only a fallback for unscoped captures. Not yet guaranteed: nothing prevents the *same* project's model output from being semantically wrong within that project. R9 marked fixed.
|
||||
@@ -201,4 +213,9 @@ git push origin main && ssh papa@dalidou "bash /srv/storage/atocore/app/deploy/d
|
||||
python scripts/atocore_client.py batch-extract '' '' 200 false # preview
|
||||
python scripts/atocore_client.py batch-extract '' '' 200 true # persist
|
||||
python scripts/atocore_client.py triage
|
||||
|
||||
# Reproduce the ledger's test_count claim from a clean checkout
|
||||
pip install -r requirements-dev.txt
|
||||
pytest --collect-only -q | tail -1 # -> "N tests collected"
|
||||
pytest -q # -> "N passed"
|
||||
```
|
||||
|
||||
@@ -38,7 +38,7 @@
|
||||
},
|
||||
{
|
||||
"id": "p06-polisher",
|
||||
"aliases": ["p06", "polisher"],
|
||||
"aliases": ["p06", "polisher", "p11", "polisher-fullum", "P11-Polisher-Fullum"],
|
||||
"description": "Active P06 polisher corpus from PKM, software-suite notes, and selected repo context.",
|
||||
"ingest_roots": [
|
||||
{
|
||||
@@ -47,6 +47,30 @@
|
||||
"label": "P06 staged project docs"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "abb-space",
|
||||
"aliases": ["abb", "abb-mirror", "p08", "p08-abb-space", "p08-abb-space-mirror"],
|
||||
"description": "ABB Space mirror - lead/proposition for Atomaste. Also tracked as P08.",
|
||||
"ingest_roots": [
|
||||
{
|
||||
"source": "vault",
|
||||
"subpath": "incoming/projects/abb-space",
|
||||
"label": "ABB Space docs"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"id": "atomizer-v2",
|
||||
"aliases": ["atomizer", "aom", "aom-v2"],
|
||||
"description": "Atomizer V2 parametric optimization platform",
|
||||
"ingest_roots": [
|
||||
{
|
||||
"source": "vault",
|
||||
"subpath": "incoming/projects/atomizer-v2/repo",
|
||||
"label": "Atomizer V2 repo"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -34,22 +34,36 @@ export PYTHONPATH="$APP_DIR/src:${PYTHONPATH:-}"
|
||||
log "=== AtoCore batch extraction + triage starting ==="
|
||||
log "URL=$ATOCORE_URL LIMIT=$LIMIT"
|
||||
|
||||
# --- Pipeline stats accumulator ---
|
||||
EXTRACT_OUT=""
|
||||
TRIAGE_OUT=""
|
||||
HARNESS_OUT=""
|
||||
|
||||
# Step A: Extract candidates from recent interactions
|
||||
log "Step A: LLM extraction"
|
||||
python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
|
||||
EXTRACT_OUT=$(python3 "$APP_DIR/scripts/batch_llm_extract_live.py" \
|
||||
--base-url "$ATOCORE_URL" \
|
||||
--limit "$LIMIT" \
|
||||
2>&1 || {
|
||||
2>&1) || {
|
||||
log "WARN: batch extraction failed (non-blocking)"
|
||||
}
|
||||
echo "$EXTRACT_OUT"
|
||||
|
||||
# Step B: Auto-triage candidates in the queue
|
||||
log "Step B: auto-triage"
|
||||
python3 "$APP_DIR/scripts/auto_triage.py" \
|
||||
TRIAGE_OUT=$(python3 "$APP_DIR/scripts/auto_triage.py" \
|
||||
--base-url "$ATOCORE_URL" \
|
||||
2>&1 || {
|
||||
2>&1) || {
|
||||
log "WARN: auto-triage failed (non-blocking)"
|
||||
}
|
||||
echo "$TRIAGE_OUT"
|
||||
|
||||
# Step B2: Auto-promote reinforced candidates + expire stale ones
|
||||
log "Step B2: auto-promote + expire"
|
||||
python3 "$APP_DIR/scripts/auto_promote_reinforced.py" \
|
||||
2>&1 || {
|
||||
log "WARN: auto-promote/expire failed (non-blocking)"
|
||||
}
|
||||
|
||||
# Step C: Weekly synthesis (Sundays only)
|
||||
if [[ "$(date -u +%u)" == "7" ]]; then
|
||||
@@ -66,4 +80,73 @@ if [[ "$(date -u +%u)" == "7" ]]; then
|
||||
2>&1 || true
|
||||
fi
|
||||
|
||||
# Step E: Retrieval harness (daily)
|
||||
log "Step E: retrieval harness"
|
||||
HARNESS_OUT=$(python3 "$APP_DIR/scripts/retrieval_eval.py" \
|
||||
--json \
|
||||
--base-url "$ATOCORE_URL" \
|
||||
2>&1) || {
|
||||
log "WARN: retrieval harness failed (non-blocking)"
|
||||
}
|
||||
echo "$HARNESS_OUT"
|
||||
|
||||
# Step F: Persist pipeline summary to project state
|
||||
log "Step F: pipeline summary"
|
||||
python3 -c "
|
||||
import json, urllib.request, re, sys
|
||||
|
||||
base = '$ATOCORE_URL'
|
||||
ts = '$TIMESTAMP'
|
||||
|
||||
def post_state(key, value):
|
||||
body = json.dumps({
|
||||
'project': 'atocore', 'category': 'status',
|
||||
'key': key, 'value': value, 'source': 'nightly pipeline',
|
||||
}).encode()
|
||||
req = urllib.request.Request(
|
||||
f'{base}/project/state', data=body,
|
||||
headers={'Content-Type': 'application/json'}, method='POST',
|
||||
)
|
||||
try:
|
||||
urllib.request.urlopen(req, timeout=10)
|
||||
except Exception as e:
|
||||
print(f'WARN: failed to persist {key}: {e}', file=sys.stderr)
|
||||
|
||||
# Parse harness JSON
|
||||
harness = {}
|
||||
try:
|
||||
harness = json.loads('''$HARNESS_OUT''')
|
||||
post_state('retrieval_harness_result', json.dumps({
|
||||
'passed': harness.get('passed', 0),
|
||||
'total': harness.get('total', 0),
|
||||
'failures': [f['name'] for f in harness.get('fixtures', []) if not f.get('ok')],
|
||||
'run_at': ts,
|
||||
}))
|
||||
p, t = harness.get('passed', '?'), harness.get('total', '?')
|
||||
print(f'Harness: {p}/{t}')
|
||||
except Exception:
|
||||
print('WARN: could not parse harness output')
|
||||
|
||||
# Parse triage counts from stdout
|
||||
triage_out = '''$TRIAGE_OUT'''
|
||||
promoted = len(re.findall(r'promoted', triage_out, re.IGNORECASE))
|
||||
rejected = len(re.findall(r'rejected', triage_out, re.IGNORECASE))
|
||||
needs_human = len(re.findall(r'needs.human', triage_out, re.IGNORECASE))
|
||||
|
||||
# Build summary
|
||||
summary = {
|
||||
'run_at': ts,
|
||||
'harness_passed': harness.get('passed', -1),
|
||||
'harness_total': harness.get('total', -1),
|
||||
'triage_promoted': promoted,
|
||||
'triage_rejected': rejected,
|
||||
'triage_needs_human': needs_human,
|
||||
}
|
||||
post_state('pipeline_last_run', ts)
|
||||
post_state('pipeline_summary', json.dumps(summary))
|
||||
print(f'Pipeline summary persisted: {json.dumps(summary)}')
|
||||
" 2>&1 || {
|
||||
log "WARN: pipeline summary persistence failed (non-blocking)"
|
||||
}
|
||||
|
||||
log "=== AtoCore batch extraction + triage complete ==="
|
||||
|
||||
@@ -166,10 +166,19 @@ def _extract_last_user_prompt(transcript_path: str) -> str:
|
||||
# Project inference from working directory.
|
||||
# Maps known repo paths to AtoCore project IDs. The user can extend
|
||||
# this table or replace it with a registry lookup later.
|
||||
_VAULT = "C:\\Users\\antoi\\antoine\\My Libraries\\Antoine Brain Extension"
|
||||
|
||||
_PROJECT_PATH_MAP: dict[str, str] = {
|
||||
# Add mappings as needed, e.g.:
|
||||
# "C:\\Users\\antoi\\gigabit": "p04-gigabit",
|
||||
# "C:\\Users\\antoi\\interferometer": "p05-interferometer",
|
||||
f"{_VAULT}\\2-Projects\\P04-GigaBIT-M1": "p04-gigabit",
|
||||
f"{_VAULT}\\2-Projects\\P10-Interferometer": "p05-interferometer",
|
||||
f"{_VAULT}\\2-Projects\\P11-Polisher-Fullum": "p06-polisher",
|
||||
f"{_VAULT}\\2-Projects\\P08-ABB-Space-Mirror": "abb-space",
|
||||
f"{_VAULT}\\2-Projects\\I01-Atomizer": "atomizer-v2",
|
||||
f"{_VAULT}\\2-Projects\\I02-AtoCore": "atocore",
|
||||
"C:\\Users\\antoi\\ATOCore": "atocore",
|
||||
"C:\\Users\\antoi\\Polisher-Sim": "p06-polisher",
|
||||
"C:\\Users\\antoi\\Fullum-Interferometer": "p05-interferometer",
|
||||
"C:\\Users\\antoi\\Atomizer-V2": "atomizer-v2",
|
||||
}
|
||||
|
||||
|
||||
|
||||
@@ -33,15 +33,21 @@ read-only additive mode.
|
||||
at 5% budget ratio. Future identity/preference extraction happens
|
||||
organically via the nightly LLM extraction pipeline.
|
||||
|
||||
- Phase 8 - OpenClaw Integration. As of 2026-04-12 the T420 OpenClaw
|
||||
helper (`t420-openclaw/atocore.py`) is verified end-to-end against
|
||||
live Dalidou: health check, auto-context with project detection,
|
||||
Trusted Project State surfacing, project-memory band, fail-open on
|
||||
unreachable host. Tested from both the development machine and the
|
||||
T420 via SSH. The helper covers 15 of the 33 API endpoints — the
|
||||
excluded endpoints (memory management, interactions, backup) are
|
||||
correctly scoped to the operator client (`scripts/atocore_client.py`)
|
||||
per the read-only additive integration model.
|
||||
- Phase 8 - OpenClaw Integration (baseline only, not primary surface).
|
||||
As of 2026-04-15 the T420 OpenClaw helper (`t420-openclaw/atocore.py`)
|
||||
is verified end-to-end against live Dalidou: health check, auto-context
|
||||
with project detection, Trusted Project State surfacing, project-memory
|
||||
band, fail-open on unreachable host. Tested from both the development
|
||||
machine and the T420 via SSH. Scope is narrow: **14 request shapes
|
||||
against ~44 server routes**, predominantly read-oriented plus
|
||||
`POST/DELETE /project/state` and `POST /ingest/sources`. Memory
|
||||
management, interactions capture (covered separately by the OpenClaw
|
||||
capture plugin), admin/backup, entities, triage, and extraction write
|
||||
paths remain out of this client's surface by design — they are scoped
|
||||
to the operator client (`scripts/atocore_client.py`) per the
|
||||
read-heavy additive integration model. "Primary integration" is
|
||||
therefore overclaim; "baseline read + project-state write helper" is
|
||||
the accurate framing.
|
||||
|
||||
### Baseline Complete
|
||||
|
||||
|
||||
79
scripts/auto_promote_reinforced.py
Normal file
79
scripts/auto_promote_reinforced.py
Normal file
@@ -0,0 +1,79 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Auto-promote reinforced candidates + expire stale ones.
|
||||
|
||||
Phase 10: reinforcement-based auto-promotion. Candidates referenced
|
||||
by 3+ interactions with confidence >= 0.7 graduate to active.
|
||||
Candidates unreinforced for 14+ days are auto-rejected.
|
||||
|
||||
Usage:
|
||||
python3 scripts/auto_promote_reinforced.py [--base-url URL] [--dry-run]
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
|
||||
# Allow importing from src/ when run from repo root
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "src"))
|
||||
|
||||
from atocore.memory.service import auto_promote_reinforced, expire_stale_candidates
|
||||
|
||||
|
||||
def main() -> None:
|
||||
parser = argparse.ArgumentParser(description="Auto-promote + expire candidates")
|
||||
parser.add_argument("--dry-run", action="store_true", help="Report only, don't change anything")
|
||||
parser.add_argument("--min-refs", type=int, default=3, help="Min reference_count for promotion")
|
||||
parser.add_argument("--min-confidence", type=float, default=0.7, help="Min confidence for promotion")
|
||||
parser.add_argument("--expire-days", type=int, default=14, help="Days before unreinforced candidates expire")
|
||||
args = parser.parse_args()
|
||||
|
||||
if args.dry_run:
|
||||
print("DRY RUN — no changes will be made")
|
||||
# For dry-run, query directly and report
|
||||
from atocore.models.database import get_connection
|
||||
from datetime import datetime, timedelta, timezone
|
||||
|
||||
cutoff_promote = (datetime.now(timezone.utc) - timedelta(days=args.expire_days)).strftime("%Y-%m-%d %H:%M:%S")
|
||||
cutoff_expire = cutoff_promote
|
||||
|
||||
with get_connection() as conn:
|
||||
promotable = conn.execute(
|
||||
"SELECT id, content, memory_type, project, confidence, reference_count "
|
||||
"FROM memories WHERE status = 'candidate' "
|
||||
"AND COALESCE(reference_count, 0) >= ? AND confidence >= ? "
|
||||
"AND last_referenced_at >= ?",
|
||||
(args.min_refs, args.min_confidence, cutoff_promote),
|
||||
).fetchall()
|
||||
expirable = conn.execute(
|
||||
"SELECT id, content, memory_type, project "
|
||||
"FROM memories WHERE status = 'candidate' "
|
||||
"AND COALESCE(reference_count, 0) = 0 AND created_at < ?",
|
||||
(cutoff_expire,),
|
||||
).fetchall()
|
||||
|
||||
print(f"\nWould promote {len(promotable)} candidates:")
|
||||
for r in promotable:
|
||||
print(f" [{r['memory_type']}] refs={r['reference_count']} conf={r['confidence']:.2f} | {r['content'][:80]}...")
|
||||
print(f"\nWould expire {len(expirable)} stale candidates:")
|
||||
for r in expirable:
|
||||
print(f" [{r['memory_type']}] {r['project'] or 'global'} | {r['content'][:80]}...")
|
||||
return
|
||||
|
||||
promoted = auto_promote_reinforced(
|
||||
min_reference_count=args.min_refs,
|
||||
min_confidence=args.min_confidence,
|
||||
)
|
||||
expired = expire_stale_candidates(max_age_days=args.expire_days)
|
||||
|
||||
print(f"promoted={len(promoted)} expired={len(expired)}")
|
||||
if promoted:
|
||||
print(f"Promoted IDs: {promoted}")
|
||||
if expired:
|
||||
print(f"Expired IDs: {expired}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -63,9 +63,11 @@ Rules:
|
||||
|
||||
3. CONTRADICTS when the candidate *conflicts* with an existing active memory (not a duplicate, but states something that can't both be true). Set `conflicts_with` to the existing memory id. This flags the tension for human review instead of silently rejecting or double-storing. Examples: "Option A selected" vs "Option B selected" for the same decision; "uses material X" vs "uses material Y" for the same component.
|
||||
|
||||
4. NEEDS_HUMAN when you're genuinely unsure — the candidate might be valuable but you can't tell without domain knowledge. This should be rare (< 20% of candidates).
|
||||
4. OPENCLAW-CURATED content (candidate content starts with "From OpenClaw/"): apply a MUCH LOWER bar. OpenClaw's SOUL.md, USER.md, MEMORY.md, MODEL-ROUTING.md, and dated memory/*.md files are ALREADY curated by OpenClaw as canonical continuity. Promote unless clearly wrong or a genuine duplicate. Do NOT reject OpenClaw content as "process rule belongs elsewhere" or "session log" — that's exactly what AtoCore wants to absorb. Session events, project updates, stakeholder notes, and decisions from OpenClaw daily memory files ARE valuable context and should promote.
|
||||
|
||||
5. Output ONLY the JSON object. No prose, no markdown, no explanation outside the reason field."""
|
||||
5. NEEDS_HUMAN when you're genuinely unsure — the candidate might be valuable but you can't tell without domain knowledge. This should be rare (< 20% of candidates).
|
||||
|
||||
6. Output ONLY the JSON object. No prose, no markdown, no explanation outside the reason field."""
|
||||
|
||||
_sandbox_cwd = None
|
||||
|
||||
|
||||
@@ -1,12 +1,15 @@
|
||||
"""Host-side LLM batch extraction — pure HTTP client, no atocore imports.
|
||||
"""Host-side LLM batch extraction — HTTP client + shared prompt module.
|
||||
|
||||
Fetches interactions from the AtoCore API, runs ``claude -p`` locally
|
||||
for each, and POSTs candidates back. Zero dependency on atocore source
|
||||
or Python packages — only uses stdlib + the ``claude`` CLI on PATH.
|
||||
for each, and POSTs candidates back. Uses stdlib + the ``claude`` CLI
|
||||
on PATH, plus the stdlib-only shared prompt/parser module at
|
||||
``atocore.memory._llm_prompt`` to eliminate prompt/parser drift
|
||||
against the in-container extractor (R12).
|
||||
|
||||
This is necessary because the ``claude`` CLI is on the Dalidou HOST
|
||||
but not inside the Docker container, and the host's Python doesn't
|
||||
have the container's dependencies (pydantic_settings, etc.).
|
||||
have the container's dependencies (pydantic_settings, etc.) — so we
|
||||
only import the one stdlib-only module, not the full atocore package.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
@@ -23,88 +26,26 @@ import urllib.parse
|
||||
import urllib.request
|
||||
from datetime import datetime, timezone
|
||||
|
||||
# R12: share the prompt + parser with the in-container extractor so
|
||||
# the two paths can't drift. The imported module is stdlib-only by
|
||||
# design; see src/atocore/memory/_llm_prompt.py.
|
||||
_SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
_SRC_DIR = os.path.abspath(os.path.join(_SCRIPT_DIR, "..", "src"))
|
||||
if _SRC_DIR not in sys.path:
|
||||
sys.path.insert(0, _SRC_DIR)
|
||||
|
||||
from atocore.memory._llm_prompt import ( # noqa: E402
|
||||
MEMORY_TYPES,
|
||||
SYSTEM_PROMPT,
|
||||
build_user_message,
|
||||
normalize_candidate_item,
|
||||
parse_llm_json_array,
|
||||
)
|
||||
|
||||
DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
|
||||
DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
|
||||
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
|
||||
MAX_RESPONSE_CHARS = 8000
|
||||
MAX_PROMPT_CHARS = 2000
|
||||
|
||||
MEMORY_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}
|
||||
|
||||
SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.
|
||||
|
||||
AtoCore is the brain for Atomaste's engineering work. Known projects:
|
||||
p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
|
||||
abb-space. Unknown project names — still tag them, the system auto-detects.
|
||||
|
||||
Your job is to emit SIGNALS that matter for future context. Be aggressive:
|
||||
err on the side of capturing useful signal. Triage filters noise downstream.
|
||||
|
||||
WHAT TO EMIT (in order of importance):
|
||||
|
||||
1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
|
||||
- "Schott quote received for ABB-Space" (event + project)
|
||||
- "Cédric asked about p06 firmware timing" (stakeholder event)
|
||||
- "Still waiting on Zygo lead-time from Nabeel" (blocker status)
|
||||
- "p05 vendor decision needs to happen this week" (action item)
|
||||
|
||||
2. DECISIONS AND CHOICES — anything that commits to a direction:
|
||||
- "Going with Zygo Verifire SV for p05" (decision)
|
||||
- "Dropping stitching from primary workflow" (design choice)
|
||||
- "USB SSD mandatory, not SD card" (architectural commitment)
|
||||
|
||||
3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
|
||||
- "CTE gradient dominates WFE at F/1.2" (materials insight)
|
||||
- "Preston model breaks below 5N because contact assumption fails"
|
||||
- "m=1 coma NOT correctable by force modulation" (controls insight)
|
||||
Test: would a competent engineer NEED experience to know this?
|
||||
If it's textbook/google-findable, skip it.
|
||||
|
||||
4. STAKEHOLDER AND VENDOR EVENTS:
|
||||
- "Email sent to Nabeel 2026-04-13 asking for lead time"
|
||||
- "Meeting with Jason on Table 7 next Tuesday"
|
||||
- "Starspec wants updated CAD by Friday"
|
||||
|
||||
5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
|
||||
- "Antoine prefers OAuth over API keys"
|
||||
- "Extraction stays off the capture hot path"
|
||||
|
||||
WHAT TO SKIP:
|
||||
|
||||
- Pure conversational filler ("ok thanks", "let me check")
|
||||
- Instructional help content ("run this command", "here's how to...")
|
||||
- Obvious textbook facts anyone can google in 30 seconds
|
||||
- Session meta-chatter ("let me commit this", "deploy running")
|
||||
- Transient system state snapshots ("36 active memories right now")
|
||||
|
||||
CANDIDATE TYPES — choose the best fit:
|
||||
|
||||
- project — a fact, decision, or event specific to one named project
|
||||
- knowledge — durable engineering insight (use domain, not project)
|
||||
- preference — how Antoine works / wants things done
|
||||
- adaptation — a standing rule or adjustment to behavior
|
||||
- episodic — a stakeholder event or milestone worth remembering
|
||||
|
||||
DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
|
||||
physics, materials, optics, mechanics, manufacturing, metrology,
|
||||
controls, software, math, finance, business
|
||||
|
||||
TRUST HIERARCHY:
|
||||
|
||||
- project-specific: set project to the project id, leave domain empty
|
||||
- domain knowledge: set domain, leave project empty
|
||||
- events/activity: use project, type=project or episodic
|
||||
- one conversation can produce MULTIPLE candidates — emit them all
|
||||
|
||||
OUTPUT RULES:
|
||||
|
||||
- Each candidate content under 250 characters, stands alone
|
||||
- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
|
||||
- Raw JSON array, no prose, no markdown fences
|
||||
- Empty array [] is fine when the conversation has no durable signal
|
||||
|
||||
Each element:
|
||||
{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""
|
||||
|
||||
_sandbox_cwd = None
|
||||
|
||||
@@ -175,14 +116,7 @@ def extract_one(prompt, response, project, model, timeout_s):
|
||||
if not shutil.which("claude"):
|
||||
return [], "claude_cli_missing"
|
||||
|
||||
prompt_excerpt = prompt[:MAX_PROMPT_CHARS]
|
||||
response_excerpt = response[:MAX_RESPONSE_CHARS]
|
||||
user_message = (
|
||||
f"PROJECT HINT (may be empty): {project}\n\n"
|
||||
f"USER PROMPT:\n{prompt_excerpt}\n\n"
|
||||
f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
|
||||
"Return the JSON array now."
|
||||
)
|
||||
user_message = build_user_message(prompt, response, project)
|
||||
|
||||
args = [
|
||||
"claude", "-p",
|
||||
@@ -211,66 +145,25 @@ def extract_one(prompt, response, project, model, timeout_s):
|
||||
|
||||
|
||||
def parse_candidates(raw, interaction_project):
|
||||
"""Parse model JSON output into candidate dicts."""
|
||||
text = raw.strip()
|
||||
if text.startswith("```"):
|
||||
text = text.strip("`")
|
||||
nl = text.find("\n")
|
||||
if nl >= 0:
|
||||
text = text[nl + 1:]
|
||||
if text.endswith("```"):
|
||||
text = text[:-3]
|
||||
text = text.strip()
|
||||
|
||||
if not text or text == "[]":
|
||||
return []
|
||||
|
||||
if not text.lstrip().startswith("["):
|
||||
start = text.find("[")
|
||||
end = text.rfind("]")
|
||||
if start >= 0 and end > start:
|
||||
text = text[start:end + 1]
|
||||
|
||||
try:
|
||||
parsed = json.loads(text)
|
||||
except json.JSONDecodeError:
|
||||
return []
|
||||
|
||||
if not isinstance(parsed, list):
|
||||
return []
|
||||
"""Parse model JSON output into candidate dicts.
|
||||
|
||||
Stripping + per-item normalization come from the shared
|
||||
``_llm_prompt`` module. Host-side project attribution: interaction
|
||||
scope wins, otherwise keep the model's tag (the API's own R9
|
||||
registry-check will happen server-side in the container on write;
|
||||
here we preserve the signal instead of dropping it).
|
||||
"""
|
||||
results = []
|
||||
for item in parsed:
|
||||
if not isinstance(item, dict):
|
||||
for item in parse_llm_json_array(raw):
|
||||
normalized = normalize_candidate_item(item)
|
||||
if normalized is None:
|
||||
continue
|
||||
mem_type = str(item.get("type") or "").strip().lower()
|
||||
content = str(item.get("content") or "").strip()
|
||||
model_project = str(item.get("project") or "").strip()
|
||||
domain = str(item.get("domain") or "").strip().lower()
|
||||
# R9 trust hierarchy: interaction scope always wins when set.
|
||||
# For unscoped interactions, keep model's project tag even if
|
||||
# unregistered — the system will detect new projects/leads.
|
||||
if interaction_project:
|
||||
project = interaction_project
|
||||
elif model_project:
|
||||
project = model_project
|
||||
else:
|
||||
project = ""
|
||||
# Domain knowledge: embed tag in content for cross-project retrieval
|
||||
if domain and not project:
|
||||
content = f"[{domain}] {content}"
|
||||
conf = item.get("confidence", 0.5)
|
||||
if mem_type not in MEMORY_TYPES or not content:
|
||||
continue
|
||||
try:
|
||||
conf = max(0.0, min(1.0, float(conf)))
|
||||
except (TypeError, ValueError):
|
||||
conf = 0.5
|
||||
project = interaction_project or normalized["project"] or ""
|
||||
results.append({
|
||||
"memory_type": mem_type,
|
||||
"content": content[:1000],
|
||||
"memory_type": normalized["type"],
|
||||
"content": normalized["content"],
|
||||
"project": project,
|
||||
"confidence": conf,
|
||||
"confidence": normalized["confidence"],
|
||||
})
|
||||
return results
|
||||
|
||||
|
||||
@@ -42,7 +42,7 @@ from pathlib import Path
|
||||
|
||||
DEFAULT_BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://localhost:8100")
|
||||
DEFAULT_OPENCLAW_HOST = os.environ.get("ATOCORE_OPENCLAW_HOST", "papa@192.168.86.39")
|
||||
DEFAULT_OPENCLAW_PATH = os.environ.get("ATOCORE_OPENCLAW_PATH", "/home/papa/openclaw-workspace")
|
||||
DEFAULT_OPENCLAW_PATH = os.environ.get("ATOCORE_OPENCLAW_PATH", "/home/papa/clawd")
|
||||
|
||||
# Files to pull and how to classify them
|
||||
DURABLE_FILES = [
|
||||
|
||||
@@ -218,8 +218,8 @@
|
||||
"Tailscale"
|
||||
],
|
||||
"expect_absent": [
|
||||
"GigaBIT"
|
||||
"[Source: p04-gigabit/"
|
||||
],
|
||||
"notes": "New p06 memory: Tailscale mesh for RPi remote access"
|
||||
"notes": "New p06 memory: Tailscale mesh for RPi remote access. Cross-project guard is a source-path check, not a word blacklist: the polisher ARCHITECTURE.md legitimately mentions the GigaBIT M1 mirror (it is what the polisher is built for), so testing for absence of that word produces false positives. The real invariant is that no p04 source chunks are retrieved into p06 context."
|
||||
}
|
||||
]
|
||||
|
||||
159
scripts/seed_project_state.py
Normal file
159
scripts/seed_project_state.py
Normal file
@@ -0,0 +1,159 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Seed Trusted Project State entries for all active projects.
|
||||
|
||||
Populates the project_state table with curated decisions, requirements,
|
||||
facts, contacts, and milestones so context packs have real content
|
||||
in the highest-trust tier.
|
||||
|
||||
Usage:
|
||||
python3 scripts/seed_project_state.py --base-url http://dalidou:8100
|
||||
python3 scripts/seed_project_state.py --base-url http://dalidou:8100 --dry-run
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import urllib.request
|
||||
import sys
|
||||
|
||||
# Each entry: (project, category, key, value, source)
|
||||
SEED_ENTRIES: list[tuple[str, str, str, str, str]] = [
|
||||
# ---- p04-gigabit (GigaBIT M1 1.2m Primary Mirror) ----
|
||||
("p04-gigabit", "fact", "mirror-spec",
|
||||
"1.2m borosilicate primary mirror for GigaBIT telescope. F/1.5, lightweight isogrid back structure.",
|
||||
"CDR docs + vault"),
|
||||
("p04-gigabit", "decision", "back-structure",
|
||||
"Option B selected: conical isogrid back structure with variable rib density. Chosen over flat-back for stiffness-to-weight ratio.",
|
||||
"CDR 2026-01"),
|
||||
("p04-gigabit", "decision", "polishing-vendor",
|
||||
"ABB Space (formerly INO) selected as polishing vendor. Contract includes computer-controlled polishing (CCP) and ion beam figuring (IBF).",
|
||||
"Entente de service 2026-01"),
|
||||
("p04-gigabit", "requirement", "surface-quality",
|
||||
"Surface figure accuracy: < 25nm RMS after final figuring. Microroughness: < 2nm RMS.",
|
||||
"CDR requirements"),
|
||||
("p04-gigabit", "contact", "abb-space",
|
||||
"ABB Space (INO), Quebec City. Primary contact for mirror polishing, CCP, and IBF. Project lead: coordinating FDR deliverables.",
|
||||
"vendor records"),
|
||||
("p04-gigabit", "milestone", "fdr",
|
||||
"Final Design Review (FDR) in preparation. Deliverables include interface drawings, thermal analysis, and updated error budget.",
|
||||
"project timeline"),
|
||||
|
||||
# ---- p05-interferometer (Fullum Interferometer) ----
|
||||
("p05-interferometer", "fact", "system-overview",
|
||||
"Custom Fizeau interferometer for in-situ metrology of large optics. Designed for the Fullum observatory polishing facility.",
|
||||
"vault docs"),
|
||||
("p05-interferometer", "decision", "cgh-design",
|
||||
"Computer-generated hologram (CGH) selected for null testing of the 1.2m mirror. Vendor: Diffraction International.",
|
||||
"vendor correspondence"),
|
||||
("p05-interferometer", "requirement", "measurement-accuracy",
|
||||
"Measurement accuracy target: lambda/20 (< 30nm PV) for surface figure verification.",
|
||||
"system requirements"),
|
||||
("p05-interferometer", "fact", "laser-source",
|
||||
"HeNe laser source at 632.8nm. Beam expansion to cover full 1.2m aperture via diverger + CGH.",
|
||||
"optical design docs"),
|
||||
("p05-interferometer", "contact", "diffraction-intl",
|
||||
"Diffraction International: CGH vendor. Fabricates the computer-generated hologram for null testing.",
|
||||
"vendor records"),
|
||||
|
||||
# ---- p06-polisher (Polisher Suite / P11-Polisher-Fullum) ----
|
||||
("p06-polisher", "fact", "suite-overview",
|
||||
"Integrated CNC polishing suite for the Fullum observatory. Includes 3-axis polishing machine, metrology integration, and real-time process control.",
|
||||
"vault docs"),
|
||||
("p06-polisher", "decision", "control-architecture",
|
||||
"Beckhoff TwinCAT 3 selected for real-time motion control. EtherCAT fieldbus for servo drives and I/O.",
|
||||
"architecture docs"),
|
||||
("p06-polisher", "decision", "firmware-split",
|
||||
"Firmware split into safety layer (PLC-level interlocks) and application layer (trajectory generation, adaptive dwell-time).",
|
||||
"architecture docs"),
|
||||
("p06-polisher", "requirement", "axis-travel",
|
||||
"Z-axis: 200mm travel for tool engagement. X/Y: covers 1.2m mirror diameter plus overshoot margin.",
|
||||
"mechanical requirements"),
|
||||
("p06-polisher", "fact", "telemetry",
|
||||
"Real-time telemetry via MQTT. Metrics: spindle RPM, force sensor, temperature probes, position feedback at 1kHz.",
|
||||
"control design docs"),
|
||||
("p06-polisher", "contact", "fullum-observatory",
|
||||
"Fullum Observatory: site where the polishing suite will be installed. Provides infrastructure (power, vibration isolation, clean environment).",
|
||||
"project records"),
|
||||
|
||||
# ---- atomizer-v2 ----
|
||||
("atomizer-v2", "fact", "product-overview",
|
||||
"Atomizer V2: internal project management and multi-agent orchestration platform. War-room based task coordination.",
|
||||
"repo docs"),
|
||||
("atomizer-v2", "decision", "projects-first-architecture",
|
||||
"Migration to projects-first architecture: each project is a workspace with its own agents, tasks, and knowledge.",
|
||||
"war-room-migration-plan-v2.md"),
|
||||
|
||||
# ---- abb-space (P08) ----
|
||||
("abb-space", "fact", "contract-overview",
|
||||
"ABB Space mirror polishing contract. Phase 1: spherical mirror polishing (200mm). Schott Zerodur substrate.",
|
||||
"quotes + correspondence"),
|
||||
("abb-space", "contact", "schott",
|
||||
"Schott AG: substrate supplier for Zerodur mirror blanks. Quote received for 200mm blank.",
|
||||
"vendor records"),
|
||||
|
||||
# ---- atocore ----
|
||||
("atocore", "fact", "architecture",
|
||||
"AtoCore: runtime memory and knowledge layer. FastAPI + SQLite + ChromaDB. Hosted on Dalidou (Docker). Nightly pipeline: backup, extract, triage, synthesis.",
|
||||
"codebase"),
|
||||
("atocore", "decision", "no-api-keys",
|
||||
"No API keys allowed in AtoCore. LLM-assisted features use OAuth via 'claude -p' CLI or equivalent CLI-authenticated paths.",
|
||||
"DEV-LEDGER 2026-04-12"),
|
||||
("atocore", "decision", "storage-separation",
|
||||
"Human-readable sources (vault, drive) and machine operational storage (SQLite, ChromaDB) must remain separate. Machine DB is derived state.",
|
||||
"AGENTS.md"),
|
||||
("atocore", "decision", "extraction-off-hot-path",
|
||||
"Extraction stays off the capture hot path. Batch/manual only. Never block interaction recording with extraction.",
|
||||
"DEV-LEDGER 2026-04-11"),
|
||||
]
|
||||
|
||||
|
||||
def main() -> None:
|
||||
parser = argparse.ArgumentParser(description="Seed Trusted Project State")
|
||||
parser.add_argument("--base-url", default="http://dalidou:8100")
|
||||
parser.add_argument("--dry-run", action="store_true")
|
||||
args = parser.parse_args()
|
||||
|
||||
base = args.base_url.rstrip("/")
|
||||
created = 0
|
||||
skipped = 0
|
||||
errors = 0
|
||||
|
||||
for project, category, key, value, source in SEED_ENTRIES:
|
||||
if args.dry_run:
|
||||
print(f" [DRY] {project}/{category}/{key}: {value[:60]}...")
|
||||
created += 1
|
||||
continue
|
||||
|
||||
body = json.dumps({
|
||||
"project": project,
|
||||
"category": category,
|
||||
"key": key,
|
||||
"value": value,
|
||||
"source": source,
|
||||
"confidence": 1.0,
|
||||
}).encode()
|
||||
req = urllib.request.Request(
|
||||
f"{base}/project/state",
|
||||
data=body,
|
||||
headers={"Content-Type": "application/json"},
|
||||
method="POST",
|
||||
)
|
||||
try:
|
||||
resp = urllib.request.urlopen(req, timeout=10)
|
||||
result = json.loads(resp.read())
|
||||
if result.get("created"):
|
||||
created += 1
|
||||
print(f" + {project}/{category}/{key}")
|
||||
else:
|
||||
skipped += 1
|
||||
print(f" = {project}/{category}/{key} (already exists)")
|
||||
except Exception as e:
|
||||
errors += 1
|
||||
print(f" ! {project}/{category}/{key}: {e}", file=sys.stderr)
|
||||
|
||||
print(f"\nDone: {created} created, {skipped} skipped, {errors} errors")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -55,6 +55,7 @@ from atocore.memory.extractor import (
|
||||
)
|
||||
from atocore.memory.extractor_llm import (
|
||||
LLM_EXTRACTOR_VERSION,
|
||||
_cli_available as _llm_cli_available,
|
||||
extract_candidates_llm,
|
||||
)
|
||||
from atocore.memory.reinforcement import reinforce_from_interaction
|
||||
@@ -832,6 +833,18 @@ def api_extract_batch(req: ExtractBatchRequest | None = None) -> dict:
|
||||
invoke this endpoint explicitly (cron, manual curl, CLI).
|
||||
"""
|
||||
payload = req or ExtractBatchRequest()
|
||||
|
||||
if payload.mode == "llm" and not _llm_cli_available():
|
||||
raise HTTPException(
|
||||
status_code=503,
|
||||
detail=(
|
||||
"LLM extraction unavailable in this runtime: the `claude` CLI "
|
||||
"is not on PATH. Run host-side via "
|
||||
"`scripts/batch_llm_extract_live.py` instead, or call this "
|
||||
"endpoint with mode=\"rule\"."
|
||||
),
|
||||
)
|
||||
|
||||
since = payload.since
|
||||
|
||||
if not since:
|
||||
@@ -916,11 +929,14 @@ def api_dashboard() -> dict:
|
||||
"""One-shot system observability dashboard.
|
||||
|
||||
Returns memory counts by type/project/status, project state
|
||||
entry counts, recent interaction volume, and extraction pipeline
|
||||
entry counts, interaction volume by client, pipeline health
|
||||
(harness, triage stats, last run), and extraction pipeline
|
||||
status — everything an operator needs to understand AtoCore's
|
||||
health beyond the basic /health endpoint.
|
||||
"""
|
||||
import json as _json
|
||||
from collections import Counter
|
||||
from datetime import datetime as _dt, timezone as _tz
|
||||
|
||||
all_memories = get_memories(active_only=False, limit=500)
|
||||
active = [m for m in all_memories if m.status == "active"]
|
||||
@@ -930,27 +946,81 @@ def api_dashboard() -> dict:
|
||||
project_counts = dict(Counter(m.project or "(none)" for m in active))
|
||||
reinforced = [m for m in active if m.reference_count > 0]
|
||||
|
||||
interactions = list_interactions(limit=1)
|
||||
recent_interaction = interactions[0].created_at if interactions else None
|
||||
# Interaction stats — total + by_client from DB directly
|
||||
interaction_stats: dict = {"most_recent": None, "total": 0, "by_client": {}}
|
||||
try:
|
||||
from atocore.models.database import get_connection as _gc
|
||||
|
||||
# Extraction pipeline status
|
||||
extract_state = {}
|
||||
with _gc() as conn:
|
||||
row = conn.execute("SELECT count(*) FROM interactions").fetchone()
|
||||
interaction_stats["total"] = row[0] if row else 0
|
||||
rows = conn.execute(
|
||||
"SELECT client, count(*) FROM interactions GROUP BY client"
|
||||
).fetchall()
|
||||
interaction_stats["by_client"] = {r[0]: r[1] for r in rows}
|
||||
row = conn.execute(
|
||||
"SELECT created_at FROM interactions ORDER BY created_at DESC LIMIT 1"
|
||||
).fetchone()
|
||||
interaction_stats["most_recent"] = row[0] if row else None
|
||||
except Exception:
|
||||
interactions = list_interactions(limit=1)
|
||||
interaction_stats["most_recent"] = (
|
||||
interactions[0].created_at if interactions else None
|
||||
)
|
||||
|
||||
# Pipeline health from project state
|
||||
pipeline: dict = {}
|
||||
extract_state: dict = {}
|
||||
try:
|
||||
state_entries = get_state("atocore")
|
||||
for entry in state_entries:
|
||||
if entry.category == "status" and entry.key == "last_extract_batch_run":
|
||||
if entry.category != "status":
|
||||
continue
|
||||
if entry.key == "last_extract_batch_run":
|
||||
extract_state["last_run"] = entry.value
|
||||
elif entry.key == "pipeline_last_run":
|
||||
pipeline["last_run"] = entry.value
|
||||
try:
|
||||
last = _dt.fromisoformat(entry.value.replace("Z", "+00:00"))
|
||||
delta = _dt.now(_tz.utc) - last
|
||||
pipeline["hours_since_last_run"] = round(
|
||||
delta.total_seconds() / 3600, 1
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
elif entry.key == "pipeline_summary":
|
||||
try:
|
||||
pipeline["summary"] = _json.loads(entry.value)
|
||||
except Exception:
|
||||
pipeline["summary_raw"] = entry.value
|
||||
elif entry.key == "retrieval_harness_result":
|
||||
try:
|
||||
pipeline["harness"] = _json.loads(entry.value)
|
||||
except Exception:
|
||||
pipeline["harness_raw"] = entry.value
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Project state counts
|
||||
# Project state counts — include all registered projects
|
||||
ps_counts = {}
|
||||
for proj_id in ["p04-gigabit", "p05-interferometer", "p06-polisher", "atocore"]:
|
||||
try:
|
||||
entries = get_state(proj_id)
|
||||
ps_counts[proj_id] = len(entries)
|
||||
except Exception:
|
||||
pass
|
||||
try:
|
||||
from atocore.projects.registry import load_project_registry as _lpr
|
||||
|
||||
for proj in _lpr():
|
||||
try:
|
||||
entries = get_state(proj.project_id)
|
||||
ps_counts[proj.project_id] = len(entries)
|
||||
except Exception:
|
||||
pass
|
||||
except Exception:
|
||||
for proj_id in [
|
||||
"p04-gigabit", "p05-interferometer", "p06-polisher", "atocore",
|
||||
]:
|
||||
try:
|
||||
entries = get_state(proj_id)
|
||||
ps_counts[proj_id] = len(entries)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
return {
|
||||
"memories": {
|
||||
@@ -964,10 +1034,9 @@ def api_dashboard() -> dict:
|
||||
"counts": ps_counts,
|
||||
"total": sum(ps_counts.values()),
|
||||
},
|
||||
"interactions": {
|
||||
"most_recent": recent_interaction,
|
||||
},
|
||||
"interactions": interaction_stats,
|
||||
"extraction_pipeline": extract_state,
|
||||
"pipeline": pipeline,
|
||||
}
|
||||
|
||||
|
||||
|
||||
@@ -104,6 +104,21 @@ class Settings(BaseSettings):
|
||||
|
||||
@property
|
||||
def resolved_project_registry_path(self) -> Path:
|
||||
"""Path to the project registry JSON file.
|
||||
|
||||
If ``ATOCORE_PROJECT_REGISTRY_DIR`` env var is set, the registry
|
||||
lives at ``<that dir>/project-registry.json``. Otherwise falls
|
||||
back to the configured ``project_registry_path`` field.
|
||||
|
||||
This lets Docker deployments point at a mounted volume via env
|
||||
var without the ephemeral in-image ``/app/config/`` getting
|
||||
wiped on every rebuild.
|
||||
"""
|
||||
import os
|
||||
|
||||
registry_dir = os.environ.get("ATOCORE_PROJECT_REGISTRY_DIR", "").strip()
|
||||
if registry_dir:
|
||||
return Path(registry_dir) / "project-registry.json"
|
||||
return self._resolve_path(self.project_registry_path)
|
||||
|
||||
@property
|
||||
|
||||
183
src/atocore/memory/_llm_prompt.py
Normal file
183
src/atocore/memory/_llm_prompt.py
Normal file
@@ -0,0 +1,183 @@
|
||||
"""Shared LLM-extractor prompt + parser (stdlib-only).
|
||||
|
||||
R12: single source of truth for the system prompt, memory type set,
|
||||
size limits, and raw JSON parsing used by both paths that shell out
|
||||
to ``claude -p``:
|
||||
|
||||
- ``atocore.memory.extractor_llm`` (in-container extractor, wraps the
|
||||
parsed dicts in ``MemoryCandidate`` with registry-checked project
|
||||
attribution)
|
||||
- ``scripts/batch_llm_extract_live.py`` (host-side extractor, can't
|
||||
import the full atocore package because Dalidou's host Python lacks
|
||||
the container's deps; imports this module via ``sys.path``)
|
||||
|
||||
This module MUST stay stdlib-only. No ``atocore`` imports, no third-
|
||||
party packages. Callers apply their own project-attribution policy on
|
||||
top of the normalized dicts this module emits.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
from typing import Any
|
||||
|
||||
LLM_EXTRACTOR_VERSION = "llm-0.4.0"
|
||||
MAX_RESPONSE_CHARS = 8000
|
||||
MAX_PROMPT_CHARS = 2000
|
||||
MEMORY_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}
|
||||
|
||||
SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.
|
||||
|
||||
AtoCore is the brain for Atomaste's engineering work. Known projects:
|
||||
p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
|
||||
abb-space. Unknown project names — still tag them, the system auto-detects.
|
||||
|
||||
Your job is to emit SIGNALS that matter for future context. Be aggressive:
|
||||
err on the side of capturing useful signal. Triage filters noise downstream.
|
||||
|
||||
WHAT TO EMIT (in order of importance):
|
||||
|
||||
1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
|
||||
- "Schott quote received for ABB-Space" (event + project)
|
||||
- "Cédric asked about p06 firmware timing" (stakeholder event)
|
||||
- "Still waiting on Zygo lead-time from Nabeel" (blocker status)
|
||||
- "p05 vendor decision needs to happen this week" (action item)
|
||||
|
||||
2. DECISIONS AND CHOICES — anything that commits to a direction:
|
||||
- "Going with Zygo Verifire SV for p05" (decision)
|
||||
- "Dropping stitching from primary workflow" (design choice)
|
||||
- "USB SSD mandatory, not SD card" (architectural commitment)
|
||||
|
||||
3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
|
||||
- "CTE gradient dominates WFE at F/1.2" (materials insight)
|
||||
- "Preston model breaks below 5N because contact assumption fails"
|
||||
- "m=1 coma NOT correctable by force modulation" (controls insight)
|
||||
Test: would a competent engineer NEED experience to know this?
|
||||
If it's textbook/google-findable, skip it.
|
||||
|
||||
4. STAKEHOLDER AND VENDOR EVENTS:
|
||||
- "Email sent to Nabeel 2026-04-13 asking for lead time"
|
||||
- "Meeting with Jason on Table 7 next Tuesday"
|
||||
- "Starspec wants updated CAD by Friday"
|
||||
|
||||
5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
|
||||
- "Antoine prefers OAuth over API keys"
|
||||
- "Extraction stays off the capture hot path"
|
||||
|
||||
WHAT TO SKIP:
|
||||
|
||||
- Pure conversational filler ("ok thanks", "let me check")
|
||||
- Instructional help content ("run this command", "here's how to...")
|
||||
- Obvious textbook facts anyone can google in 30 seconds
|
||||
- Session meta-chatter ("let me commit this", "deploy running")
|
||||
- Transient system state snapshots ("36 active memories right now")
|
||||
|
||||
CANDIDATE TYPES — choose the best fit:
|
||||
|
||||
- project — a fact, decision, or event specific to one named project
|
||||
- knowledge — durable engineering insight (use domain, not project)
|
||||
- preference — how Antoine works / wants things done
|
||||
- adaptation — a standing rule or adjustment to behavior
|
||||
- episodic — a stakeholder event or milestone worth remembering
|
||||
|
||||
DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
|
||||
physics, materials, optics, mechanics, manufacturing, metrology,
|
||||
controls, software, math, finance, business
|
||||
|
||||
TRUST HIERARCHY:
|
||||
|
||||
- project-specific: set project to the project id, leave domain empty
|
||||
- domain knowledge: set domain, leave project empty
|
||||
- events/activity: use project, type=project or episodic
|
||||
- one conversation can produce MULTIPLE candidates — emit them all
|
||||
|
||||
OUTPUT RULES:
|
||||
|
||||
- Each candidate content under 250 characters, stands alone
|
||||
- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
|
||||
- Raw JSON array, no prose, no markdown fences
|
||||
- Empty array [] is fine when the conversation has no durable signal
|
||||
|
||||
Each element:
|
||||
{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""
|
||||
|
||||
|
||||
def build_user_message(prompt: str, response: str, project_hint: str) -> str:
|
||||
prompt_excerpt = (prompt or "")[:MAX_PROMPT_CHARS]
|
||||
response_excerpt = (response or "")[:MAX_RESPONSE_CHARS]
|
||||
return (
|
||||
f"PROJECT HINT (may be empty): {project_hint or ''}\n\n"
|
||||
f"USER PROMPT:\n{prompt_excerpt}\n\n"
|
||||
f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
|
||||
"Return the JSON array now."
|
||||
)
|
||||
|
||||
|
||||
def parse_llm_json_array(raw_output: str) -> list[dict[str, Any]]:
|
||||
"""Strip markdown fences / leading prose and return the parsed JSON
|
||||
array as a list of raw dicts. Returns an empty list on any parse
|
||||
failure — callers decide whether to log."""
|
||||
text = (raw_output or "").strip()
|
||||
if text.startswith("```"):
|
||||
text = text.strip("`")
|
||||
nl = text.find("\n")
|
||||
if nl >= 0:
|
||||
text = text[nl + 1:]
|
||||
if text.endswith("```"):
|
||||
text = text[:-3]
|
||||
text = text.strip()
|
||||
|
||||
if not text or text == "[]":
|
||||
return []
|
||||
|
||||
if not text.lstrip().startswith("["):
|
||||
start = text.find("[")
|
||||
end = text.rfind("]")
|
||||
if start >= 0 and end > start:
|
||||
text = text[start:end + 1]
|
||||
|
||||
try:
|
||||
parsed = json.loads(text)
|
||||
except json.JSONDecodeError:
|
||||
return []
|
||||
|
||||
if not isinstance(parsed, list):
|
||||
return []
|
||||
return [item for item in parsed if isinstance(item, dict)]
|
||||
|
||||
|
||||
def normalize_candidate_item(item: dict[str, Any]) -> dict[str, Any] | None:
|
||||
"""Validate and normalize one raw model item into a candidate dict.
|
||||
|
||||
Returns None if the item fails basic validation (unknown type,
|
||||
empty content). Does NOT apply project-attribution policy — that's
|
||||
the caller's job, since the registry-check differs between the
|
||||
in-container path and the host path.
|
||||
|
||||
Output keys: type, content, project (raw model value), domain,
|
||||
confidence.
|
||||
"""
|
||||
mem_type = str(item.get("type") or "").strip().lower()
|
||||
content = str(item.get("content") or "").strip()
|
||||
if mem_type not in MEMORY_TYPES or not content:
|
||||
return None
|
||||
|
||||
model_project = str(item.get("project") or "").strip()
|
||||
domain = str(item.get("domain") or "").strip().lower()
|
||||
|
||||
try:
|
||||
confidence = float(item.get("confidence", 0.5))
|
||||
except (TypeError, ValueError):
|
||||
confidence = 0.5
|
||||
confidence = max(0.0, min(1.0, confidence))
|
||||
|
||||
if domain and not model_project:
|
||||
content = f"[{domain}] {content}"
|
||||
|
||||
return {
|
||||
"type": mem_type,
|
||||
"content": content[:1000],
|
||||
"project": model_project,
|
||||
"domain": domain,
|
||||
"confidence": confidence,
|
||||
}
|
||||
@@ -49,7 +49,6 @@ Implementation notes:
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import os
|
||||
import shutil
|
||||
import subprocess
|
||||
@@ -58,92 +57,21 @@ from dataclasses import dataclass
|
||||
from functools import lru_cache
|
||||
|
||||
from atocore.interactions.service import Interaction
|
||||
from atocore.memory._llm_prompt import (
|
||||
LLM_EXTRACTOR_VERSION,
|
||||
SYSTEM_PROMPT as _SYSTEM_PROMPT,
|
||||
build_user_message,
|
||||
normalize_candidate_item,
|
||||
parse_llm_json_array,
|
||||
)
|
||||
from atocore.memory.extractor import MemoryCandidate
|
||||
from atocore.memory.service import MEMORY_TYPES
|
||||
from atocore.observability.logger import get_logger
|
||||
|
||||
log = get_logger("extractor_llm")
|
||||
|
||||
LLM_EXTRACTOR_VERSION = "llm-0.4.0"
|
||||
DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
|
||||
DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
|
||||
MAX_RESPONSE_CHARS = 8000
|
||||
MAX_PROMPT_CHARS = 2000
|
||||
|
||||
_SYSTEM_PROMPT = """You extract memory candidates from LLM conversation turns for a personal context engine called AtoCore.
|
||||
|
||||
AtoCore is the brain for Atomaste's engineering work. Known projects:
|
||||
p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, atocore,
|
||||
abb-space. Unknown project names — still tag them, the system auto-detects.
|
||||
|
||||
Your job is to emit SIGNALS that matter for future context. Be aggressive:
|
||||
err on the side of capturing useful signal. Triage filters noise downstream.
|
||||
|
||||
WHAT TO EMIT (in order of importance):
|
||||
|
||||
1. PROJECT ACTIVITY — any mention of a project with context worth remembering:
|
||||
- "Schott quote received for ABB-Space" (event + project)
|
||||
- "Cédric asked about p06 firmware timing" (stakeholder event)
|
||||
- "Still waiting on Zygo lead-time from Nabeel" (blocker status)
|
||||
- "p05 vendor decision needs to happen this week" (action item)
|
||||
|
||||
2. DECISIONS AND CHOICES — anything that commits to a direction:
|
||||
- "Going with Zygo Verifire SV for p05" (decision)
|
||||
- "Dropping stitching from primary workflow" (design choice)
|
||||
- "USB SSD mandatory, not SD card" (architectural commitment)
|
||||
|
||||
3. DURABLE ENGINEERING INSIGHT — earned knowledge that generalizes:
|
||||
- "CTE gradient dominates WFE at F/1.2" (materials insight)
|
||||
- "Preston model breaks below 5N because contact assumption fails"
|
||||
- "m=1 coma NOT correctable by force modulation" (controls insight)
|
||||
Test: would a competent engineer NEED experience to know this?
|
||||
If it's textbook/google-findable, skip it.
|
||||
|
||||
4. STAKEHOLDER AND VENDOR EVENTS:
|
||||
- "Email sent to Nabeel 2026-04-13 asking for lead time"
|
||||
- "Meeting with Jason on Table 7 next Tuesday"
|
||||
- "Starspec wants updated CAD by Friday"
|
||||
|
||||
5. PREFERENCES AND ADAPTATIONS that shape how Antoine works:
|
||||
- "Antoine prefers OAuth over API keys"
|
||||
- "Extraction stays off the capture hot path"
|
||||
|
||||
WHAT TO SKIP:
|
||||
|
||||
- Pure conversational filler ("ok thanks", "let me check")
|
||||
- Instructional help content ("run this command", "here's how to...")
|
||||
- Obvious textbook facts anyone can google in 30 seconds
|
||||
- Session meta-chatter ("let me commit this", "deploy running")
|
||||
- Transient system state snapshots ("36 active memories right now")
|
||||
|
||||
CANDIDATE TYPES — choose the best fit:
|
||||
|
||||
- project — a fact, decision, or event specific to one named project
|
||||
- knowledge — durable engineering insight (use domain, not project)
|
||||
- preference — how Antoine works / wants things done
|
||||
- adaptation — a standing rule or adjustment to behavior
|
||||
- episodic — a stakeholder event or milestone worth remembering
|
||||
|
||||
DOMAINS for knowledge candidates (required when type=knowledge and project is empty):
|
||||
physics, materials, optics, mechanics, manufacturing, metrology,
|
||||
controls, software, math, finance, business
|
||||
|
||||
TRUST HIERARCHY:
|
||||
|
||||
- project-specific: set project to the project id, leave domain empty
|
||||
- domain knowledge: set domain, leave project empty
|
||||
- events/activity: use project, type=project or episodic
|
||||
- one conversation can produce MULTIPLE candidates — emit them all
|
||||
|
||||
OUTPUT RULES:
|
||||
|
||||
- Each candidate content under 250 characters, stands alone
|
||||
- Default confidence 0.5. Raise to 0.7 only for ratified/committed claims.
|
||||
- Raw JSON array, no prose, no markdown fences
|
||||
- Empty array [] is fine when the conversation has no durable signal
|
||||
|
||||
Each element:
|
||||
{"type": "project|knowledge|preference|adaptation|episodic", "content": "...", "project": "...", "domain": "", "confidence": 0.5}"""
|
||||
|
||||
|
||||
@dataclass
|
||||
@@ -206,13 +134,10 @@ def extract_candidates_llm_verbose(
|
||||
if not response_text:
|
||||
return LLMExtractionResult(candidates=[], raw_output="", error="empty_response")
|
||||
|
||||
prompt_excerpt = (interaction.prompt or "")[:MAX_PROMPT_CHARS]
|
||||
response_excerpt = response_text[:MAX_RESPONSE_CHARS]
|
||||
user_message = (
|
||||
f"PROJECT HINT (may be empty): {interaction.project or ''}\n\n"
|
||||
f"USER PROMPT:\n{prompt_excerpt}\n\n"
|
||||
f"ASSISTANT RESPONSE:\n{response_excerpt}\n\n"
|
||||
"Return the JSON array now."
|
||||
user_message = build_user_message(
|
||||
interaction.prompt or "",
|
||||
response_text,
|
||||
interaction.project or "",
|
||||
)
|
||||
|
||||
args = [
|
||||
@@ -270,50 +195,25 @@ def extract_candidates_llm_verbose(
|
||||
def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryCandidate]:
|
||||
"""Parse the model's JSON output into MemoryCandidate objects.
|
||||
|
||||
Tolerates common model glitches: surrounding whitespace, stray
|
||||
markdown fences, leading/trailing prose. Silently drops malformed
|
||||
array elements rather than raising.
|
||||
Shared stripping + per-item validation live in
|
||||
``atocore.memory._llm_prompt``. This function adds the container-
|
||||
only R9 project attribution: registry-check model_project and fall
|
||||
back to the interaction scope when set.
|
||||
"""
|
||||
text = raw_output.strip()
|
||||
if text.startswith("```"):
|
||||
text = text.strip("`")
|
||||
first_newline = text.find("\n")
|
||||
if first_newline >= 0:
|
||||
text = text[first_newline + 1 :]
|
||||
if text.endswith("```"):
|
||||
text = text[:-3]
|
||||
text = text.strip()
|
||||
|
||||
if not text or text == "[]":
|
||||
return []
|
||||
|
||||
if not text.lstrip().startswith("["):
|
||||
start = text.find("[")
|
||||
end = text.rfind("]")
|
||||
if start >= 0 and end > start:
|
||||
text = text[start : end + 1]
|
||||
|
||||
try:
|
||||
parsed = json.loads(text)
|
||||
except json.JSONDecodeError as exc:
|
||||
log.error("llm_extractor_parse_failed", error=str(exc), raw_prefix=raw_output[:120])
|
||||
return []
|
||||
|
||||
if not isinstance(parsed, list):
|
||||
return []
|
||||
raw_items = parse_llm_json_array(raw_output)
|
||||
if not raw_items and raw_output.strip() not in ("", "[]"):
|
||||
log.error("llm_extractor_parse_failed", raw_prefix=raw_output[:120])
|
||||
|
||||
results: list[MemoryCandidate] = []
|
||||
for item in parsed:
|
||||
if not isinstance(item, dict):
|
||||
for raw_item in raw_items:
|
||||
normalized = normalize_candidate_item(raw_item)
|
||||
if normalized is None:
|
||||
continue
|
||||
mem_type = str(item.get("type") or "").strip().lower()
|
||||
content = str(item.get("content") or "").strip()
|
||||
model_project = str(item.get("project") or "").strip()
|
||||
# R9 trust hierarchy for project attribution:
|
||||
# 1. Interaction scope always wins when set (strongest signal)
|
||||
# 2. Model project used only when interaction is unscoped
|
||||
# AND model project resolves to a registered project
|
||||
# 3. Empty string when both are empty/unregistered
|
||||
|
||||
model_project = normalized["project"]
|
||||
# R9 trust hierarchy: interaction scope wins; else registry-
|
||||
# resolve the model's tag; else keep the model's tag so auto-
|
||||
# triage can surface unregistered projects.
|
||||
if interaction.project:
|
||||
project = interaction.project
|
||||
elif model_project:
|
||||
@@ -328,9 +228,6 @@ def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryC
|
||||
if resolved in registered_ids:
|
||||
project = resolved
|
||||
else:
|
||||
# Unregistered project — keep the model's tag so
|
||||
# auto-triage / the operator can see it and decide
|
||||
# whether to register it as a new project or lead.
|
||||
project = model_project
|
||||
log.info(
|
||||
"unregistered_project_detected",
|
||||
@@ -338,34 +235,19 @@ def _parse_candidates(raw_output: str, interaction: Interaction) -> list[MemoryC
|
||||
interaction_id=interaction.id,
|
||||
)
|
||||
except Exception:
|
||||
project = model_project if model_project else ""
|
||||
project = model_project
|
||||
else:
|
||||
project = ""
|
||||
domain = str(item.get("domain") or "").strip().lower()
|
||||
confidence_raw = item.get("confidence", 0.5)
|
||||
if mem_type not in MEMORY_TYPES:
|
||||
continue
|
||||
if not content:
|
||||
continue
|
||||
# Domain knowledge: embed the domain tag in the content so it
|
||||
# survives without a schema migration. The context builder
|
||||
# can match on it via query-relevance ranking, and a future
|
||||
# migration can parse it into a proper column.
|
||||
if domain and not project:
|
||||
content = f"[{domain}] {content}"
|
||||
try:
|
||||
confidence = float(confidence_raw)
|
||||
except (TypeError, ValueError):
|
||||
confidence = 0.5
|
||||
confidence = max(0.0, min(1.0, confidence))
|
||||
|
||||
content = normalized["content"]
|
||||
results.append(
|
||||
MemoryCandidate(
|
||||
memory_type=mem_type,
|
||||
content=content[:1000],
|
||||
memory_type=normalized["type"],
|
||||
content=content,
|
||||
rule="llm_extraction",
|
||||
source_span=content[:200],
|
||||
project=project,
|
||||
confidence=confidence,
|
||||
confidence=normalized["confidence"],
|
||||
source_interaction_id=interaction.id,
|
||||
extractor_version=LLM_EXTRACTOR_VERSION,
|
||||
)
|
||||
|
||||
@@ -340,6 +340,84 @@ def reinforce_memory(
|
||||
return True, old_confidence, new_confidence
|
||||
|
||||
|
||||
def auto_promote_reinforced(
|
||||
min_reference_count: int = 3,
|
||||
min_confidence: float = 0.7,
|
||||
max_age_days: int = 14,
|
||||
) -> list[str]:
|
||||
"""Auto-promote candidate memories with strong reinforcement signals.
|
||||
|
||||
Phase 10: memories that have been reinforced by multiple interactions
|
||||
graduate from candidate to active without human review. This rewards
|
||||
knowledge that the system keeps referencing organically.
|
||||
|
||||
Returns a list of promoted memory IDs.
|
||||
"""
|
||||
from datetime import timedelta
|
||||
|
||||
cutoff = (
|
||||
datetime.now(timezone.utc) - timedelta(days=max_age_days)
|
||||
).strftime("%Y-%m-%d %H:%M:%S")
|
||||
promoted: list[str] = []
|
||||
with get_connection() as conn:
|
||||
rows = conn.execute(
|
||||
"SELECT id, content, memory_type, project, confidence, "
|
||||
"reference_count FROM memories "
|
||||
"WHERE status = 'candidate' "
|
||||
"AND COALESCE(reference_count, 0) >= ? "
|
||||
"AND confidence >= ? "
|
||||
"AND last_referenced_at >= ?",
|
||||
(min_reference_count, min_confidence, cutoff),
|
||||
).fetchall()
|
||||
|
||||
for row in rows:
|
||||
mid = row["id"]
|
||||
ok = promote_memory(mid)
|
||||
if ok:
|
||||
promoted.append(mid)
|
||||
log.info(
|
||||
"memory_auto_promoted",
|
||||
memory_id=mid,
|
||||
memory_type=row["memory_type"],
|
||||
project=row["project"] or "(global)",
|
||||
reference_count=row["reference_count"],
|
||||
confidence=round(row["confidence"], 3),
|
||||
)
|
||||
return promoted
|
||||
|
||||
|
||||
def expire_stale_candidates(
|
||||
max_age_days: int = 14,
|
||||
) -> list[str]:
|
||||
"""Reject candidate memories that sat in queue too long unreinforced.
|
||||
|
||||
Candidates older than ``max_age_days`` with zero reinforcement are
|
||||
auto-rejected to prevent unbounded queue growth. Returns rejected IDs.
|
||||
"""
|
||||
from datetime import timedelta
|
||||
|
||||
cutoff = (
|
||||
datetime.now(timezone.utc) - timedelta(days=max_age_days)
|
||||
).strftime("%Y-%m-%d %H:%M:%S")
|
||||
expired: list[str] = []
|
||||
with get_connection() as conn:
|
||||
rows = conn.execute(
|
||||
"SELECT id FROM memories "
|
||||
"WHERE status = 'candidate' "
|
||||
"AND COALESCE(reference_count, 0) = 0 "
|
||||
"AND created_at < ?",
|
||||
(cutoff,),
|
||||
).fetchall()
|
||||
|
||||
for row in rows:
|
||||
mid = row["id"]
|
||||
ok = reject_candidate_memory(mid)
|
||||
if ok:
|
||||
expired.append(mid)
|
||||
log.info("memory_expired", memory_id=mid)
|
||||
return expired
|
||||
|
||||
|
||||
def get_memories_for_context(
|
||||
memory_types: list[str] | None = None,
|
||||
project: str | None = None,
|
||||
|
||||
@@ -171,3 +171,38 @@ def test_llm_extraction_failure_returns_empty(tmp_data_dir, monkeypatch):
|
||||
# Nothing in the candidate queue
|
||||
queue = get_memories(status="candidate", limit=10)
|
||||
assert len(queue) == 0
|
||||
|
||||
|
||||
def test_extract_batch_api_503_when_cli_missing(tmp_data_dir, monkeypatch):
|
||||
"""R11: POST /admin/extract-batch with mode=llm must fail loud when
|
||||
the `claude` CLI is unavailable, instead of silently returning a
|
||||
success-with-0-candidates payload (which masked host-vs-container
|
||||
truth for operators)."""
|
||||
from fastapi.testclient import TestClient
|
||||
from atocore.main import app
|
||||
import atocore.api.routes as routes
|
||||
|
||||
init_db()
|
||||
monkeypatch.setattr(routes, "_llm_cli_available", lambda: False)
|
||||
|
||||
client = TestClient(app)
|
||||
response = client.post("/admin/extract-batch", json={"mode": "llm"})
|
||||
|
||||
assert response.status_code == 503
|
||||
assert "claude" in response.json()["detail"].lower()
|
||||
|
||||
|
||||
def test_extract_batch_api_rule_mode_ok_without_cli(tmp_data_dir, monkeypatch):
|
||||
"""Rule mode must still work when the LLM CLI is missing — R11 only
|
||||
affects mode=llm."""
|
||||
from fastapi.testclient import TestClient
|
||||
from atocore.main import app
|
||||
import atocore.api.routes as routes
|
||||
|
||||
init_db()
|
||||
monkeypatch.setattr(routes, "_llm_cli_available", lambda: False)
|
||||
|
||||
client = TestClient(app)
|
||||
response = client.post("/admin/extract-batch", json={"mode": "rule"})
|
||||
|
||||
assert response.status_code == 200
|
||||
|
||||
@@ -186,3 +186,98 @@ def test_memories_for_context_empty(isolated_db):
|
||||
text, chars = get_memories_for_context()
|
||||
assert text == ""
|
||||
assert chars == 0
|
||||
|
||||
|
||||
# --- Phase 10: auto-promotion + candidate expiry ---
|
||||
|
||||
|
||||
def _get_memory_by_id(memory_id):
|
||||
"""Helper: fetch a single memory by ID."""
|
||||
from atocore.models.database import get_connection
|
||||
with get_connection() as conn:
|
||||
row = conn.execute("SELECT * FROM memories WHERE id = ?", (memory_id,)).fetchone()
|
||||
return dict(row) if row else None
|
||||
|
||||
|
||||
def test_auto_promote_reinforced_basic(isolated_db):
|
||||
from atocore.memory.service import (
|
||||
auto_promote_reinforced,
|
||||
create_memory,
|
||||
reinforce_memory,
|
||||
)
|
||||
|
||||
mem_obj = create_memory("knowledge", "Zerodur has near-zero CTE", status="candidate", confidence=0.7)
|
||||
mid = mem_obj.id
|
||||
# reinforce_memory only touches active memories, so we need to
|
||||
# promote first to reinforce, then demote back to candidate —
|
||||
# OR just bump reference_count + last_referenced_at directly
|
||||
from atocore.models.database import get_connection
|
||||
from datetime import datetime, timezone
|
||||
now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
|
||||
with get_connection() as conn:
|
||||
conn.execute(
|
||||
"UPDATE memories SET reference_count = 3, last_referenced_at = ? WHERE id = ?",
|
||||
(now, mid),
|
||||
)
|
||||
|
||||
promoted = auto_promote_reinforced(min_reference_count=3, min_confidence=0.7)
|
||||
assert mid in promoted
|
||||
mem = _get_memory_by_id(mid)
|
||||
assert mem["status"] == "active"
|
||||
|
||||
|
||||
def test_auto_promote_reinforced_ignores_low_refs(isolated_db):
|
||||
from atocore.memory.service import auto_promote_reinforced, create_memory
|
||||
from atocore.models.database import get_connection
|
||||
from datetime import datetime, timezone
|
||||
|
||||
mem_obj = create_memory("knowledge", "Some knowledge", status="candidate", confidence=0.7)
|
||||
mid = mem_obj.id
|
||||
now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
|
||||
with get_connection() as conn:
|
||||
conn.execute(
|
||||
"UPDATE memories SET reference_count = 1, last_referenced_at = ? WHERE id = ?",
|
||||
(now, mid),
|
||||
)
|
||||
|
||||
promoted = auto_promote_reinforced(min_reference_count=3, min_confidence=0.7)
|
||||
assert mid not in promoted
|
||||
mem = _get_memory_by_id(mid)
|
||||
assert mem["status"] == "candidate"
|
||||
|
||||
|
||||
def test_expire_stale_candidates(isolated_db):
|
||||
from atocore.memory.service import create_memory, expire_stale_candidates
|
||||
from atocore.models.database import get_connection
|
||||
|
||||
mem_obj = create_memory("knowledge", "Old unreferenced fact", status="candidate")
|
||||
mid = mem_obj.id
|
||||
with get_connection() as conn:
|
||||
conn.execute(
|
||||
"UPDATE memories SET created_at = datetime('now', '-30 days') WHERE id = ?",
|
||||
(mid,),
|
||||
)
|
||||
|
||||
expired = expire_stale_candidates(max_age_days=14)
|
||||
assert mid in expired
|
||||
mem = _get_memory_by_id(mid)
|
||||
assert mem["status"] == "invalid"
|
||||
|
||||
|
||||
def test_expire_stale_candidates_keeps_reinforced(isolated_db):
|
||||
from atocore.memory.service import create_memory, expire_stale_candidates
|
||||
from atocore.models.database import get_connection
|
||||
|
||||
mem_obj = create_memory("knowledge", "Referenced fact", status="candidate")
|
||||
mid = mem_obj.id
|
||||
with get_connection() as conn:
|
||||
conn.execute(
|
||||
"UPDATE memories SET reference_count = 1, "
|
||||
"created_at = datetime('now', '-30 days') WHERE id = ?",
|
||||
(mid,),
|
||||
)
|
||||
|
||||
expired = expire_stale_candidates(max_age_days=14)
|
||||
assert mid not in expired
|
||||
mem = _get_memory_by_id(mid)
|
||||
assert mem["status"] == "candidate"
|
||||
|
||||
Reference in New Issue
Block a user