3 Commits

6 changed files with 145 additions and 29 deletions

View File

@@ -6,11 +6,11 @@
## Orientation
- **live_sha** (Dalidou `/health` build_sha): `f44a211` (verified 2026-04-24T14:48:44Z post audit-improvements deploy; status=ok)
- **last_updated**: 2026-04-24 by Codex (retrieval boundary deployed; project_id metadata branch started)
- **main_tip**: `f44a211`
- **test_count**: 567 on `codex/project-id-metadata-retrieval` (deployed main baseline: 553)
- **harness**: `19/20 PASS` on live Dalidou, 0 blocking failures, 1 known content gap (`p04-constraints`)
- **live_sha** (Dalidou `/health` build_sha): `d3de9f6` (verified 2026-04-25T01:01Z post trusted-state ranking deploy; status=ok)
- **last_updated**: 2026-04-25 by Codex (retrieval harness fully green)
- **main_tip**: `d3de9f6`
- **test_count**: 572 on `main`
- **harness**: `20/20 PASS` on live Dalidou, 0 blocking failures, 0 known issues
- **vectors**: 33,253
- **active_memories**: 290 (`/admin/dashboard` 2026-04-24; note: the integrity panel reports a separate active_memory_count=951 and needs reconciliation)
- **candidate_memories**: 0 (triage queue drained)
@@ -23,9 +23,9 @@
- **capture_clients**: claude-code (Stop hook + cwd project inference), openclaw (before_agent_start + llm_output plugin, verified live)
- **wiki**: http://dalidou:8100/wiki (browse), /wiki/projects/{id}, /wiki/entities/{id}, /wiki/search
- **dashboard**: http://dalidou:8100/admin/dashboard (now shows pipeline health, interaction totals by client, all registered projects)
- **active_track**: Engineering V1 Completion (started 2026-04-22). V1-0 landed (`2712c5d`). V1-A density gate CLEARED (784 active ≫ 100 target as of 2026-04-23). V1-A soak gate at day 5/~7 (F4 first run 2026-04-19; nightly clean 2026-04-19 through 2026-04-23; failures confined to the known p04-constraints content gap). Plan: `docs/plans/engineering-v1-completion-plan.md`. Resume map: `docs/plans/v1-resume-state.md`.
- **active_track**: Engineering V1 Completion (started 2026-04-22). V1-0 landed (`2712c5d`). V1-A density gate CLEARED (784 active ≫ 100 target as of 2026-04-23). V1-A soak gate at day 5/~7 (F4 first run 2026-04-19; nightly clean 2026-04-19 through 2026-04-23; live harness is now fully green as of 2026-04-25). Plan: `docs/plans/engineering-v1-completion-plan.md`. Resume map: `docs/plans/v1-resume-state.md`.
- **last_nightly_pipeline**: `2026-04-23T03:00:20Z` — harness 17/18, triage promoted=3 rejected=7 human=0, dedup 7 clusters (1 tier1 + 6 tier2 auto-merged), graduation 30-skipped 0-graduated 0-errors, auto-triage drained the queue (0 new candidates 2026-04-22T00:52Z run)
- **open_branches**: none — R14 squash-merged as `0989fed` and deployed 2026-04-23T15:20:53Z. V1-A is the next scheduled work
- **open_branches**: `codex/p04-constraints-state-gap` pushed and fast-forwarded into `main` as `d3de9f6`; no active unmerged code branch for this tranche.
## Active Plan
@@ -170,6 +170,10 @@ One branch `codex/extractor-eval-loop` for Day 1-5, a second `codex/retrieval-ha
## Session Log
- **2026-04-25 Codex (p04 constraint gap closed; harness fully green)** Root-caused the remaining `p04-constraints` fixture: the `Zerodur` / `1.2` fact already existed in Trusted Project State (`requirement/key_constraints`), but project-state formatting was category/key ordered and then truncated to the 20% state budget, so contacts/decisions consumed the budget before the relevant requirement. Added query-relevance ranking for Trusted Project State entries before formatting/truncation, with regression coverage in `test_project_state_query_relevance_before_truncation`. Removed the fixture's `known_issue` lane so future p04 constraint regressions are blocking. Cleaned up a duplicate live requirement entry created during diagnosis by invalidating `requirement/mirror-blank-core-constraints`; canonical `requirement/key_constraints` remains active. Verified focused suite: 35 passed. Verified full local suite: 572 passed. Deployed `d3de9f67eaa08dfc5b2d86e8221b8c70fef266d3`; live exact p04 probe now surfaces `[REQUIREMENT] key_constraints` with `1.2` and `Zerodur`. Live retrieval harness: 20/20, 0 known issues, 0 blocking failures.
- **2026-04-25 Codex (project_id backfill + retrieval stabilization closed)** Merged `codex/project-id-metadata-retrieval` into `main` (`867a1ab`) and deployed to Dalidou. Took Chroma-inclusive backup `/srv/storage/atocore/backups/snapshots/20260424T154358Z`, then ran `scripts/backfill_chunk_project_ids.py` per project; the populated projects `p04-gigabit`, `p05-interferometer`, `p06-polisher`, `atomizer-v2`, and `atocore` all applied cleanly, covering 33,253 vectors total with 0 missing/malformed rows, and an immediate final dry-run showed 33,253 already tagged / 0 updates. Post-backfill harness exposed p06 memory-ranking misses (`Tailscale`, `encoder`), so Codex shipped `4744c69` and then `a87d984` (`fix(memory): widen query-time context candidates`). Full local suite: 571 passed. Live `/health` reports `a87d9845a8c34395a02890f0cf22aa7a46afaf62`, vectors=33,253, sources_ready=true. Live retrieval harness: 19/20, 0 blocking failures, 1 known issue (`p04-constraints` missing `Zerodur` / `1.2`). A repeat backfill dry-run after the code-only stabilization deploy was aborted after the one-off container ran too long; the live service stayed healthy, and the earlier post-apply idempotency result remains the migration acceptance record. Dalidou HTTP push credentials are still not configured; this session pushed through the Windows credential path.
- **2026-04-24 Codex (retrieval boundary deployed + project_id metadata tranche)** Merged `codex/audit-improvements-foundation` to `main` as `f44a211` and pushed to Dalidou Gitea. Took pre-deploy runtime backup `/srv/storage/atocore/backups/snapshots/20260424T144810Z` (DB + registry, no Chroma). Deployed via `papa@dalidou` canonical `deploy/dalidou/deploy.sh`; live `/health` reports build_sha `f44a2114970008a7eec4e7fc2860c8f072914e38`, build_time `2026-04-24T14:48:44Z`, status ok. Post-deploy retrieval harness: 20 fixtures, 19 pass, 0 blocking failures, 1 known issue (`p04-constraints`). The former blocker `p05-broad-status-no-atomizer` now passes. Manual p05 `context-build "current status"` spot check shows no p04/Atomizer source bleed in retrieved chunks. Started follow-up branch `codex/project-id-metadata-retrieval`: registered-project ingestion now writes explicit `project_id` into DB chunk metadata and Chroma vector metadata; retrieval prefers exact `project_id` when present and keeps path/tag matching as legacy fallback; added dry-run-by-default `scripts/backfill_chunk_project_ids.py` to backfill SQLite + Chroma metadata; added tests for project-id ingestion, registered refresh propagation, exact project-id retrieval, and collision fallback. Verified targeted suite (`test_ingestion.py`, `test_project_registry.py`, `test_retrieval.py`): 36 passed. Verified full suite: 556 passed in 72.44s. Branch not merged or deployed yet.
- **2026-04-24 Codex (project_id audit response)** Applied independent-audit fixes on `codex/project-id-metadata-retrieval`. Closed the nightly `/ingest/sources` clobber risk by adding registry-level `derive_project_id_for_path()` and making unscoped `ingest_file()` derive ownership from registered ingest roots when possible; `refresh_registered_project()` still passes the canonical project id directly. Changed retrieval so empty `project_id` falls through to legacy path/tag ownership instead of short-circuiting as unowned. Hardened `scripts/backfill_chunk_project_ids.py`: `--apply` now requires `--chroma-snapshot-confirmed`, runs Chroma metadata updates before SQLite writes, batches updates, skips/reports missing vectors, skips/reports malformed metadata, reports already-tagged rows, and turns missing ingestion tables into a JSON `db_warning` instead of a traceback. Added tests for auto-derive ingestion, empty-project fallback, ingest-root overlap rejection, and backfill dry-run/apply/snapshot/missing-vector/malformed cases. Verified targeted suite (`test_backfill_chunk_project_ids.py`, `test_ingestion.py`, `test_project_registry.py`, `test_retrieval.py`): 45 passed. Verified full suite: 565 passed in 73.16s. Local dry-run on empty/default data returns 0 updates with `db_warning` rather than crashing. Branch still not merged/deployed.
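The backfill safety shape the ledger entries describe — dry-run by default, `--apply` gated behind an explicit snapshot flag, and a per-row report of updates vs already-tagged vs missing vs malformed — can be sketched as follows. This is a minimal illustration, not the real `scripts/backfill_chunk_project_ids.py`; the row shape, function names, and report keys are assumptions.

```python
import argparse

def plan_backfill(chunks, vector_ids):
    """Classify chunk rows the way the ledger's report describes them:
    rows needing a project_id update, rows already tagged, rows whose
    vector is missing from Chroma, and rows with malformed metadata.
    Row shape ({"id": ..., "metadata": {...}}) is illustrative only."""
    report = {"updates": 0, "already_tagged": 0, "missing_vector": 0, "malformed": 0}
    for chunk in chunks:
        if not isinstance(chunk.get("metadata"), dict):
            report["malformed"] += 1          # report, don't crash
            continue
        if chunk["id"] not in vector_ids:
            report["missing_vector"] += 1     # skip rows with no vector
            continue
        if chunk["metadata"].get("project_id"):
            report["already_tagged"] += 1     # idempotency: second run is all no-ops
        else:
            report["updates"] += 1
    return report

def parse_backfill_args(argv=None):
    # Dry-run by default; applying requires an explicit snapshot confirmation,
    # mirroring the --apply / --chroma-snapshot-confirmed gate in the ledger.
    parser = argparse.ArgumentParser()
    parser.add_argument("--apply", action="store_true")
    parser.add_argument("--chroma-snapshot-confirmed", action="store_true")
    args = parser.parse_args(argv)
    if args.apply and not args.chroma_snapshot_confirmed:
        parser.error("--apply requires --chroma-snapshot-confirmed")
    return args
```

Under this shape, the "33,253 already tagged / 0 updates" acceptance record is simply a second `plan_backfill` pass in which every row lands in `already_tagged`.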

View File

@@ -1,11 +1,18 @@
# AtoCore - Current State (2026-04-24)
# AtoCore - Current State (2026-04-25)
Update 2026-04-24: audit-improvements deployed as `f44a211`; live harness is
19/20 with 0 blocking failures and 1 known content gap. Active follow-up branch
`codex/project-id-metadata-retrieval` is at 567 passing tests.
Update 2026-04-25: project-id chunk/vector metadata is deployed and backfilled,
and the final p04 Trusted Project State budget/ranking gap is closed. Live
Dalidou is on `d3de9f6`; `/health` is ok with 33,253 vectors and sources
ready. Live retrieval harness is 20/20 with 0 blocking failures and 0 known
issues. Full local suite: 572 passed.
Live deploy: `2b86543` · Dalidou health: ok · Harness: 18/20 with 1 known
content gap and 1 current blocking project-bleed guard · Tests: 553 passing.
The project-id backfill was applied per populated project after a
Chroma-inclusive backup at
`/srv/storage/atocore/backups/snapshots/20260424T154358Z`. The immediate
post-apply dry-run reported 33,253 already tagged, 0 updates, 0 missing, and 0
malformed. A later repeat dry-run after the code-only ranking deploy was
aborted because the one-off container ran too long; the earlier post-apply
idempotency result remains the migration acceptance record.
## V1-0 landed 2026-04-22
@@ -69,10 +76,10 @@ Last nightly run (2026-04-19 03:00 UTC): **31 promoted · 39 rejected · 0 needs
| 7G | Re-extraction on prompt version bump | pending |
| 7H | Chroma vector hygiene (delete vectors for superseded memories) | pending |
## Known gaps (honest, refreshed 2026-04-24)
## Known gaps (honest, refreshed 2026-04-25)
1. **Capture surface is Claude-Code-and-OpenClaw only.** Conversations in Claude Desktop, Claude.ai web, phone, or any other LLM UI are NOT captured. Example: the rotovap/mushroom chat yesterday never reached AtoCore because no hook fired. See Q4 below.
2. **Project-scoped retrieval guard is deployed and passing.** The April 24 p05 broad-status bleed guard now passes on live Dalidou. The active follow-up branch adds explicit `project_id` chunk/vector metadata so the deployed path/tag heuristic can become a legacy fallback.
2. **Project-scoped retrieval guard is deployed and passing.** Explicit `project_id` chunk/vector metadata is now present in SQLite and Chroma for the 33,253-vector corpus. Retrieval prefers exact metadata ownership and keeps path/tag matching as a legacy fallback.
3. **Human interface is useful but not yet the V1 Human Mirror.** Wiki/dashboard pages exist, but the spec routes, deterministic mirror files, disputed markers, and curated annotations remain V1-D work.
4. **Harness known issue:** `p04-constraints` wants "Zerodur" and "1.2"; live retrieval surfaces related constraints but not those exact strings. Treat as content/state gap until fixed.
4. **Harness is currently green.** The former `p04-constraints` known issue is closed; query-relevant Trusted Project State entries now rank before state-budget truncation.
5. **Formal docs lag the ledger during fast work.** Use `DEV-LEDGER.md` and `python scripts/live_status.py` for live truth, then copy verified claims into these docs.

View File

@@ -131,10 +131,12 @@ This sits implicitly between Phase 8 (OpenClaw) and Phase 11
(multi-model). Memory-review and engineering-entity commands are
deferred from the shared client until their workflows are exercised.
## What Is Real Today (updated 2026-04-24)
## What Is Real Today (updated 2026-04-25)
- canonical AtoCore runtime on Dalidou (`2b86543`, deploy.sh verified)
- 33,253 vectors across 6 registered projects
- canonical AtoCore runtime on Dalidou (`d3de9f6`, deploy.sh verified)
- 33,253 vectors across 6 registered projects, with explicit `project_id`
metadata backfilled into SQLite and Chroma after snapshot
`/srv/storage/atocore/backups/snapshots/20260424T154358Z`
- 951 captured interactions as of the 2026-04-24 live dashboard; refresh
exact live counts with
`python scripts/live_status.py`
@@ -149,10 +151,13 @@ deferred from the shared client until their workflows are exercised.
- 290 active memories and 0 candidate memories as of the 2026-04-24 live
dashboard
- context pack assembly with 4 tiers: Trusted Project State > identity/preference > project memories > retrieved chunks
- query-relevance memory ranking with overlap-density scoring
- retrieval eval harness: 20 fixtures; current live has 19 pass, 1 known
content gap, and 0 blocking failures after the audit-improvements deploy
- 567 tests passing on the active `codex/project-id-metadata-retrieval` branch
- query-relevance memory ranking with overlap-density scoring and widened
query-time candidate pools so older exact-intent project memories can rank
ahead of generic high-confidence notes
- query-relevance Trusted Project State ranking before state-budget truncation
- retrieval eval harness: 20 fixtures; current live has 20 pass, 0 known
issues, and 0 blocking failures
- 572 tests passing on `main`
- nightly pipeline: backup → cleanup → rsync → OpenClaw import → vault refresh → extract → triage → **auto-promote/expire** → weekly synth/lint → **retrieval harness** → **pipeline summary to project state**
- Phase 10 operational: reinforcement-based auto-promotion (ref_count ≥ 3, confidence ≥ 0.7) + stale candidate expiry (14 days unreinforced)
- pipeline health visible in dashboard: interaction totals by client, pipeline last_run, harness results, triage stats
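The Phase 10 auto-promotion rule stated above (promote at ref_count ≥ 3 and confidence ≥ 0.7, expire candidates unreinforced for 14 days) reduces to a small decision function. A minimal sketch — the function name and signature are illustrative, not the real triage API:

```python
from datetime import datetime, timedelta

PROMOTE_MIN_REFS = 3        # thresholds taken from the doc's Phase 10 rule
PROMOTE_MIN_CONFIDENCE = 0.7
EXPIRE_AFTER = timedelta(days=14)

def triage_candidate(ref_count: int, confidence: float,
                     last_reinforced: datetime, now: datetime) -> str:
    """Promote reinforced, confident candidates; expire stale ones; keep the rest."""
    if ref_count >= PROMOTE_MIN_REFS and confidence >= PROMOTE_MIN_CONFIDENCE:
        return "promote"
    if now - last_reinforced >= EXPIRE_AFTER:
        return "expire"
    return "keep"
```

Note the ordering: promotion is checked first, so a candidate that crosses both thresholds is promoted even if its last reinforcement is old.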
@@ -173,9 +178,10 @@ These are the current practical priorities.
Target: 100+ active memories.
3. **Multi-model triage** (Phase 11 entry) — switch auto-triage to a
different model than the extractor for independent validation
4. **Fix p04-constraints harness failure** — retrieval doesn't surface
"Zerodur" for p04 constraint queries. Investigate if it's a missing
memory or retrieval ranking issue.
4. **Fix Dalidou Git credentials** — the host checkout can fetch but cannot
push to Gitea over HTTP in non-interactive SSH sessions. Prefer switching
the deploy checkout to a Gitea SSH key; PAT-backed `credential.helper store`
is the fallback.
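The preferred SSH-key route for the deploy checkout can be sketched as a config fragment. All of the host, user, repo path, and checkout path below are assumptions; substitute the real Gitea values:

```shell
# Sketch only: switch the Dalidou deploy checkout from HTTP to a Gitea SSH
# remote so non-interactive sessions can push.
ssh-keygen -t ed25519 -f ~/.ssh/gitea_deploy -N ""
# ...then register ~/.ssh/gitea_deploy.pub as a key in the Gitea UI.
git -C /path/to/deploy/checkout remote set-url origin git@dalidou:papa/atocore.git
git -C /path/to/deploy/checkout push origin main   # should no longer prompt

# Fallback (PAT over HTTP), as noted above:
# git config credential.helper store   # then push once interactively with the PAT
```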
## Active — Engineering V1 Completion Track (started 2026-04-22)

View File

@@ -27,8 +27,7 @@
"expect_absent": [
"polisher suite"
],
"known_issue": true,
"notes": "Known content gap as of 2026-04-24: live retrieval surfaces related constraints but not the exact Zerodur / 1.2 strings. Keep visible, but do not make nightly harness red until the source/state gap is fixed."
"notes": "Regression guard: query-relevant Trusted Project State requirements must survive the project-state budget cap."
},
{
"name": "p04-short-ambiguous",

View File

@@ -11,7 +11,7 @@ from dataclasses import dataclass, field
from pathlib import Path
import atocore.config as _config
from atocore.context.project_state import format_project_state, get_state
from atocore.context.project_state import ProjectStateEntry, format_project_state, get_state
from atocore.memory.service import get_memories_for_context
from atocore.observability.logger import get_logger
from atocore.engineering.service import get_entities, get_entity_with_context
@@ -116,6 +116,11 @@ def build_context(
if canonical_project:
state_entries = get_state(canonical_project)
if state_entries:
state_entries = _rank_project_state_entries(
state_entries,
query=user_prompt,
project=canonical_project,
)
project_state_text = format_project_state(state_entries)
project_state_text, project_state_chars = _truncate_text_block(
project_state_text,
@@ -284,6 +289,55 @@ def get_last_context_pack() -> ContextPack | None:
return _last_context_pack
def _rank_project_state_entries(
entries: list[ProjectStateEntry],
query: str,
project: str,
) -> list[ProjectStateEntry]:
"""Promote query-relevant trusted state before the state band is truncated."""
if not query or len(entries) <= 1:
return entries
from atocore.memory.reinforcement import _normalize, _tokenize
query_text = _normalize(query.replace("_", " "))
query_tokens = set(_tokenize(query_text))
query_tokens -= {
"how",
"what",
"when",
"where",
"which",
"who",
"why",
"current",
"status",
"project",
}
for part in (project or "").lower().replace("_", "-").split("-"):
query_tokens.discard(part)
if not query_tokens:
return entries
scored: list[tuple[int, float, float, int, ProjectStateEntry]] = []
for index, entry in enumerate(entries):
entry_text = " ".join(
[
entry.category,
entry.key.replace("_", " "),
entry.value,
entry.source,
]
)
entry_tokens = _tokenize(_normalize(entry_text))
overlap = len(entry_tokens & query_tokens) if entry_tokens else 0
density = overlap / len(entry_tokens) if entry_tokens else 0.0
scored.append((overlap, density, entry.confidence, -index, entry))
scored.sort(key=lambda item: (item[0], item[1], item[2], item[3]), reverse=True)
return [entry for _, _, _, _, entry in scored]
def _rank_chunks(
candidates: list[ChunkResult],
project_hint: str | None,

View File

@@ -143,6 +143,52 @@ def test_project_state_respects_total_budget(tmp_data_dir, sample_markdown):
assert len(pack.formatted_context) <= 120
def test_project_state_query_relevance_before_truncation(tmp_data_dir, sample_markdown):
"""Relevant trusted state should survive the project-state budget cap."""
init_db()
init_project_state_schema()
ingest_file(sample_markdown)
set_state(
"p04-gigabit",
"contact",
"abb-space",
"ABB Space is the primary vendor contact for polishing, CCP, IBF, procurement coordination, "
"contract administration, interface planning, and delivery discussions.",
)
set_state(
"p04-gigabit",
"decision",
"back-structure",
"Option B selected: conical isogrid back structure with variable rib density. "
"Chosen over flat-back for stiffness-to-weight ratio and manufacturability.",
)
set_state(
"p04-gigabit",
"decision",
"polishing-vendor",
"ABB Space selected as polishing vendor. Contract includes computer-controlled polishing "
"and ion beam figuring.",
)
set_state(
"p04-gigabit",
"requirement",
"key_constraints",
"The program targets a 1.2 m lightweight Zerodur mirror with filtered mechanical WFE below 15 nm "
"and mass below 103.5 kg.",
)
pack = build_context(
"what are the key GigaBIT M1 program constraints",
project_hint="p04-gigabit",
budget=3000,
)
assert "Zerodur" in pack.formatted_context
assert "1.2" in pack.formatted_context
assert pack.formatted_context.find("[REQUIREMENT]") < pack.formatted_context.find("[CONTACT]")
def test_project_hint_matches_state_case_insensitively(tmp_data_dir, sample_markdown):
"""Project state lookup should not depend on exact casing."""
init_db()