13 Commits

b309e7fd49
feat(eval-loop): Day 4 — LLM-assisted extractor path (additive, flagged)
Day 2 baseline showed 0% recall for the rule-based extractor across
5 distinct miss classes. Day 4 decision gate: prototype an
LLM-assisted mode behind a flag. Option A ratified by Antoine.
New module src/atocore/memory/extractor_llm.py:
- extract_candidates_llm(interaction) returns the same MemoryCandidate
dataclass the rule extractor produces, so both paths flow through
the existing triage / candidate pipeline unchanged.
- extract_candidates_llm_verbose() also returns the raw model output
and any error string, for eval and debugging.
- Uses Claude Haiku 4.5 by default; model overridable via
ATOCORE_LLM_EXTRACTOR_MODEL env. Timeout via
ATOCORE_LLM_EXTRACTOR_TIMEOUT_S (default 20s).
- Silent-failure contract: missing API key, unreachable model,
malformed JSON — all return [] and log an error. Never raises
into the caller. The capture audit trail must not break on an
optional side effect.
- Parser tolerates markdown fences, surrounding prose, invalid
memory types, clamps confidence to [0,1], drops empty content.
- System prompt explicitly tells the model to return [] for most
conversational turns (durable-fact bar, not "extract everything").
- Trust rules unchanged: candidates are never auto-promoted,
extraction stays off the capture hot path, human triages via the
existing CLI.
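The silent-failure parsing contract above can be sketched roughly as follows. This is illustrative only: the function name, the memory-type set, and the default confidence are assumptions, not the real extractor_llm.py API.

```python
import json
import re

VALID_TYPES = {"preference", "fact", "decision"}  # hypothetical type set

def parse_llm_candidates(raw: str) -> list[dict]:
    """Tolerant-parser sketch: never raises, returns [] on any failure."""
    # Tolerate markdown fences and surrounding prose by grabbing the
    # outermost JSON array, if any.
    match = re.search(r"\[.*\]", raw, re.DOTALL)
    if not match:
        return []
    try:
        items = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    out = []
    for item in items:
        if not isinstance(item, dict):
            continue
        content = (item.get("content") or "").strip()
        if not content:
            continue  # drop empty content
        if item.get("memory_type") not in VALID_TYPES:
            continue  # drop invalid memory types
        conf = item.get("confidence", 0.5)
        try:
            conf = min(1.0, max(0.0, float(conf)))  # clamp to [0, 1]
        except (TypeError, ValueError):
            conf = 0.5
        out.append({"content": content,
                    "memory_type": item["memory_type"],
                    "confidence": conf})
    return out
```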
scripts/extractor_eval.py: new --mode {rule,llm} flag so the same
labeled corpus can be scored against both extractors. Default
remains rule so existing invocations are unchanged.
tests/test_extractor_llm.py: 12 new unit tests covering the parser
(empty array, malformed JSON, markdown fences, surrounding prose,
invalid types, empty content, confidence clamping, version tagging),
plus contract tests for missing API key, empty response, and a
mocked api_error path so failure modes never raise.
Test count: 264 -> 276 passing. No existing tests changed.
Next step: run `python scripts/extractor_eval.py --mode llm` against
the labeled set with ANTHROPIC_API_KEY in env, record the delta,
decide whether to wire LLM mode into the API endpoint and CLI or
keep it script-only for now.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

7d8d599030

feat(eval-loop): Day 1+2 — labeled extractor corpus + baseline scorecard
Day 1 (labeled corpus):
- scripts/eval_data/interactions_snapshot_2026-04-11.json — frozen
snapshot of 64 real claude-code interactions pulled from live
Dalidou (test-client captures filtered out). This is the stable
corpus the whole mini-phase labels against, independent of future
captures.
- scripts/eval_data/extractor_labels_2026-04-11.json — 20 hand-labeled
interactions drawn by length-stratified random sample. Positives:
5/20 = ~25%, total expected candidates: 7. Plan deviation: Codex's
plan asked for 30 (10/10/10 buckets); the real corpus is heavily
skewed toward instructional/status content, so honest labeling of
20 already crosses the fail-early threshold of "at least 5 plausible
positives" without padding.
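A length-stratified random sample like the one above can be drawn along these lines. The bucket boundaries and seed are assumptions for illustration, not the values used to build the labeled set.

```python
import random

def stratified_sample(interactions: list[dict], per_bucket: int,
                      seed: int = 0) -> list[dict]:
    """Length-stratified sample sketch: short/medium/long buckets by
    response length (the boundaries here are assumed, not the real ones)."""
    buckets = {"short": [], "medium": [], "long": []}
    for it in interactions:
        n = len(it.get("response", ""))
        if n < 500:
            buckets["short"].append(it)
        elif n < 2000:
            buckets["medium"].append(it)
        else:
            buckets["long"].append(it)
    rng = random.Random(seed)  # fixed seed keeps the labeled set reproducible
    sample = []
    for items in buckets.values():
        rng.shuffle(items)
        sample.extend(items[:per_bucket])
    return sample
```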
Day 2 (baseline measurement):
- scripts/extractor_eval.py — file-based eval runner that loads the
snapshot + labels, runs extract_candidates_from_interaction on each,
and reports yield / recall / precision / miss-class breakdown.
Returns exit 1 on any false positive or false negative.
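The scoring the runner does can be sketched as below, assuming per-interaction candidate counts are compared against expected counts. Field names and the exact matching scheme are illustrative; the real runner also classifies misses.

```python
def score_extractor(predictions: dict[str, int],
                    labels: dict[str, int]) -> dict:
    """Scoring sketch: candidate counts vs expected counts per interaction.
    Exit code 1 on any false positive or false negative."""
    tp = fp = fn = 0
    for iid, expected in labels.items():
        got = predictions.get(iid, 0)
        tp += min(got, expected)
        fp += max(got - expected, 0)
        fn += max(expected - got, 0)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return {"recall": recall, "precision": precision,
            "false_negatives": fn, "false_positives": fp,
            "exit_code": 1 if (fp or fn) else 0}
```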
Current rule extractor against the labeled set:
labeled=20 exact_match=15 positive_expected=5
yield=0.0 recall=0.0 precision=0.0
false_negatives=5 false_positives=0
miss_classes:
recommendation_prose
architectural_change_summary
spec_update_announcement
layered_recommendation
alignment_assertion
Interpretation: the rule-based extractor matches exactly zero of the
5 plausible positive interactions in the labeled set, and the misses
are spread across 5 distinct cue classes with no single dominant
pattern. This is the Day 4 hard-stop signal landing on Day 2 — a
single rule expansion cannot close a 5-way miss, and widening rules
blindly will collapse precision. The right move is to go straight to
the Day 4 decision gate and consider LLM-assisted extraction.
Escalating to DEV-LEDGER.md as R5 for human ratification before
continuing. Not skipping Day 3 silently.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

30ee857d62
test: loosen p05-configuration fixture cross-project check
The fixture asserted 'GigaBIT M1' must not appear in a p05 pack,
but GigaBIT M1 is the mirror the interferometer measures, so its
name legitimately shows up in p05 source docs (CGH test setup
diagrams, AOM design input, etc.). Flagging it as bleed was false
positive.
Replace the assertion with genuinely p04-only material: the
'Option B' / 'conical back' architecture decision and a p06 tag,
neither of which has any reason to appear in a p05 configuration
answer.
Harness now passes 6/6 against live Dalidou at

4da81c9e4e
feat: retrieval eval harness + doc sync
scripts/retrieval_eval.py walks a fixture file of project-hinted
questions, runs each against POST /context/build, and scores the
returned formatted_context against per-fixture expect_present and
expect_absent substring checklists. Exit 0 on all-pass, 1 on any
miss. Human-readable by default, --json for automation.
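The per-fixture check described above amounts to two substring checklists. A minimal sketch (function name and return shape are assumptions):

```python
def check_fixture(formatted_context: str, expect_present: list[str],
                  expect_absent: list[str]) -> list[str]:
    """Substring-checklist scoring sketch: returns a list of miss
    descriptions; an empty list means the fixture passes."""
    misses = []
    for needle in expect_present:
        if needle not in formatted_context:
            misses.append(f"missing: {needle!r}")
    for needle in expect_absent:
        if needle in formatted_context:
            misses.append(f"bleed: {needle!r}")
    return misses
```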
First live run against Dalidou at SHA

9366ba7879
feat: length-aware reinforcement + batch triage CLI + off-host backup
- Reinforcement matcher now handles paragraph-length memories via a
dual-mode threshold: short memories keep the 70% overlap rule,
long memories (>15 stems) require 12 absolute overlaps AND 35%
fraction so organic paraphrase can still reinforce. Diagnosis:
every active memory stayed at reference_count=0 because 40-token
project summaries never hit 70% overlap on real responses.
- scripts/atocore_client.py gains batch-extract (fan out
/interactions/{id}/extract over recent interactions) and triage
(interactive promote/reject walker for the candidate queue),
matching the Phase 9 reflection-loop review flow without pulling
extraction into the capture hot path.
- deploy/dalidou/cron-backup.sh adds an optional off-host rsync step
gated on ATOCORE_BACKUP_RSYNC, fail-open when the target is offline
  so a laptop being off at 03:00 UTC never turns the local backup red.
- docs/next-steps.md records the retrieval-quality sweep: project
state surfaces, chunks are on-topic but broad, active memories
never reach the pack (reflection loop has no retrieval outlet yet).
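The dual-mode threshold in the first bullet can be sketched as below, operating on already-stemmed token sets (the stemming and stop-list steps are elided; the function name is illustrative):

```python
def reinforces(memory_stems: set[str], response_stems: set[str]) -> bool:
    """Dual-mode threshold sketch: short memories keep the 70% overlap
    rule; long memories (>15 stems) need 12 absolute overlaps AND a
    35% fraction so organic paraphrase can still reinforce."""
    if not memory_stems:
        return False
    overlap = len(memory_stems & response_stems)
    fraction = overlap / len(memory_stems)
    if len(memory_stems) > 15:
        return overlap >= 12 and fraction >= 0.35
    return fraction >= 0.70
```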
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

b492f5f7b0
fix: schema init ordering, deploy.sh default, client BASE_URL docs
Three issues Dalidou Claude surfaced during the first real deploy
of commit

fad30d5461
feat(client): Phase 9 reflection loop surface in shared operator CLI
Codex's sequence step 3: finish the Phase 9 operator surface in the
shared client. The previous client version (0.1.0) covered stable
operations (project lifecycle, retrieval, context build, trusted
state, audit-query) but explicitly deferred capture/extract/queue/
promote/reject pending "exercised workflow". That deferral ran
into a bootstrap problem: real Claude Code sessions can't exercise
the Phase 9 loop without a usable client surface to drive it. This
commit ships the 8 missing subcommands so the next step (real
validation on Dalidou) is unblocked.
Bumps CLIENT_VERSION from 0.1.0 to 0.2.0 per the semver rules in
llm-client-integration.md (new subcommands = minor bump).
New subcommands in scripts/atocore_client.py
--------------------------------------------
| Subcommand | Endpoint |
|-----------------------|-------------------------------------------|
| capture | POST /interactions |
| extract | POST /interactions/{id}/extract |
| reinforce-interaction | POST /interactions/{id}/reinforce |
| list-interactions | GET /interactions |
| get-interaction | GET /interactions/{id} |
| queue | GET /memory?status=candidate |
| promote | POST /memory/{id}/promote |
| reject | POST /memory/{id}/reject |
Each follows the existing client style: positional arguments with
empty-string defaults for optional filters, truthy-string arguments
for booleans (matching the existing refresh-project pattern), JSON
output via print_json(), fail-open behavior inherited from
request().
capture accepts prompt + response + project + client + session_id +
reinforce as positionals, defaulting the client field to
"atocore-client" when omitted so every capture from the shared
client is identifiable in the interactions audit trail.
extract defaults to preview mode (persist=false). Pass "true" as
the second positional to create candidate memories.
list-interactions and queue build URL query strings with
url-encoded values and always include the limit, matching how the
existing context-build subcommand handles its parameters.
Security fix: ID-field URL encoding
-----------------------------------
The initial draft used urllib.parse.quote() with the default safe
set, which does NOT encode "/" because it's a reserved path
character. That's a security footgun on ID fields: passing
"promote mem/evil/action" would build /memory/mem/evil/action/promote
and hit a completely different endpoint than intended.
Fixed by passing safe="" to urllib.parse.quote() on every ID field
(interaction_id and memory_id). The tests cover this explicitly via
test_extract_url_encodes_interaction_id and test_promote_url_encodes_memory_id,
both of which would have failed with the default behavior.
Project names keep the default quote behavior because a project
name with a slash would already be broken elsewhere in the system
(ingest root resolution, file paths, etc).
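The footgun is easy to reproduce with urllib.parse directly. A minimal demonstration (the path template mirrors the promote route named above; the helper name is illustrative):

```python
from urllib.parse import quote

def memory_path(memory_id: str, action: str) -> str:
    # safe="" forces "/" to be percent-encoded, so a crafted ID cannot
    # smuggle extra path segments into the request.
    return f"/memory/{quote(memory_id, safe='')}/{action}"

# quote()'s default safe set is "/", so it leaves slashes alone:
unsafe = f"/memory/{quote('mem/evil/action')}/promote"
safe = memory_path("mem/evil/action", "promote")
```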
tests/test_atocore_client.py (new, 18 tests, all green)
-------------------------------------------------------
A dedicated test file for the shared client that mocks the
request() helper and verifies each subcommand:
- calls the correct HTTP method and path
- builds the correct JSON body (or query string)
- passes the right subset of CLI arguments through
- URL-encodes ID fields so path traversal isn't possible
Tests are structured as unit tests (not integration tests) because
the API surface on the server side already has its own route tests
in test_api_storage.py and the Phase 9 specific files. These tests
are the wiring contract between CLI args and HTTP calls.
Test file highlights:
- capture: default values, custom client, reinforce=false
- extract: preview by default, persist=true opt-in, URL encoding
- reinforce-interaction: correct path construction
- list-interactions: no filters, single filter, full filter set
(including ISO 8601 since parameter with T separator and Z)
- get-interaction: fetch by id
- queue: always filters status=candidate, accepts memory_type
and project, coerces limit to int
- promote / reject: correct path + URL encoding
- test_phase9_full_loop_via_client_shape: end-to-end sequence
that drives capture -> extract preview -> extract persist ->
queue list -> promote -> reject through the shared client and
verifies the exact sequence of HTTP calls that would be made
These tests run in ~0.2s because they mock request() — no DB, no
Chroma, no HTTP. The fast feedback loop matters because the
client surface is what every agent integration eventually depends
on.
docs/architecture/llm-client-integration.md updates
---------------------------------------------------
- New "Phase 9 reflection loop (shipped after migration safety
work)" section under "What's in scope for the shared client
today" with the full 8-subcommand table and a note explaining
the bootstrap-problem rationale
- Removed the "Memory review queue and reflection loop" section
from "What's intentionally NOT in scope today"; backup admin
and engineering-entity commands remain the only deferred
families
- Renumbered the deferred-commands list (was 3 items, now 2)
- Open follow-ups updated: memory-review-subcommand item replaced
with "real-usage validation of the Phase 9 loop" as the next
concrete dependency
- TL;DR updated to list the reflection-loop subcommands
- Versioning note records the v0.1.0 -> v0.2.0 bump with the
subcommands included
Full suite: 215 passing (was 197), 1 warning. The +18 is
tests/test_atocore_client.py. Runtime unchanged because the new
tests don't touch the DB.
What this commit does NOT do
----------------------------
- Does NOT change the server-side endpoints. All 8 subcommands
call existing API routes that were shipped in Phase 9 Commits
A/B/C. This is purely a client-side wiring commit.
- Does NOT run the reflection loop against the live Dalidou
instance. That's the next concrete step and is explicitly
called out in the open-follow-ups section of the updated doc.
- Does NOT modify the Claude Code slash command. It still pulls
context only; the capture/extract/queue/promote companion
commands (e.g. /atocore-record-response) are deferred until the
capture workflow has been exercised in real use at least once.
- Does NOT refactor the OpenClaw helper. That's a cross-repo
change and remains a queued follow-up, now unblocked by the
shared client having the reflection-loop subcommands.

261277fd51
fix(migration): preserve superseded/invalid shadow state during rekey
Codex caught a real data-loss bug in the legacy alias migration
shipped in

7e60f5a0e6
feat(ops): legacy alias migration script with dry-run/apply modes
Closes the compatibility gap documented in
docs/architecture/project-identity-canonicalization.md. Before

d0ff8b5738
Merge origin/main into codex/dalidou-storage-foundation
Integrate codex/port-atocore-ops-client (operator client + operations playbook) so the dalidou-storage-foundation branch can fast-forward into main. # Conflicts: # README.md |
|||
| b9da5b6d84 |
phase9 first-real-use validation + small hygiene wins
Session 1 of the four-session plan. Empirically exercises the Phase 9
loop (capture -> reinforce -> extract) for the first time and lands
three small hygiene fixes.
Validation script + report
--------------------------
scripts/phase9_first_real_use.py — reproducible script that:
- sets up an isolated SQLite + Chroma store under
data/validation/phase9-first-use (gitignored)
- seeds 3 active memories
- runs 8 sample interactions through capture + reinforce + extract
- prints what each step produced and reinforcement state at the end
- supports --json output for downstream tooling
docs/phase9-first-real-use.md — narrative report of the run with:
- extraction results table (8/8 expectations met exactly)
- the empirical finding that REINFORCEMENT MATCHED ZERO seeds
despite sample 5 clearly echoing the rebase preference memory
- root cause analysis: the substring matcher is too brittle for
natural paraphrases (e.g. "prefers" vs "I prefer", "history"
vs "the history")
- recommended fix: replace substring matcher with a token-overlap
matcher (>=70% of memory tokens present in response, with
light stemming and a small stop list)
- explicit note that the fix is queued as a follow-up commit, not
bundled into the report — keeps the audit trail clean
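The recommended token-overlap matcher can be sketched as follows. The stop-list contents and the stemming rule here are assumptions standing in for the "light stemming and a small stop list" the report recommends:

```python
STOP = {"the", "a", "an", "i", "to", "of"}  # assumed stop-list contents

def stem(token: str) -> str:
    """Very light stemming sketch: lowercase, strip punctuation and a
    trailing 's' so "prefers" matches "prefer"."""
    t = token.lower().strip(".,!?\"'")
    return t[:-1] if t.endswith("s") and len(t) > 3 else t

def token_overlap_match(memory: str, response: str,
                        threshold: float = 0.70) -> bool:
    """Proposed fix: >= 70% of memory tokens present in the response."""
    mem = {stem(t) for t in memory.split()} - STOP
    res = {stem(t) for t in response.split()} - STOP
    if not mem:
        return False
    return len(mem & res) / len(mem) >= threshold
```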
Key extraction results from the run:
- all 7 heading/sentence rules fired correctly
- 0 false positives on the prose-only sample (the most important
sanity check)
- long content preserved without truncation
- dedup correctly kept three distinct cues from one interaction
- project scoping flowed cleanly through the pipeline
Hygiene 1: FastAPI lifespan migration (src/atocore/main.py)
- Replaced @app.on_event("startup") with the modern @asynccontextmanager
lifespan handler
- Same setup work (setup_logging, ensure_runtime_dirs, init_db,
init_project_state_schema, startup_ready log)
- Removes the two on_event deprecation warnings from every test run
- Test suite now shows 1 warning instead of 3
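The lifespan pattern looks roughly like this. The startup helpers are stubbed here so the sketch is self-contained; the real ones live in the application modules, and the FastAPI wiring is shown as a comment:

```python
from contextlib import asynccontextmanager

calls: list[str] = []

# Stubs standing in for the real startup helpers named above.
def setup_logging(): calls.append("setup_logging")
def ensure_runtime_dirs(): calls.append("ensure_runtime_dirs")
def init_db(): calls.append("init_db")
def init_project_state_schema(): calls.append("init_project_state_schema")

@asynccontextmanager
async def lifespan(app):
    # All setup runs before the yield; teardown would go after it.
    setup_logging()
    ensure_runtime_dirs()
    init_db()
    init_project_state_schema()
    yield

# Wired up as: app = FastAPI(lifespan=lifespan)
# (replaces the deprecated @app.on_event("startup") handler)
```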
Hygiene 2: EXTRACTOR_VERSION constant (src/atocore/memory/extractor.py)
- Added EXTRACTOR_VERSION = "0.1.0" with a versioned change log comment
- MemoryCandidate dataclass carries extractor_version on every candidate
- POST /interactions/{id}/extract response now includes extractor_version
on both the top level (current run) and on each candidate
- Implements the versioning requirement called out in
docs/architecture/promotion-rules.md so old candidates can be
identified and re-evaluated when the rule set evolves
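The versioning shape described above can be sketched as below. The field set on the dataclass and the response-builder name are illustrative; only the extractor_version plumbing mirrors the commit:

```python
from dataclasses import dataclass

EXTRACTOR_VERSION = "0.1.0"
# 0.1.0 - initial heading/sentence rule set (change log lives here as comments)

@dataclass
class MemoryCandidate:
    content: str
    memory_type: str
    confidence: float
    extractor_version: str = EXTRACTOR_VERSION  # stamped on every candidate

def extract_response(candidates: list[MemoryCandidate]) -> dict:
    """Response sketch: version on the top level (current run) and on
    each candidate, so old candidates can be re-evaluated later."""
    return {
        "extractor_version": EXTRACTOR_VERSION,
        "candidates": [
            {"content": c.content, "memory_type": c.memory_type,
             "confidence": c.confidence,
             "extractor_version": c.extractor_version}
            for c in candidates
        ],
    }
```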
Hygiene 3: ~/.git-credentials cleanup (out-of-tree, not committed)
- Removed the dead OAUTH_USER:<jwt> line for dalidou:3000 that was
being silently rewritten by the system credential manager on every
push attempt
- Configured credential.http://dalidou:3000.helper with the empty-string
sentinel pattern so the URL-specific helper chain is exactly
["", store] instead of inheriting the system-level "manager" helper
that ships with Git for Windows
- Same fix for the 100.80.199.40 (Tailscale) entry
- Verified end to end: a fresh push using only the cleaned credentials
file (no embedded URL) authenticates as Antoine and lands cleanly
Full suite: 160 passing (no change from previous), 1 warning
(was 3) thanks to the lifespan migration.

ceb129c7d1
Add operator client and operations playbook

b4afbbb53a
feat: implement AtoCore Phase 0 + Phase 0.5 (foundation + PoC)
Complete implementation of the personal context engine foundation:
- FastAPI server with 5 endpoints (ingest, query, context/build, health, debug)
- SQLite database with 5 tables (documents, chunks, memories, projects, interactions)
- Heading-aware markdown chunker (800 char max, recursive splitting)
- Multilingual embeddings via sentence-transformers (EN/FR)
- ChromaDB vector store with cosine similarity retrieval
- Context builder with project boosting, dedup, and budget enforcement
- CLI scripts for batch ingestion and test prompt evaluation
- 19 unit tests passing, 79% coverage
- Validated on 482 real project files (8383 chunks, 0 errors)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
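A heading-aware chunker with recursive splitting can be sketched as below. This is a simplified stand-in, not the shipped chunker: it splits on markdown headings, then recursively halves any section still over the 800-character cap, preferring a newline near the midpoint.

```python
def chunk_markdown(text: str, max_len: int = 800) -> list[str]:
    """Heading-aware chunking sketch: split on markdown headings first,
    then recursively split any section still over max_len."""
    sections, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))

    def split(chunk: str) -> list[str]:
        if len(chunk) <= max_len:
            return [chunk]
        mid = len(chunk) // 2
        # Prefer a newline near the midpoint so we don't cut mid-line.
        cut = chunk.rfind("\n", 0, mid)
        if cut <= 0:
            cut = mid
        return split(chunk[:cut]) + split(chunk[cut:])

    return [c for s in sections for c in split(s)]
```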