33 Commits
| SHA1 | Message |
|---|---|
| 028d4c3594 |
feat: Phase 7A — semantic memory dedup ("sleep cycle" V1)
New table memory_merge_candidates + service functions to cluster near-duplicate active memories within (project, memory_type) buckets, draft a unified content via LLM, and merge on human approval. Source memories become superseded (never deleted); merged memory carries union of tags, max of confidence, sum of reference_count.
- schema migration for memory_merge_candidates
- atocore.memory.similarity: cosine + transitive clustering
- atocore.memory._dedup_prompt: stdlib-only LLM prompt preserving every specific detail
- service: merge_memories / create_merge_candidate / get_merge_candidates / reject_merge_candidate
- scripts/memory_dedup.py: host-side detector (HTTP-only, idempotent)
- 5 API endpoints under /admin/memory/merge-candidates* + /admin/memory/dedup-scan
- triage UI: purple "🔗 Merge Candidates" section + "🔗 Scan for duplicates" bar
- batch-extract.sh Step B3 (0.90 daily, 0.85 Sundays)
- deploy/dalidou/dedup-watcher.sh for UI-triggered scans
- 21 new tests (374 → 395)
- docs/PHASE-7-MEMORY-CONSOLIDATION.md covering 7A-7H roadmap
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> |
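A minimal sketch of the merge semantics above (union of tags, max of confidence, sum of reference_count). The Memory fields and the merge_memories signature are simplified assumptions for illustration, not the actual service code:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    # Simplified stand-in for the real Memory dataclass (fields assumed for illustration).
    id: int
    content: str
    domain_tags: list[str] = field(default_factory=list)
    confidence: float = 0.5
    reference_count: int = 0
    status: str = "active"

def merge_memories(sources: list[Memory], unified_content: str, new_id: int) -> Memory:
    """Merge near-duplicate memories: sources become superseded (never deleted);
    the merged memory carries the union of tags, max confidence, summed references."""
    merged = Memory(
        id=new_id,
        content=unified_content,  # LLM-drafted unified text, approved by a human
        domain_tags=sorted({tag for m in sources for tag in m.domain_tags}),
        confidence=max(m.confidence for m in sources),
        reference_count=sum(m.reference_count for m in sources),
    )
    for m in sources:
        m.status = "superseded"  # audit trail preserved, nothing deleted
    return merged
```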
|||
| 9f262a21b0 |
feat: extractor llm-0.6.0 — bolder unknown-project tagging
User observation: APM work was captured + extracted, but candidates got
tagged project=atocore or left blank instead of project=apm. Reason:
the prompt said 'Unknown project names — still tag them' but was too
terse; sonnet hedged toward registered matches rather than proposing
new slugs.
Fix: explicit guidance in the system prompt for when to propose an
unregistered project name vs when to stick with a registered one.
New instructions:
- When a memory is clearly ABOUT a named tool/product/project/system
not in the known list, use a slugified version as the project tag
('apm' for 'Atomaste Part Manager'). The Living Taxonomy detector
(Phase 6 C.1) scans these and surfaces for one-click registration
once ≥3 memories accumulate with that tag.
- Exception: if a memory merely USES an unknown tool but is about a
registered project ('p04 parts missing materials in APM'), tag with
the registered project and mention the tool in content.
This closes the loop on the Phase 6 detector: extractor now produces
taggable data for it, detector surfaces, user registers with one click.
Version bump: llm-0.5.0 → llm-0.6.0.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| 02055e8db3 |
feat: Phase 6 — Living Taxonomy + Universal Capture
Closes two real-use gaps:
1. "APM tool" gap: work done outside Claude Code (desktop, web, phone,
other machine) was invisible to AtoCore.
2. Project discovery gap: manual JSON-file edits required to promote
an emerging theme to a first-class project.
B — atocore_remember MCP tool (scripts/atocore_mcp.py):
- New MCP tool for universal capture from any MCP-aware client
(Claude Desktop, Code, Cursor, Zed, Windsurf, etc.)
- Accepts content (required) + memory_type/project/confidence/
valid_until/domain_tags (all optional with sensible defaults)
- Creates a candidate memory, goes through the existing 3-tier triage
(no bypass — the quality gate catches noise)
- Detailed tool description guides Claude on when to invoke: "remember
this", "save that for later", "don't lose this fact"
- Total tools exposed by MCP server: 14 → 15
C.1 Emerging-concepts detector (scripts/detect_emerging.py):
- Nightly scan of active + candidate memories for:
* Unregistered project names with ≥3 memory occurrences
* Top 20 domain_tags by frequency (emerging categories)
* Active memories with reference_count ≥ 5 + valid_until set
(reinforced transients — candidates for extension)
- Writes findings to atocore/proposals/* project state entries
- Emits "warning" alert via Phase 4 framework the FIRST time a new
project crosses the 5-memory alert threshold (avoids spam)
- Configurable via env vars: ATOCORE_EMERGING_PROJECT_MIN (default 3),
ATOCORE_EMERGING_ALERT_THRESHOLD (default 5), TOP_TAGS_LIMIT (20)
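A rough sketch of the C.1 detection rule above; the function and variable names are illustrative, only the env var names and default thresholds come from the commit:

```python
import os
from collections import Counter

EMERGING_PROJECT_MIN = int(os.environ.get("ATOCORE_EMERGING_PROJECT_MIN", "3"))
EMERGING_ALERT_THRESHOLD = int(os.environ.get("ATOCORE_EMERGING_ALERT_THRESHOLD", "5"))

def find_emerging_projects(memories: list[dict], registered: set[str]) -> dict[str, int]:
    """Count project tags that are not in the registry; keep those with at least
    EMERGING_PROJECT_MIN occurrences (default 3)."""
    counts = Counter(
        m["project"] for m in memories
        if m.get("project") and m["project"] not in registered
    )
    return {name: n for name, n in counts.items() if n >= EMERGING_PROJECT_MIN}

# A project crossing EMERGING_ALERT_THRESHOLD (default 5) for the first time would
# additionally emit a "warning" alert through the Phase 4 framework.
```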
C.2 Registration surface (src/atocore/api/routes.py + wiki.py):
- POST /admin/projects/register-emerging — one-click register with
sensible defaults (ingest_roots auto-filled with
vault:incoming/projects/<id>/ convention). Clears the proposal
from the dashboard list on success.
- Dashboard /admin/dashboard: new "proposals" section with
unregistered_projects + emerging_categories + reinforced_transients.
- Wiki homepage: "📋 Emerging" section rendering each unregistered
project as a card with count + 2 sample memory previews + inline
"📌 Register as project" button that calls the endpoint via fetch,
reloads the page on success.
C.3 Transient-to-durable extension
(src/atocore/memory/service.py + API + cron):
- New extend_reinforced_valid_until() function — scans active memories
with valid_until in the next 30 days and reference_count ≥ 5.
Extends expiry by 90 days. If reference_count ≥ 10, clears expiry
entirely (makes permanent). Writes audit rows via the Phase 4
memory_audit framework with actor="transient-to-durable".
- POST /admin/memory/extend-reinforced — API wrapper for cron.
- Matches the user's intuition: "something transient becomes important
if you keep coming back to it".
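A hedged sketch of the transient-to-durable rule above, operating on plain dicts; the real extend_reinforced_valid_until() works against the memories table and writes memory_audit rows:

```python
from datetime import date, timedelta

def extend_reinforced_valid_until(memories: list[dict]) -> list[dict]:
    """valid_until within 30 days and reference_count >= 5: push expiry out 90 days.
    reference_count >= 10: clear expiry entirely (memory becomes permanent)."""
    today = date.today()
    horizon = today + timedelta(days=30)
    changed = []
    for m in memories:
        if m.get("status") != "active" or not m.get("valid_until"):
            continue  # only active, time-bounded memories qualify
        expiry = date.fromisoformat(m["valid_until"][:10])
        if expiry > horizon or m.get("reference_count", 0) < 5:
            continue
        if m["reference_count"] >= 10:
            m["valid_until"] = None  # permanent
        else:
            m["valid_until"] = (expiry + timedelta(days=90)).isoformat()
        changed.append(m)  # real code also writes an audit row (actor="transient-to-durable")
    return changed
```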
Nightly cron (deploy/dalidou/batch-extract.sh):
- Step F2: detect_emerging.py (after F pipeline summary)
- Step F3: /admin/memory/extend-reinforced (before integrity check)
- Both fail-open; errors don't break the pipeline.
Tests: 366 → 374 (+8 for Phase 6):
- 6 tests for extend_reinforced_valid_until covering:
extension path, permanent path, skip far-future, skip low-refs,
skip permanent memories, audit row write
- 2 smoke tests for the detector (imports cleanly, handles empty DB)
- MCP tool changes don't need new tests — the wrapper is pure passthrough
Design decisions documented in plan file:
- atocore_remember deliberately doesn't bypass triage (quality gate)
- Detector is passive (surfaces proposals) not active (auto-registers)
- Sensible ingest-root defaults ("vault:incoming/projects/<id>/")
so registration is one-click with no file-path thinking
- Extension adds 90 days rather than clearing expiry (gradual
permanence earned through sustained reinforcement)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| 07664bd743 |
feat: Phase 5A — Engineering V1 foundation
First slice of the Engineering V1 sprint. Lays the schema + lifecycle
plumbing so the 10 canonical queries, memory graduation, and conflict
detection can land cleanly on top.
Schema (src/atocore/models/database.py):
- conflicts + conflict_members tables per conflict-model.md (with 5
indexes on status/project/slot/members)
- memory_audit.entity_kind discriminator — same audit table serves
both memories ("memory") and entities ("entity"); unified history
without duplicating infrastructure
- memories.graduated_to_entity_id forward pointer for graduated
memories (M → E transition preserves the memory as historical
pointer)
Memory (src/atocore/memory/service.py):
- MEMORY_STATUSES gains "graduated" — memory-entity graduation flow
ready to wire in Phase 5F
Engineering service (src/atocore/engineering/service.py):
- RELATIONSHIP_TYPES organized into 4 families per ontology-v1.md:
+ Structural: contains, part_of, interfaces_with
+ Intent: satisfies, constrained_by, affected_by_decision,
based_on_assumption (new), supersedes
+ Validation: analyzed_by, validated_by, supports (new),
conflicts_with (new), depends_on
+ Provenance: described_by, updated_by_session (new),
evidenced_by (new), summarized_in (new)
- create_entity + create_relationship now call resolve_project_name()
on write (canonicalization contract per doc)
- Both accept actor= parameter for audit provenance
- _audit_entity() helper uses shared memory_audit table with
entity_kind="entity" — one observability layer for everything
- promote_entity / reject_entity_candidate / supersede_entity —
mirror the memory lifecycle exactly (same pattern, same naming)
- get_entity_audit() reads from the shared table filtered by
entity_kind
API (src/atocore/api/routes.py):
- POST /entities/{id}/promote (candidate → active)
- POST /entities/{id}/reject (candidate → invalid)
- GET /entities/{id}/audit (full history for one entity)
- POST /entities passes actor="api-http" through
Tests: 317 → 326 (9 new):
- test_entity_project_canonicalization (p04 → p04-gigabit)
- test_promote_entity_candidate_to_active
- test_reject_entity_candidate
- test_promote_active_entity_noop (only candidates promote)
- test_entity_audit_log_captures_lifecycle (before/after snapshots)
- test_new_relationship_types_available (6 new types present)
- test_conflicts_tables_exist
- test_memory_audit_has_entity_kind
- test_graduated_status_accepted
What's next (5B-5I, deferred): entity triage UI tab, core structure
queries, the 3 killer queries, memory graduation script, conflict
detection, MCP + context pack integration. See plan file.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| 88f2f7c4e1 |
feat: Phase 4 V1 — Robustness Hardening
Adds the observability + safety layer that turns AtoCore from
"works until something silently breaks" into "every mutation is
traceable, drift is detected, failures raise alerts."
1. Audit log (memory_audit table):
- New table with id, memory_id, action, actor, before/after JSON,
note, timestamp; 3 indexes for memory_id/timestamp/action
- _audit_memory() helper called from every mutation:
create_memory, update_memory, promote_memory,
reject_candidate_memory, invalidate_memory, supersede_memory,
reinforce_memory, auto_promote_reinforced, expire_stale_candidates
- Action verb auto-selected: promoted/rejected/invalidated/
superseded/updated based on state transition
- "actor" threaded through: api-http, human-triage, phase10-auto-
promote, candidate-expiry, reinforcement, etc.
- Fail-open: audit write failure logs but never breaks the mutation
- GET /memory/{id}/audit: full history for one memory
- GET /admin/audit/recent: last 50 mutations across the system
2. Alerts framework (src/atocore/observability/alerts.py):
- emit_alert(severity, title, message, context) fans out to:
- structlog logger (always)
- ~/atocore-logs/alerts.log append (configurable via
ATOCORE_ALERT_LOG)
- project_state atocore/alert/last_{severity} (dashboard surface)
- ATOCORE_ALERT_WEBHOOK POST if set (auto-detects Discord webhook
format for nice embeds; generic JSON otherwise)
- Every sink fail-open — one failure doesn't prevent the others
- Pipeline alert step in nightly cron: harness < 85% → warning;
candidate queue > 200 → warning
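A minimal sketch of the fail-open fan-out pattern described above, assuming placeholder sink helpers; the real module uses structlog and the sinks listed in the commit:

```python
import json
import logging
import os
from datetime import datetime, timezone

log = logging.getLogger("atocore.alerts")
SEVERITIES = ("info", "warning", "critical")

def emit_alert(severity: str, title: str, message: str, context: dict | None = None) -> None:
    """Fan an alert out to every configured sink; one sink failing never blocks the others."""
    if severity not in SEVERITIES:
        severity = "info"  # invalid severity falls back to info
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "severity": severity,
        "title": title,
        "message": message,
        "context": context or {},
    }
    log.warning("alert %s: %s", severity, title)  # logger sink, always on
    for sink in (_append_log_file, _write_project_state, _post_webhook):  # placeholder sinks
        try:
            sink(record)
        except Exception:
            log.exception("alert sink failed")  # fail-open: keep going

def _append_log_file(record: dict) -> None:
    path = os.environ.get("ATOCORE_ALERT_LOG",
                          os.path.expanduser("~/atocore-logs/alerts.log"))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

def _write_project_state(record: dict) -> None: ...  # persist atocore/alert/last_<severity>
def _post_webhook(record: dict) -> None: ...          # POST to ATOCORE_ALERT_WEBHOOK when set
```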
3. Integrity checks (scripts/integrity_check.py):
- Nightly scan for drift:
- Memories → missing source_chunk_id references
- Duplicate active memories (same type+content+project)
- project_state → missing projects
- Orphaned source_chunks (no parent document)
- Results persisted to atocore/status/integrity_check_result
- Any finding emits a warning alert
- Added as Step G in deploy/dalidou/batch-extract.sh nightly cron
4. Dashboard surfaces it all:
- integrity (findings + details)
- alerts (last info/warning/critical per severity)
- recent_audit (last 10 mutations with actor + action + preview)
Tests: 308 → 317 (9 new):
- test_audit_create_logs_entry
- test_audit_promote_logs_entry
- test_audit_reject_logs_entry
- test_audit_update_captures_before_after
- test_audit_reinforce_logs_entry
- test_recent_audit_returns_cross_memory_entries
- test_emit_alert_writes_log_file
- test_emit_alert_invalid_severity_falls_back_to_info
- test_emit_alert_fails_open_on_log_write_error
Deferred: formal migration framework with rollback (current additive
pattern is fine for V1); memory detail wiki page with audit view
(quick follow-up).
To enable Discord alerts: set ATOCORE_ALERT_WEBHOOK to a Discord
webhook URL in Dalidou's environment. Default = log-only.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| bfa7dba4de |
feat: Phase 3 V1 — Auto-Organization (domain_tags + valid_until)
Adds structural metadata that the LLM triage was already implicitly
reasoning about ("stale snapshot" → reject). Phase 3 captures that
reasoning as fields so it can DRIVE retrieval, not just rejection.
Schema (src/atocore/models/database.py):
- domain_tags TEXT DEFAULT '[]' JSON array of lowercase topic keywords
- valid_until DATETIME ISO date; null = permanent
- idx_memories_valid_until index for efficient expiry queries
Memory service (src/atocore/memory/service.py):
- Memory dataclass gains domain_tags + valid_until
- create_memory, update_memory accept/persist both
- _row_to_memory safely reads both (JSON-decode + null handling)
- _normalize_tags helper: lowercase, dedup, strip, cap at 10
- get_memories_for_context filters expired (valid_until < today UTC)
- _rank_memories_for_query adds tag-boost: memories whose domain_tags
appear as substrings in query text rank higher (tertiary key after
content-overlap density + absolute overlap, before confidence)
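Small sketches of the tag normalization and expiry filtering above; the names mirror the commit message, the bodies are assumptions:

```python
from datetime import date, datetime, timezone

def _normalize_tags(tags: list[str] | None, cap: int = 10) -> list[str]:
    """Lowercase, strip, dedupe (order-preserving), cap at 10 tags."""
    seen: list[str] = []
    for tag in tags or []:
        cleaned = tag.strip().lower()
        if cleaned and cleaned not in seen:
            seen.append(cleaned)
    return seen[:cap]

def _is_expired(valid_until: str | None) -> bool:
    """Memories with valid_until before today (UTC) are excluded from context packs."""
    if not valid_until:
        return False  # null means permanent
    return date.fromisoformat(valid_until[:10]) < datetime.now(timezone.utc).date()
```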
LLM extractor (_llm_prompt.py → llm-0.5.0):
- SYSTEM_PROMPT documents domain_tags (2-5 keywords) + valid_until
(time-bounded facts get expiry dates; durable facts stay null)
- normalize_candidate_item parses both fields from model output with
graceful fallback for string/null/missing
LLM triage (scripts/auto_triage.py):
- TRIAGE_SYSTEM_PROMPT documents same two fields
- parse_verdict extracts them from verdict JSON
- On promote: PUT /memory/{id} with tags + valid_until BEFORE
POST /memory/{id}/promote, so active memories carry them
API (src/atocore/api/routes.py):
- MemoryCreateRequest: adds domain_tags, valid_until
- MemoryUpdateRequest: adds domain_tags, valid_until, memory_type
- GET /memory response exposes domain_tags + valid_until + created_at
Triage UI (src/atocore/engineering/triage_ui.py):
- Renders existing tags as colored badges
- Adds inline text field for tags (comma-separated) + date picker for
valid_until on every candidate card
- Save&Promote button persists edits via PUT then promotes
- Plain Promote (and Y shortcut) also saves tags/expiry if edited
Wiki (src/atocore/engineering/wiki.py):
- Search now matches memory content OR domain_tags
- Search results render tags as clickable badges linking to
/wiki/search?q=<tag> for cross-project navigation
- valid_until shown as amber "valid until YYYY-MM-DD" hint
Tests: 303 → 308 (5 new for Phase 3 behavior):
- test_create_memory_with_tags_and_valid_until
- test_create_memory_normalizes_tags
- test_update_memory_sets_tags_and_valid_until
- test_get_memories_for_context_excludes_expired
- test_context_builder_tag_boost_orders_results
Deferred (explicitly): temporal_scope enum, source_refs memory graph,
HDBSCAN clustering, memory detail wiki page, backfill of existing
actives. See docs/MASTER-BRAIN-PLAN.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| 775960c8c8 |
feat: "Make It Actually Useful" sprint — observability + Phase 10
Pipeline observability:
- Retrieval harness runs nightly (Step E in batch-extract.sh)
- Pipeline summary persisted to project state after each run (pipeline_last_run, pipeline_summary, retrieval_harness_result)
- Dashboard enhanced: interaction total + by_client, pipeline health (last_run, hours_since, harness results, triage stats), dynamic project list from registry
Phase 10 — reinforcement-based auto-promotion:
- auto_promote_reinforced(): candidates with reference_count >= 3 and confidence >= 0.7 auto-graduate to active
- expire_stale_candidates(): candidates unreinforced for 14+ days auto-rejected to prevent unbounded queue growth
- Both wired into nightly cron (Step B2)
- Batch script: scripts/auto_promote_reinforced.py (--dry-run support)
Knowledge seeding:
- scripts/seed_project_state.py: 26 curated Trusted Project State entries across p04-gigabit, p05-interferometer, p06-polisher, atomizer-v2, abb-space, atocore (decisions, requirements, facts, contacts, milestones)
Tests: 299 → 303 (4 new Phase 10 tests)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
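A rough sketch of the two Phase 10 policies above; the thresholds come from the commit, the idle-time field interpretation is an assumption:

```python
from datetime import datetime

def auto_promote_reinforced(candidates: list[dict]) -> list[dict]:
    """Candidates with reference_count >= 3 and confidence >= 0.7 graduate to active."""
    promoted = []
    for c in candidates:
        if c["status"] == "candidate" and c["reference_count"] >= 3 and c["confidence"] >= 0.7:
            c["status"] = "active"
            promoted.append(c)
    return promoted

def expire_stale_candidates(candidates: list[dict], max_idle_days: int = 14) -> list[dict]:
    """Candidates unreinforced for 14+ days are auto-rejected to bound the queue.
    Idle time is taken from last_referenced_at, falling back to created_at (an assumption)."""
    now = datetime.utcnow()
    rejected = []
    for c in candidates:
        if c["status"] != "candidate":
            continue
        last = c.get("last_referenced_at") or c["created_at"]  # naive UTC ISO strings assumed
        if (now - datetime.fromisoformat(last)).days >= max_idle_days:
            c["status"] = "invalid"
            rejected.append(c)
    return rejected
```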
|||
| c2e7064238 |
fix(extraction): R11 container 503 + R12 shared prompt module
R11: POST /admin/extract-batch with mode=llm now returns 503 when the claude CLI is unavailable (was silently returning success with 0 candidates), with a message pointing at the host-side script. +2 tests.
R12: extracted SYSTEM_PROMPT + parse_llm_json_array + normalize_candidate_item + build_user_message into stdlib-only src/atocore/memory/_llm_prompt.py. Both the container extractor and scripts/batch_llm_extract_live.py now import from it, eliminating the prompt/parser drift risk.
Tests 297 -> 299.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| 3f23ca1bc6 |
feat: signal-aggressive extraction + auto vault refresh in nightly cron
Extraction prompt rewritten for signal-aggressive mode. The old prompt
rewarded silence ("durable insight only, empty is correct") which
caused quiet failures — real project signal (Schott quotes arriving,
stakeholder events, blockers) was dropped as "not architectural enough".
New prompt explicitly lists what to emit:
1. Project activity (mentions with context — quote received, blocker,
action item)
2. Decisions and choices (architectural commitments, vendor selection)
3. Durable engineering insight (earned knowledge, generalizable)
4. Stakeholder and vendor events (emails sent, meetings scheduled)
5. Preferences and adaptations (how Antoine works)
Philosophy shift: "capture more signal, let triage filter noise"
replaces "extract only durable architectural facts". Auto-triage
already rejects noise well, so moving the filter downstream gives us
visibility into weak signals without polluting active memory.
Added 'episodic' to the candidate types list to support stakeholder
events that carry a timestamp.
LLM_EXTRACTOR_VERSION bumped to llm-0.4.0.
Also: cron-backup.sh now runs POST /ingest/sources before extraction
so new PKM files flow in automatically. Fail-open, non-blocking.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| c57617f611 |
feat: auto-project-detection + project stages
Three changes:
1. ABB-Space registered as a lead project with stage=lead in Trusted Project State. Projects now have lifecycle awareness (lead/proposition vs active contract vs completed).
2. Extraction no longer drops unregistered project tags. When the LLM extractor sees a conversation about a project not in the registry, it keeps the model's tag on the candidate instead of falling back to empty. This enables auto-detection of new projects/leads from organic conversations. The nightly pipeline surfaces these candidates for triage, where the operator sees "hey, there's a new project called X" and can decide whether to register it.
3. Extraction prompt updated to tell the model: "If the conversation discusses a project NOT in the known list, still tag it — the system will auto-detect it." This removes the artificial ceiling that prevented new project discovery.
Updated Case D test: unregistered + unscoped now keeps the model's tag instead of dropping to empty.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| 9118f824fa |
feat: dual-layer knowledge extraction + domain knowledge band
The extraction system now produces two kinds of candidates from
the same conversation:
A. PROJECT-SPECIFIC: applied facts scoped to a named project
(unchanged behavior)
B. DOMAIN KNOWLEDGE: generalizable engineering insight earned
through project work, tagged with a domain (physics, materials,
optics, mechanics, manufacturing, metrology, controls, software,
math, finance) and stored with project="" so it surfaces across
all projects.
Critical quality bar enforced in the system prompt: "Would a
competent engineer need experience to know this, or could they
find it in 30 seconds on Google?" Textbook values, definitions,
and obvious facts are explicitly excluded. Only hard-won insight
qualifies — the kind that takes weeks of FEA or real machining
experience to discover.
Domain tags are embedded in the content as a prefix ("[physics]",
"[materials]") so they survive without a schema migration. A future
column can parse them out.
Context builder gains a new tier between project memories and
retrieved chunks:
Tier 1: Trusted Project State (project-specific)
Tier 2: Identity / Preferences (global)
Tier 3: Project Memories (project-specific)
Tier 4: Domain Knowledge (NEW) (cross-project, 10% budget)
Tier 5: Retrieved Chunks (project-boosted)
Trim order: chunks -> domain knowledge -> project memories ->
identity/preference -> project state.
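A compact sketch of the trim order above (lowest-trust tier dropped first when the pack is over budget); whether tiers are dropped whole or truncated in the real builder is an assumption here:

```python
TRIM_ORDER = [
    "retrieved_chunks",       # Tier 5: dropped first when over budget
    "domain_knowledge",       # Tier 4
    "project_memories",       # Tier 3
    "identity_preferences",   # Tier 2
    "project_state",          # Tier 1: dropped last
]

def trim_context_to_budget(tiers: dict[str, str], budget_chars: int) -> dict[str, str]:
    """Drop tiers in TRIM_ORDER until the combined pack fits the character budget."""
    pack = dict(tiers)
    for tier in TRIM_ORDER:
        if sum(len(text) for text in pack.values()) <= budget_chars:
            break
        pack.pop(tier, None)
    return pack
```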
Host-side extraction script updated with the same prompt and
domain-tag handling.
LLM_EXTRACTOR_VERSION bumped to llm-0.3.0.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| e5e9a9931e |
fix(R9): trust hierarchy for project attribution
Batch 3, Days 1-3. The core R9 failure was Case F: when the model returned a registered project DIFFERENT from the interaction's known scope, the old code trusted the model because the project was registered. A p06-polisher interaction could silently produce a p04-gigabit candidate.
New rule (trust hierarchy):
1. Interaction scope always wins when set (cases A, C, E, F)
2. Model project used only for unscoped interactions AND only when it resolves to a registered project (cases D, G)
3. Empty string when both are empty or unregistered (case B)
The rule is: interaction.project is the strongest signal because it comes from the capture hook's project detection, which runs before the LLM ever sees the content. The model's project guess is only useful when the capture hook had no project context.
7 case tests (A-G) cover every combination of model/interaction project state. Pre-existing tests updated for the new behavior. Host-side script mirrors the same hierarchy using _known_projects fetched from GET /projects at startup.
Test count: 286 -> 290 (+4 net, 7 new R9 cases, 3 old tests consolidated).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
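A sketch of the trust hierarchy above; the helper name is illustrative, the three rules mirror the commit:

```python
def resolve_candidate_project(model_project: str, interaction_project: str,
                              registered: set[str]) -> str:
    """Project attribution for an extracted candidate (cases A-G)."""
    if interaction_project:
        return interaction_project      # 1. interaction scope always wins when set
    if model_project in registered:
        return model_project            # 2. model project only for unscoped interactions,
                                        #    and only when it is a registered project
    return ""                           # 3. empty when both are empty or unregistered
```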
|||
| 8951c624fe |
fix(R7/R9): overlap-density ranking + project trust-preservation
R7: ranking scorer now uses overlap-density (overlap_count / memory_token_count) as primary key instead of raw overlap count. A 5-token memory with 3 overlapping tokens (density 0.6) now beats a 40-token overview memory with 3 overlapping tokens (density 0.075) at the same absolute count. Secondary: absolute overlap. Tertiary: confidence. Targeting p06-firmware-interface harness fixture.
R9: when the LLM extractor returns a project that differs from the interaction's known project, it now checks the project registry. If the model's project is a registered canonical ID, trust it. If not (hallucinated name), fall back to the interaction's project. Uses load_project_registry() for the check. The host-side script mirrors this via an API call to GET /projects at startup.
Two new tests: test_parser_keeps_registered_model_project and test_parser_rejects_hallucinated_project.
Test count: 280 -> 281.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
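A sketch of the R7 overlap-density ranking; token sets here are plain whitespace splits for illustration, while the real scorer shares the reinforcement tokenizer:

```python
def rank_memories(memories: list[dict], query: str) -> list[dict]:
    """Primary key: overlap density (overlap / memory token count);
    secondary: absolute overlap; tertiary: confidence."""
    query_tokens = set(query.lower().split())

    def score(memory: dict) -> tuple[float, int, float]:
        memory_tokens = set(memory["content"].lower().split())
        overlap = len(query_tokens & memory_tokens)
        density = overlap / len(memory_tokens) if memory_tokens else 0.0
        return (density, overlap, memory.get("confidence", 0.0))

    return sorted(memories, key=score, reverse=True)
```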
|||
| ac7f77d86d |
fix: remove --no-session-persistence (unsupported on claude 2.0.60)
Dalidou runs Claude Code 2.0.60 which does not have this flag (added in 2.1.x). Removed from both extractor_llm.py and the host-side batch script. --append-system-prompt and --disable-slash-commands are supported on 2.0.60. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| 39d73e91b4 |
fix(R6): fall back to interaction.project when LLM returns empty
Codex R6: the LLM extractor accepted the model's project field verbatim. When the model returned an empty string, memories that were clearly about p06 got promoted as project='', making them invisible to the p06 project-memory band and explaining the p06-offline-design harness failure.
Fix: if the model returns an empty project but interaction.project is set, inherit the interaction's project. A model-supplied project still takes precedence when non-empty. Two new tests lock the fallback and precedence behaviors.
R5 acknowledged (LLM extractor not yet wired into API — next task). Test count: 278 -> 280. Harness re-run pending after deploy.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| b0fde3ee60 |
config: default LLM extractor model haiku -> sonnet
Haiku was producing noisy candidates (31% accept rate on first triage). Sonnet should give tighter extraction with fewer false positives while still catching the same durable-fact patterns. Override: ATOCORE_LLM_EXTRACTOR_MODEL=haiku to revert. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| 5c69f77b45 |
fix: cap per-entry memory length at 250 chars in context band
A 530-char program overview memory with confidence 0.96 was filling the entire 25% project-memory budget at equal overlap score (3 tokens), beating shorter query-relevant newly-promoted memories (confidence 0.5) on the confidence tiebreaker. The long memory legitimately scored well, but its length starved every other memory from the band.
Fix: truncate each formatted entry to 250 chars with '...' so at least 2-3 memories fit the ~700-char available budget. This doesn't change ranking — the most relevant memory still goes first — but it ensures the runner-up can also appear.
Harness fixture delta: Day 7 regression pass pending after deploy.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| 93f796207f |
docs: Day 5 — extractor scope + stale follow-ups cleaned
Documents the LLM-assisted extractor's in-scope / out-of-scope categories derived from the first live triage pass (16 promoted, 35 rejected). Five in-scope classes, six explicit out-of-scope classes, trust model summary, multi-model future direction. Cleaned up stale follow-up items in next-steps.md: rule expansion marked deprioritized, LLM extractor marked done, retrieval harness marked done with expansion pending. Fixed docstring timeout (45s -> 90s) in extractor_llm.py. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| a29b5e22f2 |
feat(eval-loop): Day 4 — LLM extractor via claude -p (OAuth, no API key)
Second pass on the LLM-assisted extractor after Antoine's explicit
rule: no API key, ever. Refactored src/atocore/memory/extractor_llm.py
to shell out to the Claude Code 'claude -p' CLI via subprocess instead
of the anthropic SDK, so extraction reuses the user's existing Claude.ai
OAuth credentials and needs zero secret management.
Implementation:
- subprocess.run(["claude", "-p", "--model", "haiku",
"--append-system-prompt", <instructions>,
"--no-session-persistence", "--disable-slash-commands",
user_message], ...)
- cwd is a cached tempfile.mkdtemp() so every invocation starts with
a clean context instead of auto-discovering CLAUDE.md / AGENTS.md /
DEV-LEDGER.md from the repo root. We cannot use --bare because it
forces API-key auth, which defeats the purpose; the temp-cwd trick
is the lightest way to keep OAuth auth while skipping project
context loading.
- Silent-failure contract unchanged: missing CLI, non-zero exit,
timeout, malformed JSON — all return [] and log an error. The
capture audit trail must not break on an optional side effect.
- Default timeout bumped from 20s to 90s: Haiku + Node.js startup
+ OAuth check is ~20-40s per call in practice, plus real responses
up to 8KB take longer. 45s hit 2 timeouts on the first live run.
- tests/test_extractor_llm.py refactored: the API-key / anthropic SDK
tests are replaced by subprocess-mocking tests covering missing
CLI, timeout, non-zero exit, and a happy-path stdout parse. 14
tests, all green.
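A condensed sketch of the subprocess call and temp-cwd trick described above. The flags mirror this commit's message (--no-session-persistence was later removed because claude 2.0.60 doesn't support it), and the error handling follows the silent-failure contract:

```python
import json
import subprocess
import tempfile

_SCRATCH_DIR = tempfile.mkdtemp(prefix="atocore-extract-")  # clean cwd: no CLAUDE.md auto-discovery

def extract_via_claude_cli(system_prompt: str, user_message: str,
                           model: str = "haiku", timeout_s: int = 90) -> list[dict]:
    """Shell out to `claude -p` so extraction reuses the existing OAuth login (no API key)."""
    cmd = [
        "claude", "-p", "--model", model,
        "--append-system-prompt", system_prompt,
        "--no-session-persistence",   # removed in a later commit (unsupported on claude 2.0.60)
        "--disable-slash-commands",
        user_message,
    ]
    try:
        result = subprocess.run(cmd, cwd=_SCRATCH_DIR, capture_output=True,
                                text=True, timeout=timeout_s)
        if result.returncode != 0:
            return []                      # non-zero exit: silent failure, caller unaffected
        return json.loads(result.stdout)   # the real parser also tolerates fences and prose
    except (FileNotFoundError, subprocess.TimeoutExpired, json.JSONDecodeError):
        return []                          # missing CLI, timeout, malformed JSON: all return []
```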
scripts/extractor_eval.py:
- New --output <path> flag writes the JSON result directly to a file,
bypassing stdout/log interleaving (structlog sends INFO to stdout
via PrintLoggerFactory, so a naive '> out.json' pollutes the file).
- Forces UTF-8 on stdout so real LLM output with em-dashes / arrows /
CJK doesn't crash the human report on Windows cp1252 consoles.
First live baseline run against the 20-interaction labeled corpus
(scripts/eval_data/extractor_llm_baseline_2026-04-11.json):
mode=llm labeled=20 recall=1.0 precision=0.357 yield_rate=2.55
total_actual_candidates=51 total_expected_candidates=7
false_negative_interactions=0 false_positive_interactions=9
Recall 0% -> 100% vs rule baseline — every human-labeled positive is
caught. Precision reads low (0.357) but inspection shows the "false
positives" are real candidates the human labels under-counted. For
example interaction a6b0d279 was labeled at 2 expected candidates,
the model caught all 6 polisher architectural facts; interaction
52c8c0f3 was labeled at 1, the model caught all 5 infra commitments.
The labels are the bottleneck, not the model.
Day 4 gate against Codex's criteria:
- candidate yield: 255% vs ≥15-25% target
- FP rate tolerable for manual triage: 51 candidates reviewable in
~10 minutes via the triage CLI
- ≥2 real non-synthetic candidates worth review: 20+ obvious wins
(polisher architecture set, p05 infra set, DEV-LEDGER protocol set)
Gate cleared. LLM-assisted extraction is the path forward for
conversational captures. Rule-based extractor stays as-is for
structured-cue inputs and remains the default mode. The next step
(Day 5 stabilize / document) will wire LLM mode behind a flag in
the public extraction endpoint and document scope.
Test count: 276 -> 278 passing. No existing tests changed.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| b309e7fd49 |
feat(eval-loop): Day 4 — LLM-assisted extractor path (additive, flagged)
Day 2 baseline showed 0% recall for the rule-based extractor across
5 distinct miss classes. Day 4 decision gate: prototype an
LLM-assisted mode behind a flag. Option A ratified by Antoine.
New module src/atocore/memory/extractor_llm.py:
- extract_candidates_llm(interaction) returns the same MemoryCandidate
dataclass the rule extractor produces, so both paths flow through
the existing triage / candidate pipeline unchanged.
- extract_candidates_llm_verbose() also returns the raw model output
and any error string, for eval and debugging.
- Uses Claude Haiku 4.5 by default; model overridable via
ATOCORE_LLM_EXTRACTOR_MODEL env. Timeout via
ATOCORE_LLM_EXTRACTOR_TIMEOUT_S (default 20s).
- Silent-failure contract: missing API key, unreachable model,
malformed JSON — all return [] and log an error. Never raises
into the caller. The capture audit trail must not break on an
optional side effect.
- Parser tolerates markdown fences, surrounding prose, invalid
memory types, clamps confidence to [0,1], drops empty content.
- System prompt explicitly tells the model to return [] for most
conversational turns (durable-fact bar, not "extract everything").
- Trust rules unchanged: candidates are never auto-promoted,
extraction stays off the capture hot path, human triages via the
existing CLI.
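A simplified sketch of the tolerant parsing described above (markdown fences, surrounding prose, invalid types, confidence clamping); the helper name and regex approach are assumptions:

```python
import json
import re

VALID_TYPES = {"identity", "preference", "project", "episodic", "knowledge", "adaptation"}

def parse_candidates(raw: str) -> list[dict]:
    """Pull the first JSON array out of model output, tolerating fences and prose;
    drop invalid entries and clamp confidence to [0, 1]."""
    match = re.search(r"\[.*\]", raw, re.DOTALL)  # ignore prose / ``` fences around the array
    if not match:
        return []
    try:
        items = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    cleaned = []
    for item in items if isinstance(items, list) else []:
        content = (item.get("content") or "").strip()
        if not content or item.get("memory_type") not in VALID_TYPES:
            continue  # drop empty content and invalid memory types
        try:
            conf_val = float(item.get("confidence", 0.5))
        except (TypeError, ValueError):
            conf_val = 0.5
        item["confidence"] = min(1.0, max(0.0, conf_val))
        cleaned.append(item)
    return cleaned
```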
scripts/extractor_eval.py: new --mode {rule,llm} flag so the same
labeled corpus can be scored against both extractors. Default
remains rule so existing invocations are unchanged.
tests/test_extractor_llm.py: 12 new unit tests covering the parser
(empty array, malformed JSON, markdown fences, surrounding prose,
invalid types, empty content, confidence clamping, version tagging),
plus contract tests for missing API key, empty response, and a
mocked api_error path so failure modes never raise.
Test count: 264 -> 276 passing. No existing tests changed.
Next step: run `python scripts/extractor_eval.py --mode llm` against
the labeled set with ANTHROPIC_API_KEY in env, record the delta,
decide whether to wire LLM mode into the API endpoint and CLI or
keep it script-only for now.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| 38f6e525af |
fix: tokenizer splits hyphenated identifiers
Hyphen- and slash-separated identifiers (polisher-control, twyman-green, etc.) were single tokens in the reinforcement / memory-ranking tokenizer, so queries had to match the exact hyphenation to score. The harness caught this on p06-control-rule: 'polisher control design rule' scored 2 overlap on each of the three polisher-*/design-rule memories and the tiebreaker picked the wrong one.
Now hyphenated words contribute both the full form AND each sub-token. Extracted _add_token helper to avoid duplicating the stop-word / length gate at both insertion points.
Reinforcement matcher tests still pass (28) — the new sub-tokens only widen the match set, they never narrow it, so memories that previously reinforced continue to reinforce.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
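A sketch of the hyphen/slash sub-token rule and the shared _add_token gate described above; the stop list and minimum length are placeholders:

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "is"}  # placeholder stop list
MIN_TOKEN_LEN = 3

def _add_token(tokens: set[str], word: str) -> None:
    """Single gate for stop words and minimum length, shared by both insertion points."""
    if len(word) >= MIN_TOKEN_LEN and word not in STOP_WORDS:
        tokens.add(word)

def tokenize(text: str) -> set[str]:
    """Hyphen/slash identifiers contribute the full form AND each sub-token,
    so 'polisher control' can match 'polisher-control'."""
    tokens: set[str] = set()
    for word in re.findall(r"[a-z0-9][a-z0-9/_-]*", text.lower()):
        _add_token(tokens, word)
        if "-" in word or "/" in word:
            for part in re.split(r"[-/]", word):
                _add_token(tokens, part)
    return tokens
```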
|||
| 37331d53ef |
fix: rank memories globally before budget walk
Per-type ranking was still starving later types: when a p05 query
matched a 'knowledge' memory best but 'project' came first in the
type order, the project-type candidates filled the budget before
the knowledge-type pool was even ranked.
Collect all candidates into a single pool, dedupe by id, then
rank the whole pool once against the query before walking the
flat budget. Python's stable sort preserves insertion order (which
still reflects the caller's memory_types order) as a natural
tiebreaker when scores are equal.
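A sketch of the single-pool ranking and flat budget walk described above; the scorer is a stub standing in for the real overlap ranking:

```python
def select_memories(pools: dict[str, list[dict]], memory_types: list[str],
                    query: str, budget_chars: int) -> list[dict]:
    """Collect candidates across types, dedupe by id, rank the whole pool once,
    then walk a single flat character budget."""
    pool, seen = [], set()
    for mtype in memory_types:                  # insertion order reflects the caller's type order
        for m in pools.get(mtype, []):
            if m["id"] not in seen:
                seen.add(m["id"])
                pool.append(m)
    pool.sort(key=lambda m: overlap_score(m, query), reverse=True)  # stable sort keeps
                                                                    # insertion order on ties
    selected, used = [], 0
    for m in pool:
        if used + len(m["content"]) > budget_chars:
            continue                            # entry doesn't fit; try the next one
        selected.append(m)
        used += len(m["content"])
    return selected

def overlap_score(memory: dict, query: str) -> float:
    """Stub scorer: raw token overlap with the query (stand-in for the real ranking key)."""
    query_tokens = set(query.lower().split())
    memory_tokens = set(memory["content"].lower().split())
    return len(query_tokens & memory_tokens)
```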
Regression surfaced by the retrieval eval harness:
p05-vendor-signal still missing 'Zygo' after
|
|||
| 5aeeb1cad1 |
feat: query-relevance ordering for memory selection
get_memories_for_context now accepts an optional query string.
When provided, candidate memories are reranked by lexical overlap
with the query (stemmed token intersection, ties broken by
confidence) before the budget walk. Without a query the order is
unchanged — effectively "by confidence desc" as before — so
non-builder callers see no behaviour change.
The fetch limit is raised from 10 to 30 so there's a real pool to
rerank. Token overlap reuses _normalize/_tokenize from
reinforcement.py so ranking and reinforcement matching share the
same notion of distinctive terms.
build_context passes the user_prompt through to both the identity/
preference and project-memory calls. The retrieval harness
regression the fix is targeting:
- p05-vendor-signal FAIL @
|
|||
| 5913da53c5 |
fix: flat-budget walk in get_memories_for_context
The per-type slicing (available // len(memory_types)) starved paragraph-length memories: with 3 types and a 450-char budget, each type got ~131 chars while real project memories are 300-500 chars each — every entry was skipped and the new Project Memories band never appeared in the live pack.
Switch to a flat budget pool walked type-by-type in order. Short identity/preference memories still get first pick when the budget is tight, but long project memories can now compete for space.
Caught on the first post-deploy probe: 2 active p04 memories existed but none landed in formatted_context.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| 8ea53f4003 |
feat: fold project-scoped memories into context pack
The retrieval-quality review on 2026-04-11 found that active project/knowledge/episodic memories never reached the pack: only Trusted Project State and identity/preference memories were being assembled. Reinforcement bumped confidence on memories that had no retrieval outlet, so the reflection loop was half-open.
This change adds a third memory tier between identity/preference and retrieved chunks:
- PROJECT_MEMORY_BUDGET_RATIO = 0.15
- Memory types: project, knowledge, episodic
- Only populated when a canonical project is in scope — without a project hint, project memories stay out (cross-project bleed would rot the signal)
- Rendered under a dedicated "--- Project Memories ---" header so the LLM can distinguish it from the identity/preference band
- Trim order in _trim_context_to_budget: retrieval → project memories → identity/preference → project state (most recently added tier drops first when budget is tight)
get_memories_for_context gains header/footer kwargs so the two memory blocks can be distinguished in a single pack without a second helper.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |
|||
| 9366ba7879 |
feat: length-aware reinforcement + batch triage CLI + off-host backup
- Reinforcement matcher now handles paragraph-length memories via a
dual-mode threshold: short memories keep the 70% overlap rule,
long memories (>15 stems) require 12 absolute overlaps AND 35%
fraction so organic paraphrase can still reinforce. Diagnosis:
every active memory stayed at reference_count=0 because 40-token
project summaries never hit 70% overlap on real responses.
- scripts/atocore_client.py gains batch-extract (fan out
/interactions/{id}/extract over recent interactions) and triage
(interactive promote/reject walker for the candidate queue),
matching the Phase 9 reflection-loop review flow without pulling
extraction into the capture hot path.
- deploy/dalidou/cron-backup.sh adds an optional off-host rsync step
gated on ATOCORE_BACKUP_RSYNC, fail-open when the target is offline
so a laptop being off at 03:00 UTC never marks the local backup as failed.
- docs/next-steps.md records the retrieval-quality sweep: project
state surfaces, chunks are on-topic but broad, active memories
never reach the pack (reflection loop has no retrieval outlet yet).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|||
| a34a7a995f |
fix: token-overlap matcher for reinforcement (Phase 9B)
Replace the substring-based _memory_matches() with a token-overlap
matcher that tokenizes both memory content and response, applies
lightweight stemming (trailing s/ed/ing) and stop-word removal, then
checks whether >= 70% of the memory's tokens appear in the response.
This fixes the paraphrase blindness that prevented reinforcement from
ever firing on natural responses ("prefers" vs "prefer", "because
history" vs "because the history").
7 new tests (26 total reinforcement tests, all passing).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
|
|||
| fb6298a9a1 |
fix(P1+P2): canonicalize project names at every trust boundary
Three findings from codex's review of the previous P1+P2 fix. The earlier commit ( |
|||
| b9da5b6d84 |
phase9 first-real-use validation + small hygiene wins
Session 1 of the four-session plan. Empirically exercises the Phase 9
loop (capture -> reinforce -> extract) for the first time and lands
three small hygiene fixes.
Validation script + report
--------------------------
scripts/phase9_first_real_use.py — reproducible script that:
- sets up an isolated SQLite + Chroma store under
data/validation/phase9-first-use (gitignored)
- seeds 3 active memories
- runs 8 sample interactions through capture + reinforce + extract
- prints what each step produced and reinforcement state at the end
- supports --json output for downstream tooling
docs/phase9-first-real-use.md — narrative report of the run with:
- extraction results table (8/8 expectations met exactly)
- the empirical finding that REINFORCEMENT MATCHED ZERO seeds
despite sample 5 clearly echoing the rebase preference memory
- root cause analysis: the substring matcher is too brittle for
natural paraphrases (e.g. "prefers" vs "I prefer", "history"
vs "the history")
- recommended fix: replace substring matcher with a token-overlap
matcher (>=70% of memory tokens present in response, with
light stemming and a small stop list)
- explicit note that the fix is queued as a follow-up commit, not
bundled into the report — keeps the audit trail clean
Key extraction results from the run:
- all 7 heading/sentence rules fired correctly
- 0 false positives on the prose-only sample (the most important
sanity check)
- long content preserved without truncation
- dedup correctly kept three distinct cues from one interaction
- project scoping flowed cleanly through the pipeline
Hygiene 1: FastAPI lifespan migration (src/atocore/main.py)
- Replaced @app.on_event("startup") with the modern @asynccontextmanager
lifespan handler
- Same setup work (setup_logging, ensure_runtime_dirs, init_db,
init_project_state_schema, startup_ready log)
- Removes the two on_event deprecation warnings from every test run
- Test suite now shows 1 warning instead of 3
Hygiene 2: EXTRACTOR_VERSION constant (src/atocore/memory/extractor.py)
- Added EXTRACTOR_VERSION = "0.1.0" with a versioned change log comment
- MemoryCandidate dataclass carries extractor_version on every candidate
- POST /interactions/{id}/extract response now includes extractor_version
on both the top level (current run) and on each candidate
- Implements the versioning requirement called out in
docs/architecture/promotion-rules.md so old candidates can be
identified and re-evaluated when the rule set evolves
Hygiene 3: ~/.git-credentials cleanup (out-of-tree, not committed)
- Removed the dead OAUTH_USER:<jwt> line for dalidou:3000 that was
being silently rewritten by the system credential manager on every
push attempt
- Configured credential.http://dalidou:3000.helper with the empty-string
sentinel pattern so the URL-specific helper chain is exactly
["", store] instead of inheriting the system-level "manager" helper
that ships with Git for Windows
- Same fix for the 100.80.199.40 (Tailscale) entry
- Verified end to end: a fresh push using only the cleaned credentials
file (no embedded URL) authenticates as Antoine and lands cleanly
Full suite: 160 passing (no change from previous), 1 warning
(was 3) thanks to the lifespan migration.
|
|||
| 53147d326c |
feat(phase9-C): rule-based candidate extractor and review queue
Phase 9 Commit C. Closes the capture loop: Commit A records what
AtoCore fed the LLM and what came back, Commit B bumps confidence on
active memories the response actually references, and this commit
turns structured cues in the response into candidate memories for a
human review queue.
Nothing extracted here is ever automatically promoted into trusted
state. Every candidate sits at status="candidate" until a human (or
later, a confident automatic policy) calls /memory/{id}/promote or
/memory/{id}/reject. This keeps the "bad memory is worse than no
memory" invariant from the operating model intact.
New module: src/atocore/memory/extractor.py
- MemoryCandidate dataclass (type, content, rule, source_span,
project, confidence, source_interaction_id)
- extract_candidates_from_interaction(interaction): runs a fixed set
of regex rules over the response + response_summary and returns
a list of candidates
V0 rule set (deliberately narrow to keep false positives low):
- decision_heading ## Decision: / ## Decision - / ## Decision —
-> adaptation candidate
- constraint_heading ## Constraint: ... -> project candidate
- requirement_heading ## Requirement: ... -> project candidate
- fact_heading ## Fact: ... -> knowledge candidate
- preference_sentence "I prefer X" / "the user prefers X"
-> preference candidate
- decided_to_sentence "decided to X" -> adaptation candidate
- requirement_sentence "the requirement is X" -> project candidate
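Two of the V0 rules above as a hedged sketch (one heading rule, one sentence rule); the exact regexes in extractor.py may differ:

```python
import re

# Heading rule: "## Decision: ..." / "## Decision - ..." / "## Decision <em dash> ..."
DECISION_HEADING = re.compile(r"^##\s*Decision\s*[:\-\u2014]\s*(?P<value>.+)$", re.MULTILINE)

# Sentence rule: "I prefer X" / "the user prefers X"
PREFERENCE_SENTENCE = re.compile(r"\b(?:I|the user)\s+prefers?\s+(?P<value>[^.\n]+)", re.IGNORECASE)

def extract_candidates(response: str) -> list[dict]:
    """Run the fixed rule set over the response; each hit becomes a candidate memory."""
    candidates = []
    for match in DECISION_HEADING.finditer(response):
        candidates.append({"memory_type": "adaptation", "rule": "decision_heading",
                           "content": match.group("value").strip()})
    for match in PREFERENCE_SENTENCE.finditer(response):
        candidates.append({"memory_type": "preference", "rule": "preference_sentence",
                           "content": match.group("value").strip()})
    return candidates
```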
Extractor post-processing:
- clean_value: collapse whitespace, strip trailing punctuation
- min content length 8 chars, max 280 (keeps candidates reviewable)
- dedupe by (memory_type, normalized value, rule)
- drop candidates whose content already matches an active memory of
the same type+project so the queue doesn't ask humans to re-curate
things they already promoted
Memory service (extends Commit B candidate-status foundation):
- promote_memory(id): candidate -> active (404 if not a candidate)
- reject_candidate_memory(id): candidate -> invalid
- both are no-ops if the target isn't currently a candidate so the
API can surface 404 without the caller needing to pre-check
API endpoints (new):
- POST /interactions/{id}/extract run extractor, preview-only
body: {"persist": false} (default) returns candidates
{"persist": true} creates candidate memories
- POST /memory/{id}/promote candidate -> active
- POST /memory/{id}/reject candidate -> invalid
- GET /memory?status=candidate list review queue explicitly
(existing endpoint now accepts status= override)
- GET /memory now also returns reference_count and last_referenced_at
per memory so the Commit B reinforcement signal is visible to clients
Trust model unchanged:
- candidates NEVER appear in context packs (get_memories_for_context
still filters to active via the active_only default)
- candidates NEVER get reinforced by the Commit B loop (reinforcement
refuses non-active memories)
- trusted project state is untouched end-to-end
Tests (25 new, all green):
- heading pattern: decision, constraint, requirement, fact
- separator variants :, -, em-dash
- sentence patterns: preference, decided_to, requirement
- rejects too-short matches
- dedupes identical matches
- strips trailing punctuation
- carries project and source_interaction_id onto candidates
- drops candidates that duplicate an existing active memory
- returns empty for prose without structural cues
- candidate and active coexist in the memory table
- promote_memory moves candidate -> active
- promote on non-candidate returns False
- reject_candidate_memory moves candidate -> invalid
- reject on non-candidate returns False
- get_memories(status="candidate") returns just the queue
- POST /interactions/{id}/extract preview-only path
- POST /interactions/{id}/extract persist=true path
- POST /interactions/{id}/extract 404 for missing interaction
- POST /memory/{id}/promote success + 404 on non-candidate
- POST /memory/{id}/reject 404 on missing
- GET /memory?status=candidate surfaces the queue
- GET /memory?status=<invalid> returns 400
Full suite: 160 passing (was 135).
What Phase 9 looks like end to end after this commit
----------------------------------------------------
prompt
-> context pack assembled
-> LLM response
-> POST /interactions (capture)
-> automatic Commit B reinforcement (active memories only)
-> [optional] POST /interactions/{id}/extract
-> Commit C extractor proposes candidates
-> human reviews via GET /memory?status=candidate
-> POST /memory/{id}/promote (candidate -> active)
OR POST /memory/{id}/reject (candidate -> invalid)
Not in this commit (deferred on purpose):
- Decay of unused memories (we keep reference_count and
last_referenced_at so a later decay job has the signal it needs)
- LLM-based extractor as an alternative to the regex rules
- Automatic promotion of high-confidence candidates
- Candidate-to-entity upgrade path (needs the engineering layer
memory-vs-entities decision, planned in a coming architecture doc)
|
|||
| 2704997256 |
feat(phase9-B): reinforce active memories from captured interactions
Phase 9 Commit B from the agreed plan. With Commit A capturing what
AtoCore fed to the LLM and what came back, this commit closes the
weakest part of the loop: when a memory is actually referenced in a
response, its confidence should drift up, and stale memories that
nobody ever mentions should stay where they are.
This is reinforcement only — nothing is promoted into trusted state
and no candidates are created. Extraction is Commit C.
Schema (additive migration):
- memories.last_referenced_at DATETIME (null by default)
- memories.reference_count INTEGER DEFAULT 0
- idx_memories_last_referenced on last_referenced_at
- memories.status now accepts the new "candidate" value so Commit C
has the status slot to land on. Existing active/superseded/invalid
rows are untouched.
New module: src/atocore/memory/reinforcement.py
- reinforce_from_interaction(interaction): scans the interaction's
response + response_summary for echoes of active memories and
bumps confidence / reference_count for each match
- matching is intentionally simple and explainable:
* normalize both sides (lowercase, collapse whitespace)
* require >= 12 chars of memory content to match
* compare the leading 80-char window of each memory
- the candidate pool is project-scoped memories for the interaction's
project + global identity + preference memories, deduplicated
- candidates and invalidated memories are NEVER reinforced; only
active memories move
Memory service changes:
- MEMORY_STATUSES = ["candidate", "active", "superseded", "invalid"]
- create_memory(status="candidate"|"active"|...) with per-status
duplicate scoping so a candidate and an active with identical text
can legitimately coexist during review
- get_memories(status=...) explicit override of the legacy active_only
flag; callers can now list the review queue cleanly
- update_memory accepts any valid status including "candidate"
- reinforce_memory(id, delta): low-level primitive that bumps
confidence (capped at 1.0), increments reference_count, and sets
last_referenced_at. Only active memories; returns (applied, old, new)
- promote_memory / reject_candidate_memory helpers prepping Commit C
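A sketch of the reinforce_memory primitive described above, operating on a dict instead of the SQLite row; the default delta is an assumption:

```python
from datetime import datetime, timezone

def reinforce_memory(memory: dict, delta: float = 0.05) -> tuple[bool, float, float]:
    """Bump confidence (capped at 1.0), increment reference_count, stamp last_referenced_at.
    Only active memories move; returns (applied, old_confidence, new_confidence)."""
    current = memory.get("confidence", 0.0)
    if delta < 0 or memory.get("status") != "active":
        return (False, current, current)
    memory["confidence"] = min(1.0, current + delta)
    memory["reference_count"] = memory.get("reference_count", 0) + 1
    memory["last_referenced_at"] = datetime.now(timezone.utc).isoformat()
    return (True, current, memory["confidence"])
```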
Interactions service:
- record_interaction(reinforce=True) runs reinforce_from_interaction
automatically when the interaction has response content. reinforcement
errors are logged but never raised back to the caller so capture
itself is never blocked by a flaky downstream.
- circular import between interactions service and memory.reinforcement
avoided by lazy import inside the function
API:
- POST /interactions now accepts a reinforce bool field (default true)
- POST /interactions/{id}/reinforce runs reinforcement on an existing
captured interaction — useful for backfilling or for retrying after
a transient error in the automatic pass
- response lists which memory ids were reinforced with
old / new confidence for audit
Tests (17 new, all green):
- reinforce_memory bumps, caps at 1.0, accumulates reference_count
- reinforce_memory rejects candidates and missing ids
- reinforce_memory rejects negative delta
- reinforce_from_interaction matches active memory
- reinforce_from_interaction ignores candidates and inactive
- reinforce_from_interaction requires minimum content length
- reinforce_from_interaction handles empty response cleanly
- reinforce_from_interaction normalizes casing and whitespace
- reinforce_from_interaction deduplicates across memory buckets
- record_interaction auto-reinforces by default
- record_interaction reinforce=False skips the pass
- record_interaction handles empty response
- POST /interactions/{id}/reinforce runs against stored interaction
- POST /interactions/{id}/reinforce returns 404 for missing id
- POST /interactions accepts reinforce=false
Full suite: 135 passing (was 118).
Trust model unchanged:
- reinforcement only moves confidence within the existing active set
- the candidate lifecycle is declared but only Commit C will actually
create candidate memories
- trusted project state is never touched by reinforcement
Next: Commit C adds the rule-based extractor that produces candidate
memories from captured interactions plus the promote/reject review
queue endpoints.
|
|||
| b0889b3925 | Stabilize core correctness and sync project plan state | |||
| b48f0c95ab |
feat: Phase 2 Memory Core — structured memory with context integration
Memory Core implementation: - Memory service with 6 types: identity, preference, project, episodic, knowledge, adaptation - CRUD operations: create (with dedup), get (filtered), update, invalidate, supersede - Confidence scoring (0.0-1.0) and lifecycle management (active/superseded/invalid) - Memory API endpoints: POST/GET/PUT/DELETE /memory Context builder integration (trust precedence per Master Plan): 1. Trusted Project State (highest trust, 20% budget) 2. Identity + Preference memories (10% budget) 3. Retrieved chunks (remaining budget) Also fixed database.py to use dynamic settings reference for test isolation. 45/45 tests passing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |