# ATOCore/src/atocore/memory/service.py
"""Memory Core — structured memory management.
Memory types (per Master Plan):
- identity: who the user is, role, background
- preference: how they like to work, style, tools
- project: project-specific knowledge and context
- episodic: what happened, conversations, events
- knowledge: verified facts, technical knowledge
- adaptation: learned corrections, behavioral adjustments
Memories have:
- confidence (0.0-1.0): how certain we are
- status: lifecycle state, one of MEMORY_STATUSES
* candidate: extracted from an interaction, awaiting human review
(Phase 9 Commit C). Candidates are NEVER included in
context packs.
* active: promoted/curated, visible to retrieval and context
* superseded: replaced by a newer entry
* invalid: rejected / error-corrected
- last_referenced_at / reference_count: reinforcement signal
(Phase 9 Commit B). Bumped whenever a captured interaction's
response content echoes this memory.
- optional link to source chunk: traceability
"""
import uuid
from dataclasses import dataclass
from datetime import datetime, timezone
from atocore.models.database import get_connection
from atocore.observability.logger import get_logger
from atocore.projects.registry import resolve_project_name
log = get_logger("memory")
MEMORY_TYPES = [
    "identity",
    "preference",
    "project",
    "episodic",
    "knowledge",
    "adaptation",
]
MEMORY_STATUSES = [
    "candidate",
    "active",
    "superseded",
    "invalid",
    "graduated",  # Phase 5: memory has become an entity; content frozen, forward pointer in properties
]
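The statuses above imply a lifecycle: a candidate is reviewed into active or invalid, and an active memory can later be superseded, invalidated, or graduated into an entity. A minimal sketch of that transition rule, assuming exactly this state machine; the names `ALLOWED_TRANSITIONS` and `can_transition` are illustrative, not part of the actual service API:

```python
# Hypothetical sketch of the lifecycle implied by MEMORY_STATUSES.
# ALLOWED_TRANSITIONS and can_transition are illustrative names only.
MEMORY_STATUSES = ["candidate", "active", "superseded", "invalid", "graduated"]

ALLOWED_TRANSITIONS = {
    "candidate": {"active", "invalid"},  # promote / reject during review
    "active": {"superseded", "invalid", "graduated"},
    "superseded": set(),   # terminal
    "invalid": set(),      # terminal
    "graduated": set(),    # content frozen as an entity
}


def can_transition(old: str, new: str) -> bool:
    """Return True when a status change is allowed by the lifecycle."""
    if old not in MEMORY_STATUSES or new not in MEMORY_STATUSES:
        return False
    return new in ALLOWED_TRANSITIONS[old]
```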
@dataclass
class Memory:
    id: str
    memory_type: str
    content: str
    project: str
    source_chunk_id: str
    confidence: float
    status: str
    created_at: str
    updated_at: str
    last_referenced_at: str = ""
    reference_count: int = 0
    domain_tags: list[str] | None = None
    valid_until: str = ""  # ISO UTC; empty = permanent
def _audit_memory(
memory_id: str,
action: str,
actor: str = "api",
before: dict | None = None,
after: dict | None = None,
note: str = "",
) -> None:
"""Append an entry to memory_audit.
Phase 4 Robustness V1. Every memory mutation flows through this
helper so we can answer "how did this memory get to its current
state?" and "when did we learn X?".
``action`` is a short verb: created, updated, promoted, rejected,
superseded, invalidated, reinforced, auto_promoted, expired.
``actor`` identifies the caller: api (default), auto-triage,
human-triage, host-cron, reinforcement, phase10-auto-promote,
etc. ``before`` / ``after`` are field snapshots (JSON-serialized).
Fail-open: a logging failure never breaks the mutation itself.
"""
import json as _json
try:
with get_connection() as conn:
conn.execute(
"INSERT INTO memory_audit (id, memory_id, action, actor, "
"before_json, after_json, note) VALUES (?, ?, ?, ?, ?, ?, ?)",
(
str(uuid.uuid4()),
memory_id,
action,
actor or "api",
_json.dumps(before or {}),
_json.dumps(after or {}),
(note or "")[:500],
),
)
except Exception as e:
log.warning("memory_audit_failed", memory_id=memory_id, action=action, error=str(e))
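The INSERT above omits the `timestamp` column, which only works because the table supplies a default. A minimal standalone sketch of the same pattern, assuming a `memory_audit` schema with a `CURRENT_TIMESTAMP` default (an assumption for illustration; the real schema lives in `models/database.py`):

```python
import json
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
# Hypothetical schema mirroring the columns the INSERT above targets;
# timestamp is filled by the database, not the caller.
conn.execute(
    "CREATE TABLE memory_audit ("
    "id TEXT PRIMARY KEY, memory_id TEXT, action TEXT, actor TEXT, "
    "before_json TEXT, after_json TEXT, note TEXT, "
    "timestamp DATETIME DEFAULT CURRENT_TIMESTAMP)"
)
conn.execute(
    "INSERT INTO memory_audit (id, memory_id, action, actor, "
    "before_json, after_json, note) VALUES (?, ?, ?, ?, ?, ?, ?)",
    (str(uuid.uuid4()), "mem-1", "promoted", "human-triage",
     json.dumps({"status": "candidate"}), json.dumps({"status": "active"}), ""),
)
row = conn.execute(
    "SELECT action, actor, after_json, timestamp FROM memory_audit"
).fetchone()
print(row[0], row[1], json.loads(row[2]))
```

The fourth column comes back populated even though the INSERT never mentioned it, which is what lets `_audit_memory` keep its parameter list to the seven caller-supplied fields.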
def get_memory_audit(memory_id: str, limit: int = 100) -> list[dict]:
"""Fetch audit entries for a memory, newest first."""
import json as _json
with get_connection() as conn:
rows = conn.execute(
"SELECT id, memory_id, action, actor, before_json, after_json, note, timestamp "
"FROM memory_audit WHERE memory_id = ? ORDER BY timestamp DESC LIMIT ?",
(memory_id, limit),
).fetchall()
out = []
for r in rows:
try:
before = _json.loads(r["before_json"] or "{}")
except Exception:
before = {}
try:
after = _json.loads(r["after_json"] or "{}")
except Exception:
after = {}
out.append({
"id": r["id"],
"memory_id": r["memory_id"],
"action": r["action"],
"actor": r["actor"] or "api",
"before": before,
"after": after,
"note": r["note"] or "",
"timestamp": r["timestamp"],
})
return out
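Both audit readers decode the stored snapshots defensively: a corrupt or empty `before_json` / `after_json` degrades to `{}` instead of raising. A tiny sketch of that tolerant-decode pattern (`safe_load` is a hypothetical name, not part of this module):

```python
import json

def safe_load(raw: str) -> dict:
    # Mirrors the try/except around json.loads in get_memory_audit:
    # None, "", and malformed JSON all degrade to an empty snapshot.
    try:
        return json.loads(raw or "{}")
    except Exception:
        return {}

print(safe_load('{"confidence": 0.8}'))  # {'confidence': 0.8}
print(safe_load(""))                     # {}
print(safe_load("{not json"))            # {}
```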
def get_recent_audit(limit: int = 50) -> list[dict]:
"""Fetch recent memory_audit entries across all memories, newest first."""
import json as _json
with get_connection() as conn:
rows = conn.execute(
"SELECT id, memory_id, action, actor, before_json, after_json, note, timestamp "
"FROM memory_audit ORDER BY timestamp DESC LIMIT ?",
(limit,),
).fetchall()
out = []
for r in rows:
try:
after = _json.loads(r["after_json"] or "{}")
except Exception:
after = {}
out.append({
"id": r["id"],
"memory_id": r["memory_id"],
"action": r["action"],
"actor": r["actor"] or "api",
"note": r["note"] or "",
"timestamp": r["timestamp"],
"content_preview": (after.get("content") or "")[:120],
})
return out
def _normalize_tags(tags) -> list[str]:
"""Coerce a tags value (list, JSON string, None) to a clean lowercase list."""
import json as _json
if tags is None:
return []
if isinstance(tags, str):
try:
tags = _json.loads(tags) if tags.strip().startswith("[") else []
except Exception:
tags = []
if not isinstance(tags, list):
return []
out = []
for t in tags:
if not isinstance(t, str):
continue
t = t.strip().lower()
if t and t not in out:
out.append(t)
return out
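The coercion rules above can be exercised standalone. This copy of the function (renamed `normalize_tags` purely for illustration) shows the strip/lowercase/dedupe behavior and the JSON-string input path:

```python
import json

def normalize_tags(tags) -> list[str]:
    # Standalone copy of _normalize_tags for illustration only.
    if tags is None:
        return []
    if isinstance(tags, str):
        try:
            tags = json.loads(tags) if tags.strip().startswith("[") else []
        except Exception:
            tags = []
    if not isinstance(tags, list):
        return []
    out = []
    for t in tags:
        if not isinstance(t, str):
            continue
        t = t.strip().lower()
        if t and t not in out:
            out.append(t)
    return out

# JSON string path: non-strings skipped, duplicates collapse after lowering.
print(normalize_tags('["Optics", " optics ", 3, "Firmware"]'))  # ['optics', 'firmware']
print(normalize_tags(None))        # []
print(normalize_tags("not json"))  # []
```

Note that order is preserved (first occurrence wins), so tag order as written by the extractor survives normalization.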
def create_memory(
memory_type: str,
content: str,
project: str = "",
source_chunk_id: str = "",
confidence: float = 1.0,
status: str = "active",
domain_tags: list[str] | None = None,
valid_until: str = "",
actor: str = "api",
) -> Memory:
"""Create a new memory entry.
``status`` defaults to ``active`` for backward compatibility. Pass
``candidate`` when the memory is being proposed by the Phase 9 Commit C
extractor and still needs human review before it can influence context.

    Phase 3: ``domain_tags`` is a list of lowercase domain strings
    (optics, mechanics, firmware, ...) for cross-project retrieval.
    ``valid_until`` is an ISO UTC timestamp; memories with valid_until
    in the past are excluded from context packs (but remain queryable).
"""
import json as _json
if memory_type not in MEMORY_TYPES:
    raise ValueError(f"Invalid memory type '{memory_type}'. Must be one of: {MEMORY_TYPES}")
if status not in MEMORY_STATUSES:
    raise ValueError(f"Invalid status '{status}'. Must be one of: {MEMORY_STATUSES}")
_validate_confidence(confidence)
# Canonicalize project aliases to the registered project_id so stored
# memories land in the same bucket that reinforcement and context queries read.
project = resolve_project_name(project)
# _normalize_tags lowercases, strips, dedups, and caps the list at 10 tags.
tags = _normalize_tags(domain_tags)
tags_json = _json.dumps(tags)
# Empty or whitespace-only valid_until means "permanent" and is stored as NULL.
valid_until = (valid_until or "").strip() or None
memory_id = str(uuid.uuid4())
now = datetime.now(timezone.utc).isoformat()
with get_connection() as conn:
    # Per-status duplicate scoping: identical content may legitimately exist
    # as both a candidate and an active memory during review, so only skip
    # exact matches within the same status bucket.
    existing = conn.execute(
        "SELECT id FROM memories "
        "WHERE memory_type = ? AND content = ? AND project = ? AND status = ?",
        (memory_type, content, project, status),
    ).fetchone()
    if existing:
        log.info(
            "memory_duplicate_skipped",
            memory_type=memory_type,
            status=status,
            content_preview=content[:80],
        )
        return _row_to_memory(
            conn.execute("SELECT * FROM memories WHERE id = ?", (existing["id"],)).fetchone()
        )
    conn.execute(
"INSERT INTO memories (id, memory_type, content, project, source_chunk_id, "
"confidence, status, domain_tags, valid_until) "
"VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
(memory_id, memory_type, content, project, source_chunk_id or None,
confidence, status, tags_json, valid_until),
)
    log.info(
        "memory_created",
        memory_type=memory_type,
        status=status,
        content_preview=content[:80],
        tags=tags,
        valid_until=valid_until or "",
    )
    _audit_memory(
        memory_id=memory_id,
        action="created",
        actor=actor,
        after={
            "memory_type": memory_type,
            "content": content,
            "project": project,
            "status": status,
            "confidence": confidence,
            "domain_tags": tags,
            "valid_until": valid_until or "",
        },
    )
    return Memory(
        id=memory_id,
        memory_type=memory_type,
        content=content,
        project=project,
        source_chunk_id=source_chunk_id,
        confidence=confidence,
        status=status,
        created_at=now,
        updated_at=now,
last_referenced_at="",
reference_count=0,
        domain_tags=tags,
        valid_until=valid_until or "",
    )


def get_memories(
    memory_type: str | None = None,
    project: str | None = None,
    active_only: bool = True,
    min_confidence: float = 0.0,
    limit: int = 50,
    status: str | None = None,
) -> list[Memory]:
    """Retrieve memories, optionally filtered.

    When ``status`` is provided explicitly, it takes precedence over
    ``active_only`` so callers can list the candidate review queue via
    ``get_memories(status='candidate')``. When ``status`` is omitted the
    legacy ``active_only`` behaviour still applies.
    """
    if status is not None and status not in MEMORY_STATUSES:
        raise ValueError(f"Invalid status '{status}'. Must be one of: {MEMORY_STATUSES}")

    query = "SELECT * FROM memories WHERE 1=1"
    params: list = []

    if memory_type:
        query += " AND memory_type = ?"
        params.append(memory_type)

    if project is not None:
        # Canonicalize on the read side so a caller passing an alias
        # finds rows that were stored under the canonical id (and
        # vice versa). resolve_project_name returns the input
        # unchanged for unregistered names so empty-string queries
        # for "no project scope" still work.
        query += " AND project = ?"
        params.append(resolve_project_name(project))
    if status is not None:
        query += " AND status = ?"
        params.append(status)
    elif active_only:
        query += " AND status = 'active'"

    if min_confidence > 0:
        query += " AND confidence >= ?"
        params.append(min_confidence)

    query += " ORDER BY confidence DESC, updated_at DESC LIMIT ?"
    params.append(limit)

    with get_connection() as conn:
        rows = conn.execute(query, params).fetchall()

    return [_row_to_memory(r) for r in rows]
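
# Usage sketch (illustrative only; assumes a populated memories table and
# a registry that maps the alias "p05" to a canonical project id):
#
#     queue = get_memories(status="candidate", active_only=True)
#     # explicit status wins over the legacy flag, so this lists the
#     # candidate review queue even though candidates are not "active"
#
#     scoped = get_memories(project="p05")
#     # the alias resolves to the canonical id before the SQL filter,
#     # so rows stored under either name are found
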

def update_memory(
    memory_id: str,
    content: str | None = None,
    confidence: float | None = None,
    status: str | None = None,
    memory_type: str | None = None,
    domain_tags: list[str] | None = None,
    valid_until: str | None = None,
    actor: str = "api",
    note: str = "",
) -> bool:
    """Update an existing memory."""
    import json as _json

    with get_connection() as conn:
        existing = conn.execute("SELECT * FROM memories WHERE id = ?", (memory_id,)).fetchone()
        if existing is None:
            return False
        next_content = content if content is not None else existing["content"]
        next_status = status if status is not None else existing["status"]
        if confidence is not None:
            _validate_confidence(confidence)
        if next_status == "active":
            duplicate = conn.execute(
                "SELECT id FROM memories "
                "WHERE memory_type = ? AND content = ? AND project = ? AND status = 'active' AND id != ?",
                (existing["memory_type"], next_content, existing["project"] or "", memory_id),
            ).fetchone()
            if duplicate:
                raise ValueError("Update would create a duplicate active memory")
        # Capture before-state for audit
        before_snapshot = {
            "content": existing["content"],
            "status": existing["status"],
            "confidence": existing["confidence"],
            "memory_type": existing["memory_type"],
        }
        after_snapshot = dict(before_snapshot)

        updates = []
        params: list = []
        if content is not None:
            updates.append("content = ?")
            params.append(content)
after_snapshot["content"] = content
        if confidence is not None:
            updates.append("confidence = ?")
            params.append(confidence)
after_snapshot["confidence"] = confidence
if status is not None:
            if status not in MEMORY_STATUSES:
                raise ValueError(f"Invalid status '{status}'. Must be one of: {MEMORY_STATUSES}")
            updates.append("status = ?")
            params.append(status)
after_snapshot["status"] = status
        if memory_type is not None:
            if memory_type not in MEMORY_TYPES:
                raise ValueError(f"Invalid memory type '{memory_type}'. Must be one of: {MEMORY_TYPES}")
            updates.append("memory_type = ?")
            params.append(memory_type)
after_snapshot["memory_type"] = memory_type
        if domain_tags is not None:
            norm_tags = _normalize_tags(domain_tags)
updates.append("domain_tags = ?")
params.append(_json.dumps(norm_tags))
after_snapshot["domain_tags"] = norm_tags
if valid_until is not None:
vu = valid_until.strip() or None
updates.append("valid_until = ?")
params.append(vu)
after_snapshot["valid_until"] = vu or ""
if not updates:
return False
updates.append("updated_at = CURRENT_TIMESTAMP")
params.append(memory_id)
result = conn.execute(
f"UPDATE memories SET {', '.join(updates)} WHERE id = ?",
params,
)
if result.rowcount > 0:
log.info("memory_updated", memory_id=memory_id)
# Action verb is driven by status change when applicable; otherwise "updated"
if status == "active" and before_snapshot["status"] == "candidate":
action = "promoted"
elif status == "invalid" and before_snapshot["status"] == "candidate":
action = "rejected"
elif status == "invalid":
action = "invalidated"
elif status == "superseded":
action = "superseded"
else:
action = "updated"
_audit_memory(
memory_id=memory_id,
action=action,
actor=actor,
before=before_snapshot,
after=after_snapshot,
note=note,
)
return True
return False
def invalidate_memory(memory_id: str, actor: str = "api") -> bool:
"""Mark a memory as invalid (error correction)."""
return update_memory(memory_id, status="invalid", actor=actor)
def supersede_memory(memory_id: str, actor: str = "api") -> bool:
"""Mark a memory as superseded (replaced by newer info)."""
return update_memory(memory_id, status="superseded", actor=actor)
def promote_memory(memory_id: str, actor: str = "api", note: str = "") -> bool:
    """Promote a candidate memory to active (Phase 9 Commit C review queue).

    Returns False if the memory does not exist or is not currently a
    candidate. Raises ValueError only if the promotion would create a
    duplicate active memory (delegates to update_memory's existing check).
    """
    with get_connection() as conn:
        row = conn.execute(
            "SELECT status FROM memories WHERE id = ?", (memory_id,)
        ).fetchone()
    if row is None:
        return False
    if row["status"] != "candidate":
        return False
    return update_memory(memory_id, status="active", actor=actor, note=note)
def reject_candidate_memory(memory_id: str, actor: str = "api", note: str = "") -> bool:
    """Reject a candidate memory (Phase 9 Commit C).

    Sets the candidate's status to ``invalid`` so it drops out of the
    review queue without polluting the active set. Returns False if the
    memory does not exist or is not currently a candidate.
    """
    with get_connection() as conn:
        row = conn.execute(
            "SELECT status FROM memories WHERE id = ?", (memory_id,)
        ).fetchone()
    if row is None:
        return False
    if row["status"] != "candidate":
        return False
    return update_memory(memory_id, status="invalid", actor=actor, note=note)
def reinforce_memory(
    memory_id: str,
    confidence_delta: float = 0.02,
) -> tuple[bool, float, float]:
    """Bump a memory's confidence and reference count (Phase 9 Commit B).

    Returns a 3-tuple ``(applied, old_confidence, new_confidence)``.
    ``applied`` is False if the memory does not exist or is not in the
    ``active`` state; reinforcement only touches live memories, so the
    candidate queue and invalidated history are never silently revived.

    Confidence is capped at 1.0. ``last_referenced_at`` is set to the
    current UTC time in SQLite-comparable format. ``reference_count`` is
    incremented by one per call (not by the delta amount).
    """
    if confidence_delta < 0:
        raise ValueError("confidence_delta must be non-negative for reinforcement")

    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    with get_connection() as conn:
        row = conn.execute(
            "SELECT confidence, status FROM memories WHERE id = ?", (memory_id,)
        ).fetchone()
        if row is None or row["status"] != "active":
            return False, 0.0, 0.0
        old_confidence = float(row["confidence"])
        new_confidence = min(1.0, old_confidence + confidence_delta)
        conn.execute(
            "UPDATE memories SET confidence = ?, last_referenced_at = ?, "
            "reference_count = COALESCE(reference_count, 0) + 1 "
            "WHERE id = ?",
            (new_confidence, now, memory_id),
        )

    log.info(
        "memory_reinforced",
        memory_id=memory_id,
        old_confidence=round(old_confidence, 4),
        new_confidence=round(new_confidence, 4),
    )
# Reinforcement writes an audit row per bump. Reinforcement fires often
# (every captured interaction); this lets you trace which interactions
# kept which memories alive. Could become chatty but is invaluable for
# decay/cold-memory analysis. If it becomes an issue, throttle here.
_audit_memory(
memory_id=memory_id,
action="reinforced",
actor="reinforcement",
before={"confidence": old_confidence},
after={"confidence": new_confidence},
)
return True, old_confidence, new_confidence
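# In isolation, the bump that the audit row above records can be modeled as a
# pure function. A minimal sketch (hypothetical name ``reinforce_step``; the
# real ``reinforce_memory`` also persists the new values and
# ``last_referenced_at`` to SQLite):

```python
def reinforce_step(confidence: float, reference_count: int,
                   delta: float = 0.05) -> tuple[float, float, int]:
    """Pure model of one reinforcement bump: confidence moves up by
    delta, capped at 1.0; reference_count increments by one."""
    if delta < 0:
        raise ValueError("delta must be non-negative")
    old = confidence
    new = min(1.0, old + delta)
    return old, new, reference_count + 1
```

# Repeated bumps saturate at 1.0 rather than overflow, which is why decay
# (later in this module) can always be reversed by fresh references.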
def auto_promote_reinforced(
min_reference_count: int = 3,
min_confidence: float = 0.7,
max_age_days: int = 14,
) -> list[str]:
"""Auto-promote candidate memories with strong reinforcement signals.
Phase 10: memories that have been reinforced by multiple interactions
graduate from candidate to active without human review. This rewards
knowledge that the system keeps referencing organically.
Returns a list of promoted memory IDs.
"""
from datetime import timedelta
cutoff = (
datetime.now(timezone.utc) - timedelta(days=max_age_days)
).strftime("%Y-%m-%d %H:%M:%S")
promoted: list[str] = []
with get_connection() as conn:
rows = conn.execute(
"SELECT id, content, memory_type, project, confidence, "
"reference_count FROM memories "
"WHERE status = 'candidate' "
"AND COALESCE(reference_count, 0) >= ? "
"AND confidence >= ? "
"AND last_referenced_at >= ?",
(min_reference_count, min_confidence, cutoff),
).fetchall()
for row in rows:
mid = row["id"]
ok = promote_memory(
mid,
actor="phase10-auto-promote",
note=f"ref_count={row['reference_count']} confidence={row['confidence']:.2f}",
)
if ok:
promoted.append(mid)
log.info(
"memory_auto_promoted",
memory_id=mid,
memory_type=row["memory_type"],
project=row["project"] or "(global)",
reference_count=row["reference_count"],
confidence=round(row["confidence"], 3),
)
return promoted
def extend_reinforced_valid_until(
min_reference_count: int = 5,
permanent_reference_count: int = 10,
extension_days: int = 90,
imminent_expiry_days: int = 30,
) -> list[dict]:
"""Phase 6 C.3 — transient-to-durable auto-extension.
For active memories with valid_until within the next N days AND
reference_count >= min_reference_count: extend valid_until by
extension_days. If reference_count >= permanent_reference_count,
clear valid_until entirely (becomes permanent).
Matches the user's intuition: "something transient becomes important
if you keep coming back to it". The system watches reinforcement
signals and extends expiry so context packs keep seeing durable
facts instead of letting them decay out.
Returns a list of {memory_id, action, old, new} dicts for each
memory touched.
"""
from datetime import timedelta
now = datetime.now(timezone.utc)
horizon = (now + timedelta(days=imminent_expiry_days)).strftime("%Y-%m-%d")
new_expiry = (now + timedelta(days=extension_days)).strftime("%Y-%m-%d")
now_str = now.strftime("%Y-%m-%d %H:%M:%S")
extended: list[dict] = []
with get_connection() as conn:
rows = conn.execute(
"SELECT id, valid_until, reference_count FROM memories "
"WHERE status = 'active' "
"AND valid_until IS NOT NULL AND valid_until != '' "
"AND substr(valid_until, 1, 10) <= ? "
"AND COALESCE(reference_count, 0) >= ?",
(horizon, min_reference_count),
).fetchall()
for r in rows:
mid = r["id"]
old_vu = r["valid_until"]
ref_count = int(r["reference_count"] or 0)
if ref_count >= permanent_reference_count:
# Permanent promotion
conn.execute(
"UPDATE memories SET valid_until = NULL, updated_at = ? WHERE id = ?",
(now_str, mid),
)
extended.append({
"memory_id": mid, "action": "made_permanent",
"old_valid_until": old_vu, "new_valid_until": None,
"reference_count": ref_count,
})
else:
# Extend by extension_days (default 90)
conn.execute(
"UPDATE memories SET valid_until = ?, updated_at = ? WHERE id = ?",
(new_expiry, now_str, mid),
)
extended.append({
"memory_id": mid, "action": "extended",
"old_valid_until": old_vu, "new_valid_until": new_expiry,
"reference_count": ref_count,
})
# Audit rows via the shared framework (fail-open)
for ex in extended:
try:
_audit_memory(
memory_id=ex["memory_id"],
action="valid_until_extended",
actor="transient-to-durable",
before={"valid_until": ex["old_valid_until"]},
after={"valid_until": ex["new_valid_until"]},
note=f"reinforced {ex['reference_count']}x; {ex['action']}",
)
except Exception:
pass
if extended:
log.info("reinforced_valid_until_extended", count=len(extended))
return extended
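# The extend-or-promote decision is a small decision table. A sketch
# (hypothetical helper; the real function also rewrites ``valid_until``
# and emits audit rows):

```python
def transient_action(reference_count: int,
                     min_reference_count: int = 5,
                     permanent_reference_count: int = 10) -> str:
    """Maps a memory's reference_count to the Phase 6 C.3 outcome:
    heavily reinforced -> permanent, moderately -> extended, else left."""
    if reference_count >= permanent_reference_count:
        return "made_permanent"   # valid_until cleared entirely
    if reference_count >= min_reference_count:
        return "extended"         # valid_until pushed out by extension_days
    return "unchanged"            # below threshold; query never selects it
```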
def decay_unreferenced_memories(
idle_days_threshold: int = 30,
daily_decay_factor: float = 0.97,
supersede_confidence_floor: float = 0.30,
actor: str = "confidence-decay",
) -> dict[str, list]:
"""Phase 7D — daily confidence decay on cold memories.
For every active, non-graduated memory with ``reference_count == 0``
AND whose last activity (``last_referenced_at`` if set, else
``created_at``) is older than ``idle_days_threshold``: multiply
confidence by ``daily_decay_factor`` (0.97/day, roughly a 23-day half-life).
If the decayed confidence falls below ``supersede_confidence_floor``,
auto-supersede the memory with note "decayed, no references".
Supersession is non-destructive: the row stays queryable via
``status='superseded'`` for audit.
Reinforcement already bumps confidence back up, so a decayed memory
that later gets referenced reverses its trajectory naturally.
The job is idempotent-per-day: running it multiple times in one day
decays extra, but the cron runs once/day so this stays on-policy.
If a day's cron gets skipped, we under-decay (safe direction —
memories age slower, not faster, than the policy).
Returns {"decayed": [...], "superseded": [...]} with per-memory
before/after snapshots for audit/observability.
"""
from datetime import timedelta
if not (0.0 < daily_decay_factor < 1.0):
raise ValueError("daily_decay_factor must be between 0 and 1 (exclusive)")
if not (0.0 <= supersede_confidence_floor <= 1.0):
raise ValueError("supersede_confidence_floor must be in [0,1]")
cutoff_dt = datetime.now(timezone.utc) - timedelta(days=idle_days_threshold)
cutoff_str = cutoff_dt.strftime("%Y-%m-%d %H:%M:%S")
now_str = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
decayed: list[dict] = []
superseded: list[dict] = []
with get_connection() as conn:
# COALESCE(last_referenced_at, created_at) is the effective "last
# activity" — if a memory was never reinforced, we measure age
# from creation. The status = 'active' filter keeps graduated
# memories (which are frozen pointers to entities) out of the
# decay pool.
rows = conn.execute(
"SELECT id, confidence, last_referenced_at, created_at "
"FROM memories "
"WHERE status = 'active' "
"AND COALESCE(reference_count, 0) = 0 "
"AND COALESCE(last_referenced_at, created_at) < ?",
(cutoff_str,),
).fetchall()
for r in rows:
mid = r["id"]
old_conf = float(r["confidence"])
new_conf = max(0.0, old_conf * daily_decay_factor)
if new_conf < supersede_confidence_floor:
# Auto-supersede
conn.execute(
"UPDATE memories SET status = 'superseded', "
"confidence = ?, updated_at = ? WHERE id = ?",
(new_conf, now_str, mid),
)
superseded.append({
"memory_id": mid,
"old_confidence": old_conf,
"new_confidence": new_conf,
})
else:
conn.execute(
"UPDATE memories SET confidence = ?, updated_at = ? WHERE id = ?",
(new_conf, now_str, mid),
)
decayed.append({
"memory_id": mid,
"old_confidence": old_conf,
"new_confidence": new_conf,
})
# Audit rows outside the transaction. We skip per-decay audit because
# it would be too chatty (potentially hundreds of rows/day for no
# human value); supersessions ARE audited because those are
# status-changing events humans may want to review.
for entry in superseded:
_audit_memory(
memory_id=entry["memory_id"],
action="superseded",
actor=actor,
before={"status": "active", "confidence": entry["old_confidence"]},
after={"status": "superseded", "confidence": entry["new_confidence"]},
note=f"decayed below floor {supersede_confidence_floor}, no references",
)
if decayed or superseded:
log.info(
"confidence_decay_run",
decayed=len(decayed),
superseded=len(superseded),
idle_days_threshold=idle_days_threshold,
daily_decay_factor=daily_decay_factor,
)
return {"decayed": decayed, "superseded": superseded}
def expire_stale_candidates(
max_age_days: int = 14,
) -> list[str]:
"""Reject candidate memories that sat in queue too long unreinforced.
Candidates older than ``max_age_days`` with zero reinforcement are
auto-rejected to prevent unbounded queue growth. Returns rejected IDs.
"""
from datetime import timedelta
cutoff = (
datetime.now(timezone.utc) - timedelta(days=max_age_days)
).strftime("%Y-%m-%d %H:%M:%S")
expired: list[str] = []
with get_connection() as conn:
rows = conn.execute(
"SELECT id FROM memories "
"WHERE status = 'candidate' "
"AND COALESCE(reference_count, 0) = 0 "
"AND created_at < ?",
(cutoff,),
).fetchall()
for row in rows:
mid = row["id"]
ok = reject_candidate_memory(
mid,
actor="candidate-expiry",
note=f"unreinforced for {max_age_days}+ days",
)
if ok:
expired.append(mid)
log.info("memory_expired", memory_id=mid)
return expired
def get_memories_for_context(
memory_types: list[str] | None = None,
project: str | None = None,
budget: int = 500,
header: str = "--- AtoCore Memory ---",
footer: str = "--- End Memory ---",
query: str | None = None,
) -> tuple[str, int]:
"""Get formatted memories for context injection.
Returns (formatted_text, char_count).
Budget allocation per Master Plan section 9:
identity: 5%, preference: 5%, rest from retrieval budget
The caller can override ``header`` / ``footer`` to distinguish
multiple memory blocks in the same pack (e.g. identity/preference
vs project/knowledge memories).
When ``query`` is provided, candidates within each memory type
are ranked by lexical overlap against the query (stemmed token
intersection, ties broken by confidence). Without a query,
candidates fall through in the order ``get_memories`` returns
them, which is effectively "by confidence desc".
"""
if memory_types is None:
memory_types = ["identity", "preference"]
if budget <= 0:
return "", 0
wrapper_chars = len(header) + len(footer) + 2
if budget <= wrapper_chars:
return "", 0
available = budget - wrapper_chars
selected_entries: list[str] = []
used = 0
# Pre-tokenize the query once. ``_score_memory_for_query`` is a
# free function below that reuses the reinforcement tokenizer so
# lexical scoring here matches the reinforcement matcher.
query_tokens: set[str] | None = None
if query:
from atocore.memory.reinforcement import _normalize, _tokenize
query_tokens = _tokenize(_normalize(query))
if not query_tokens:
query_tokens = None
# Collect ALL candidates across the requested types into one
# pool, then rank globally before the budget walk. Ranking per
# type and walking types in order would starve later types when
# the first type's candidates filled the budget — even if a
# later-type candidate matched the query perfectly. Type order
# is preserved as a stable tiebreaker inside
# ``_rank_memories_for_query`` via Python's stable sort.
pool: list[Memory] = []
seen_ids: set[str] = set()
for mtype in memory_types:
for mem in get_memories(
memory_type=mtype,
project=project,
min_confidence=0.5,
limit=30,
):
if mem.id in seen_ids:
continue
seen_ids.add(mem.id)
pool.append(mem)
# Phase 3: filter out expired memories (valid_until in the past).
# Raw API queries still return them (for audit/history) but context
# packs must not surface stale facts.
if pool:
pool = _filter_expired(pool)
if query_tokens is not None:
pool = _rank_memories_for_query(pool, query_tokens, query=query)
# Per-entry cap prevents a single long memory from monopolizing
# the band. With 16 p06 memories competing for ~700 chars, an
# uncapped 530-char overview memory fills the entire budget before
# a query-relevant 150-char memory gets a slot. The cap ensures at
# least 2-3 entries fit regardless of individual memory length.
max_entry_chars = 250
for mem in pool:
content = mem.content
if len(content) > max_entry_chars:
content = content[:max_entry_chars - 3].rstrip() + "..."
entry = f"[{mem.memory_type}] {content}"
entry_len = len(entry) + 1
if entry_len > available - used:
continue
selected_entries.append(entry)
used += entry_len
if not selected_entries:
return "", 0
lines = [header, *selected_entries, footer]
text = "\n".join(lines)
log.info("memories_for_context", count=len(selected_entries), chars=len(text))
return text, len(text)
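The per-entry cap and greedy budget packing above can be sketched in isolation. This is a minimal, self-contained illustration of the same logic, not the service's API; `pack_entries` and its parameters are hypothetical names.

```python
def pack_entries(contents: list[str], available: int, max_entry_chars: int = 250) -> list[str]:
    """Greedy packing with a per-entry cap, mirroring the loop above:
    oversize entries are truncated to the cap, and entries that no longer
    fit the remaining budget are skipped rather than ending the scan."""
    selected: list[str] = []
    used = 0
    for content in contents:
        if len(content) > max_entry_chars:
            content = content[:max_entry_chars - 3].rstrip() + "..."
        entry_len = len(content) + 1  # +1 for the joining newline
        if entry_len > available - used:
            continue  # too big for what's left; shorter entries may still fit
        selected.append(content)
        used += entry_len
    return selected

# Without the cap, a 530-char overview memory would consume most of a
# 700-char budget; capped at 250, two 150-char memories still fit.
packed = pack_entries(["x" * 530, "y" * 150, "z" * 150], available=700)
assert len(packed) == 3
assert packed[0].endswith("...")
```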
def _filter_expired(memories: list["Memory"]) -> list["Memory"]:
"""Drop memories whose valid_until is in the past (UTC comparison)."""
now_iso = datetime.now(timezone.utc).strftime("%Y-%m-%d")
out = []
for m in memories:
vu = (m.valid_until or "").strip()
if not vu:
out.append(m)
continue
# Compare as string (ISO dates/timestamps sort correctly lexicographically
# when they have the same format; date-only vs full ts both start YYYY-MM-DD).
if vu[:10] >= now_iso:
out.append(m)
# else: expired, drop silently
return out
def _rank_memories_for_query(
memories: list["Memory"],
query_tokens: set[str],
query: str | None = None,
) -> list["Memory"]:
"""Rerank a memory list by lexical overlap with a pre-tokenized query.
Primary key: overlap_density (overlap_count / memory_token_count),
which rewards short focused memories that match the query precisely
over long overview memories that incidentally share a few tokens.
Secondary: absolute overlap count. Tertiary: domain-tag match.
Quaternary: confidence.
Phase 3: domain_tags contribute a boost when they appear in the
query text. A memory tagged [optics, thermal] for a query about
"optics coating" gets promoted above a memory without those tags.
Tag boost fires AFTER content-overlap density so it only breaks
ties among content-similar candidates.
"""
from atocore.memory.reinforcement import _normalize, _tokenize
query_lower = (query or "").lower()
scored: list[tuple[float, int, int, float, Memory]] = []
for mem in memories:
mem_tokens = _tokenize(_normalize(mem.content))
overlap = len(mem_tokens & query_tokens) if mem_tokens else 0
density = overlap / len(mem_tokens) if mem_tokens else 0.0
# Tag boost: count how many of the memory's domain_tags appear
# as substrings in the raw query. Strong signal for topical match.
tag_hits = 0
for tag in (mem.domain_tags or []):
if tag and tag in query_lower:
tag_hits += 1
scored.append((density, overlap, tag_hits, mem.confidence, mem))
scored.sort(key=lambda t: (t[0], t[1], t[2], t[3]), reverse=True)
return [mem for _, _, _, _, mem in scored]
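The four-tier sort key (density, overlap, tag_hits, confidence) can be demonstrated on toy data. The memories and helper below are illustrative only, using plain whitespace tokenization rather than the service's `_tokenize`:

```python
def rank(items: list[tuple[str, list[str], float]],
         query_tokens: set[str], query_lower: str) -> list[str]:
    """Rank (content, tags, confidence) triples by the same tuple key
    used above: density, then overlap, then tag hits, then confidence."""
    scored = []
    for content, tags, conf in items:
        mem_tokens = set(content.lower().split())
        overlap = len(mem_tokens & query_tokens)
        density = overlap / len(mem_tokens) if mem_tokens else 0.0
        tag_hits = sum(1 for t in tags if t and t in query_lower)
        scored.append((density, overlap, tag_hits, conf, content))
    scored.sort(key=lambda t: t[:4], reverse=True)
    return [content for *_, content in scored]

query = "optics coating process"
ranked = rank(
    [
        ("general project overview covering everything", [], 0.9),
        ("optics coating uses ion beam sputtering", ["optics"], 0.7),
    ],
    set(query.split()),
    query,
)
# The short, focused memory outranks the high-confidence overview because
# density is the primary key; confidence only breaks remaining ties.
assert ranked[0].startswith("optics coating")
```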
def _row_to_memory(row) -> Memory:
"""Convert a DB row to Memory dataclass."""
import json as _json
keys = row.keys() if hasattr(row, "keys") else []
last_ref = row["last_referenced_at"] if "last_referenced_at" in keys else None
ref_count = row["reference_count"] if "reference_count" in keys else 0
tags_raw = row["domain_tags"] if "domain_tags" in keys else None
try:
tags = _json.loads(tags_raw) if tags_raw else []
if not isinstance(tags, list):
tags = []
except Exception:
tags = []
valid_until = row["valid_until"] if "valid_until" in keys else None
return Memory(
id=row["id"],
memory_type=row["memory_type"],
content=row["content"],
project=row["project"] or "",
source_chunk_id=row["source_chunk_id"] or "",
confidence=row["confidence"],
status=row["status"],
created_at=row["created_at"],
updated_at=row["updated_at"],
last_referenced_at=last_ref or "",
reference_count=int(ref_count or 0),
domain_tags=tags,
valid_until=valid_until or "",
)
def _validate_confidence(confidence: float) -> None:
if not 0.0 <= confidence <= 1.0:
raise ValueError("Confidence must be between 0.0 and 1.0")
# ---------------------------------------------------------------------
# Phase 7A — Memory Consolidation: merge-candidate lifecycle
# ---------------------------------------------------------------------
#
# The detector (scripts/memory_dedup.py) writes proposals into
# memory_merge_candidates. The triage UI lists pending rows, a human
# reviews, and on approve we execute the merge here — never at detect
# time. This keeps the audit trail clean: every mutation is a human
# decision.
def create_merge_candidate(
memory_ids: list[str],
similarity: float,
proposed_content: str,
proposed_memory_type: str,
proposed_project: str,
proposed_tags: list[str] | None = None,
proposed_confidence: float = 0.6,
reason: str = "",
) -> str | None:
"""Insert a merge-candidate row. Returns the new row id, or None if
a pending candidate already covers this exact set of memory ids
(idempotent scan re-running the detector doesn't double-create)."""
import json as _json
    if len(set(memory_ids or [])) < 2:
        raise ValueError("merge candidate requires at least 2 distinct memory_ids")
    memory_ids_sorted = sorted(set(memory_ids))
memory_ids_json = _json.dumps(memory_ids_sorted)
tags_json = _json.dumps(_normalize_tags(proposed_tags))
candidate_id = str(uuid.uuid4())
with get_connection() as conn:
# Idempotency: same sorted-id set already pending? skip.
existing = conn.execute(
"SELECT id FROM memory_merge_candidates "
"WHERE status = 'pending' AND memory_ids = ?",
(memory_ids_json,),
).fetchone()
if existing:
return None
conn.execute(
"INSERT INTO memory_merge_candidates "
"(id, status, memory_ids, similarity, proposed_content, "
"proposed_memory_type, proposed_project, proposed_tags, "
"proposed_confidence, reason) "
"VALUES (?, 'pending', ?, ?, ?, ?, ?, ?, ?, ?)",
(
candidate_id, memory_ids_json, float(similarity or 0.0),
(proposed_content or "")[:2000],
(proposed_memory_type or "knowledge")[:50],
(proposed_project or "")[:100],
tags_json,
max(0.0, min(1.0, float(proposed_confidence))),
(reason or "")[:500],
),
)
log.info(
"merge_candidate_created",
candidate_id=candidate_id,
memory_count=len(memory_ids_sorted),
        similarity=round(float(similarity or 0.0), 4),
)
return candidate_id
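
The idempotency guard above keys pending rows on a canonical JSON encoding of the id set. A minimal standalone sketch of that normalization (helper name is illustrative, not part of the service):

```python
import json


def canonical_merge_key(memory_ids: list[str]) -> str:
    """Deduplicate and sort ids so any ordering maps to the same pending-row key."""
    return json.dumps(sorted(set(memory_ids)))


# Re-running the detector with a reordered (or duplicated) id list produces
# the same key, so the pending-row lookup finds the earlier candidate.
assert canonical_merge_key(["m2", "m1", "m2"]) == canonical_merge_key(["m1", "m2"])
```
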
def get_merge_candidates(status: str = "pending", limit: int = 100) -> list[dict]:
"""List merge candidates with their source memories inlined."""
import json as _json
with get_connection() as conn:
rows = conn.execute(
"SELECT * FROM memory_merge_candidates "
"WHERE status = ? ORDER BY created_at DESC LIMIT ?",
(status, limit),
).fetchall()
out = []
for r in rows:
try:
mem_ids = _json.loads(r["memory_ids"] or "[]")
except Exception:
mem_ids = []
try:
tags = _json.loads(r["proposed_tags"] or "[]")
except Exception:
tags = []
sources = []
for mid in mem_ids:
srow = conn.execute(
"SELECT id, memory_type, content, project, confidence, "
"status, reference_count, domain_tags, valid_until "
"FROM memories WHERE id = ?",
(mid,),
).fetchone()
if srow:
try:
stags = _json.loads(srow["domain_tags"] or "[]")
except Exception:
stags = []
sources.append({
"id": srow["id"],
"memory_type": srow["memory_type"],
"content": srow["content"],
"project": srow["project"] or "",
"confidence": srow["confidence"],
"status": srow["status"],
"reference_count": int(srow["reference_count"] or 0),
"domain_tags": stags,
"valid_until": srow["valid_until"] or "",
})
out.append({
"id": r["id"],
"status": r["status"],
"memory_ids": mem_ids,
"similarity": r["similarity"],
"proposed_content": r["proposed_content"] or "",
"proposed_memory_type": r["proposed_memory_type"] or "knowledge",
"proposed_project": r["proposed_project"] or "",
"proposed_tags": tags,
"proposed_confidence": r["proposed_confidence"],
"reason": r["reason"] or "",
"created_at": r["created_at"],
"resolved_at": r["resolved_at"],
"resolved_by": r["resolved_by"],
"result_memory_id": r["result_memory_id"],
"sources": sources,
})
return out
def reject_merge_candidate(candidate_id: str, actor: str = "human-triage", note: str = "") -> bool:
"""Mark a merge candidate as rejected. Source memories stay untouched."""
now_str = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
with get_connection() as conn:
result = conn.execute(
"UPDATE memory_merge_candidates "
"SET status = 'rejected', resolved_at = ?, resolved_by = ? "
"WHERE id = ? AND status = 'pending'",
(now_str, actor, candidate_id),
)
if result.rowcount == 0:
return False
log.info("merge_candidate_rejected", candidate_id=candidate_id, actor=actor, note=note[:100])
return True
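
The rowcount check above doubles as optimistic concurrency control: the UPDATE matches only while the row is still pending, so a double-click or a concurrent resolver reports failure instead of silently re-resolving. A self-contained sqlite3 sketch of the pattern (in-memory table, names illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mc (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO mc VALUES ('c1', 'pending')")

# First resolve wins: the WHERE clause matches the still-pending row.
first = conn.execute(
    "UPDATE mc SET status = 'rejected' WHERE id = 'c1' AND status = 'pending'"
).rowcount
# A second attempt matches nothing; rowcount == 0 signals "already resolved".
second = conn.execute(
    "UPDATE mc SET status = 'rejected' WHERE id = 'c1' AND status = 'pending'"
).rowcount
```
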
def merge_memories(
candidate_id: str,
actor: str = "human-triage",
override_content: str | None = None,
override_tags: list[str] | None = None,
) -> str | None:
"""Execute an approved merge candidate.
1. Validate all source memories still status=active
2. Create the new merged memory (status=active)
3. Mark each source status=superseded with an audit row pointing at
the new merged id
4. Mark the candidate status=approved, record result_memory_id
5. Write a consolidated audit row on the new memory
Returns the new merged memory's id, or None if the candidate cannot
be executed (already resolved, source tampered, etc.).
``override_content`` and ``override_tags`` let the UI pass the human's
edits before clicking approve.
"""
import json as _json
now_str = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
with get_connection() as conn:
row = conn.execute(
"SELECT * FROM memory_merge_candidates WHERE id = ?",
(candidate_id,),
).fetchone()
if row is None or row["status"] != "pending":
log.warning("merge_candidate_not_pending", candidate_id=candidate_id)
return None
try:
mem_ids = _json.loads(row["memory_ids"] or "[]")
except Exception:
mem_ids = []
if not mem_ids or len(mem_ids) < 2:
log.warning("merge_candidate_invalid_memory_ids", candidate_id=candidate_id)
return None
# Snapshot sources + validate all active
source_rows = []
for mid in mem_ids:
srow = conn.execute(
"SELECT * FROM memories WHERE id = ?", (mid,)
).fetchone()
if srow is None or srow["status"] != "active":
log.warning(
"merge_source_not_active",
candidate_id=candidate_id,
memory_id=mid,
actual_status=(srow["status"] if srow else "missing"),
)
return None
source_rows.append(srow)
# Build merged memory fields — prefer human overrides, then proposed
content = (override_content or row["proposed_content"] or "").strip()
if not content:
log.warning("merge_candidate_empty_content", candidate_id=candidate_id)
return None
merged_type = (row["proposed_memory_type"] or source_rows[0]["memory_type"]).lower()
if merged_type not in MEMORY_TYPES:
merged_type = source_rows[0]["memory_type"]
merged_project = row["proposed_project"] or source_rows[0]["project"] or ""
merged_project = resolve_project_name(merged_project)
# Tags: override wins, else proposed, else union of sources
if override_tags is not None:
merged_tags = _normalize_tags(override_tags)
else:
try:
proposed_tags = _json.loads(row["proposed_tags"] or "[]")
except Exception:
proposed_tags = []
if proposed_tags:
merged_tags = _normalize_tags(proposed_tags)
else:
union: list[str] = []
for srow in source_rows:
try:
stags = _json.loads(srow["domain_tags"] or "[]")
except Exception:
stags = []
for t in stags:
if isinstance(t, str) and t and t not in union:
union.append(t)
merged_tags = union
# confidence = max; reference_count = sum
merged_confidence = max(float(s["confidence"]) for s in source_rows)
total_refs = sum(int(s["reference_count"] or 0) for s in source_rows)
# valid_until: if any source is permanent (None/empty), merged is permanent.
# Otherwise take the latest (lexical compare on ISO dates works).
        merged_vu: str | None
has_permanent = any(not (s["valid_until"] or "").strip() for s in source_rows)
if has_permanent:
merged_vu = None
else:
merged_vu = max((s["valid_until"] or "").strip() for s in source_rows) or None
new_id = str(uuid.uuid4())
tags_json = _json.dumps(merged_tags)
conn.execute(
"INSERT INTO memories (id, memory_type, content, project, "
"source_chunk_id, confidence, status, domain_tags, valid_until, "
"reference_count, last_referenced_at) "
"VALUES (?, ?, ?, ?, NULL, ?, 'active', ?, ?, ?, ?)",
(
new_id, merged_type, content[:2000], merged_project,
merged_confidence, tags_json, merged_vu, total_refs, now_str,
),
)
# Mark sources superseded
for srow in source_rows:
conn.execute(
"UPDATE memories SET status = 'superseded', updated_at = ? "
"WHERE id = ?",
(now_str, srow["id"]),
)
# Mark candidate approved
conn.execute(
"UPDATE memory_merge_candidates SET status = 'approved', "
"resolved_at = ?, resolved_by = ?, result_memory_id = ? WHERE id = ?",
(now_str, actor, new_id, candidate_id),
)
# Audit rows (out of the transaction; fail-open via _audit_memory)
_audit_memory(
memory_id=new_id,
action="created_via_merge",
actor=actor,
after={
"memory_type": merged_type,
"content": content,
"project": merged_project,
"confidence": merged_confidence,
"domain_tags": merged_tags,
"reference_count": total_refs,
"merged_from": list(mem_ids),
"merge_candidate_id": candidate_id,
},
note=f"merged {len(mem_ids)} sources via candidate {candidate_id[:8]}",
)
for srow in source_rows:
_audit_memory(
memory_id=srow["id"],
action="superseded",
actor=actor,
before={"status": "active", "content": srow["content"]},
after={"status": "superseded", "superseded_by": new_id},
note=f"merged into {new_id}",
)
log.info(
"merge_executed",
candidate_id=candidate_id,
result_memory_id=new_id,
source_count=len(source_rows),
actor=actor,
)
return new_id
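
The field-derivation rules in merge_memories (confidence = max, reference_count = sum, valid_until = permanent if any source is permanent, else the latest ISO date) can be exercised in isolation. A sketch under those rules (helper name hypothetical, operating on plain dicts instead of sqlite rows):

```python
def derive_merged_fields(sources: list[dict]) -> dict:
    """Combine source-memory rows under the merge rules used above."""
    valid_untils = [(s.get("valid_until") or "").strip() for s in sources]
    return {
        "confidence": max(float(s["confidence"]) for s in sources),
        "reference_count": sum(int(s.get("reference_count") or 0) for s in sources),
        # Any permanent source (empty valid_until) makes the merge permanent;
        # lexical max is safe on ISO-8601 YYYY-MM-DD strings.
        "valid_until": None if any(not v for v in valid_untils) else max(valid_untils),
    }


merged = derive_merged_fields([
    {"confidence": 0.6, "reference_count": 3, "valid_until": "2026-01-01"},
    {"confidence": 0.9, "reference_count": 1, "valid_until": "2026-06-30"},
])
```
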