fix(memory): SQL-aggregate dashboard counts, project on update, id-based invalidate

Three bugs surfaced by the 2026-04-29 Codex review of the state-of-the-service
plan, all in the memory write/read path:

1. /admin/dashboard memory counts were derived from a confidence-sorted
   get_memories(limit=500) sample. With prod at 1091 active memories the
   dashboard reported 315 ("active in the top 500"), while integrity
   reported the SQL aggregate 1091. Replaced the sampling block with a
   new get_memory_count_summary() helper that does straight SQL aggregates
   over status/type/project. Dashboard memories.{active,candidates,...}
   now match integrity. Adds memories.{by_status,total} for completeness.
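A minimal sketch of the aggregate approach, using an in-memory SQLite table. The `memories` schema and column names here are illustrative assumptions; only the helper name `get_memory_count_summary` and the returned shape (`total`, `by_status`, `active.{total,by_type,by_project}`) come from the commit:

```python
import sqlite3


def get_memory_count_summary(conn):
    """Full-table SQL aggregates; never depends on a sampled LIMIT fetch."""
    by_status = dict(conn.execute(
        "SELECT status, COUNT(*) FROM memories GROUP BY status"))
    active = {
        "total": by_status.get("active", 0),
        "by_type": dict(conn.execute(
            "SELECT type, COUNT(*) FROM memories"
            " WHERE status = 'active' GROUP BY type")),
        "by_project": dict(conn.execute(
            "SELECT project, COUNT(*) FROM memories"
            " WHERE status = 'active' GROUP BY project")),
    }
    return {"total": sum(by_status.values()),
            "by_status": by_status,
            "active": active}


# Tiny demo against a hypothetical in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (type TEXT, project TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?, ?)",
    [("knowledge", "p04-gigabit", "active")] * 3
    + [("knowledge", None, "candidate")] * 2
    + [("knowledge", "p04-gigabit", "invalid")],
)
summary = get_memory_count_summary(conn)
```

Because the counts come from `GROUP BY` over the whole table, they cannot drift from an integrity check that runs the same aggregates.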

2. PUT /memory/{id} silently dropped project changes because
   MemoryUpdateRequest had no project field and update_memory() didn't
   accept one. auto_triage.py:407 detects suggested_project drift and
   issues a PUT to fix it; the fix never landed. Added project to the
   request schema and the service signature, with resolve_project_name
   canonicalization, before/after audit snapshot, and the existing
   duplicate-active check now scoped to the new project.
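A sketch of the "omitted means unchanged" signature change, with a dict standing in for the memory row and a toy alias table standing in for the project registry behind `resolve_project_name`. Everything except the names `update_memory` and `resolve_project_name` is an assumption for illustration:

```python
from typing import Optional

# Hypothetical alias table; the real registry backs resolve_project_name().
ALIASES = {"p04": "p04-gigabit", "gigabit": "p04-gigabit"}


def resolve_project_name(name: str) -> str:
    return ALIASES.get(name, name)


def update_memory(row: dict, *, content: Optional[str] = None,
                  project: Optional[str] = None) -> dict:
    """project=None means 'field not passed': leave the stored value alone.

    Returns a before/after audit snapshot, mirroring the commit's
    description of the audit trail (shape assumed).
    """
    before = dict(row)
    if content is not None:
        row["content"] = content
    if project is not None:
        # Canonicalize aliases before writing, so "p04" lands as "p04-gigabit".
        row["project"] = resolve_project_name(project)
    return {"action": "updated", "before": before, "after": dict(row)}


row = {"content": "retargetable fact", "project": "atocore"}
audit = update_memory(row, project="p04")
```

The key design point is distinguishing "field omitted" (`None` default, no write) from "field set", so a PUT that only edits content can never clobber the project.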

3. POST /memory/{id}/invalidate did _get_memories(status="active",
   limit=1) and looked for the target inside that single
   highest-confidence row. Any other active memory 404'd. Replaced with
   a direct id lookup via the new get_memory(id) helper; status branching
   stays the same (404 unknown / 200 already-invalid / 409 wrong-status /
   200 invalidated).
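The status branching can be sketched with an in-memory store; the dict store and the integer return codes are illustrative stand-ins for the real service and HTTP layer, while `get_memory` and the four branches follow the commit:

```python
# Hypothetical in-memory store keyed by memory id.
MEMORIES = {
    "m-active": {"status": "active"},
    "m-invalid": {"status": "invalid"},
    "m-candidate": {"status": "candidate"},
}


def get_memory(mem_id: str):
    """Direct id lookup, replacing the limit=1 highest-confidence fetch."""
    return MEMORIES.get(mem_id)


def invalidate(mem_id: str) -> int:
    mem = get_memory(mem_id)
    if mem is None:
        return 404                    # unknown id
    if mem["status"] == "invalid":
        return 200                    # already invalid: idempotent
    if mem["status"] != "active":
        return 409                    # e.g. candidate: wrong status
    mem["status"] = "invalid"
    return 200                        # invalidated
```

With the id lookup, the outcome depends only on the target row's own status, never on where it sorts relative to other active memories.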

Tests added (9):
- test_get_memory_count_summary_returns_full_table_aggregates
- test_get_memory_returns_single_row_or_none
- test_update_memory_can_change_project_with_canonicalization
- test_update_memory_project_unchanged_when_not_passed
- test_api_invalidate_finds_active_memory_outside_top_one
- test_api_invalidate_already_invalid_is_idempotent
- test_api_invalidate_candidate_returns_409
- test_api_invalidate_unknown_id_is_404
- test_admin_dashboard_active_count_matches_full_table

Test count: 572 -> 581. Full suite green locally.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Date:   2026-04-28 21:40:10 -04:00
Parent: 7042eaea46
Commit: fb4d55cbcd
4 changed files with 301 additions and 40 deletions


@@ -575,3 +575,89 @@ def test_expire_stale_candidates_keeps_reinforced(isolated_db):
    assert mid not in expired
    mem = _get_memory_by_id(mid)
    assert mem["status"] == "candidate"


# ---------------------------------------------------------------------------
# Wave 1 (2026-04-29) — counts come from SQL, not from the top-N sample.
# Exposed by Codex audit when prod /admin/dashboard reported 315 active
# while /admin/integrity-check reported 1091. The dashboard was building
# its counts from a confidence-sorted limit=500 fetch.
# ---------------------------------------------------------------------------
def test_get_memory_count_summary_returns_full_table_aggregates(isolated_db):
    """Counts come from SQL aggregates, not a sampled fetch."""
    from atocore.memory.service import (
        create_memory,
        get_memory_count_summary,
        invalidate_memory,
    )

    # Create more rows than any reasonable sampling LIMIT so any
    # LIMIT-based counter would visibly disagree with reality.
    for i in range(120):
        create_memory(
            "knowledge",
            f"fact-{i}",
            project="p04-gigabit",
            confidence=0.9,
            status="active",
        )
    for i in range(7):
        create_memory("knowledge", f"cand-{i}", status="candidate")
    invalid_obj = create_memory("knowledge", "to-invalidate", status="active")
    invalidate_memory(invalid_obj.id)

    summary = get_memory_count_summary()
    assert summary["total"] == 120 + 7 + 1
    assert summary["by_status"]["active"] == 120
    assert summary["by_status"]["candidate"] == 7
    assert summary["by_status"]["invalid"] == 1
    assert summary["active"]["total"] == 120
    assert summary["active"]["by_type"] == {"knowledge": 120}
    assert summary["active"]["by_project"] == {"p04-gigabit": 120}


def test_get_memory_returns_single_row_or_none(isolated_db):
    from atocore.memory.service import create_memory, get_memory

    mem = create_memory("knowledge", "single-row test")
    fetched = get_memory(mem.id)
    assert fetched is not None
    assert fetched.id == mem.id
    assert get_memory("non-existent-id") is None


def test_update_memory_can_change_project_with_canonicalization(
    isolated_db, project_registry
):
    """update_memory(project=...) canonicalizes aliases and writes audit."""
    project_registry(("p04-gigabit", ("p04", "gigabit")))
    from atocore.memory.service import (
        create_memory,
        get_memory,
        get_memory_audit,
        update_memory,
    )

    mem = create_memory("knowledge", "retargetable fact", project="atocore")
    ok = update_memory(mem.id, project="p04")  # alias
    assert ok is True

    refreshed = get_memory(mem.id)
    assert refreshed.project == "p04-gigabit"  # canonical, not "p04"

    audit_rows = get_memory_audit(mem.id, limit=10)
    update_rows = [r for r in audit_rows if r.get("action") == "updated"]
    assert update_rows, f"expected an updated audit row, got {audit_rows}"
    head = update_rows[0]
    assert head["before"]["project"] == "atocore"
    assert head["after"]["project"] == "p04-gigabit"


def test_update_memory_project_unchanged_when_not_passed(isolated_db):
    from atocore.memory.service import create_memory, get_memory, update_memory

    mem = create_memory("knowledge", "untouched project", project="p06-polisher")
    update_memory(mem.id, content="edited content")
    assert get_memory(mem.id).project == "p06-polisher"