Codex caught a real data-loss bug in the legacy alias migration
shipped in 7e60f5a. plan_state_migration filtered state rows to
status='active' only, then apply_plan deleted the shadow projects
row at the end. Because project_state.project_id has
ON DELETE CASCADE, any superseded or invalid state rows still
attached to the shadow project got silently cascade-deleted —
exactly the audit loss a cleanup migration must not cause.
This commit fixes the bug and adds regression tests that lock in
the invariant "shadow state of every status is accounted for".
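The failure mode above can be reproduced in a few lines. This is a minimal sketch with a hypothetical two-table schema standing in for projects/project_state (column set simplified, not the real migration schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY)")
con.execute("""
    CREATE TABLE project_state (
        id INTEGER PRIMARY KEY,
        project_id INTEGER REFERENCES projects(id) ON DELETE CASCADE,
        status TEXT
    )
""")
con.execute("INSERT INTO projects VALUES (1)")                      # shadow project
con.execute("INSERT INTO project_state VALUES (10, 1, 'active')")
con.execute("INSERT INTO project_state VALUES (11, 1, 'superseded')")

# A plan that only walks status='active' never sees row 11; deleting
# the shadow project row then cascades over it silently.
con.execute("DELETE FROM projects WHERE id = 1")
remaining = con.execute("SELECT COUNT(*) FROM project_state").fetchone()[0]
print(remaining)  # 0 -- the superseded row is gone too
```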
Root cause
----------
scripts/migrate_legacy_aliases.py::plan_state_migration was:
"SELECT * FROM project_state WHERE project_id = ? AND status = 'active'"
which only found live rows. Any historical row (status
'superseded' or 'invalid') was invisible to the plan, so the apply
step had nothing to rekey for it. Then the shadow project row was
deleted at the end, cascade-deleting every unplanned row.
The fix
-------
plan_state_migration now selects ALL state rows attached to the
shadow project regardless of status, and handles each row
according to a per-status decision table:
| Shadow status | Canonical at same triple? | Values | Action |
|---------------|---------------------------|------------|--------------------------------|
| any | no | — | clean rekey |
| any | yes | same | shadow superseded in place |
| active | yes, active | different | COLLISION, apply refuses |
| active | yes, inactive | different | shadow wins, canonical deleted |
| inactive | yes, any | different | historical drop (logged) |
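The decision table can be sketched as a small pure function. Names here (Decision, decide, the action strings beyond those quoted in this message) are illustrative, not the script's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    action: str
    note: str = ""

def decide(shadow_status: str,
           canonical_status: Optional[str],
           values_equal: Optional[bool]) -> Decision:
    """Map one shadow state row to an action per the table above."""
    if canonical_status is None:
        return Decision("clean_rekey")                 # no canonical at triple
    if values_equal:
        return Decision("supersede_shadow_in_place")   # same value, keep one row
    if shadow_status == "active":
        if canonical_status == "active":
            return Decision("collision", "apply refuses")
        return Decision("replace_inactive_canonical", "shadow wins")
    return Decision("historical_drop", "logged, then cascade-deleted")
```

For example, a superseded shadow row losing to an active canonical row at the same triple maps to `historical_drop`, which is exactly the unavoidable-loss bucket described below.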
Four changes in the script:
1. SELECT drops the status filter so the plan walks every row.
2. New StateRekeyPlan.historical_drops list captures the shadow
rows that lose to a canonical row at the same triple because the
shadow is already inactive. These are the only unavoidable data
losses, and they happen because the UNIQUE(project_id, category,
key) constraint on project_state doesn't allow two rows per
triple regardless of status.
3. New apply action 'replace_inactive_canonical' for the
shadow-active-vs-canonical-inactive case. At apply time the
canonical inactive row is DELETEd first and the shadow is then
UPDATEd into its place, as two separate statements, because
SQLite checks UNIQUE constraints immediately rather than at
commit. Adds a new state_rows_replaced_inactive_canonical
counter.
4. New apply counter state_rows_historical_dropped for audit
transparency. The rows themselves are still cascade-deleted
when the shadow project row is dropped, but they're counted
and reported.
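The two-statement replace step looks roughly like this (table and column names follow this message; the ids and values are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE project_state (
        id INTEGER PRIMARY KEY,
        project_id INTEGER, category TEXT, key TEXT,
        status TEXT, value TEXT,
        UNIQUE(project_id, category, key)
    )
""")
# canonical project 1 holds a stale invalid row; shadow project 2 holds the live one
con.execute("INSERT INTO project_state VALUES (1, 1, 'cfg', 'k', 'invalid', 'old')")
con.execute("INSERT INTO project_state VALUES (2, 2, 'cfg', 'k', 'active', 'new')")

with con:  # one transaction, two statements: DELETE must precede UPDATE
    con.execute("DELETE FROM project_state WHERE id = 1")               # clear the triple
    con.execute("UPDATE project_state SET project_id = 1 WHERE id = 2") # rekey the shadow

row = con.execute(
    "SELECT status, value FROM project_state WHERE project_id = 1").fetchone()
print(row)  # ('active', 'new')
```

Doing the UPDATE first would raise an IntegrityError, since SQLite evaluates the UNIQUE(project_id, category, key) constraint per statement.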
Five reporting changes across render_plan_text and plan_to_json_dict:
- counts() gains state_historical_drops
- render_plan_text prints a 'historical drops' section with each
shadow-canonical pair and their statuses when there are any, so
the operator sees the audit loss BEFORE running --apply
- The new section explicitly tells the operator: "if any of these
values are worth keeping as separate audit records, manually copy
them out before running --apply"
- plan_to_json_dict carries historical_drops into the JSON report
- The state counts table in the human report now shows both
'state collisions (block)' and 'state historical drops' as
separate lines so the operator can distinguish
"apply will refuse" from "apply will drop historical rows"
Regression tests (3 new, all green)
-----------------------------------
tests/test_migrate_legacy_aliases.py:
- test_apply_preserves_superseded_shadow_state_when_no_collision:
the direct regression for the codex finding. Seeds a shadow with
a superseded state row on a triple the canonical doesn't have,
runs the migration, verifies via raw SQL that the row is now
attached to the canonical projects row and still has status
'superseded'. This is the test that would have failed before
the fix.
- test_apply_drops_shadow_inactive_row_when_canonical_holds_same_triple:
covers the unavoidable data-loss case. Seeds shadow superseded
+ canonical active at the same triple with different values,
verifies plan.counts() reports one historical_drop, runs apply,
verifies the canonical value is preserved and the shadow value
is gone.
- test_apply_replaces_inactive_canonical_with_active_shadow:
covers the cross-contamination case where shadow has live value
and canonical has a stale invalid row. Shadow wins by deleting
canonical and rekeying in its place. Verifies the counter and
the final state.
Plus _seed_state_row now accepts a status kwarg so the seeding
helper can create superseded/invalid rows directly.
test_dry_run_on_empty_registry_reports_empty_plan was updated to
include the new state_historical_drops key in the expected counts
dict (all zero for an empty plan, so the test shape is the same).
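The invariant the first regression test locks in can be demonstrated end to end with the same simplified schema as above (a stand-in, not the real test fixtures): rekey every shadow row regardless of status, then drop the shadow project, and the superseded row survives.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY)")
con.execute("""
    CREATE TABLE project_state (
        project_id INTEGER REFERENCES projects(id) ON DELETE CASCADE,
        category TEXT, key TEXT, status TEXT,
        UNIQUE(project_id, category, key)
    )
""")
con.executemany("INSERT INTO projects VALUES (?)", [(1,), (2,)])  # canonical, shadow
con.execute("INSERT INTO project_state VALUES (2, 'cfg', 'k', 'superseded')")

# Fixed plan: walk EVERY shadow row regardless of status, then rekey each one.
shadow_rows = con.execute(
    "SELECT category, key FROM project_state WHERE project_id = 2").fetchall()
for cat, key in shadow_rows:
    con.execute("UPDATE project_state SET project_id = 1 "
                "WHERE project_id = 2 AND category = ? AND key = ?", (cat, key))
con.execute("DELETE FROM projects WHERE id = 2")  # shadow has no children left

row = con.execute("SELECT project_id, status FROM project_state").fetchone()
print(row)  # (1, 'superseded') -- rekeyed to canonical, status preserved
```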
Full suite: 197 passing (was 194), 1 warning. The +3 is the three
new regression tests.
What this commit does NOT do
----------------------------
- Does NOT try to preserve historical shadow rows that collide
with a canonical row at the same triple. That would require a
schema change (adding (id) to the UNIQUE key, or a separate
history table) and isn't in scope for a cleanup migration.
The operator sees these as explicit 'historical drops' in the
plan output and can copy them out manually if any are worth
preserving.
- Does NOT change any behavior for rows that were already
reachable from the canonicalized read path. The fix only
affects legacy rows whose project_id points at a shadow row.
- Does NOT re-verify the earlier happy-path tests beyond the full
suite confirming they are still green.