Commit Graph

27 Commits

368adf2ebc docs(arch): tool-handoff-boundaries + representation-authority
Session 3 of the four-session plan. Two more engineering planning
docs that lock in the most contentious architectural decisions
before V1 implementation begins.

docs/architecture/tool-handoff-boundaries.md
--------------------------------------------
Locks in the V1 read/write relationship with external tools:

- AtoCore is a one-way mirror in V1. External tools push,
  AtoCore reads, AtoCore never writes back.
- Per-tool stance table covering KB-CAD, KB-FEM, NX, PKM, Gitea
  repos, OpenClaw, AtoDrive, PLM/vendor systems
- Two new ingest endpoints proposed for V1:
  POST /ingest/kb-cad/export and POST /ingest/kb-fem/export
- Sketch JSON shapes for both exports (intentionally minimal,
  to be refined in dedicated schema docs during implementation)
- Drift handling: KB-CAD changes a value -> creates an entity
  candidate -> existing active becomes a conflict member ->
  human resolves via the conflict model
- Hard-line invariants V1 will not cross: no write to external
  tools, no live polling, no silent merging, no schema fan-out,
  no external-tool-specific logic in entity types
- Why not bidirectional: schema drift, conflict semantics, trust
  hierarchy, velocity, reversibility
- V2+ deferred items: selective write-back annotations, light
  polling, direct NX integration, cost/vendor/PLM connections
- Open questions for the implementation sprint: schema location,
  who runs the exporter, full-vs-incremental, exporter auth

docs/architecture/representation-authority.md
---------------------------------------------
The canonical-home matrix that says where each kind of fact
actually lives:

- Six representation layers identified: PKM, KB project,
  Gitea repos, AtoCore memories, AtoCore entities, AtoCore
  project_state
- The hard rule: every fact kind has exactly one canonical
  home; other layers may hold derived copies but never disagree
- Comprehensive matrix covering 22 fact kinds (CAD geometry,
  CAD-side structure, FEM mesh, FEM results, code, repo docs,
  PKM prose, identity, preference, episodic, decision,
  requirement, constraint, validation claim, material,
  parameter, project status, ADRs, runbooks, backup metadata,
  interactions)
- Cross-layer supremacy rule: project_state > tool-of-origin >
  entities > active memories > source chunks
- Three worked examples showing how the rules apply:
  * "what material does the lateral support pad use?" (KB-CAD
    canonical, project_state override possible)
  * "did we decide to merge the bind mounts?" (Gitea + memory
    both canonical for different aspects)
  * "what's p05's current next focus?" (project_state always
    wins for current state queries)
- Concrete consequences for V1 implementation: Material and
  Parameter are mostly KB-CAD shadows; Decisions / Requirements /
  Constraints / ValidationClaims are AtoCore-canonical; PKM is
  never authoritative; project_state is the override layer;
  the conflict model is the enforcement mechanism
- Out of scope for V1: facts about other people, vendor/cost
  facts, time-bounded facts, cross-project shared facts
- Open questions for V1: how the reviewer sees canonical home
  in the UI, whether entities need an explicit canonical_home
  field, how project_state overrides surface in query results
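The cross-layer supremacy rule reduces to a precedence lookup. A minimal sketch, assuming a fact can carry one candidate value per layer (layer names follow the doc; the function shape and dict representation are illustrative, not the actual AtoCore API):

```python
# Precedence order from the supremacy rule:
# project_state > tool-of-origin > entities > active memories > source chunks
SUPREMACY = [
    "project_state",    # override layer, always wins for current-state queries
    "tool_of_origin",   # e.g. KB-CAD for CAD geometry
    "entities",
    "active_memories",
    "source_chunks",
]

def resolve_fact(claims):
    """Return the value from the highest-precedence layer that holds one."""
    for layer in SUPREMACY:
        if layer in claims:
            return claims[layer]
    return None
```

For the material example above: if entities shadow KB-CAD with "Al 6061" but project_state records an override to "Ti-6Al-4V", the override wins.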

This is pure doc work. No code, no schema, no behavior changes.
After this commit the engineering planning sprint is 6 of 8 docs
done — only human-mirror-rules and engineering-v1-acceptance
remain.
2026-04-07 06:50:56 -04:00
a637017900 slash command for daily AtoCore use + backup-restore procedure
Session 2 of the four-session plan. Lands two operational pieces:
the Claude Code slash command that makes AtoCore reachable from
inside any Claude Code session, and the full backup/restore
procedure doc that turns the backup endpoint code into a real
operational drill.

Slash command (.claude/commands/atocore-context.md)
---------------------------------------------------
- Project-level slash command following the standard frontmatter
  format (description + argument-hint)
- Parses the user prompt and an optional trailing project id, with
  case-insensitive matching against the registered project ids
  (atocore, p04-gigabit, p05-interferometer, p06-polisher and
  their aliases)
- Calls POST /context/build on the live AtoCore service, defaulting
  to http://dalidou:8100 (overridable via ATOCORE_API_BASE env var)
- Renders the formatted context pack inline so the user can see
  exactly what AtoCore would feed an LLM, plus a stats banner and a
  per-chunk source list
- Includes graceful failure handling for network errors, 4xx, 5xx,
  and the empty-result case
- Defines a future capture path that POSTs to /interactions for the
  Phase 9 reflection loop. The current command leaves capture as
  manual / opt-in pending a clean post-turn hook design
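The request the slash command composes can be sketched as follows. The endpoint, default base URL, and ATOCORE_API_BASE variable come from the commit; the helper name and the payload field names ("query", "project") are assumptions for illustration:

```python
import json
import os

def build_context_request(prompt, project=None):
    """Compose the POST /context/build call the slash command issues.

    Honours ATOCORE_API_BASE, falling back to the live service default.
    Payload field names are illustrative, not the confirmed schema.
    """
    base = os.environ.get("ATOCORE_API_BASE", "http://dalidou:8100")
    payload = {"query": prompt}
    if project:
        # project ids match case-insensitively against the registry
        payload["project"] = project.lower()
    return f"{base}/context/build", json.dumps(payload)
```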

.gitignore changes
------------------
- Replaced the wholesale .claude/ ignore with .claude/* + exceptions
  for .claude/commands/ so project slash commands can be tracked
- Other .claude/* paths (worktrees, settings, local state) remain
  ignored

Backup-restore procedure (docs/backup-restore-procedure.md)
-----------------------------------------------------------
- Defines what gets backed up (SQLite + registry always, Chroma
  optional under ingestion lock) and what doesn't (sources, code,
  logs, cache, tmp)
- Documents the snapshot directory layout and the timestamp format
- Three trigger paths in priority order:
  - via POST /admin/backup with {include_chroma: true|false}
  - via the standalone src/atocore/ops/backup.py module
  - via cold filesystem copy with brief downtime as last resort
- Listing and validation procedure with the /admin/backup and
  /admin/backup/{stamp}/validate endpoints
- Full step-by-step restore procedure with mandatory pre-flight
  safety snapshot, ownership/permission requirements, and the
  post-restore verification checks
- Rollback path using the pre-restore safety copy
- Retention policy (last 7 daily / 4 weekly / 6 monthly) and
  explicit acknowledgment that the cleanup job is not yet
  implemented
- Drill schedule: quarterly full restore drill, post-migration
  drill, post-incident validation
- Common failure mode table with diagnoses
- Quickstart cheat sheet at the end for daily reference
- Open follow-ups: cleanup script, off-Dalidou target,
  encryption, automatic post-backup validation, incremental
  Chroma snapshots
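Since the cleanup job is the acknowledged gap, here is a minimal sketch of the 7-daily / 4-weekly / 6-monthly selection it would implement. The 'YYYYMMDD-HHMMSS' stamp format is an assumption for illustration:

```python
from datetime import datetime

def snapshots_to_keep(stamps):
    """Return the set of snapshot stamps the retention policy preserves:
    last 7 daily, latest per ISO week for 4 weeks, latest per month for 6.
    Stamp format 'YYYYMMDD-HHMMSS' is assumed, not the confirmed layout."""
    parsed = sorted(
        ((datetime.strptime(s, "%Y%m%d-%H%M%S"), s) for s in stamps),
        reverse=True,  # newest first
    )
    keep = {s for _, s in parsed[:7]}            # last 7 daily
    weekly, monthly = {}, {}
    for dt, s in parsed:
        weekly.setdefault(dt.isocalendar()[:2], s)   # newest per ISO week
        monthly.setdefault((dt.year, dt.month), s)   # newest per month
    keep |= set(list(weekly.values())[:4])       # 4 weekly
    keep |= set(list(monthly.values())[:6])      # 6 monthly
    return keep
```

Anything not in the returned set is a deletion candidate; the real job would also want a dry-run mode before it ever removes a snapshot.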

The procedure has not yet been exercised against the live Dalidou
instance — that is the next step the user runs themselves once
the slash command is in place.
2026-04-07 06:46:50 -04:00
d0ff8b5738 Merge origin/main into codex/dalidou-storage-foundation
Integrate codex/port-atocore-ops-client (operator client + operations
playbook) so the dalidou-storage-foundation branch can fast-forward
into main.

# Conflicts:
#	README.md
2026-04-07 06:20:19 -04:00
b9da5b6d84 phase9 first-real-use validation + small hygiene wins
Session 1 of the four-session plan. Empirically exercises the Phase 9
loop (capture -> reinforce -> extract) for the first time and lands
three small hygiene fixes.

Validation script + report
--------------------------
scripts/phase9_first_real_use.py — reproducible script that:
  - sets up an isolated SQLite + Chroma store under
    data/validation/phase9-first-use (gitignored)
  - seeds 3 active memories
  - runs 8 sample interactions through capture + reinforce + extract
  - prints what each step produced and reinforcement state at the end
  - supports --json output for downstream tooling

docs/phase9-first-real-use.md — narrative report of the run with:
  - extraction results table (8/8 expectations met exactly)
  - the empirical finding that REINFORCEMENT MATCHED ZERO seeds
    despite sample 5 clearly echoing the rebase preference memory
  - root cause analysis: the substring matcher is too brittle for
    natural paraphrases (e.g. "prefers" vs "I prefer", "history"
    vs "the history")
  - recommended fix: replace substring matcher with a token-overlap
    matcher (>=70% of memory tokens present in response, with
    light stemming and a small stop list)
  - explicit note that the fix is queued as a follow-up commit, not
    bundled into the report — keeps the audit trail clean
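The recommended token-overlap matcher can be sketched as below. The threshold and the structure (stop list, trailing-'s' stemming) come from the report's recommendation; the exact stop list and tokenization details are illustrative:

```python
def token_overlap_match(memory, response, threshold=0.7):
    """Reinforcement matcher sketch: >=70% of the memory's content tokens
    must appear in the response, with light stemming and a small stop list.
    Details (stop words, punctuation handling) are illustrative."""
    stop = {"i", "the", "a", "an", "to", "of", "and", "that", "is"}

    def tokens(text):
        words = (w.strip(".,!?\"'").lower() for w in text.split())
        # light stemming: drop a trailing 's' so "prefers" matches "prefer"
        return {w.rstrip("s") for w in words if w and w not in stop}

    mem = tokens(memory)
    if not mem:
        return False
    return len(mem & tokens(response)) / len(mem) >= threshold
```

This matches the failing case from the run: a seed phrased "I prefer rebase to keep history linear" now matches a response saying "prefers rebase to keep the history linear", which the substring matcher missed.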

Key extraction results from the run:
  - all 7 heading/sentence rules fired correctly
  - 0 false positives on the prose-only sample (the most important
    sanity check)
  - long content preserved without truncation
  - dedup correctly kept three distinct cues from one interaction
  - project scoping flowed cleanly through the pipeline

Hygiene 1: FastAPI lifespan migration (src/atocore/main.py)
- Replaced @app.on_event("startup") with the modern @asynccontextmanager
  lifespan handler
- Same setup work (setup_logging, ensure_runtime_dirs, init_db,
  init_project_state_schema, startup_ready log)
- Removes the two on_event deprecation warnings from every test run
- Test suite now shows 1 warning instead of 3
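The pattern the migration adopts, shown here without FastAPI itself so the sketch stays self-contained (in main.py the handler is passed as FastAPI(lifespan=lifespan); the attribute names below are illustrative):

```python
from contextlib import asynccontextmanager

@asynccontextmanager
async def lifespan(app):
    # startup: the same work the old @app.on_event("startup") hook did
    # (setup_logging, ensure_runtime_dirs, init_db, ...)
    app.started = True
    yield
    # shutdown work, if any, runs after the yield
    app.stopped = True
```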

Hygiene 2: EXTRACTOR_VERSION constant (src/atocore/memory/extractor.py)
- Added EXTRACTOR_VERSION = "0.1.0" with a versioned change log comment
- MemoryCandidate dataclass carries extractor_version on every candidate
- POST /interactions/{id}/extract response now includes extractor_version
  on both the top level (current run) and on each candidate
- Implements the versioning requirement called out in
  docs/architecture/promotion-rules.md so old candidates can be
  identified and re-evaluated when the rule set evolves

Hygiene 3: ~/.git-credentials cleanup (out-of-tree, not committed)
- Removed the dead OAUTH_USER:<jwt> line for dalidou:3000 that was
  being silently rewritten by the system credential manager on every
  push attempt
- Configured credential.http://dalidou:3000.helper with the empty-string
  sentinel pattern so the URL-specific helper chain is exactly
  ["", store] instead of inheriting the system-level "manager" helper
  that ships with Git for Windows
- Same fix for the 100.80.199.40 (Tailscale) entry
- Verified end to end: a fresh push using only the cleaned credentials
  file (no embedded URL) authenticates as Antoine and lands cleanly
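The empty-string sentinel relies on documented Git behavior: an empty helper value resets the inherited helper list, so only the helpers listed after it run. The resulting ~/.gitconfig section looks roughly like this (host from the commit; exact file layout illustrative):

```ini
[credential "http://dalidou:3000"]
	helper = ""
	helper = store
```

With this in place the URL-specific chain is exactly ["", store], and the system-level "manager" helper that Git for Windows installs never sees the dalidou URLs.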

Full suite: 160 passing (no change from previous), 1 warning
(was 3) thanks to the lifespan migration.
2026-04-07 06:16:35 -04:00
480f13a6df docs(arch): memory-vs-entities, promotion-rules, conflict-model
Three planning docs that answer the architectural questions the
engineering query catalog raised. Together with the catalog they
form roughly half of the pre-implementation planning sprint.

docs/architecture/memory-vs-entities.md
---------------------------------------
Resolves the central question blocking every other engineering
layer doc: is a Decision a memory or an entity?

Key decisions:
- memories stay the canonical home for identity, preference, and
  episodic facts
- entities become the canonical home for project, knowledge, and
  adaptation facts once the engineering layer V1 ships
- no concept lives in both layers at full fidelity; one canonical
  home per concept
- a "graduation" flow lets active memories upgrade into entities
  (memory stays as a frozen historical pointer, never deleted)
- one shared candidate review queue across both layers
- context builder budget gains a 15% slot for engineering entities,
  slotted between identity/preference memories and retrieved chunks
- the Phase 9 memory extractor's structural cues (decision heading,
  constraint heading, requirement heading) are explicitly an
  intentional temporary overlap, cleanly migrated via graduation
  when the entity extractor ships

docs/architecture/promotion-rules.md
------------------------------------
Defines the full Layer 0 → Layer 2 pipeline:

- four layers: L0 raw source, L1 memory candidate/active, L2 entity
  candidate/active, L3 trusted project state
- three extraction triggers: on interaction capture (existing),
  on ingestion wave (new, batched per wave), on explicit request
- per-rule prior confidence tuned at write time by structural
  signal (echoes the retriever's high/low signal hints) and
  freshness bonus
- batch cap of 50 candidates per pass to protect the reviewer
- full provenance requirements: every candidate carries rule id,
  source_chunk_id, source_interaction_id, and extractor_version
- reversibility matrix for every promotion step
- explicit no-auto-promotion-in-V1 stance with the schema designed
  so auto-promotion policies can be added later without migration
- the hard invariant: nothing ever moves into L3 automatically
- ingestion-wave extraction produces a report artifact under
  data/extraction-reports/<wave-id>/

docs/architecture/conflict-model.md
-----------------------------------
Defines how AtoCore handles contradictory facts without violating
the "bad memory is worse than no memory" rule.

- conflict = two or more active rows claiming the same slot with
  incompatible values
- per-type "slot key" tuples for both memory and entity types
- cross-layer conflict detection respects the trust hierarchy:
  trusted project state > active entities > active memories
- new conflicts and conflict_members tables (schema proposal)
- detection at two latencies: synchronous at write time,
  asynchronous nightly sweep
- "flag, never block" rule: writes always succeed, conflicts are
  surfaced via /conflicts, /health open_conflicts_count, per-row
  response bodies, and the Human Mirror's disputed marker
- resolution is always human: promote-winner + supersede-others,
  or dismiss-as-not-a-real-conflict, both with audit trail
- explicitly out of scope for V1: cross-project conflicts,
  temporal-overlap conflicts, tolerance-aware numeric comparisons
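The slot-key grouping at the heart of detection can be sketched as below. Rows and the (type, subject, slot) key shape are illustrative; the actual slot-key tuples are per-type, as the doc says:

```python
from collections import defaultdict

def detect_conflicts(active_rows):
    """Flag-never-block sketch: group active rows by slot key; any slot
    claimed with more than one distinct value yields a conflict whose
    members go to the review queue. Nothing is rejected or merged."""
    slots = defaultdict(list)
    for row in active_rows:
        slots[(row["type"], row["subject"], row["slot"])].append(row)
    return [
        members for members in slots.values()
        if len({m["value"] for m in members}) > 1
    ]
```

Two active Material rows claiming different values for the same part produce exactly one conflict with two members; rows agreeing on a slot produce none.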

Also updates:
- master-plan-status.md: Phase 9 moved from "started" to "baseline
  complete" now that Commits A, B, C are all landed
- master-plan-status.md: adds an "Engineering Layer Planning Sprint"
  section listing the doc wave so far and the remaining docs
  (tool-handoff-boundaries, human-mirror-rules,
  representation-authority, engineering-v1-acceptance)
- current-state.md: Phase 9 moved from "not started" to "baseline
  complete" with the A/B/C annotation

This is pure doc work. No code changes, no schema changes, no
behavior changes. Per the working rule in master-plan-status.md:
the architecture docs shape decisions, they do not force premature
schema work.
2026-04-06 21:30:35 -04:00
ceb129c7d1 Add operator client and operations playbook 2026-04-06 19:59:09 -04:00
2e449a4c33 docs(arch): engineering query catalog as the V1 driving target
First doc in the engineering-layer planning sprint. The premise of
this document is the inverse of the existing ontology doc: instead of
listing objects and seeing what they could do, we list the questions
we need to answer and let those drive what objects and relationships
must exist.

The rule established here:

> If a typed object or relationship does not serve at least one query
> in this catalog, it is not in V1.

Contents:

- 20 v1-required queries grouped into 5 tiers:
  - structure (Q-001..Q-004)
  - intent (Q-005..Q-009)
  - validation (Q-010..Q-012)
  - change/time (Q-013..Q-014)
  - cross-cutting (Q-016..Q-020)
- 3 v1-stretch queries (Q-021..Q-023)
- 4 v2 deferred queries (Q-024..Q-027) so V1 does not paint us into
  a corner

Each entry has: id, question, invocation, expected result shape,
required objects, required relationships, provenance requirement,
and tier.

Three queries are flagged as the "killer correctness" queries:
- Q-006 orphan requirements (engineering equivalent of untested code)
- Q-009 decisions based on flagged assumptions (catches fragile design)
- Q-011 validation claims with no supporting result (catches
  unevidenced claims)

The catalog ends with the implied implementation order for V1, the
list of object families intentionally deferred (BOM, manufacturing,
software, electrical, test correlation), and the open questions this
catalog raises for the next planning docs:

- when do orphan/unsupported queries flag (insert time vs query time)?
- when an Assumption flips, are dependent Decisions auto-flagged?
- does AtoCore block conflicts or always save-and-flag?
- is EVIDENCED_BY mandatory at insert?
- when does the Human Mirror regenerate?

These are the questions the next planning docs (memory-vs-entities,
conflict-model, promotion-rules) should answer before any engineering
layer code is written.

This is doc work only. No code, no schema, no behavior change.
Per the working rule in master-plan-status.md: the architecture docs
shape decisions, they do not force premature schema work.
2026-04-06 19:33:44 -04:00
ea3fed3d44 feat(phase9-A): interaction capture loop foundation
Phase 9 Commit A from the agreed plan: turn AtoCore from a stateless
context enhancer into a system that records what it actually fed to an
LLM and what came back. This is the audit trail Reflection (Commit B)
and Extraction (Commit C) will be layered on top of.

The interactions table existed in the schema since the original PoC
but nothing wrote to it. This change makes it real:

Schema migration (additive only):
- response                full LLM response (caller decides how much)
- memories_used           JSON list of memory ids in the context pack
- chunks_used             JSON list of chunk ids in the context pack
- client                  identifier of the calling system
                          (openclaw, claude-code, manual, ...)
- session_id              groups multi-turn conversations
- project                 project name (mirrors the memory module pattern,
                          no FK so capture stays cheap)
- indexes on session_id, project, created_at

The created_at column is now written explicitly with a SQLite-compatible
'YYYY-MM-DD HH:MM:SS' format so the same string lives in the DB and the
returned dataclass. Without this the `since` filter on list_interactions
would silently fail because CURRENT_TIMESTAMP and isoformat use different
shapes that do not compare cleanly as strings.
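The shape mismatch is easy to demonstrate: SQLite's CURRENT_TIMESTAMP writes 'YYYY-MM-DD HH:MM:SS' while datetime.isoformat() inserts a 'T' separator, so mixed formats do not order correctly as strings. Writing the explicit format restores clean string comparison:

```python
from datetime import datetime

SQLITE_FMT = "%Y-%m-%d %H:%M:%S"  # matches CURRENT_TIMESTAMP's shape

ts = datetime(2026, 4, 6, 19, 31, 43)
created_at = ts.strftime(SQLITE_FMT)
since = "2026-04-06 00:00:00"

assert created_at == "2026-04-06 19:31:43"
assert created_at >= since              # string compare now works
assert ts.isoformat() != created_at     # isoformat uses 'T', not ' '
```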

New module src/atocore/interactions/:
- Interaction dataclass
- record_interaction()    persists one round-trip (prompt required;
                          everything else optional). Refuses empty prompts.
- list_interactions()     filters by project / session_id / client / since,
                          newest-first, hard-capped at 500
- get_interaction()       fetch by id, full response + context pack

API endpoints:
- POST   /interactions                capture one interaction
- GET    /interactions                list with summaries (no full response)
- GET    /interactions/{id}           full record incl. response + pack
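A capture call body can be sketched from the schema above. Field names mirror the migration's column names, which is an assumption about the request schema; the helper itself is illustrative, not the service code:

```python
import json

ALLOWED = {"response", "memories_used", "chunks_used",
           "client", "session_id", "project"}

def capture_payload(prompt, **optional):
    """Compose a POST /interactions body. Only the prompt is required;
    empty/whitespace prompts are refused, mirroring record_interaction()."""
    if not prompt.strip():
        raise ValueError("prompt must not be empty")
    body = {"prompt": prompt}
    body.update({k: v for k, v in optional.items() if k in ALLOWED})
    return json.dumps(body)
```

A caller like the future slash-command capture path would pass client="claude-code" plus the context pack's memory and chunk ids.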

Trust model:
- Capture is read-only with respect to memories, project state, and
  source chunks. Nothing here promotes anything into trusted state.
- The audit trail becomes the dataset Commit B (reinforcement) and
  Commit C (extraction + review queue) will operate on.

Tests (13 new, all green):
- service: persist + roundtrip every field
- service: minimum-fields path (prompt only)
- service: empty / whitespace prompt rejected
- service: get by id returns None for missing
- service: filter by project, session, client
- service: ordering newest-first with limit
- service: since filter inclusive on cutoff (the bug the timestamp
  fix above caught)
- service: limit=0 returns empty
- API: POST records and round-trips through GET /interactions/{id}
- API: empty prompt returns 400
- API: missing id returns 404
- API: list filter returns summaries (not full response bodies)

Full suite: 118 passing (was 105).

master-plan-status.md updated to move Phase 9 from "not started" to
"started" with the explicit note that Commit A is in and Commits B/C
remain.
2026-04-06 19:31:43 -04:00
c9b9eede25 feat: tunable ranking, refresh status, chroma backup + admin endpoints
Three small improvements that move the operational baseline forward
without changing the existing trust model.

1. Tunable retrieval ranking weights
   - rank_project_match_boost, rank_query_token_step,
     rank_query_token_cap, rank_path_high_signal_boost,
     rank_path_low_signal_penalty are now Settings fields
   - all overridable via ATOCORE_* env vars
   - retriever no longer hard-codes 2.0 / 1.18 / 0.72 / 0.08 / 1.32
   - lets ranking be tuned per environment as Wave 1 is exercised
     without code changes
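The env-overridable defaults can be sketched with a plain dataclass. The field names and the hard-coded constants come from this commit; the pairing of each default to each field follows the listing order here and is an assumption, as is the exact ATOCORE_* variable naming (the real project uses a Settings class):

```python
import os
from dataclasses import dataclass, field

def _env(name, default):
    """Read a float weight from the environment, falling back to default."""
    return float(os.environ.get(name, default))

@dataclass
class RankWeights:
    project_match_boost: float = field(
        default_factory=lambda: _env("ATOCORE_RANK_PROJECT_MATCH_BOOST", 2.0))
    query_token_step: float = field(
        default_factory=lambda: _env("ATOCORE_RANK_QUERY_TOKEN_STEP", 1.18))
    query_token_cap: float = field(
        default_factory=lambda: _env("ATOCORE_RANK_QUERY_TOKEN_CAP", 0.72))
    path_high_signal_boost: float = field(
        default_factory=lambda: _env("ATOCORE_RANK_PATH_HIGH_SIGNAL_BOOST", 0.08))
    path_low_signal_penalty: float = field(
        default_factory=lambda: _env("ATOCORE_RANK_PATH_LOW_SIGNAL_PENALTY", 1.32))
```

Because the env vars are read at instantiation, a deployment can retune ranking per environment with no code change, which is the point of the commit.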

2. /projects/{name}/refresh status
   - refresh_registered_project now returns an overall status field
     ("ingested", "partial", "nothing_to_ingest") plus roots_ingested
     and roots_skipped counters
   - ProjectRefreshResponse advertises the new fields so callers can
     rely on them
   - covers the case where every configured root is missing on disk
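The status derivation reduces to a small pure function over the two counters. A sketch, assuming roots_ingested and roots_skipped fully partition the configured roots:

```python
def refresh_status(roots_ingested, roots_skipped):
    """Derive the overall refresh status: no roots ingested means
    nothing_to_ingest, a mix means partial, all ingested means ingested."""
    if roots_ingested == 0:
        return "nothing_to_ingest"
    return "partial" if roots_skipped else "ingested"
```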

3. Chroma cold snapshot + admin backup endpoints
   - create_runtime_backup now accepts include_chroma and writes a
     cold directory copy of the chroma persistence path
   - new list_runtime_backups() and validate_backup() helpers
   - new endpoints:
     - POST /admin/backup            create snapshot (optional chroma)
     - GET  /admin/backup            list snapshots
     - GET  /admin/backup/{stamp}/validate  structural validation
   - chroma snapshots are taken under exclusive_ingestion() so a refresh
     or ingest cannot race with the cold copy
   - backup metadata records what was actually included and how big
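The cold-copy-under-lock idea can be sketched as below. The exclusive_ingestion() name comes from the commit; its in-process lock implementation here, and the path handling, are illustrative:

```python
import shutil
import tempfile
import threading
from contextlib import contextmanager
from pathlib import Path

_ingestion_lock = threading.Lock()

@contextmanager
def exclusive_ingestion():
    """Stand-in for the real ingestion lock: while held, no refresh or
    ingest can mutate the chroma persistence directory."""
    with _ingestion_lock:
        yield

def snapshot_chroma(chroma_dir, backup_dir):
    """Cold directory copy of the chroma persistence path, taken under
    the ingestion lock so the copy cannot race a concurrent ingest."""
    dest = Path(backup_dir) / Path(chroma_dir).name
    with exclusive_ingestion():
        shutil.copytree(chroma_dir, dest)
    return dest
```

Taking the copy under the same lock the ingest path uses is what makes the snapshot "cold": the files cannot change mid-copy.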

Tests:
- 8 new tests covering tunable weights, refresh status branches
  (ingested / partial / nothing_to_ingest), chroma snapshot, list,
  validate, and the API endpoints (including the lock-acquisition path)
- existing fake refresh stubs in test_api_storage.py updated for the
  expanded ProjectRefreshResponse model
- full suite: 105 passing (was 97)

next-steps doc updated to reflect that the chroma snapshot + restore
validation gap from current-state.md is now closed in code; only the
operational retention policy remains.
2026-04-06 18:42:19 -04:00
bdb42dba05 Expand active project wave and serialize refreshes 2026-04-06 14:58:14 -04:00
46a5d5887a Update plan status for organic routing 2026-04-06 14:06:54 -04:00
9943338846 Document organic OpenClaw routing layer 2026-04-06 14:04:49 -04:00
4aa2b696a9 Document next-phase execution plan 2026-04-06 13:10:11 -04:00
af01dd3e70 Add engineering architecture docs 2026-04-06 12:45:28 -04:00
8f74cab0e6 Sync live project registry descriptions 2026-04-06 12:36:15 -04:00
06aa931273 Add project registry update flow 2026-04-06 12:31:24 -04:00
c9757e313a Harden runtime and add backup foundation 2026-04-06 10:15:00 -04:00
9715fe3143 Add project registration endpoint 2026-04-06 09:52:19 -04:00
827dcf2cd1 Add project registration policy and template 2026-04-06 08:46:37 -04:00
d8028f406e Sync live corpus counts in current state doc 2026-04-06 08:19:42 -04:00
8293099025 Add project registry refresh foundation 2026-04-06 08:02:13 -04:00
0f95415530 Clarify source staging and refresh model 2026-04-06 07:53:18 -04:00
82c7535d15 Refresh current state and next steps docs 2026-04-06 07:36:33 -04:00
8a94da4bf4 Clarify operating model and project corpus state 2026-04-06 07:25:33 -04:00
5069d5b1b6 Update current state and next steps docs 2026-04-05 19:12:45 -04:00
440fc1d9ba Document ecosystem state and integration contract 2026-04-05 18:47:40 -04:00
6bfa1fcc37 Add Dalidou storage foundation and deployment prep 2026-04-05 18:33:52 -04:00