Clarify source staging and refresh model

This commit is contained in:
2026-04-06 07:53:18 -04:00
parent 82c7535d15
commit 0f95415530
5 changed files with 192 additions and 6 deletions

@@ -49,6 +49,31 @@ separate.
The machine database is derived operational state, not the primary
human-readable source of truth.
## Source Snapshot vs Machine Store

The human-readable files visible under `sources/vault` or `sources/drive` are
not the final "smart storage" format of AtoCore.
They are source snapshots made visible to the canonical Dalidou instance so
AtoCore can ingest them.

The actual machine-processed state lives in:

- `source_documents`
- `source_chunks`
- vector embeddings and indexes
- project memories
- trusted project state
- context-builder output

This means the staged markdown can still look very similar to the original PKM
or repo docs. That is normal.

The intelligence does not come from rewriting everything into a new markdown
vault. It comes from ingesting selected source material into the machine store
and then using that store for retrieval, trust-aware context assembly, and
memory.
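The ingest flow described above can be sketched in Python. This is a hypothetical illustration, not AtoCore's actual implementation: the table names `source_documents` and `source_chunks` come from the list above, but `chunk_text`, `fake_embed`, and `ingest` are invented for this sketch, and the hash-based "embedding" is a deterministic placeholder for a real vector model.

```python
import hashlib

def chunk_text(text, max_chars=400):
    """Split a document into paragraph-aligned chunks (stand-in for real chunking)."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        if current and len(current) + len(p) + 2 > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}" if current else p
    if current:
        chunks.append(current)
    return chunks

def fake_embed(text, dim=8):
    """Placeholder embedding: deterministic values from a hash, NOT a real model."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dim]]

# The machine store, modeled here as plain dicts keyed like the tables above.
source_documents = {}
source_chunks = {}

def ingest(doc_id, staged_markdown):
    """Ingest one staged file into the machine store: record it, chunk it, embed it."""
    content_hash = hashlib.sha256(staged_markdown.encode()).hexdigest()
    source_documents[doc_id] = {"hash": content_hash, "text": staged_markdown}
    for i, chunk in enumerate(chunk_text(staged_markdown)):
        source_chunks[(doc_id, i)] = {"text": chunk, "embedding": fake_embed(chunk)}

ingest("vault/notes.md", "First paragraph.\n\nSecond paragraph.")
```

The point of the sketch is the direction of data flow: the staged markdown is read-only input, and everything queryable afterwards lives in the store, not in the files.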
## Canonical Hosting Model

Dalidou is the canonical host for the AtoCore service and machine database.
@@ -86,6 +111,14 @@ file replication of the live machine store.
- OpenClaw must continue to work if AtoCore is unavailable
- write-back from OpenClaw into AtoCore is deferred until later phases
Current staging behavior:

- selected project docs may be copied into a readable staging area on Dalidou
- AtoCore ingests from that staging area into the machine store
- the staging area is not itself the durable intelligence layer
- changes to the original PKM or repo source do not propagate automatically
  until a refresh or re-ingest happens
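Because changes do not propagate automatically, a refresh step has to detect drift between the staged copy and what was last ingested. A minimal sketch of that check, assuming the store records a content hash at ingest time (the function names `content_hash` and `needs_reingest` are hypothetical, not from AtoCore):

```python
import hashlib

def content_hash(text):
    """Hash of a staged document, recorded in the machine store at ingest time."""
    return hashlib.sha256(text.encode()).hexdigest()

def needs_reingest(recorded_hash, staged_text):
    """True when the staged copy has changed since the last ingest."""
    return recorded_hash != content_hash(staged_text)

# Hash recorded when this document was last ingested:
recorded = content_hash("original doc")
```

A refresh pass would run this comparison over every file in the staging area and re-ingest only the documents whose hashes no longer match.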
## Intended Daily Operating Model

The target workflow is: