# AtoCore Ecosystem And Hosting
## Purpose
This document defines the intended boundaries between the Ato ecosystem layers
and the current hosting model.
## Ecosystem Roles
- `AtoCore`
- runtime, ingestion, retrieval, memory, context builder, API
- owns the machine-memory and context assembly system
- `AtoMind`
- future intelligence layer
- will own promotion, reflection, conflict handling, and trust decisions
- `AtoVault`
- human-readable memory source
- intended for Obsidian and manual inspection/editing
- `AtoDrive`
- trusted operational project source
- curated project truth with higher trust than general notes
## Trust Model
Current intended trust precedence, highest first:
1. Trusted Project State
2. AtoDrive artifacts
3. Recent validated memory
4. AtoVault summaries
5. PKM chunks
6. Historical or low-confidence material
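The precedence above can be encoded as an ordered tier. The sketch below is illustrative, not the actual AtoCore implementation; the names `TrustTier` and `order_by_trust` are assumptions introduced here.

```python
from enum import IntEnum

class TrustTier(IntEnum):
    """Lower value = higher precedence when assembling context."""
    TRUSTED_PROJECT_STATE = 1
    ATODRIVE_ARTIFACT = 2
    RECENT_VALIDATED_MEMORY = 3
    ATOVAULT_SUMMARY = 4
    PKM_CHUNK = 5
    HISTORICAL_LOW_CONFIDENCE = 6

def order_by_trust(items):
    """Sort (tier, payload) pairs so higher-trust material comes first."""
    return sorted(items, key=lambda item: item[0])
```

Keeping the tiers as a single enum means the precedence lives in one place, so a later AtoMind promotion step could move material between tiers without touching retrieval code.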
## Storage Boundaries
Human-readable source layers and machine operational storage must remain
separate.
- `AtoVault` is a source layer, not the live vector database
- `AtoDrive` is a source layer, not the live vector database
- machine operational state includes:
- SQLite database
- vector store
- indexes
- embeddings
- runtime metadata
- cache and temp artifacts
The machine database is derived operational state, not the primary
human-readable source of truth.
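One way to enforce this boundary in code is a write guard over the path layout. This is a minimal sketch assuming the Dalidou layout described later in this document; `assert_write_allowed` and the dictionary names are hypothetical, not part of the AtoCore API.

```python
from pathlib import Path

ROOT = Path("/srv/storage/atocore")

# Read-only human-readable source layers (never written by the runtime).
SOURCE_ROOTS = {
    "vault": ROOT / "sources" / "vault",
    "drive": ROOT / "sources" / "drive",
}

# Derived machine operational state (rebuildable from the sources).
MACHINE_STATE = {
    "db": ROOT / "data" / "db",
    "vectors": ROOT / "data" / "chroma",
    "cache": ROOT / "data" / "cache",
    "tmp": ROOT / "data" / "tmp",
}

def assert_write_allowed(path: Path) -> None:
    """Refuse any write that would land inside a source layer."""
    for root in SOURCE_ROOTS.values():
        if path.is_relative_to(root):  # Python 3.9+
            raise PermissionError(f"{path} is inside read-only source layer {root}")
```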
## Source Snapshot Vs Machine Store
The human-readable files visible under `sources/vault` or `sources/drive` are
not the final "smart storage" format of AtoCore.
They are source snapshots made visible to the canonical Dalidou instance so
AtoCore can ingest them.
The actual machine-processed state lives in:
- `source_documents`
- `source_chunks`
- vector embeddings and indexes
- project memories
- trusted project state
- context-builder output
This means the staged markdown can still look very similar to the original PKM
or repo docs. That is normal.
The intelligence does not come from rewriting everything into a new markdown
vault. It comes from ingesting selected source material into the machine store
and then using that store for retrieval, trust-aware context assembly, and
memory.
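The ingest step from source snapshot to machine store might look roughly like the following. The table names `source_documents` and `source_chunks` come from the list above; the column layout, fixed-size chunking, and function name are illustrative assumptions, not the real schema.

```python
import hashlib
import sqlite3
from pathlib import Path

def ingest_markdown(db: sqlite3.Connection, path: Path, text: str,
                    chunk_size: int = 800) -> None:
    """Store one source file as a document row plus ordered chunk rows.
    Table names match the doc; columns here are illustrative."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS source_documents "
        "(id INTEGER PRIMARY KEY, path TEXT UNIQUE, sha256 TEXT)"
    )
    db.execute(
        "CREATE TABLE IF NOT EXISTS source_chunks "
        "(id INTEGER PRIMARY KEY, doc_id INTEGER, seq INTEGER, body TEXT)"
    )
    digest = hashlib.sha256(text.encode()).hexdigest()
    cur = db.execute(
        "INSERT OR REPLACE INTO source_documents (path, sha256) VALUES (?, ?)",
        (str(path), digest),
    )
    doc_id = cur.lastrowid
    for seq, start in enumerate(range(0, len(text), chunk_size)):
        db.execute(
            "INSERT INTO source_chunks (doc_id, seq, body) VALUES (?, ?, ?)",
            (doc_id, seq, text[start:start + chunk_size]),
        )
    db.commit()
```

Embedding and indexing would happen after this step, over the stored chunks; the markdown itself stays untouched in the source snapshot.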
## Canonical Hosting Model
Dalidou is the canonical host for the AtoCore service and machine database.
OpenClaw on the T420 should consume AtoCore over the network via its API,
ideally through Tailscale or another trusted internal network path.
The live SQLite and vector store must not be treated as a multi-node synced
filesystem. The architecture should prefer one canonical running service over
file replication of the live machine store.
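An OpenClaw-side consumer following this model could look like the sketch below. The endpoint URL, query parameter, and response shape are hypothetical; the point is the degradation behavior, matching the rule that OpenClaw must keep working when AtoCore is unavailable.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

# Hypothetical endpoint; the real AtoCore API shape may differ.
ATOCORE_URL = "http://dalidou.tailnet:8080/api/context"

def fetch_context(query: str, timeout: float = 2.0):
    """Ask AtoCore for assembled context; return None if it is unreachable
    or returns garbage, so the caller (OpenClaw) can continue without it."""
    url = f"{ATOCORE_URL}?q={urllib.parse.quote(query)}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError):
        return None
```

Because the function returns `None` rather than raising, the caller can treat AtoCore as strictly additive: use the context if it arrives, proceed without it otherwise.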
## Canonical Dalidou Layout
```text
/srv/storage/atocore/
  app/          # deployed AtoCore repository
  data/         # canonical machine state
    db/
    chroma/
    cache/
    tmp/
  sources/      # human-readable source inputs
    vault/
    drive/
  logs/
  backups/
  run/
```
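A minimal, idempotent way to materialize this tree on Dalidou (a sketch; ownership and permissions are deliberately left out):

```python
from pathlib import Path

LAYOUT = [
    "app",
    "data/db", "data/chroma", "data/cache", "data/tmp",
    "sources/vault", "sources/drive",
    "logs", "backups", "run",
]

def create_layout(root: Path) -> None:
    """Create the canonical directory tree; safe to re-run."""
    for rel in LAYOUT:
        (root / rel).mkdir(parents=True, exist_ok=True)
```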
## Operational Rules
- source directories are treated as read-only by the AtoCore runtime
- Dalidou holds the canonical machine DB
- OpenClaw should use AtoCore as an additive context service
- OpenClaw must continue to work if AtoCore is unavailable
- write-back from OpenClaw into AtoCore is deferred until later phases
Current staging behavior:
- selected project docs may be copied into a readable staging area on Dalidou
- AtoCore ingests from that staging area into the machine store
- the staging area is not itself the durable intelligence layer
- changes to the original PKM or repo source do not propagate automatically
until a refresh or re-ingest happens
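A refresh pass like the one described above needs to know which staged files changed since the last ingest. One simple approach, sketched below, compares content hashes against those recorded at ingest time; the function name and the `recorded` mapping are assumptions, not AtoCore internals.

```python
import hashlib
from pathlib import Path

def changed_files(staging_root: Path, recorded: dict[str, str]) -> list[Path]:
    """Return staged markdown files whose content hash differs from the
    hash recorded at last ingest (new files count as changed)."""
    changed = []
    for path in sorted(staging_root.rglob("*.md")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if recorded.get(str(path)) != digest:
            changed.append(path)
    return changed
```

Re-ingesting only the returned paths keeps a refresh cheap even once the staging area grows.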
## Intended Daily Operating Model
The target workflow is:
- the human continues to work primarily in PKM project notes, Git/Gitea repos,
Discord, and normal OpenClaw sessions
- OpenClaw keeps its own runtime behavior and memory system
- AtoCore acts as the durable external context layer that compiles trusted
project state, retrieval, and long-lived machine-readable context
- AtoCore improves prompt quality and robustness without replacing direct repo
work, direct file reads, or OpenClaw's own memory
In other words:
- PKM and repos remain the human-authoritative project sources
- OpenClaw remains the active operating environment
- AtoCore remains the compiled context engine and machine-memory host
## Current Status
As of the current implementation pass:
- the AtoCore runtime is deployed on Dalidou
- the canonical machine-data layout exists on Dalidou
- the service is running from Dalidou
- the T420/OpenClaw machine can reach AtoCore over network
- a first read-only OpenClaw-side helper exists
- the live corpus now includes initial AtoCore self-knowledge and a first
curated batch for active projects
- the long-term content corpus still needs broader project and vault ingestion
This means the platform is hosted on Dalidou now, the first cross-machine
integration path exists, and the live content corpus is partially populated but
not yet fully ingested.