AtoCore Ecosystem and Hosting

Purpose

This document defines the intended boundaries between the Ato ecosystem layers and the current hosting model.

Ecosystem Roles

  • AtoCore
    • runtime, ingestion, retrieval, memory, context builder, API
    • owns the machine-memory and context assembly system
  • AtoMind
    • future intelligence layer
    • will own promotion, reflection, conflict handling, and trust decisions
  • AtoVault
    • human-readable memory source
    • intended for Obsidian and manual inspection/editing
  • AtoDrive
    • trusted operational project source
    • curated project truth with higher trust than general notes

Trust Model

Current intended trust precedence (highest trust first):

  1. Trusted Project State
  2. AtoDrive artifacts
  3. Recent validated memory
  4. AtoVault summaries
  5. PKM chunks
  6. Historical or low-confidence material
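The precedence above can be sketched as an ordered enumeration. This is illustrative only: the identifiers below do not exist in AtoCore; the point is that ranking candidate context by trust tier reduces to a simple sort.

```python
from enum import IntEnum

# Hypothetical sketch: lower values win when the context builder
# ranks candidate chunks. Names mirror the precedence list above;
# none of these identifiers are part of the AtoCore codebase.
class TrustTier(IntEnum):
    TRUSTED_PROJECT_STATE = 1
    ATODRIVE_ARTIFACT = 2
    RECENT_VALIDATED_MEMORY = 3
    ATOVAULT_SUMMARY = 4
    PKM_CHUNK = 5
    HISTORICAL_LOW_CONFIDENCE = 6

def rank_chunks(chunks):
    """Sort (tier, text) pairs so higher-trust material comes first."""
    return sorted(chunks, key=lambda c: c[0])
```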

Storage Boundaries

Human-readable source layers and machine operational storage must remain separate.

  • AtoVault is a source layer, not the live vector database
  • AtoDrive is a source layer, not the live vector database
  • machine operational state includes:
    • SQLite database
    • vector store
    • indexes
    • embeddings
    • runtime metadata
    • cache and temp artifacts

The machine database is derived operational state, not the primary human-readable source of truth.
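One practical consequence of the source/derived split is that tooling can safely decide which paths may be wiped and rebuilt versus which must never be written. A minimal sketch, assuming the canonical Dalidou layout described in this document (the helper itself is not part of AtoCore):

```python
from pathlib import Path

# Paths follow the canonical Dalidou layout; illustrative only.
ROOT = Path("/srv/storage/atocore")
DERIVED = {ROOT / "data", ROOT / "logs", ROOT / "run"}  # rebuildable machine state

def is_derived(path: Path) -> bool:
    """True if path lives under derived machine state (safe to rebuild).

    Anything under sources/ is human-readable input and must be treated
    as read-only by the runtime.
    """
    return any(d == path or d in path.parents for d in DERIVED)
```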

Canonical Hosting Model

Dalidou is the canonical host for the AtoCore service and machine database.

OpenClaw on the T420 should consume AtoCore over the network via its API, ideally across Tailscale or another trusted internal network path.

The live SQLite database and vector store must not be treated as a multi-node synced filesystem. The architecture should prefer one canonical running service over file replication of the live machine store.

Canonical Dalidou Layout

/srv/storage/atocore/
  app/         # deployed AtoCore repository
  data/        # canonical machine state
    db/
    chroma/
    cache/
    tmp/
  sources/     # human-readable source inputs
    vault/
    drive/
  logs/
  backups/
  run/
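The layout above can be created with a few `mkdir -p` calls. The sketch below uses an overridable root so it can be dry-run outside Dalidou; ownership and permissions are deployment-specific and not shown.

```shell
# Create the canonical AtoCore layout. ATOCORE_ROOT defaults to the
# canonical Dalidou path; override it to stage the tree elsewhere.
ROOT="${ATOCORE_ROOT:-/srv/storage/atocore}"

mkdir -p "$ROOT"/{app,logs,backups,run}
mkdir -p "$ROOT"/data/{db,chroma,cache,tmp}
mkdir -p "$ROOT"/sources/{vault,drive}
```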

Operational Rules

  • source directories are treated as read-only by the AtoCore runtime
  • Dalidou holds the canonical machine DB
  • OpenClaw should use AtoCore as an additive context service
  • OpenClaw must continue to work if AtoCore is unavailable
  • write-back from OpenClaw into AtoCore is deferred until later phases
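The rules above imply that the OpenClaw-side helper must degrade gracefully. A minimal sketch of that property, assuming a hypothetical host name, port, and `/context` endpoint (none of these are a documented AtoCore API):

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint; "dalidou" stands in for a Tailscale hostname.
ATOCORE_URL = "http://dalidou:8080/context"

def fetch_context(query: str, timeout: float = 2.0) -> str:
    """Return AtoCore context for a query, or "" if the service is down.

    OpenClaw treats AtoCore as additive: any network or decode failure
    yields empty extra context instead of an error.
    """
    try:
        url = f"{ATOCORE_URL}?q={urllib.parse.quote(query)}"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp).get("context", "")
    except (OSError, ValueError):
        return ""
```

Because every failure path collapses to an empty string, OpenClaw sessions keep working unchanged when AtoCore is unreachable, which is exactly the availability rule stated above.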

Intended Daily Operating Model

The target workflow is:

  • the human continues to work primarily in PKM project notes, Git/Gitea repos, Discord, and normal OpenClaw sessions
  • OpenClaw keeps its own runtime behavior and memory system
  • AtoCore acts as the durable external context layer that compiles trusted project state, retrieval, and long-lived machine-readable context
  • AtoCore improves prompt quality and robustness without replacing direct repo work, direct file reads, or OpenClaw's own memory

In other words:

  • PKM and repos remain the human-authoritative project sources
  • OpenClaw remains the active operating environment
  • AtoCore remains the compiled context engine and machine-memory host

Current Status

As of the current implementation pass:

  • the AtoCore runtime is deployed on Dalidou
  • the canonical machine-data layout exists on Dalidou
  • the service is running from Dalidou
  • the T420/OpenClaw machine can reach AtoCore over network
  • a first read-only OpenClaw-side helper exists
  • the live corpus now includes initial AtoCore self-knowledge and a first curated batch for active projects
  • the long-term content corpus still needs broader project and vault ingestion

This means the platform is hosted on Dalidou now, the first cross-machine integration path exists, and the live content corpus is partially populated but not yet fully ingested.