Document ecosystem state and integration contract

2026-04-05 18:47:40 -04:00
parent 6bfa1fcc37
commit 440fc1d9ba
4 changed files with 290 additions and 0 deletions


@@ -0,0 +1,100 @@
# AtoCore Ecosystem And Hosting
## Purpose
This document defines the intended boundaries between the Ato ecosystem layers
and the current hosting model.
## Ecosystem Roles
- `AtoCore`
  - runtime, ingestion, retrieval, memory, context builder, API
  - owns the machine-memory and context assembly system
- `AtoMind`
  - future intelligence layer
  - will own promotion, reflection, conflict handling, and trust decisions
- `AtoVault`
  - human-readable memory source
  - intended for Obsidian and manual inspection/editing
- `AtoDrive`
  - trusted operational project source
  - curated project truth with higher trust than general notes
## Trust Model
Current intended trust precedence:
1. Trusted Project State
2. AtoDrive artifacts
3. Recent validated memory
4. AtoVault summaries
5. PKM chunks
6. Historical or low-confidence material
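In code, this precedence reduces to a simple ordering key. The following Python sketch is illustrative only; the tier names and the candidate record shape are hypothetical, not AtoCore's actual types:

```python
from enum import IntEnum

class TrustTier(IntEnum):
    """Hypothetical tiers mirroring the precedence list above (lower = more trusted)."""
    TRUSTED_PROJECT_STATE = 1
    ATODRIVE_ARTIFACT = 2
    RECENT_VALIDATED_MEMORY = 3
    ATOVAULT_SUMMARY = 4
    PKM_CHUNK = 5
    HISTORICAL_LOW_CONFIDENCE = 6

def rank_candidates(candidates):
    """Order retrieval candidates by trust tier first, then by score within a tier."""
    return sorted(candidates, key=lambda c: (c["tier"], -c["score"]))
```

A high-scoring PKM chunk still ranks below a lower-scoring AtoDrive artifact under this key, which is the intended behavior.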
## Storage Boundaries
Human-readable source layers and machine operational storage must remain
separate.
- `AtoVault` is a source layer, not the live vector database
- `AtoDrive` is a source layer, not the live vector database
- machine operational state includes:
  - SQLite database
  - vector store
  - indexes
  - embeddings
  - runtime metadata
  - cache and temp artifacts
The machine database is derived operational state, not the primary
human-readable source of truth.
## Canonical Hosting Model
Dalidou is the canonical host for the AtoCore service and machine database.
OpenClaw on the T420 should consume AtoCore through its network API, ideally over
Tailscale or another trusted internal network path.
The live SQLite and vector store must not be treated as a multi-node synced
filesystem. The architecture should prefer one canonical running service over
file replication of the live machine store.
## Canonical Dalidou Layout
```text
/srv/storage/atocore/
  app/        # deployed AtoCore repository
  data/       # canonical machine state
    db/
    chroma/
    cache/
    tmp/
  sources/    # human-readable source inputs
    vault/
    drive/
  logs/
  backups/
  run/
```
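The tree above can be scaffolded idempotently. A minimal sketch, assuming only the directory names from the layout (the `scaffold` helper itself is hypothetical, not part of AtoCore):

```python
from pathlib import Path

# Leaf subdirectories from the canonical layout above.
LAYOUT = [
    "app",
    "data/db",
    "data/chroma",
    "data/cache",
    "data/tmp",
    "sources/vault",
    "sources/drive",
    "logs",
    "backups",
    "run",
]

def scaffold(root: str) -> None:
    """Create the canonical directory tree under `root`; safe to re-run."""
    for rel in LAYOUT:
        Path(root, rel).mkdir(parents=True, exist_ok=True)
```

Running it against `/srv/storage/atocore` on Dalidou would be a one-liner; `exist_ok=True` keeps repeat runs harmless.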
## Operational Rules
- source directories are treated as read-only by the AtoCore runtime
- Dalidou holds the canonical machine DB
- OpenClaw should use AtoCore as an additive context service
- OpenClaw must continue to work if AtoCore is unavailable
- write-back from OpenClaw into AtoCore is deferred until later phases
## Current Status
As of the current implementation pass:
- the AtoCore runtime is deployed on Dalidou
- the canonical machine-data layout exists on Dalidou
- the service is running from Dalidou
- the long-term content corpus still needs to be populated into the live
Dalidou instance
This means the platform is hosted on Dalidou now, while the live content corpus
is only partially initialized and not yet fully ingested.

docs/current-state.md Normal file

@@ -0,0 +1,74 @@
# AtoCore Current State
## Status Summary
AtoCore is no longer just a proof of concept. The local engine exists, the
correctness pass is complete, and Dalidou now hosts the canonical runtime and
machine-storage location.
## Phase Assessment
- completed
  - Phase 0
  - Phase 0.5
  - Phase 1
    - baseline complete
  - Phase 2
  - Phase 3
  - Phase 5
  - Phase 7
- partial
  - Phase 4
- not started
  - Phase 6
  - Phase 8
  - Phase 9
  - Phase 10
  - Phase 11
  - Phase 12
  - Phase 13
## What Exists Today
- ingestion pipeline
- parser and chunker
- SQLite-backed memory and project state
- vector retrieval
- context builder
- API routes for query, context, health, and source status
- env-driven storage and deployment paths
- Dalidou Docker deployment foundation
## What Is True On Dalidou
- deployed repo location:
  - `/srv/storage/atocore/app`
- canonical machine DB location:
  - `/srv/storage/atocore/data/db/atocore.db`
- canonical vector store location:
  - `/srv/storage/atocore/data/chroma`
- source input locations:
  - `/srv/storage/atocore/sources/vault`
  - `/srv/storage/atocore/sources/drive`
The service and storage foundation are live on Dalidou.
The machine-data host is real and canonical.
The content corpus is not fully populated yet. A fresh or near-fresh live DB is
running there until the ingestion pipeline loads the ecosystem docs and project
content.
## Immediate Next Focus
1. Ingest AtoCore ecosystem and planning docs into the Dalidou instance
2. Define the OpenClaw integration contract clearly
3. Wire OpenClaw to consume AtoCore read-only over network
4. Ingest selected project content in a controlled way
## Guiding Constraints
- bad memory is worse than no memory
- trusted project state must remain highest priority
- human-readable sources and machine storage stay separate
- OpenClaw integration must not degrade OpenClaw baseline behavior


@@ -9,6 +9,8 @@ Deploy AtoCore on Dalidou as the canonical runtime and machine-memory host.
- OpenClaw on the T420 consumes AtoCore over network/Tailscale API.
- `sources/vault` and `sources/drive` are read-only inputs by convention.
- SQLite/Chroma machine state stays on Dalidou and is not treated as a sync peer.
- The app and machine-storage host can be live before the long-term content
corpus is fully populated.
## Directory layout
@@ -75,3 +77,15 @@ curl http://127.0.0.1:8100/sources
- reverse proxy / TLS exposure
- automated source ingestion job
- OpenClaw client wiring
## Current Reality Check
When this deployment is first brought up, the service may be healthy before the
real corpus has been ingested.
That means:
- the AtoCore system itself can already be hosted on Dalidou
- the canonical machine-data location can already be on Dalidou
- but the live knowledge/content corpus may still be empty or only partially
loaded until source ingestion is run


@@ -0,0 +1,102 @@
# OpenClaw Integration Contract
## Purpose
This document defines the first safe integration contract between OpenClaw and
AtoCore.
The goal is to let OpenClaw consume AtoCore as an external context service
without degrading OpenClaw's existing baseline behavior.
## Integration Principles
- OpenClaw remains the runtime and orchestration layer
- AtoCore remains the context enrichment layer
- AtoCore is optional at runtime
- if AtoCore is unavailable, OpenClaw must continue operating normally
- initial integration is read-only
- OpenClaw should not automatically write memories, project state, or ingestion
updates during the first integration batch
## First Safe Responsibilities
OpenClaw may use AtoCore for:
- health and readiness checks
- context building for contextual prompts
- retrieval/query support
- project-state lookup when a project is detected
OpenClaw should not yet use AtoCore for:
- automatic memory write-back
- automatic reflection
- conflict resolution decisions
- replacing OpenClaw's own memory system
## First API Surface
OpenClaw should treat these as the initial contract:
- `GET /health`
  - check service readiness
- `GET /sources`
  - inspect source registration state
- `POST /context/build`
  - ask AtoCore for a budgeted context pack
- `POST /query`
  - use retrieval when useful
Additional project-state inspection can be added if needed, but the first
integration should stay small and resilient.
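As a sketch, the four routes above could be wrapped in one small read-only client. The request and response field names (`budget_tokens`, `top_k`, and so on) are assumptions, not a confirmed schema, and the injectable `opener` exists only to keep the sketch testable:

```python
import json
import urllib.request

class AtoCoreClient:
    """Minimal read-only client for the first API surface (payload shapes assumed)."""

    def __init__(self, base_url, timeout=2.0, opener=urllib.request.urlopen):
        self.base_url = base_url.rstrip("/")
        self.timeout = timeout
        self._open = opener  # injectable transport, useful for tests

    def _request(self, method, path, body=None):
        data = json.dumps(body).encode() if body is not None else None
        req = urllib.request.Request(
            self.base_url + path,
            data=data,
            headers={"Content-Type": "application/json"},
            method=method,
        )
        with self._open(req, timeout=self.timeout) as resp:
            return json.loads(resp.read())

    def health(self):
        return self._request("GET", "/health")

    def sources(self):
        return self._request("GET", "/sources")

    def build_context(self, query, budget_tokens=2000):
        # Field names here are hypothetical.
        return self._request("POST", "/context/build",
                             {"query": query, "budget_tokens": budget_tokens})

    def query(self, text, top_k=8):
        # Field names here are hypothetical.
        return self._request("POST", "/query", {"text": text, "top_k": top_k})
```

Keeping everything behind one `_request` helper means the timeout and error handling live in a single place, which matters for the failure behavior below.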
## Failure Behavior
OpenClaw must treat AtoCore as additive.
If AtoCore times out, returns an error, or is unavailable:
- OpenClaw should continue with its own normal baseline behavior
- no hard dependency should block the user's run
- no partially written AtoCore state should be assumed
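The fail-open rule can be captured in one small wrapper; this is a sketch of the contract, not OpenClaw's actual implementation:

```python
def fail_open(call, default=None):
    """Run an AtoCore call; on any failure return `default` instead of raising.

    Keeps AtoCore strictly additive: the user's run never blocks on it.
    """
    try:
        return call()
    except Exception:
        # Timeout, connection error, bad payload: all treated the same.
        return default
```

Every AtoCore call site in OpenClaw would go through something like this, so an outage degrades to "no extra context" rather than an error.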
## Suggested OpenClaw Configuration
OpenClaw should eventually expose configuration like:
- `ATOCORE_ENABLED`
- `ATOCORE_BASE_URL`
- `ATOCORE_TIMEOUT_MS`
- `ATOCORE_FAIL_OPEN`
Recommended first behavior:
- enabled only when configured
- low timeout
- fail open by default
- no write-back enabled
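A minimal sketch of loading that configuration with the recommended defaults; the concrete default values (for example the 1500 ms timeout) are illustrative assumptions:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class AtoCoreConfig:
    enabled: bool
    base_url: str
    timeout_ms: int
    fail_open: bool

def load_config(env=os.environ):
    """Read the suggested variables: enabled only when a base URL is
    configured, low timeout, fail open by default."""
    base_url = env.get("ATOCORE_BASE_URL", "")
    return AtoCoreConfig(
        enabled=env.get("ATOCORE_ENABLED", "1" if base_url else "0") == "1",
        base_url=base_url,
        timeout_ms=int(env.get("ATOCORE_TIMEOUT_MS", "1500")),
        fail_open=env.get("ATOCORE_FAIL_OPEN", "1") == "1",
    )
```

With an empty environment this yields a disabled, fail-open client, which matches the "additive only" principle.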
## Suggested Usage Pattern
1. OpenClaw receives a user request
2. OpenClaw decides whether the request is contextual enough to query AtoCore
3. If yes, OpenClaw calls AtoCore
4. If AtoCore returns usable context, OpenClaw includes it
5. If AtoCore fails or returns nothing useful, OpenClaw proceeds normally
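The steps above collapse into a single decision function; `is_contextual` and `fetch_context` are hypothetical hooks supplied by OpenClaw, not existing APIs, and `fetch_context` is assumed to already fail open (returning `None` on any error):

```python
def enrich_prompt(user_request, is_contextual, fetch_context):
    """Steps 1-5 above: decide, call, include on success, proceed otherwise."""
    if not is_contextual(user_request):       # step 2: not contextual enough
        return user_request
    context = fetch_context(user_request)     # steps 3-4: call AtoCore
    if not context:                           # step 5: failed or nothing useful
        return user_request
    return f"{context}\n\n{user_request}"     # include AtoCore context
```

Note the two early returns: the baseline prompt is always a valid outcome, so AtoCore can only add to a run, never break it.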
## Deferred Work
- memory promotion rules
- identity and preference write flows
- reflection loop
- automatic ingestion requests from OpenClaw
- write-back policy
- conflict-resolution integration
## Precondition Before Wider Ingestion
Before bulk ingestion of projects or ecosystem notes:
- the AtoCore service should be reachable from the T420
- the OpenClaw failure fallback path should be confirmed
- the initial contract should be documented and stable