# AtoCore Operating Model
## Purpose
This document makes the intended day-to-day operating model explicit. The goal
is not to replace how work already happens, but to strengthen the existing
workflow by adding a durable context engine.
## Core Idea
Normal work continues in:
- PKM project notes
- Gitea repositories
- Discord and OpenClaw workflows
OpenClaw keeps:
- its own memory
- its own runtime and orchestration behavior
- its own workspace and direct file/repo tooling
AtoCore adds:
- trusted project state
- retrievable cross-source context
- durable machine memory
- context assembly that improves prompt quality and robustness
## Layer Responsibilities
- PKM and repos
  - human-authoritative project sources
  - where knowledge is created, edited, reviewed, and maintained
- OpenClaw
  - active operating environment
  - orchestration, direct repo work, messaging, agent workflows, local memory
- AtoCore
  - compiled context engine
  - durable machine-memory host
  - retrieval and context assembly layer
## Why This Architecture Works
Each layer has different strengths and weaknesses.
- PKM and repos are rich but noisy and manual to search
- OpenClaw memory is useful but session-shaped and not the whole project record
- raw LLM repo work is powerful but can miss trusted broader context
- AtoCore can compile context across sources and provide a better prompt input
The result should be:
- stronger prompts
- more robust outputs
- less manual reconstruction
- better continuity across sessions and models
## What AtoCore Should Not Replace
AtoCore should not replace:
- normal file reads
- direct repo search
- direct PKM work
- OpenClaw's own memory
- OpenClaw's runtime and tool behavior
It should supplement those systems.
## What Healthy Usage Looks Like
When working on a project:
1. OpenClaw still uses local workspace/repo context
2. OpenClaw still uses its own memory
3. AtoCore adds:
   - trusted current project state
   - retrieved project documents
   - cross-source project context
   - context assembly for more robust model prompts
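The layering above can be sketched as a single prompt-assembly step. Everything here is illustrative: `build_prompt` and its parameter names are hypothetical, not a real AtoCore or OpenClaw API — the point is only that AtoCore context is additive alongside the existing workspace and memory layers.

```python
def build_prompt(task: str,
                 workspace_context: str,
                 local_memory: str,
                 atocore_context: str) -> str:
    """Hypothetical sketch: combine the three context layers into one prompt.

    OpenClaw's workspace/repo context and its own memory stay first-class;
    AtoCore context is an additional layer, not a replacement for either.
    """
    sections = [
        ("Trusted project state (AtoCore)", atocore_context),
        ("Local session memory (OpenClaw)", local_memory),
        ("Workspace / repo context", workspace_context),
        ("Task", task),
    ]
    # Skip any layer that happens to be empty for this request.
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections if body)
```

The ordering is a design choice, not a requirement: trusted compiled state first, session memory next, raw workspace material last, so the model sees the most curated context before the noisiest.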
## Practical Rule
Think of AtoCore as the durable external context hard drive for LLM work:
- fast machine-readable context
- persistent project understanding
- stronger prompt inputs
- no need to replace the normal project workflow
That is the architecture target.
## Why The Staged Markdown Exists
The staged markdown on Dalidou is a source-input layer, not the end product of
the system.
In the current deployment model:
1. selected PKM, AtoDrive, or repo docs are copied or mirrored into a Dalidou
   source path
2. AtoCore ingests them
3. the machine store keeps the processed representation
4. retrieval and context building operate on that machine store
So if the staged docs look very similar to your original PKM notes, that is
expected. They are source material, not the compiled context layer itself.
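The four steps can be sketched in miniature. This is a toy sketch, not the real ingestion API: the function names, a JSON-file machine store, and substring matching all stand in for whatever AtoCore actually does (chunking, embedding, indexing); the staging and store paths are passed in rather than hard-coded because the real Dalidou paths are deployment-specific.

```python
import json
import shutil
from pathlib import Path

def stage(doc: Path, source_stage: Path) -> Path:
    """Step 1: copy a selected PKM/repo doc into the staging path."""
    source_stage.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(doc, source_stage / doc.name))

def ingest(staged: Path, machine_store: Path) -> Path:
    """Steps 2-3: process a staged doc into the machine store.

    'Processing' here is just wrapping the text with source metadata;
    a real engine would chunk, embed, and index it.
    """
    machine_store.mkdir(parents=True, exist_ok=True)
    record = {"source": staged.name, "text": staged.read_text()}
    out = machine_store / (staged.stem + ".json")
    out.write_text(json.dumps(record))
    return out

def retrieve(query: str, machine_store: Path) -> list[str]:
    """Step 4: retrieval operates on the machine store, never the sources."""
    hits = []
    for rec_path in machine_store.glob("*.json"):
        rec = json.loads(rec_path.read_text())
        if query.lower() in rec["text"].lower():
            hits.append(rec["source"])
    return hits
```

Note that `retrieve` never touches the original PKM note or repo file: once ingested, the machine store is the only thing queries see, which is why stale sources need the refresh workflow described next.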
## What Happens When A Source Changes
If you edit a PKM note or repo doc at its original source, AtoCore does not
automatically pick up the change.
The current model is refresh-based:
1. update the human-authoritative source
2. refresh or re-stage the relevant project source set on Dalidou
3. run ingestion again
4. let AtoCore update the machine representation
This is still an intermediate workflow. The long-run target is a cleaner source
registry and refresh model so that commands like `refresh p05-interferometer`
become natural and reliable.
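A command like that could be a thin wrapper over the refresh steps. Since neither the command nor the source registry exists yet, this is purely a sketch of the target shape: the registry format, the project id, and the paths are all hypothetical, and the staging/ingestion steps are passed in as plain callables.

```python
from pathlib import Path

# Hypothetical source registry: project id -> authoritative source docs to
# re-stage. The real registry format is still to be designed.
SOURCE_REGISTRY = {
    "p05-interferometer": [
        Path("~/pkm/projects/p05/overview.md"),
        Path("~/repos/p05/docs/architecture.md"),
    ],
}

def refresh(project: str, stage_fn, ingest_fn) -> int:
    """Re-stage and re-ingest every registered source for one project.

    stage_fn / ingest_fn stand in for the staging and ingestion steps;
    returns the number of docs refreshed.
    """
    count = 0
    for doc in SOURCE_REGISTRY.get(project, []):
        ingest_fn(stage_fn(doc.expanduser()))
        count += 1
    return count
```

The key property is that the registry, not the human, remembers which sources feed which project, so `refresh p05-interferometer` can be run without reconstructing the source list by hand.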
## Current Scope Of Ingestion
The current project corpus is intentionally selective, not exhaustive.
For active projects, the goal right now is to ingest:
- high-value anchor docs
- strong meeting notes with real decisions
- architecture and constraints docs
- selected repo context that explains the system shape
The goal is not to dump the entire PKM or whole repo tree into AtoCore on the
first pass.
So if a project's staged area holds only a few curated notes rather than the
full set of project documents, that is normal for the current phase.