# AtoCore Operating Model
## Purpose
This document makes the intended day-to-day operating model explicit.
The goal is not to replace how work already happens. The goal is to make that
existing workflow stronger by adding a durable context engine.
## Core Idea
Normal work continues in:
- PKM project notes
- Gitea repositories
- Discord and OpenClaw workflows
OpenClaw keeps:
- its own memory
- its own runtime and orchestration behavior
- its own workspace and direct file/repo tooling
AtoCore adds:
- trusted project state
- retrievable cross-source context
- durable machine memory
- context assembly that improves prompt quality and robustness
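To make "context assembly" concrete, here is a minimal sketch of what an assembled context bundle could look like. AtoCore's actual API is not specified in this document; the `ContextBundle` name, its fields, and the rendering format are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ContextBundle:
    """Hypothetical shape of context assembled by AtoCore for one query."""
    project_state: dict[str, str]  # trusted current project state
    documents: list[str]           # retrieved cross-source excerpts
    memory: list[str]              # durable machine-memory entries

    def to_prompt_block(self) -> str:
        """Render the bundle as a single prompt preamble."""
        lines = ["## Project state"]
        lines += [f"{k}: {v}" for k, v in self.project_state.items()]
        lines += ["## Retrieved context", *self.documents]
        lines += ["## Durable memory", *self.memory]
        return "\n".join(lines)

# Illustrative contents only; real entries would come from PKM, repos, etc.
bundle = ContextBundle(
    project_state={"phase": "design"},
    documents=["operating-model.md excerpt"],
    memory=["decision: keep OpenClaw memory separate"],
)
preamble = bundle.to_prompt_block()
```

The point of the sketch is the separation: trusted state, retrieved documents, and durable memory stay distinct inside the bundle, then collapse into one machine-readable preamble at prompt time.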
## Layer Responsibilities
- PKM and repos
  - human-authoritative project sources
  - where knowledge is created, edited, reviewed, and maintained
- OpenClaw
  - active operating environment
  - orchestration, direct repo work, messaging, agent workflows, local memory
- AtoCore
  - compiled context engine
  - durable machine-memory host
  - retrieval and context assembly layer
## Why This Architecture Works
Each layer has different strengths and weaknesses.
- PKM and repos are rich but noisy and manual to search
- OpenClaw memory is useful but session-shaped and not the whole project record
- raw LLM repo work is powerful but can miss trusted broader context
- AtoCore can compile context across sources and provide a better prompt input
The result should be:
- stronger prompts
- more robust outputs
- less manual reconstruction
- better continuity across sessions and models
## What AtoCore Should Not Replace
AtoCore should not replace:
- normal file reads
- direct repo search
- direct PKM work
- OpenClaw's own memory
- OpenClaw's runtime and tool behavior
It should supplement those systems.
## What Healthy Usage Looks Like
When working on a project:
1. OpenClaw still uses local workspace/repo context
2. OpenClaw still uses its own memory
3. AtoCore adds:
   - trusted current project state
   - retrieved project documents
   - cross-source project context
   - context assembly for more robust model prompts
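The three steps above can be sketched as a single prompt-assembly function. The function name, parameters, and section headers are assumptions for illustration; the document does not define how the layers are actually joined.

```python
def assemble_prompt(task: str,
                    workspace_ctx: str,
                    openclaw_memory: str,
                    atocore_ctx: str) -> str:
    """Combine the three context layers into one model prompt.

    workspace_ctx   - OpenClaw's local workspace/repo context (step 1)
    openclaw_memory - OpenClaw's own memory (step 2)
    atocore_ctx     - durable cross-source context from AtoCore (step 3)
    """
    sections = [
        ("Workspace context", workspace_ctx),
        ("OpenClaw memory", openclaw_memory),
        ("AtoCore context", atocore_ctx),
        ("Task", task),
    ]
    return "\n\n".join(f"### {title}\n{body}" for title, body in sections)

# Illustrative values only.
prompt = assemble_prompt(
    task="Update the deployment notes",
    workspace_ctx="repo: atocore, branch: main",
    openclaw_memory="last session touched docs/",
    atocore_ctx="project phase: design",
)
```

Note that AtoCore's contribution is additive: the first two sections exist with or without it, which matches the rule that AtoCore supplements rather than replaces the other layers.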
## Practical Rule
Think of AtoCore as the durable external context hard drive for LLM work:
- fast machine-readable context
- persistent project understanding
- stronger prompt inputs
- no need to replace the normal project workflow
That is the architecture target.