AtoCore Backup Strategy
Purpose
This document describes the current backup baseline for the Dalidou-hosted AtoCore machine store.
The immediate goal is not fully automated disaster recovery; it is one safe, repeatable way to snapshot the most important writable state.
Current Backup Baseline
Today, the safest hot-backup targets are:
- SQLite machine database
- project registry JSON
- backup metadata describing what was captured
This is now supported by:
python -m atocore.ops.backup
What The Script Captures
The backup command creates a timestamped snapshot under:
ATOCORE_BACKUP_DIR/snapshots/<timestamp>/
It currently writes:
db/atocore.db - created with SQLite's backup API
config/project-registry.json - copied if it exists
backup-metadata.json - timestamp, paths, and backup notes
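The capture steps above can be sketched as a small Python function. This is an illustrative sketch, not the actual `atocore.ops.backup` implementation: the function name `snapshot`, its parameters, and the metadata fields are assumptions; only the directory layout, the use of SQLite's backup API, and the conditional registry copy come from this document.

```python
import json
import shutil
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

def snapshot(db_path: Path, registry_path: Path, backup_dir: Path) -> Path:
    """Hypothetical sketch: capture DB, registry, and metadata into a timestamped dir."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = backup_dir / "snapshots" / stamp
    (dest / "db").mkdir(parents=True)
    (dest / "config").mkdir()

    # Hot-copy the SQLite DB with the backup API, which is safe
    # even while another connection is writing.
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest / "db" / "atocore.db")
    with dst:
        src.backup(dst)
    src.close()
    dst.close()

    # Copy the project registry only if it exists, as the document describes.
    if registry_path.exists():
        shutil.copy2(registry_path, dest / "config" / "project-registry.json")

    # Record what was captured (field names here are assumptions).
    meta = {"timestamp": stamp, "db": str(db_path), "registry": str(registry_path)}
    (dest / "backup-metadata.json").write_text(json.dumps(meta, indent=2))
    return dest
```

The key design point is `src.backup(dst)`: unlike a plain file copy, the SQLite backup API produces a consistent snapshot of a database that may have active writers.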
What It Does Not Yet Capture
The current script does not hot-backup Chroma.
That is intentional.
For now, Chroma should be treated as one of:
- rebuildable derived state
- or something that needs a deliberate cold snapshot/export workflow
Until that workflow exists, do not rely on ad hoc live file copies of the vector store while the service is actively writing.
Dalidou Use
On Dalidou, the canonical machine paths are:
- DB:
/srv/storage/atocore/data/db/atocore.db
- registry:
/srv/storage/atocore/config/project-registry.json
- backups:
/srv/storage/atocore/backups
So a normal backup run should happen on Dalidou itself, not from another machine.
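Since a backup run only makes sense on Dalidou itself, a small preflight check can catch runs started on the wrong machine or against missing paths. This is a hypothetical helper, not part of the current script; the function name and return shape are assumptions, while the default paths are the canonical ones listed above.

```python
from pathlib import Path

# Canonical Dalidou paths, copied from this document.
DB_PATH = Path("/srv/storage/atocore/data/db/atocore.db")
REGISTRY_PATH = Path("/srv/storage/atocore/config/project-registry.json")
BACKUP_DIR = Path("/srv/storage/atocore/backups")

def preflight(db: Path = DB_PATH, backup_dir: Path = BACKUP_DIR) -> list:
    """Hypothetical check: return a list of problems; empty means safe to back up."""
    problems = []
    if not db.is_file():
        problems.append("machine DB missing: %s" % db)
    if not backup_dir.is_dir():
        problems.append("backup dir missing: %s" % backup_dir)
    return problems
```

On any machine other than Dalidou these paths should not resolve, so the check doubles as a guard against running the backup from a client like OpenClaw.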
Next Backup Improvements
- decide the Chroma policy clearly: rebuild, cold snapshot, or export
- add a simple scheduled backup routine on Dalidou
- add retention policy for old snapshots
- optionally add a restore validation check
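The retention-policy item above could start as small as this. A minimal sketch, assuming snapshots live under `ATOCORE_BACKUP_DIR/snapshots/` with the timestamped names this document describes; the function name `prune_snapshots` and the `keep` default are assumptions, not existing tooling.

```python
import shutil
from pathlib import Path

def prune_snapshots(backup_dir: Path, keep: int = 7) -> list:
    """Hypothetical retention sketch: delete all but the `keep` newest snapshots."""
    snaps = sorted(p for p in (backup_dir / "snapshots").iterdir() if p.is_dir())
    # Timestamped directory names sort lexicographically, so the oldest come first.
    doomed = snaps[:-keep] if keep else snaps
    for p in doomed:
        shutil.rmtree(p)
    return doomed
```

Returning the deleted paths makes the routine easy to log when it eventually runs on a schedule.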
Healthy Rule
Do not design around syncing the live machine DB/vector store between machines.
Back up the canonical Dalidou state. Restore from Dalidou state. Keep OpenClaw as a client of AtoCore, not a storage peer.