Update current state and next steps docs
@@ -93,8 +93,10 @@ As of the current implementation pass:
 - the AtoCore runtime is deployed on Dalidou
 - the canonical machine-data layout exists on Dalidou
 - the service is running from Dalidou
-- the long-term content corpus still needs to be populated into the live
-  Dalidou instance
+- the T420/OpenClaw machine can reach AtoCore over network
+- a first read-only OpenClaw-side helper exists
+- the long-term content corpus still needs broader project and vault ingestion
 
-This means the platform is hosted on Dalidou now, while the live content corpus
-is only partially initialized and not yet fully ingested.
+This means the platform is hosted on Dalidou now, the first cross-machine
+integration path exists, and the live content corpus is partially populated but
+not yet fully ingested.
@@ -3,8 +3,9 @@
 ## Status Summary
 
 AtoCore is no longer just a proof of concept. The local engine exists, the
-correctness pass is complete, and Dalidou now hosts the canonical runtime and
-machine-storage location.
+correctness pass is complete, Dalidou now hosts the canonical runtime and
+machine-storage location, and the T420/OpenClaw side now has a safe read-only
+path to consume AtoCore.
 
 ## Phase Assessment
 
@@ -38,6 +39,8 @@ machine-storage location.
 - API routes for query, context, health, and source status
 - env-driven storage and deployment paths
 - Dalidou Docker deployment foundation
+- initial AtoCore self-knowledge corpus ingested on Dalidou
+- T420/OpenClaw read-only AtoCore helper skill
 
 ## What Is True On Dalidou
 
@@ -55,16 +58,37 @@ The service and storage foundation are live on Dalidou.
 
 The machine-data host is real and canonical.
 
-The content corpus is not fully populated yet. A fresh or near-fresh live DB is
-running there until the ingestion pipeline loads the ecosystem docs and project
-content.
+The content corpus is partially populated now.
+
+The Dalidou instance already contains:
+
+- AtoCore ecosystem and hosting docs
+- current-state and OpenClaw integration docs
+- Master Plan V3
+- Build Spec V1
+- trusted project-state entries for `atocore`
+
+The broader long-term corpus is still not fully populated yet. Wider project and
+vault ingestion remains a deliberate next step rather than something already
+completed.
+
+## What Is True On The T420
+
+- SSH access is working
+- OpenClaw workspace inspected at `/home/papa/clawd`
+- OpenClaw's own memory system remains unchanged
+- a read-only AtoCore integration skill exists in the workspace:
+  - `/home/papa/clawd/skills/atocore-context/`
+- the T420 can successfully reach Dalidou AtoCore over network/Tailscale
+- fail-open behavior has been verified for the helper path
 
 ## Immediate Next Focus
 
-1. Ingest AtoCore ecosystem and planning docs into the Dalidou instance
-2. Define the OpenClaw integration contract clearly
-3. Wire OpenClaw to consume AtoCore read-only over network
-4. Ingest selected project content in a controlled way
+1. Use the new T420-side AtoCore skill in real OpenClaw workflows
+2. Ingest selected active project sources in a controlled way
+3. Define the first broader AtoVault/AtoDrive ingestion batches
+4. Add backup/export strategy for Dalidou machine state
+5. Only later consider deeper automatic OpenClaw integration or write-back
 
 ## Guiding Constraints
 
60
docs/next-steps.md
Normal file
@@ -0,0 +1,60 @@
+# AtoCore Next Steps
+
+## Current Position
+
+AtoCore now has:
+
+- canonical runtime and machine storage on Dalidou
+- separated source and machine-data boundaries
+- initial self-knowledge ingested into the live instance
+- trusted project-state entries for AtoCore itself
+- a first read-only OpenClaw integration path on the T420
+
+## Immediate Next Steps
+
+1. Use the T420 `atocore-context` skill in real OpenClaw workflows
+   - confirm the ergonomics are good
+   - confirm the fail-open behavior remains acceptable in practice
+2. Ingest selected active projects only
+   - start with the current active project set
+   - prefer trusted operational/project sources first
+   - ingest broader PKM sources only after the trusted layer is loaded
+3. Review retrieval quality after the first real project ingestion batch
+   - check whether the top hits are useful
+   - check whether trusted project state remains dominant
+4. Define backup and export procedures for Dalidou
+   - SQLite snapshot/backup strategy
+   - Chroma backup or rebuild policy
+5. Keep deeper automatic runtime integration deferred until the read-only model
+   has proven value
+
+## Recommended Active Project Ingestion Order
+
+1. `p04-gigabit`
+2. `p05-interferometer`
+3. `p06-polisher`
+
+For each project:
+
+1. identify the matching AtoDrive/project-operational sources
+2. identify the matching PKM project folder(s)
+3. ingest the trusted/operational material first
+4. ingest broader notes second
+5. review retrieval quality before moving on
+
+## Deferred On Purpose
+
+- automatic write-back from OpenClaw into AtoCore
+- automatic memory promotion
+- reflection loop integration
+- replacing OpenClaw's own memory system
+- syncing the live machine DB between machines
+
+## Success Criteria For The Next Batch
+
+The next batch is successful if:
+
+- OpenClaw can use AtoCore naturally when context is needed
+- AtoCore answers correctly for the active project set
+- project ingestion remains controlled rather than noisy
+- the canonical Dalidou instance stays stable
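The SQLite snapshot step listed under "Define backup and export procedures" could be sketched with Python's stdlib online-backup API, which copies a live database consistently even while the service writes to it. The paths in the comment are illustrative assumptions, not the real Dalidou machine-data layout:

```python
import sqlite3
from pathlib import Path

def snapshot_sqlite(db_path: str, backup_path: str) -> None:
    """Copy a live SQLite database to a snapshot file.

    Uses sqlite3.Connection.backup, which performs a consistent
    page-by-page copy without requiring the service to stop.
    """
    Path(backup_path).parent.mkdir(parents=True, exist_ok=True)
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    src.backup(dst)
    dst.close()
    src.close()

# Illustrative paths only -- the real layout is not shown in this commit:
# snapshot_sqlite("/data/atocore/machine.db", "/backups/machine.db")
```

A cron-driven wrapper around something like this would cover the "snapshot/backup strategy" bullet; the Chroma side would need its own export or rebuild policy.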
@@ -8,6 +8,23 @@ AtoCore.
 The goal is to let OpenClaw consume AtoCore as an external context service
 without degrading OpenClaw's existing baseline behavior.
 
+## Current Implemented State
+
+The first safe integration foundation now exists on the T420 workspace:
+
+- OpenClaw's own memory system is unchanged
+- a local read-only helper skill exists at:
+  - `/home/papa/clawd/skills/atocore-context/`
+- the helper currently talks to the canonical Dalidou instance
+- the helper has verified:
+  - `health`
+  - `project-state`
+  - `query`
+  - fail-open fallback when AtoCore is unavailable
+
+This means the network and workflow foundation is working, even though deeper
+automatic integration into OpenClaw runtime behavior is still deferred.
+
 ## Integration Principles
 
 - OpenClaw remains the runtime and orchestration layer
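The fail-open fallback verified above can be illustrated with a minimal sketch: if AtoCore does not answer within a short timeout, the helper returns an empty context instead of raising, so OpenClaw continues on its own baseline behavior. The `/query` endpoint shape here is an assumption for illustration, not the documented API:

```python
import urllib.error
import urllib.parse
import urllib.request

def fetch_context(base_url: str, prompt: str, timeout: float = 2.0) -> str:
    """Query AtoCore for context, failing open to an empty string.

    Any network failure, timeout, or bad URL yields "" so the caller
    can proceed without AtoCore input (AtoCore stays purely additive).
    """
    # Assumed endpoint shape; the real route may differ.
    url = f"{base_url}/query?q={urllib.parse.quote(prompt)}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, OSError, ValueError):
        # Fail open: no context, but no crash either.
        return ""
```

With an unreachable host this returns an empty string rather than propagating an exception, which matches the "additive, never blocking" principle stated below.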
@@ -50,6 +67,18 @@ OpenClaw should treat these as the initial contract:
 Additional project-state inspection can be added if needed, but the first
 integration should stay small and resilient.
 
+## Current Helper Surface
+
+The current helper script exposes:
+
+- `health`
+- `sources`
+- `stats`
+- `project-state <project>`
+- `query <prompt> [top_k]`
+- `context-build <prompt> [project] [budget]`
+- `ingest-sources`
+
 ## Failure Behavior
 
 OpenClaw must treat AtoCore as additive.
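As a non-authoritative sketch of how a subcommand surface like the one above could map onto HTTP routes (the endpoint paths, the default `top_k`, and the function itself are assumptions for illustration; the real helper script is not shown in this commit):

```python
from urllib.parse import quote

def build_url(base: str, cmd: str, *args: str) -> str:
    """Map a hypothetical helper subcommand to an API URL."""
    # Simple no-argument subcommands.
    routes = {
        "health": "/health",
        "sources": "/sources",
        "stats": "/stats",
    }
    if cmd in routes:
        return base + routes[cmd]
    if cmd == "project-state":
        # project-state <project>
        return f"{base}/project-state/{quote(args[0])}"
    if cmd == "query":
        # query <prompt> [top_k]; default top_k is an illustrative choice
        top_k = args[1] if len(args) > 1 else "5"
        return f"{base}/query?q={quote(args[0])}&top_k={top_k}"
    raise ValueError(f"unknown subcommand: {cmd}")

# build_url("http://dalidou:8000", "project-state", "atocore")
# -> "http://dalidou:8000/project-state/atocore"
```

Keeping the mapping this small is one way to honor the "stay small and resilient" constraint stated above.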
@@ -86,6 +115,7 @@ Recommended first behavior:
 
 ## Deferred Work
 
 - deeper automatic runtime wiring inside OpenClaw itself
 - memory promotion rules
 - identity and preference write flows
+- reflection loop