feat: Phase 7I + UI refresh (capture form, memory/domain/activity pages, topnav)
Closes three gaps the user surfaced: (1) OpenClaw agents run blind
without AtoCore context, (2) mobile/desktop chats can't be captured
at all, (3) wiki UI hadn't kept up with backend capabilities.
Phase 7I — OpenClaw two-way bridge
- Plugin now calls /context/build on before_agent_start and prepends
the context pack to event.prompt, so whatever LLM runs underneath
(sonnet, opus, codex, local model) answers grounded in AtoCore
knowledge. Captured prompt stays the user's original text; fail-open
with a 5s timeout. Config-gated via injectContext flag.
- Plugin version 0.0.0 → 0.2.0; README rewritten.
UI refresh
- /wiki/capture — paste-to-ingest form for Claude Desktop / web / mobile
/ ChatGPT / other. Goes through normal /interactions pipeline with
client="claude-desktop|claude-web|claude-mobile|chatgpt|other".
Fixes the rotovap/mushroom-on-phone gap.
- /wiki/memories/{id} (Phase 7E) — full memory detail: content, status,
confidence, refs, valid_until, domain_tags (clickable to domain
pages), project link, source chunk, graduated-to-entity link, full
audit trail, related-by-tag neighbors.
- /wiki/domains/{tag} (Phase 7F) — cross-project view: all active
memories with the given tag grouped by project, sorted by count.
Case-insensitive, whitespace-tolerant. Also surfaces graduated
entities carrying the tag.
- /wiki/activity — autonomous-activity timeline feed. Summary chips
by action (created/promoted/merged/superseded/decayed/canonicalized)
and by actor (auto-dedup-tier1, auto-dedup-tier2, confidence-decay,
phase10-auto-promote, transient-to-durable, tag-canon, human-triage).
Answers "what has the brain been doing while I was away?"
- Home refresh: persistent topnav (Home · Activity · Capture · Triage
· Dashboard), "What the brain is doing" snippet above project cards
showing recent autonomous-actor counts, link to full activity.
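The capture form rides the same `/interactions` pipeline the plugins use. A minimal sketch of an equivalent manual submission; the payload fields (`prompt`, `response`, `client`, `reinforce`) are assumed to mirror the plugin's capture path, since the exact request schema is not shown in this commit:

```python
# Hypothetical manual capture, mimicking what /wiki/capture submits.
# BASE_URL is the plugin's default AtoCore host; adjust to the real API.
import json
import urllib.request

BASE_URL = "http://dalidou:8100"

payload = {
    "prompt": "How do I pack the rotovap trap for mushroom extract?",
    "response": "...assistant answer pasted from the phone...",
    "client": "claude-mobile",   # one of the capture-form client values
    "reinforce": False,          # conservative: capture without reinforcement
}

req = urllib.request.Request(
    f"{BASE_URL}/interactions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Fail open in the same spirit as the plugins: a capture miss is not fatal.
try:
    with urllib.request.urlopen(req, timeout=10) as res:
        print(res.status)
except OSError as err:
    print(f"capture skipped: {err}")
```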
Tests: +10 (capture page, memory detail + 404, domain cross-project +
empty + tag normalization, activity feed + groupings, home topnav,
superseded-source detail after merge). 440 → 450.
Known next: capture-browser extension for Claude.ai web (bigger
project, deferred); voice/mobile relay (adjacent).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -1,275 +1,49 @@
-# AtoCore Current State
-
-## Status Summary
-
-AtoCore is no longer just a proof of concept. The local engine exists, the
-correctness pass is complete, Dalidou now hosts the canonical runtime and
-machine-storage location, and the T420/OpenClaw side now has a safe read-only
-path to consume AtoCore. The live corpus is no longer just self-knowledge: it
-now includes a first curated ingestion batch for the active projects.
-
-## Phase Assessment
-
-- completed
-  - Phase 0
-  - Phase 0.5
-  - Phase 1
-    - baseline complete
-  - Phase 2
-  - Phase 3
-  - Phase 5
-  - Phase 7
-  - Phase 9 (Commits A/B/C: capture, reinforcement, extractor + review queue)
-- partial
-  - Phase 4
-  - Phase 8
-- not started
-  - Phase 6
-  - Phase 10
-  - Phase 11
-  - Phase 12
-  - Phase 13
-
-## What Exists Today
-
-- ingestion pipeline
-- parser and chunker
-- SQLite-backed memory and project state
-- vector retrieval
-- context builder
-- API routes for query, context, health, and source status
-- project registry and per-project refresh foundation
-- project registration lifecycle:
-  - template
-  - proposal preview
-  - approved registration
-  - safe update of existing project registrations
-  - refresh
-- implementation-facing architecture notes for:
-  - engineering knowledge hybrid architecture
-  - engineering ontology v1
-- env-driven storage and deployment paths
-- Dalidou Docker deployment foundation
-- initial AtoCore self-knowledge corpus ingested on Dalidou
-- T420/OpenClaw read-only AtoCore helper skill
-- full active-project markdown/text corpus wave for:
-  - `p04-gigabit`
-  - `p05-interferometer`
-  - `p06-polisher`
-
-## What Is True On Dalidou
-
-- deployed repo location:
-  - `/srv/storage/atocore/app`
-- canonical machine DB location:
-  - `/srv/storage/atocore/data/db/atocore.db`
-- canonical vector store location:
-  - `/srv/storage/atocore/data/chroma`
-- source input locations:
-  - `/srv/storage/atocore/sources/vault`
-  - `/srv/storage/atocore/sources/drive`
-
-The service and storage foundation are live on Dalidou.
-The machine-data host is real and canonical.
-
-The project registry is now also persisted in a canonical mounted config path on
-Dalidou:
-
-- `/srv/storage/atocore/config/project-registry.json`
-
-The content corpus is partially populated now.
-
-The Dalidou instance already contains:
-
-- AtoCore ecosystem and hosting docs
-- current-state and OpenClaw integration docs
-- Master Plan V3
-- Build Spec V1
-- trusted project-state entries for `atocore`
-- full staged project markdown/text corpora for:
-  - `p04-gigabit`
-  - `p05-interferometer`
-  - `p06-polisher`
-- curated repo-context docs for:
-  - `p05`: `Fullum-Interferometer`
-  - `p06`: `polisher-sim`
-- trusted project-state entries for:
-  - `p04-gigabit`
-  - `p05-interferometer`
-  - `p06-polisher`
-
-Current live stats after the full active-project wave are now far beyond the
-initial seed stage:
-
-- more than `1,100` source documents
-- more than `20,000` chunks
-- matching vector count
-
-The broader long-term corpus is still not fully populated yet. Wider project and
-vault ingestion remains a deliberate next step rather than something already
-completed, but the corpus is now meaningfully seeded beyond AtoCore's own docs.
-
-For human-readable quality review, the current staged project markdown corpus is
-primarily visible under:
-
-- `/srv/storage/atocore/sources/vault/incoming/projects`
-
-This staged area is now useful for review because it contains the markdown/text
-project docs that were actually ingested for the full active-project wave.
-
-It is important to read this staged area correctly:
-
-- it is a readable ingestion input layer
-- it is not the final machine-memory representation itself
-- seeing familiar PKM-style notes there is expected
-- the machine-processed intelligence lives in the DB, chunks, vectors, memory,
-  trusted project state, and context-builder outputs
-
-## What Is True On The T420
-
-- SSH access is working
-- OpenClaw workspace inspected at `/home/papa/clawd`
-- OpenClaw's own memory system remains unchanged
-- a read-only AtoCore integration skill exists in the workspace:
-  - `/home/papa/clawd/skills/atocore-context/`
-- the T420 can successfully reach Dalidou AtoCore over network/Tailscale
-- fail-open behavior has been verified for the helper path
-- OpenClaw can now seed AtoCore in two distinct ways:
-  - project-scoped memory entries
-  - staged document ingestion into the retrieval corpus
-- the helper now supports the practical registered-project lifecycle:
-  - projects
-  - project-template
-  - propose-project
-  - register-project
-  - update-project
-  - refresh-project
-- the helper now also supports the first organic routing layer:
-  - `detect-project "<prompt>"`
-  - `auto-context "<prompt>" [budget] [project]`
-- OpenClaw can now default to AtoCore for project-knowledge questions without
-  requiring explicit helper commands from the human every time
-
-## What Exists In Memory vs Corpus
-
-These remain separate and that is intentional.
-
-In `/memory`:
-
-- project-scoped curated memories now exist for:
-  - `p04-gigabit`: 5 memories
-  - `p05-interferometer`: 6 memories
-  - `p06-polisher`: 8 memories
-
-These are curated summaries and extracted stable project signals.
-
-In `source_documents` / retrieval corpus:
-
-- full project markdown/text corpora are now present for the active project set
-- retrieval is no longer limited to AtoCore self-knowledge only
-- the current corpus is broad enough that ranking quality matters more than
-  corpus presence alone
-- underspecified prompts can still pull in historical or archive material, so
-  project-aware routing and better ranking remain important
-
-The source refresh model now has a concrete foundation in code:
-
-- a project registry file defines known project ids, aliases, and ingest roots
-- the API can list registered projects
-- the API can return a registration template
-- the API can preview a registration without mutating state
-- the API can persist an approved registration
-- the API can update an existing registered project without changing its canonical id
-- the API can refresh one registered project at a time
-
-This lifecycle is now coherent end to end for normal use.
-
-The first live update passes on existing registered projects have now been
-verified against `p04-gigabit` and `p05-interferometer`:
-
-- the registration description can be updated safely
-- the canonical project id remains unchanged
-- refresh still behaves cleanly after the update
-- `context/build` still returns useful project-specific context afterward
-
-## Reliability Baseline
-
-The runtime has now been hardened in a few practical ways:
-
-- SQLite connections use a configurable busy timeout
-- SQLite uses WAL mode to reduce transient lock pain under normal concurrent use
-- project registry writes are atomic file replacements rather than in-place rewrites
-- a full runtime backup and restore path now exists and has been exercised on
-  live Dalidou:
-  - SQLite (hot online backup via `conn.backup()`)
-  - project registry (file copy)
-  - Chroma vector store (cold directory copy under `exclusive_ingestion()`)
-  - backup metadata
-  - `restore_runtime_backup()` with CLI entry point
-    (`python -m atocore.ops.backup restore <STAMP> --confirm-service-stopped`),
-    pre-restore safety snapshot for rollback, WAL/SHM sidecar cleanup,
-    `PRAGMA integrity_check` on the restored file
-  - the first live drill on 2026-04-09 surfaced and fixed a Chroma restore bug
-    on Docker bind-mounted volumes (`shutil.rmtree` on a mount point); a
-    regression test now asserts the destination inode is stable across restore
-- deploy provenance is visible end-to-end:
-  - `/health` reports `build_sha`, `build_time`, `build_branch` from env vars
-    wired by `deploy.sh`
-  - `deploy.sh` Step 6 verifies the live `build_sha` matches the just-built
-    commit (exit code 6 on drift) so "live is current?" can be answered
-    precisely, not just by `__version__`
-  - `deploy.sh` Step 1.5 detects that the script itself changed in the pulled
-    commit and re-execs into the fresh copy, so the deploy never silently runs
-    the old script against new source
-
-This does not eliminate every concurrency edge, but it materially improves the
-current operational baseline.
-
-In `Trusted Project State`:
-
-- each active seeded project now has a conservative trusted-state set
-- promoted facts cover:
-  - summary
-  - core architecture or boundary decision
-  - key constraints
-  - next focus
-
-This separation is healthy:
-
-- memory stores distilled project facts
-- corpus stores the underlying retrievable documents
-
-## Immediate Next Focus
-
-1. ~~Re-run the full backup/restore drill~~ — DONE 2026-04-11, full pass
-   (db, registry, chroma, integrity all true)
-2. ~~Turn on auto-capture of Claude Code sessions in conservative mode~~ —
-   DONE 2026-04-11, Stop hook wired via `deploy/hooks/capture_stop.py` →
-   `POST /interactions` with `reinforce=false`; kill switch via
-   `ATOCORE_CAPTURE_DISABLED=1`
-3. Run a short real-use pilot with auto-capture on, verify interactions are
-   landing in Dalidou, review quality
-4. Use the new T420-side organic routing layer in real OpenClaw workflows
-5. Tighten retrieval quality for the now fully ingested active project corpora
-6. Move to Wave 2 trusted-operational ingestion instead of blindly widening raw corpus further
-7. Keep the new engineering-knowledge architecture docs as implementation guidance while avoiding premature schema work
-8. Expand the remaining boring operations baseline:
-   - retention policy cleanup script
-   - off-Dalidou backup target (rsync or similar)
-9. Only later consider write-back, reflection, or deeper autonomous behaviors
-
-See also:
-
-- [ingestion-waves.md](C:/Users/antoi/ATOCore/docs/ingestion-waves.md)
-- [master-plan-status.md](C:/Users/antoi/ATOCore/docs/master-plan-status.md)
-
-## Guiding Constraints
-
-- bad memory is worse than no memory
-- trusted project state must remain highest priority
-- human-readable sources and machine storage stay separate
-- OpenClaw integration must not degrade OpenClaw baseline behavior
+# AtoCore — Current State (2026-04-19)
+
+Live deploy: `877b97e` · Dalidou health: ok · Harness: 17/18.
+
+## The numbers
+
+| | count |
+|---|---|
+| Active memories | 266 (180 project, 31 preference, 24 knowledge, 17 adaptation, 11 episodic, 3 identity) |
+| Candidates pending | **0** (autonomous triage drained the queue) |
+| Interactions captured | 605 (250 claude-code, 351 openclaw) |
+| Entities (typed graph) | 50 |
+| Vectors in Chroma | 33K+ |
+| Projects | 6 registered (p04, p05, p06, abb-space, atomizer-v2, atocore) + apm emerging (2 memories, below auto-register threshold) |
+| Unique domain tags | 210 |
+| Tests | 440 passing |
+
+## Autonomous pipeline — what runs without me
+
+| When | Job | Does |
+|---|---|---|
+| every hour | `hourly-extract.sh` | Pulls new interactions → LLM extraction → 3-tier auto-triage (sonnet → opus → discard/human). 0 pending candidates right now = autonomy is working. |
+| every 2 min | `dedup-watcher.sh` | Services UI-triggered dedup scans |
+| daily 03:00 UTC | Full nightly (`batch-extract.sh`) | Extract · triage · auto-promote reinforced · synthesis · harness · dedup (0.90) · emerging detector · transient→durable · **confidence decay (7D)** · integrity check · alerts |
+| Sundays | +Weekly deep pass | Knowledge-base lint · dedup @ 0.85 · **tag canonicalization (7C)** |
+
+Last nightly run (2026-04-19 03:00 UTC): **31 promoted · 39 rejected · 0 needs human**. That's the brain self-organizing.
+
+## Phase 7 — Memory Consolidation status
+
+| Subphase | What | Status |
+|---|---|---|
+| 7A | Semantic dedup + merge lifecycle | live |
+| 7A.1 | Tiered auto-approve (sonnet ≥0.8 + sim ≥0.92 → merge; opus escalation; human only for ambiguous) | live |
+| 7B | Memory-to-memory contradiction detection (0.70–0.88 band, classify duplicate/contradicts/supersedes) | deferred, needs 7A signal |
+| 7C | Tag canonicalization (weekly; auto-apply ≥0.8 confidence; protects project tokens) | live (first run: 0 proposals — vocabulary is clean) |
+| 7D | Confidence decay (0.97/day on idle unreferenced; auto-supersede below 0.3) | live (first run: 0 decayed — nothing idle+unreferenced yet) |
+| 7E | `/wiki/memories/{id}` detail page | pending |
+| 7F | `/wiki/domains/{tag}` cross-project view | pending (wants 7C + more usage first) |
+| 7G | Re-extraction on prompt version bump | pending |
+| 7H | Chroma vector hygiene (delete vectors for superseded memories) | pending |
+
+## Known gaps (honest)
+
+1. **Capture surface is Claude-Code-and-OpenClaw only.** Conversations in Claude Desktop, Claude.ai web, phone, or any other LLM UI are NOT captured. Example: the rotovap/mushroom chat yesterday never reached AtoCore because no hook fired. See Q4 below.
+2. **OpenClaw is capture-only, not context-grounded.** The plugin POSTs `/interactions` on `llm_output` but does NOT call `/context/build` on `before_agent_start`. OpenClaw's underlying agent runs blind. See Q2 below.
+3. **Human interface (wiki) is thin and static.** 5 project cards + a "System" line. No dashboard for the autonomous activity. No per-memory detail page. See Q3/Q5.
+4. **Harness 17/18** — the `p04-constraints` fixture wants "Zerodur" but retrieval surfaces related-not-exact terms. Content gap, not a retrieval regression.
+5. **Two projects under-populated**: p05-interferometer (4 memories, 18 state) and atomizer-v2 (1 memory, 6 state). Batch re-extract with the new llm-0.6.0 prompt would help.
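The 7D decay schedule above (multiply confidence by 0.97 per idle day, auto-supersede below 0.3) implies roughly a 23-day half-life, and about 40 idle days before a fully confident memory crosses the floor. A quick check:

```python
import math

DECAY = 0.97   # per idle day (7D schedule)
FLOOR = 0.3    # auto-supersede threshold

# days for an unreferenced memory at confidence 1.0 to fall below the floor
days_to_floor = math.ceil(math.log(FLOOR) / math.log(DECAY))
half_life = math.log(0.5) / math.log(DECAY)

print(days_to_floor)        # 40
print(round(half_life, 1))  # 22.8
```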
@@ -1,29 +1,40 @@
-# AtoCore Capture Plugin for OpenClaw
+# AtoCore Capture + Context Plugin for OpenClaw
 
-Minimal OpenClaw plugin that mirrors Claude Code's `capture_stop.py` behavior:
+Two-way bridge between OpenClaw agents and AtoCore:
 
+**Capture (since v1)**
 - watches user-triggered assistant turns
 - POSTs `prompt` + `response` to `POST /interactions`
-- sets `client="openclaw"`
-- sets `reinforce=true`
+- sets `client="openclaw"`, `reinforce=true`
 - fails open on network or API errors
 
-## Config
+**Context injection (Phase 7I, v2+)**
+- on `before_agent_start`, fetches a context pack from `POST /context/build`
+- prepends the pack to the agent's prompt so whatever LLM runs underneath
+  (sonnet, opus, codex, local model — whichever OpenClaw delegates to)
+  answers grounded in what AtoCore already knows
+- original user prompt is still what gets captured later (no recursion)
+- fails open: context unreachable → agent runs as before
 
-Optional plugin config:
+## Config
 
 ```json
 {
   "baseUrl": "http://dalidou:8100",
   "minPromptLength": 15,
-  "maxResponseLength": 50000
+  "maxResponseLength": 50000,
+  "injectContext": true,
+  "contextCharBudget": 4000
 }
 ```
 
-If `baseUrl` is omitted, the plugin uses `ATOCORE_BASE_URL` or defaults to `http://dalidou:8100`.
+- `baseUrl` — defaults to `ATOCORE_BASE_URL` env or `http://dalidou:8100`
+- `injectContext` — set to `false` to disable the Phase 7I context injection and make this a pure one-way capture plugin again
+- `contextCharBudget` — cap on injected context size. `/context/build` respects it too; this is a client-side safety net. Default 4000 chars (~1000 tokens).
 
 ## Notes
 
-- Project detection is intentionally left empty for now. Unscoped capture is acceptable because AtoCore's extraction pipeline handles unscoped interactions.
-- Extraction is **not** part of the capture path. This plugin only records interactions and lets AtoCore reinforcement run automatically.
-- The plugin captures only user-triggered turns, not heartbeats or system-only runs.
+- Project detection is intentionally left empty — AtoCore's extraction pipeline handles unscoped interactions and infers the project from content.
+- Extraction is **not** part of this plugin. Interactions are captured; batch extraction runs via cron on the AtoCore host.
+- Context injection only fires for user-triggered turns (not heartbeats or system-only runs).
+- Timeouts: context fetch is 5s (short so a slow AtoCore never blocks a user turn); capture post is 10s.
@@ -3,6 +3,11 @@ import { definePluginEntry } from "openclaw/plugin-sdk/core";
 const DEFAULT_BASE_URL = process.env.ATOCORE_BASE_URL || "http://dalidou:8100";
 const DEFAULT_MIN_PROMPT_LENGTH = 15;
 const DEFAULT_MAX_RESPONSE_LENGTH = 50_000;
+// Phase 7I — context injection: cap how much AtoCore context we stuff
+// back into the prompt. The /context/build endpoint respects a budget
+// parameter too, but we keep a client-side safety net.
+const DEFAULT_CONTEXT_CHAR_BUDGET = 4_000;
+const DEFAULT_INJECT_CONTEXT = true;
 
 function trimText(value) {
   return typeof value === "string" ? value.trim() : "";
@@ -41,6 +46,37 @@ async function postInteraction(baseUrl, payload, logger) {
   }
 }
 
+// Phase 7I — fetch a context pack for the incoming prompt so the agent
+// answers grounded in what AtoCore already knows. Fail-open: if the
+// request times out or errors, we just don't inject; the agent runs as
+// before. Never block the user's turn on AtoCore availability.
+async function fetchContextPack(baseUrl, prompt, project, charBudget, logger) {
+  try {
+    const res = await fetch(`${baseUrl.replace(/\/$/, "")}/context/build`, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({
+        prompt,
+        project: project || "",
+        char_budget: charBudget
+      }),
+      signal: AbortSignal.timeout(5_000)
+    });
+    if (!res.ok) {
+      logger?.debug?.("atocore_context_fetch_failed", { status: res.status });
+      return null;
+    }
+    const data = await res.json();
+    const pack = trimText(data?.formatted_context || "");
+    return pack || null;
+  } catch (error) {
+    logger?.debug?.("atocore_context_fetch_error", {
+      error: error instanceof Error ? error.message : String(error)
+    });
+    return null;
+  }
+}
+
 export default definePluginEntry({
   register(api) {
     const logger = api.logger;
@@ -55,6 +91,28 @@ export default definePluginEntry({
       pendingBySession.delete(ctx.sessionId);
       return;
     }
 
+    // Phase 7I — inject AtoCore context into the agent's prompt so it
+    // answers grounded in what the brain already knows. Config-gated
+    // (injectContext: false disables). Fail-open.
+    const baseUrl = trimText(config.baseUrl) || DEFAULT_BASE_URL;
+    const injectContext = config.injectContext !== false && DEFAULT_INJECT_CONTEXT;
+    const charBudget = Number(config.contextCharBudget || DEFAULT_CONTEXT_CHAR_BUDGET);
+    if (injectContext && event && typeof event === "object") {
+      const pack = await fetchContextPack(baseUrl, prompt, "", charBudget, logger);
+      if (pack) {
+        // Prepend to the event's prompt so the agent sees grounded info
+        // before the user's question. OpenClaw's agent receives
+        // event.prompt as its primary input; modifying it here grounds
+        // whatever LLM the agent delegates to (sonnet, opus, codex,
+        // local model — doesn't matter).
+        event.prompt = `${pack}\n\n---\n\n${prompt}`;
+        logger?.debug?.("atocore_context_injected", { chars: pack.length });
+      }
+    }
+
+    // Record the ORIGINAL user prompt (not the injected version) so
+    // captured interactions stay clean for later extraction.
     pendingBySession.set(ctx.sessionId, {
       prompt,
       sessionId: ctx.sessionId,
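The injected prompt layout is simply the pack, a separator, then the untouched user prompt, matching the `event.prompt` template above:

```python
# Shape of the prompt the agent actually sees after Phase 7I injection.
# Example pack text is illustrative, not real AtoCore output.
pack = "[AtoCore context]\n- p06 polisher uses slurry X"
prompt = "What slurry does the polisher use?"

injected = f"{pack}\n\n---\n\n{prompt}"
print(injected.endswith(prompt))  # True: the user's question is preserved verbatim
```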
@@ -1,7 +1,7 @@
 {
   "name": "@atomaste/atocore-openclaw-capture",
   "private": true,
-  "version": "0.0.0",
+  "version": "0.2.0",
   "type": "module",
-  "description": "OpenClaw plugin that captures assistant turns to AtoCore interactions"
+  "description": "OpenClaw plugin: captures assistant turns to AtoCore interactions AND injects AtoCore context into agent prompts before they run (Phase 7I two-way bridge)"
 }
@@ -33,8 +33,12 @@ from atocore.interactions.service import (
 )
 from atocore.engineering.mirror import generate_project_overview
 from atocore.engineering.wiki import (
+    render_activity,
+    render_capture,
+    render_domain,
     render_entity,
     render_homepage,
+    render_memory_detail,
     render_project,
     render_search,
 )
@@ -119,6 +123,33 @@ def wiki_search(q: str = "") -> HTMLResponse:
     return HTMLResponse(content=render_search(q))
 
 
+@router.get("/wiki/capture", response_class=HTMLResponse)
+def wiki_capture() -> HTMLResponse:
+    """Phase 7I follow-up: paste mobile/desktop chats into AtoCore."""
+    return HTMLResponse(content=render_capture())
+
+
+@router.get("/wiki/memories/{memory_id}", response_class=HTMLResponse)
+def wiki_memory(memory_id: str) -> HTMLResponse:
+    """Phase 7E: memory detail with audit trail + neighbors."""
+    html = render_memory_detail(memory_id)
+    if html is None:
+        raise HTTPException(status_code=404, detail="Memory not found")
+    return HTMLResponse(content=html)
+
+
+@router.get("/wiki/domains/{tag}", response_class=HTMLResponse)
+def wiki_domain(tag: str) -> HTMLResponse:
+    """Phase 7F: cross-project view for a domain tag."""
+    return HTMLResponse(content=render_domain(tag))
+
+
+@router.get("/wiki/activity", response_class=HTMLResponse)
+def wiki_activity(hours: int = 48, limit: int = 100) -> HTMLResponse:
+    """Autonomous-activity timeline feed."""
+    return HTMLResponse(content=render_activity(hours=hours, limit=limit))
+
+
 @router.get("/admin/triage", response_class=HTMLResponse)
 def admin_triage(limit: int = 100) -> HTMLResponse:
     """Human triage UI for candidate memories.
|||||||
@@ -26,8 +26,26 @@ from atocore.memory.service import get_memories
|
|||||||
from atocore.projects.registry import load_project_registry
|
from atocore.projects.registry import load_project_registry
|
||||||
|
|
||||||
|
|
||||||
def render_html(title: str, body_html: str, breadcrumbs: list[tuple[str, str]] | None = None) -> str:
|
_TOP_NAV_LINKS = [
|
||||||
nav = ""
|
("🏠 Home", "/wiki"),
|
||||||
|
("📡 Activity", "/wiki/activity"),
|
||||||
|
("📥 Capture", "/wiki/capture"),
|
||||||
|
("🔀 Triage", "/admin/triage"),
|
||||||
|
("📊 Dashboard", "/admin/dashboard"),
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
|
def _render_topnav(active_path: str = "") -> str:
|
||||||
|
items = []
|
||||||
|
for label, href in _TOP_NAV_LINKS:
|
||||||
|
cls = "topnav-item active" if href == active_path else "topnav-item"
|
||||||
|
items.append(f'<a href="{href}" class="{cls}">{label}</a>')
|
||||||
|
return f'<nav class="topnav">{" ".join(items)}</nav>'
|
||||||
|
|
||||||
|
|
||||||
|
def render_html(title: str, body_html: str, breadcrumbs: list[tuple[str, str]] | None = None, active_path: str = "") -> str:
|
||||||
|
topnav = _render_topnav(active_path)
|
||||||
|
crumbs = ""
|
||||||
if breadcrumbs:
|
if breadcrumbs:
|
||||||
parts = []
|
parts = []
|
||||||
for label, href in breadcrumbs:
|
for label, href in breadcrumbs:
|
||||||
@@ -35,8 +53,9 @@ def render_html(title: str, body_html: str, breadcrumbs: list[tuple[str, str]] |
|
|||||||
parts.append(f'<a href="{href}">{label}</a>')
|
parts.append(f'<a href="{href}">{label}</a>')
|
||||||
else:
|
else:
|
||||||
parts.append(f"<span>{label}</span>")
|
parts.append(f"<span>{label}</span>")
|
||||||
nav = f'<nav class="breadcrumbs">{" / ".join(parts)}</nav>'
|
crumbs = f'<nav class="breadcrumbs">{" / ".join(parts)}</nav>'
|
||||||
|
|
||||||
|
nav = topnav + crumbs
|
||||||
return _TEMPLATE.replace("{{title}}", title).replace("{{nav}}", nav).replace("{{body}}", body_html)
|
return _TEMPLATE.replace("{{title}}", title).replace("{{nav}}", nav).replace("{{body}}", body_html)
|
||||||
|
|
||||||
|
|
||||||
@@ -100,6 +119,35 @@ def render_homepage() -> str:
    lines.append('<button type="submit">Search</button>')
    lines.append('</form>')

    # What's happening — autonomous activity snippet
    try:
        from atocore.memory.service import get_recent_audit

        recent = get_recent_audit(limit=30)
        by_action: dict[str, int] = {}
        by_actor: dict[str, int] = {}
        for a in recent:
            by_action[a["action"]] = by_action.get(a["action"], 0) + 1
            by_actor[a["actor"]] = by_actor.get(a["actor"], 0) + 1
        # Surface autonomous actors specifically
        auto_actors = {k: v for k, v in by_actor.items()
                       if k.startswith("auto-") or k == "confidence-decay"
                       or k == "phase10-auto-promote" or k == "transient-to-durable"}
        if recent:
            lines.append('<div class="activity-snippet">')
            lines.append('<h3>📡 What the brain is doing</h3>')
            top_actions = sorted(by_action.items(), key=lambda x: -x[1])[:6]
            lines.append('<div class="stat-row">' +
                         "".join(f'<span>{a}: {n}</span>' for a, n in top_actions) +
                         '</div>')
            if auto_actors:
                lines.append('<p style="font-size:0.9rem; margin:0.3rem 0;">Autonomous actors: ' +
                             " · ".join(f'<code>{k}</code> ({v})' for k, v in auto_actors.items()) +
                             '</p>')
            lines.append('<p style="font-size:0.85rem; margin:0;"><a href="/wiki/activity">Full timeline →</a></p>')
            lines.append('</div>')
    except Exception:
        pass

    for bucket_name, items in buckets.items():
        if not items:
            continue
@@ -167,7 +215,7 @@ def render_homepage() -> str:
    lines.append('<p><a href="/admin/triage">Triage Queue</a> · <a href="/admin/dashboard">API Dashboard (JSON)</a> · <a href="/health">Health Check</a></p>')
    return render_html("AtoCore Wiki", "\n".join(lines), active_path="/wiki")
def render_project(project: str) -> str:

@@ -288,6 +336,370 @@ def render_search(query: str) -> str:
    )

# ---------------------------------------------------------------------
# Phase 7I follow-up — /wiki/capture: paste mobile/desktop chats
# ---------------------------------------------------------------------


def render_capture() -> str:
    lines = ['<h1>📥 Capture a conversation</h1>']
    lines.append(
        '<p>Paste a chat from Claude Desktop, Claude.ai (web or mobile), '
        'or any other LLM. It goes through the same pipeline as auto-captured '
        'interactions: extraction → 3-tier triage → active memory if it carries signal.</p>'
    )
    lines.append('<p class="meta">Your prompt + the assistant\'s response. Project is optional — '
                 'the extractor infers it from content.</p>')
    lines.append("""
<form id="capture-form" style="display:flex; flex-direction:column; gap:0.8rem; margin-top:1rem;">
  <label><strong>Your prompt / question</strong>
    <textarea id="cap-prompt" required rows="4"
      style="width:100%; padding:0.6rem; background:var(--bg); color:var(--text); border:1px solid var(--border); border-radius:6px; font-family:inherit; font-size:0.95rem;"
      placeholder="Paste what you asked…"></textarea>
  </label>
  <label><strong>Assistant response</strong>
    <textarea id="cap-response" required rows="10"
      style="width:100%; padding:0.6rem; background:var(--bg); color:var(--text); border:1px solid var(--border); border-radius:6px; font-family:inherit; font-size:0.95rem;"
      placeholder="Paste the full assistant response…"></textarea>
  </label>
  <div style="display:flex; gap:0.5rem; align-items:center; flex-wrap:wrap;">
    <label style="display:flex; gap:0.35rem; align-items:center;">Project (optional):
      <input type="text" id="cap-project" placeholder="auto-detect"
        style="padding:0.35rem 0.6rem; background:var(--bg); color:var(--text); border:1px solid var(--border); border-radius:4px; font-family:monospace; width:180px;">
    </label>
    <label style="display:flex; gap:0.35rem; align-items:center;">Source:
      <select id="cap-source" style="padding:0.35rem; background:var(--bg); color:var(--text); border:1px solid var(--border); border-radius:4px;">
        <option value="claude-desktop">Claude Desktop</option>
        <option value="claude-web">Claude.ai web</option>
        <option value="claude-mobile">Claude mobile</option>
        <option value="chatgpt">ChatGPT</option>
        <option value="other">Other</option>
      </select>
    </label>
  </div>
  <button type="submit"
    style="padding:0.6rem 1.2rem; background:var(--accent); color:white; border:none; border-radius:6px; cursor:pointer; font-size:1rem; font-weight:600; align-self:flex-start;">
    Save to AtoCore
  </button>
</form>
<div id="cap-status" style="margin-top:1rem; font-size:0.9rem; min-height:1.5em;"></div>

<script>
document.getElementById('capture-form').addEventListener('submit', async (e) => {
  e.preventDefault();
  const prompt = document.getElementById('cap-prompt').value.trim();
  const response = document.getElementById('cap-response').value.trim();
  const project = document.getElementById('cap-project').value.trim();
  const source = document.getElementById('cap-source').value;
  const status = document.getElementById('cap-status');
  if (!prompt || !response) { status.textContent = 'Need both prompt and response.'; return; }
  status.textContent = 'Saving…';
  try {
    const r = await fetch('/interactions', {
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify({
        prompt: prompt, response: response,
        client: source, project: project, reinforce: true
      })
    });
    if (r.ok) {
      const data = await r.json();
      status.innerHTML = '✅ Saved — interaction ' + (data.interaction_id || '?').slice(0,8) +
        '. Runs through extraction + triage within the hour.<br>' +
        '<a href="/interactions/' + (data.interaction_id || '') + '">view</a>';
      document.getElementById('capture-form').reset();
    } else {
      status.textContent = '❌ ' + r.status + ': ' + (await r.text()).slice(0, 200);
    }
  } catch (err) { status.textContent = '❌ ' + err.message; }
});
</script>
""")
    lines.append(
        '<h2>How this works</h2>'
        '<ul>'
        '<li><strong>Claude Code</strong> → auto-captured via Stop hook</li>'
        '<li><strong>OpenClaw</strong> → auto-captured + gets AtoCore context injected on prompt start (Phase 7I)</li>'
        '<li><strong>Anything else</strong> (Claude Desktop, mobile, web, ChatGPT) → paste here</li>'
        '</ul>'
        '<p>The extractor is aggressive about capturing signal — don\'t hand-filter. '
        'If the conversation had nothing durable, triage will auto-reject.</p>'
    )

    return render_html(
        "Capture — AtoCore",
        "\n".join(lines),
        breadcrumbs=[("Wiki", "/wiki"), ("Capture", "")],
        active_path="/wiki/capture",
    )
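The browser form is just one client of the `/interactions` endpoint; a script can post the same JSON body directly. A minimal sketch: the path, field names, and `reinforce` flag come from the form's `fetch()` above, while the stdlib `urllib` client and the idea of a configurable base URL are assumptions, not part of the diff.

```python
import json
from urllib import request


def build_capture_payload(prompt: str, response: str,
                          source: str = "claude-mobile",
                          project: str = "") -> dict:
    """Same JSON body the capture form's fetch() sends to /interactions."""
    return {
        "prompt": prompt,
        "response": response,
        "client": source,    # claude-desktop | claude-web | claude-mobile | chatgpt | other
        "project": project,  # empty string lets the extractor auto-detect
        "reinforce": True,
    }


def capture(base_url: str, prompt: str, response: str, **kw) -> int:
    """POST a captured chat; returns the HTTP status code."""
    req = request.Request(
        base_url.rstrip("/") + "/interactions",
        data=json.dumps(build_capture_payload(prompt, response, **kw)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req, timeout=10) as resp:
        return resp.status
```

Usage would be e.g. `capture("http://localhost:8000", "what I asked", "what it said", source="chatgpt")`, with the host and port being whatever AtoCore is deployed on.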
# ---------------------------------------------------------------------
# Phase 7E — /wiki/memories/{id}: memory detail page
# ---------------------------------------------------------------------


def render_memory_detail(memory_id: str) -> str | None:
    """Full view of a single memory: content, audit trail, source refs,
    neighbors, graduation status. Fills the drill-down gap the list
    views can't."""
    from atocore.memory.service import get_memory_audit
    from atocore.models.database import get_connection

    with get_connection() as conn:
        row = conn.execute("SELECT * FROM memories WHERE id = ?", (memory_id,)).fetchone()
    if row is None:
        return None

    import json as _json
    mem = dict(row)
    try:
        tags = _json.loads(mem.get("domain_tags") or "[]") or []
    except Exception:
        tags = []

    lines = [f'<h1>{mem["memory_type"]}: <span style="color:var(--text);">{mem["content"][:80]}</span></h1>']
    if len(mem["content"]) > 80:
        lines.append(f'<blockquote><p>{mem["content"]}</p></blockquote>')

    # Metadata row
    meta_items = [
        f'<span class="tag">{mem["status"]}</span>',
        f'<strong>{mem["memory_type"]}</strong>',
    ]
    if mem.get("project"):
        meta_items.append(f'<a href="/wiki/projects/{mem["project"]}">{mem["project"]}</a>')
    meta_items.append(f'confidence: <strong>{float(mem.get("confidence") or 0):.2f}</strong>')
    meta_items.append(f'refs: <strong>{int(mem.get("reference_count") or 0)}</strong>')
    if mem.get("valid_until"):
        meta_items.append(f'<span class="mem-expiry">valid until {str(mem["valid_until"])[:10]}</span>')
    lines.append(f'<p>{" · ".join(meta_items)}</p>')

    if tags:
        tag_links = " ".join(f'<a href="/wiki/domains/{t}" class="tag-badge">{t}</a>' for t in tags)
        lines.append(f'<p><span class="mem-tags">{tag_links}</span></p>')

    lines.append(f'<p class="meta">id: <code>{mem["id"]}</code> · created: {mem["created_at"]}'
                 f' · updated: {mem.get("updated_at", "?")}'
                 + (f' · last referenced: {mem["last_referenced_at"]}' if mem.get("last_referenced_at") else '')
                 + '</p>')

    # Graduation
    if mem.get("graduated_to_entity_id"):
        eid = mem["graduated_to_entity_id"]
        lines.append(
            f'<h2>🎓 Graduated</h2>'
            f'<p>This memory was promoted to a typed entity: '
            f'<a href="/wiki/entities/{eid}">{eid[:8]}</a></p>'
        )

    # Source chunk
    if mem.get("source_chunk_id"):
        lines.append(f'<h2>Source chunk</h2><p><code>{mem["source_chunk_id"]}</code></p>')

    # Audit trail
    audit = get_memory_audit(memory_id, limit=50)
    if audit:
        lines.append(f'<h2>Audit trail ({len(audit)} events)</h2><ul>')
        for a in audit:
            note = f' — {a["note"]}' if a.get("note") else ""
            lines.append(
                f'<li><code>{a["timestamp"]}</code> '
                f'<strong>{a["action"]}</strong> '
                f'<em>{a["actor"]}</em>{note}</li>'
            )
        lines.append('</ul>')

    # Neighbors by shared tag
    if tags:
        from atocore.memory.service import get_memories as _get_memories
        neighbors = []
        for t in tags[:3]:
            for other in _get_memories(active_only=True, limit=30):
                if other.id == memory_id:
                    continue
                if any(ot == t for ot in (other.domain_tags or [])):
                    neighbors.append(other)
        # Dedupe
        seen = set()
        uniq = []
        for n in neighbors:
            if n.id in seen:
                continue
            seen.add(n.id)
            uniq.append(n)
        if uniq:
            lines.append('<h2>Related (by tag)</h2><ul>')
            for n in uniq[:10]:
                lines.append(
                    f'<li><a href="/wiki/memories/{n.id}">[{n.memory_type}] '
                    f'{n.content[:120]}</a>'
                    + (f' <span class="tag">{n.project}</span>' if n.project else '')
                    + '</li>'
                )
            lines.append('</ul>')

    return render_html(
        f"Memory {memory_id[:8]}",
        "\n".join(lines),
        breadcrumbs=[("Wiki", "/wiki"), ("Memory", "")],
    )
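A side note on the neighbor dedupe in `render_memory_detail`: the seen-set loop preserves first-seen order, and since dicts are insertion-ordered (Python 3.7+), `dict.fromkeys` gives the same order-preserving dedupe in one line. A sketch with stand-in ids, not a change to the diff:

```python
# Stand-in neighbor ids; duplicates arise when a memory shares several tags.
neighbor_ids = ["m2", "m1", "m2", "m3", "m1"]

# dict keys are unique and keep insertion order, so this dedupes
# while preserving the first occurrence of each id.
uniq_ids = list(dict.fromkeys(neighbor_ids))
print(uniq_ids)  # ['m2', 'm1', 'm3']
```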
# ---------------------------------------------------------------------
# Phase 7F — /wiki/domains/{tag}: cross-project domain view
# ---------------------------------------------------------------------


def render_domain(tag: str) -> str:
    """All memories + entities carrying a given domain_tag, grouped by project.
    Answers 'what does the brain know about optics, across all projects?'"""
    tag = (tag or "").strip().lower()
    if not tag:
        return render_html("Domain", "<p>No tag specified.</p>",
                           breadcrumbs=[("Wiki", "/wiki"), ("Domains", "")])

    all_mems = get_memories(active_only=True, limit=500)
    matching = [m for m in all_mems
                if any((t or "").lower() == tag for t in (m.domain_tags or []))]

    # Group by project
    by_project: dict[str, list] = {}
    for m in matching:
        by_project.setdefault(m.project or "(global)", []).append(m)

    lines = [f'<h1>Domain: <code>{tag}</code></h1>']
    lines.append(f'<p class="meta">{len(matching)} active memories across {len(by_project)} projects</p>')

    if not matching:
        lines.append(
            f'<p>No memories currently carry the tag <code>{tag}</code>.</p>'
            '<p>Domain tags are assigned by the extractor when it identifies '
            'the topical scope of a memory. They update over time.</p>'
        )
        return render_html(
            f"Domain: {tag}",
            "\n".join(lines),
            breadcrumbs=[("Wiki", "/wiki"), ("Domains", ""), (tag, "")],
        )

    # Sort projects by count descending, (global) last
    def sort_key(item: tuple[str, list]) -> tuple[int, int]:
        proj, mems = item
        return (1 if proj == "(global)" else 0, -len(mems))

    for proj, mems in sorted(by_project.items(), key=sort_key):
        proj_link = proj if proj == "(global)" else f'<a href="/wiki/projects/{proj}">{proj}</a>'
        lines.append(f'<h2>{proj_link} ({len(mems)})</h2><ul>')
        for m in mems:
            other_tags = [t for t in (m.domain_tags or []) if t != tag][:3]
            other_tags_html = ""
            if other_tags:
                other_tags_html = ' <span class="mem-tags">' + " ".join(
                    f'<a href="/wiki/domains/{t}" class="tag-badge">{t}</a>' for t in other_tags
                ) + '</span>'
            lines.append(
                f'<li><a href="/wiki/memories/{m.id}">[{m.memory_type}] '
                f'{m.content[:200]}</a>'
                f' <span class="meta">conf {m.confidence:.2f} · refs {m.reference_count}</span>'
                f'{other_tags_html}</li>'
            )
        lines.append('</ul>')

    # Entities with this tag (if any have tags — currently they might not)
    try:
        all_entities = get_entities(limit=500)
        ent_matching = []
        for e in all_entities:
            tags = e.properties.get("domain_tags") if e.properties else []
            if isinstance(tags, list) and tag in [str(t).lower() for t in tags]:
                ent_matching.append(e)
        if ent_matching:
            lines.append(f'<h2>🔧 Entities ({len(ent_matching)})</h2><ul>')
            for e in ent_matching:
                lines.append(
                    f'<li><a href="/wiki/entities/{e.id}">[{e.entity_type}] {e.name}</a>'
                    + (f' <span class="tag">{e.project}</span>' if e.project else '')
                    + '</li>'
                )
            lines.append('</ul>')
    except Exception:
        pass

    return render_html(
        f"Domain: {tag}",
        "\n".join(lines),
        breadcrumbs=[("Wiki", "/wiki"), ("Domains", ""), (tag, "")],
    )
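The project ordering in `render_domain` relies on a two-part sort key: a 0/1 flag pushes "(global)" after every named project, and the negated count sorts descending within each flag. The same trick in isolation, with stand-in data:

```python
# Stand-in grouping: project name -> list of matching memories.
by_project = {"(global)": ["a"], "p04": ["a", "b", "c"], "p05": ["a", "b"]}


def sort_key(item):
    proj, mems = item
    # First element: 0 for named projects, 1 for "(global)" (sorts last).
    # Second element: negative count, so bigger groups come first.
    return (1 if proj == "(global)" else 0, -len(mems))


order = [proj for proj, _ in sorted(by_project.items(), key=sort_key)]
print(order)  # ['p04', 'p05', '(global)']
```

Tuples compare element-by-element, so the flag dominates and the count only breaks ties within named projects.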
# ---------------------------------------------------------------------
# /wiki/activity — autonomous-activity feed
# ---------------------------------------------------------------------


def render_activity(hours: int = 48, limit: int = 100) -> str:
    """Timeline of what the autonomous pipeline did recently. Answers
    'what has the brain been doing while I was away?'"""
    from atocore.memory.service import get_recent_audit

    audit = get_recent_audit(limit=limit)

    # Group events by category for summary
    by_action: dict[str, int] = {}
    by_actor: dict[str, int] = {}
    for a in audit:
        by_action[a["action"]] = by_action.get(a["action"], 0) + 1
        by_actor[a["actor"]] = by_actor.get(a["actor"], 0) + 1

    lines = ['<h1>📡 Activity Feed</h1>']
    lines.append(f'<p class="meta">Last {len(audit)} events in the memory audit log</p>')

    # Summary chips
    if by_action or by_actor:
        lines.append('<h2>Summary</h2>')
        lines.append('<p><strong>By action:</strong> ' +
                     " · ".join(f'{k}: {v}' for k, v in sorted(by_action.items(), key=lambda x: -x[1])) +
                     '</p>')
        lines.append('<p><strong>By actor:</strong> ' +
                     " · ".join(f'<code>{k}</code>: {v}' for k, v in sorted(by_actor.items(), key=lambda x: -x[1])) +
                     '</p>')

    # Action-type color/emoji
    action_emoji = {
        "created": "➕", "promoted": "✅", "rejected": "❌", "invalidated": "🚫",
        "superseded": "🔀", "reinforced": "🔁", "updated": "✏️",
        "auto_promoted": "⚡", "created_via_merge": "🔗",
        "valid_until_extended": "⏳", "tag_canonicalized": "🏷️",
    }

    lines.append('<h2>Timeline</h2><ul>')
    for a in audit:
        emoji = action_emoji.get(a["action"], "•")
        preview = a.get("content_preview") or ""
        ts_short = a["timestamp"][:16] if a.get("timestamp") else "?"
        mid_short = (a.get("memory_id") or "")[:8]
        note = f' — <em>{a["note"]}</em>' if a.get("note") else ""
        lines.append(
            f'<li>{emoji} <code>{ts_short}</code> '
            f'<strong>{a["action"]}</strong> '
            f'<em>{a["actor"]}</em> '
            f'<a href="/wiki/memories/{a["memory_id"]}">{mid_short}</a>'
            f'{note}'
            + (f'<br><span style="opacity:0.6; font-size:0.85rem; margin-left:1.5rem;">{preview[:140]}</span>' if preview else '')
            + '</li>'
        )
    lines.append('</ul>')

    return render_html(
        "Activity — AtoCore",
        "\n".join(lines),
        breadcrumbs=[("Wiki", "/wiki"), ("Activity", "")],
        active_path="/wiki/activity",
    )
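Both `render_homepage` and `render_activity` tally events with the manual `dict.get(k, 0) + 1` pattern. `collections.Counter` expresses the same grouping more compactly, and `most_common()` replaces the `sorted(..., key=lambda x: -x[1])` calls. A sketch with hypothetical audit rows shaped like the ones above:

```python
from collections import Counter

# Hypothetical audit rows in the shape render_activity consumes.
audit = [
    {"action": "created", "actor": "human-triage"},
    {"action": "created", "actor": "auto-dedup-tier1"},
    {"action": "promoted", "actor": "auto-dedup-tier1"},
]

# Counter replaces the by_action.get(k, 0) + 1 loops.
by_action = Counter(a["action"] for a in audit)
by_actor = Counter(a["actor"] for a in audit)

# most_common() sorts by count descending, like sorted(..., key=lambda x: -x[1]).
print(by_action.most_common())   # [('created', 2), ('promoted', 1)]
print(by_actor.most_common(1))   # [('auto-dedup-tier1', 2)]
```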
_TEMPLATE = """<!DOCTYPE html>
<html lang="en">
<head>
@@ -324,6 +736,17 @@ _TEMPLATE = """<!DOCTYPE html>
    hr { border: none; border-top: 1px solid var(--border); margin: 2rem 0; }
    .breadcrumbs { margin-bottom: 1.5rem; font-size: 0.85em; opacity: 0.7; }
    .breadcrumbs a { opacity: 0.8; }
    .topnav { display: flex; gap: 0.25rem; flex-wrap: wrap; margin-bottom: 1rem; padding-bottom: 0.8rem; border-bottom: 1px solid var(--border); }
    .topnav-item { padding: 0.35rem 0.8rem; background: var(--card); border: 1px solid var(--border); border-radius: 6px; font-size: 0.88rem; color: var(--text); opacity: 0.75; text-decoration: none; }
    .topnav-item:hover { opacity: 1; background: var(--hover); text-decoration: none; }
    .topnav-item.active { background: var(--accent); color: white; border-color: var(--accent); opacity: 1; }
    .topnav-item.active:hover { background: var(--accent); }
    .activity-snippet { background: var(--card); border: 1px solid var(--border); border-radius: 8px; padding: 1rem; margin: 1rem 0; }
    .activity-snippet h3 { color: var(--accent); margin-bottom: 0.4rem; }
    .activity-snippet ul { margin: 0.3rem 0 0 1.2rem; font-size: 0.9rem; }
    .activity-snippet li { margin-bottom: 0.2rem; }
    .stat-row { display: flex; gap: 1rem; flex-wrap: wrap; font-size: 0.9rem; margin: 0.4rem 0; }
    .stat-row span { padding: 0.1rem 0.4rem; background: var(--hover); border-radius: 4px; }
    .meta { font-size: 0.8em; opacity: 0.5; margin-top: 0.5rem; }
    .tag { background: var(--accent); color: var(--bg); padding: 0.1rem 0.4rem; border-radius: 3px; font-size: 0.75em; margin-left: 0.3rem; }
    .search-box { display: flex; gap: 0.5rem; margin: 1.5rem 0; }
tests/test_wiki_pages.py — new file, 158 lines
@@ -0,0 +1,158 @@
"""Tests for the new wiki pages shipped in the UI refresh:
- /wiki/capture (7I follow-up)
- /wiki/memories/{id} (7E)
- /wiki/domains/{tag} (7F)
- /wiki/activity (activity feed)
- home refresh (topnav + activity snippet)
"""

from __future__ import annotations

import pytest

from atocore.engineering.wiki import (
    render_activity,
    render_capture,
    render_domain,
    render_homepage,
    render_memory_detail,
)
from atocore.engineering.service import init_engineering_schema
from atocore.memory.service import create_memory
from atocore.models.database import init_db


def _init_all():
    """Wiki pages read from both the memory and engineering schemas, so
    tests need both initialized (the engineering schema is a separate
    init_engineering_schema() call)."""
    init_db()
    init_engineering_schema()


def test_capture_page_renders(tmp_data_dir):
    _init_all()
    html = render_capture()
    assert "Capture a conversation" in html
    assert "cap-prompt" in html
    assert "cap-response" in html
    # Topnav present
    assert "topnav" in html
    # Source options for mobile/desktop
    assert "claude-desktop" in html
    assert "claude-mobile" in html


def test_memory_detail_renders(tmp_data_dir):
    _init_all()
    m = create_memory(
        "knowledge", "APM uses NX bridge for DXF → STL",
        project="apm", confidence=0.7, domain_tags=["apm", "nx", "cad"],
    )
    html = render_memory_detail(m.id)
    assert html is not None
    assert "APM uses NX" in html
    assert "Audit trail" in html
    # Tag links go to domain pages
    assert '/wiki/domains/apm' in html
    assert '/wiki/domains/nx' in html
    # Project link present
    assert '/wiki/projects/apm' in html


def test_memory_detail_404(tmp_data_dir):
    _init_all()
    assert render_memory_detail("nonexistent-id") is None


def test_domain_page_lists_memories(tmp_data_dir):
    _init_all()
    create_memory("knowledge", "optics fact 1", project="p04-gigabit",
                  domain_tags=["optics"])
    create_memory("knowledge", "optics fact 2", project="p05-interferometer",
                  domain_tags=["optics", "metrology"])
    create_memory("knowledge", "other", project="p06-polisher",
                  domain_tags=["firmware"])

    html = render_domain("optics")
    assert "Domain: <code>optics</code>" in html
    assert "p04-gigabit" in html
    assert "p05-interferometer" in html
    assert "optics fact 1" in html
    assert "optics fact 2" in html
    # Unrelated memory should NOT appear
    assert "other" not in html or "firmware" not in html


def test_domain_page_empty(tmp_data_dir):
    _init_all()
    html = render_domain("definitely-not-a-tag")
    assert "No memories currently carry" in html


def test_domain_page_normalizes_tag(tmp_data_dir):
    _init_all()
    create_memory("knowledge", "x", domain_tags=["firmware"])
    # Case-insensitive
    assert "firmware" in render_domain("FIRMWARE")
    # Whitespace tolerant
    assert "firmware" in render_domain("  firmware  ")


def test_activity_feed_renders(tmp_data_dir):
    _init_all()
    m = create_memory("knowledge", "activity test")
    html = render_activity()
    assert "Activity Feed" in html
    # The newly-created memory should appear as a "created" event
    assert "created" in html
    # Memory id rendered in its short (8-char) form
    assert m.id[:8] in html


def test_activity_feed_groups_by_action_and_actor(tmp_data_dir):
    _init_all()
    for i in range(3):
        create_memory("knowledge", f"m{i}", actor="test-actor")

    html = render_activity()
    # Summary row should show "created: 3" or similar
    assert "created" in html
    assert "test-actor" in html


def test_homepage_has_topnav_and_activity(tmp_data_dir):
    _init_all()
    create_memory("knowledge", "homepage test")
    html = render_homepage()
    # Topnav with expected items
    assert "🏠 Home" in html
    assert "📡 Activity" in html
    assert "📥 Capture" in html
    assert "/wiki/capture" in html
    assert "/wiki/activity" in html
    # Activity snippet
    assert "What the brain is doing" in html


def test_memory_detail_shows_superseded_sources(tmp_data_dir):
    """After a merge, sources go to status=superseded. Detail page should
    still render them."""
    from atocore.memory.service import (
        create_merge_candidate, merge_memories,
    )
    _init_all()
    m1 = create_memory("knowledge", "alpha variant 1", project="test")
    m2 = create_memory("knowledge", "alpha variant 2", project="test")
    cid = create_merge_candidate(
        memory_ids=[m1.id, m2.id], similarity=0.9,
        proposed_content="alpha merged",
        proposed_memory_type="knowledge", proposed_project="test",
    )
    merge_memories(cid, actor="auto-dedup-tier1")

    # Source detail page should render and show the superseded status
    html1 = render_memory_detail(m1.id)
    assert html1 is not None
    assert "superseded" in html1
    assert "auto-dedup-tier1" in html1  # audit trail shows who merged
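Every test above requests a `tmp_data_dir` fixture that this file does not define, so it presumably lives in the suite's `conftest.py`. A plausible shape, assuming AtoCore resolves its data directory from an environment variable (the `ATOCORE_DATA_DIR` name and the fixture body are assumptions, not the project's actual conftest):

```python
import pytest


@pytest.fixture
def tmp_data_dir(tmp_path, monkeypatch):
    """Point AtoCore's data dir at a throwaway pytest tmp_path so each
    test gets a fresh database (hypothetical env-var name)."""
    monkeypatch.setenv("ATOCORE_DATA_DIR", str(tmp_path))
    yield tmp_path
```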