feat: Claude Code context injection (UserPromptSubmit hook)

Closes the asymmetry the user surfaced: before this, Claude Code
captured every turn (Stop hook) but retrieval only happened when
Claude chose to call atocore_context (opt-in MCP tool). OpenClaw had
both sides covered after 7I; Claude Code did not.

Now symmetric. Every Claude Code prompt is auto-sent to
/context/build and the returned pack is prepended via
hookSpecificOutput.additionalContext — same as what OpenClaw's
before_agent_start hook now does.
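The stdout contract the hook must satisfy is small; a minimal sketch of the shape (the pack text here is illustrative, not real AtoCore output):

```python
import json

# Sketch of the UserPromptSubmit stdout shape described above.
# The pack content is made up for illustration.
pack = "## atocore: recent decisions\n- /context/build now takes a char budget"
out = {
    "hookSpecificOutput": {
        "hookEventName": "UserPromptSubmit",
        "additionalContext": pack,
    }
}
print(json.dumps(out))
```

Claude Code parses this JSON from stdout and prepends `additionalContext` to the prompt; an empty stdout with exit 0 means "inject nothing".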

- deploy/hooks/inject_context.py — UserPromptSubmit hook. Fail-open
  (always exit 0). Skips short/XML prompts. 5s timeout. Project
  inference mirrors capture_stop.py cwd→slug table. Kill switch:
  ATOCORE_CONTEXT_DISABLED=1.
- Registered the hook in ~/.claude/settings.json (local config, not
  committed; copy-paste snippet in docs/capture-surfaces.md).
- Removed /wiki/capture from topnav. Endpoint still exists but the
  page is now labeled "fallback only" with a warning banner. The
  sanctioned surfaces are Claude Code + OpenClaw; manual paste is
  explicitly not the design.
- docs/capture-surfaces.md — scope statement: two surfaces, nothing
  else. Anthropic API polling explicitly prohibited.
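The cwd→slug inference in the first bullet is a case-insensitive prefix match against a path table; a sketch with two illustrative entries (the real map lives in inject_context.py and capture_stop.py):

```python
import os

# Illustrative subset of the cwd -> project-slug table; not the full map.
_MAP = {
    "C:\\Users\\antoi\\ATOCore": "atocore",
    "C:\\Users\\antoi\\Polisher-Sim": "p06-polisher",
}

def infer_project(cwd: str) -> str:
    # Normalize and lowercase both sides, then take the first prefix match.
    if not cwd:
        return ""
    norm = os.path.normpath(cwd).lower()
    for prefix, slug in _MAP.items():
        if norm.startswith(os.path.normpath(prefix).lower()):
            return slug
    return ""

print(infer_project("C:\\Users\\antoi\\ATOCore\\src"))  # atocore
print(infer_project("D:\\elsewhere") or "(no match)")   # (no match)
```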

Tests: +8 for inject_context.py (exit 0 on all failure modes, kill
switch, short prompt filter, XML filter, bad stdin, mock-server
success shape, project inference from cwd). Updated 2 wiki tests
for the topnav change. 450 → 459.
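The fail-open tests all share one shape: feed the hook garbage, assert exit 0. A sketch of that shape against a stand-in one-liner (the real tests run inject_context.py itself):

```python
import subprocess
import sys

# Stand-in with the same fail-open contract as inject_context.py:
# a stdin parse failure still exits 0.
HOOK_STANDIN = (
    "import json, sys\n"
    "try:\n"
    "    json.loads(sys.stdin.read())\n"
    "except Exception:\n"
    "    pass  # fail open\n"
    "sys.exit(0)\n"
)

proc = subprocess.run(
    [sys.executable, "-c", HOOK_STANDIN],
    input="definitely{not json",
    capture_output=True,
    text=True,
)
print(proc.returncode)  # 0: bad stdin never blocks the user's prompt
```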

Verified live with real AtoCore: injected 2979 chars of atocore
project context on a cwd-matched prompt.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 12:01:41 -04:00
parent 6e43cc7383
commit 9c91d778d9
5 changed files with 448 additions and 16 deletions

@@ -0,0 +1,174 @@
#!/usr/bin/env python3
"""Claude Code UserPromptSubmit hook: inject AtoCore context.

Mirrors the OpenClaw 7I pattern on the Claude Code side. Every user
prompt submitted to Claude Code is (a) sent to /context/build on the
AtoCore API, and (b) the returned context pack is prepended to the
prompt the LLM sees — so Claude Code answers grounded in what AtoCore
already knows, same as OpenClaw now does.

Contract per the Claude Code hooks spec:
    stdin: JSON with `prompt`, `session_id`, `transcript_path`, `cwd`,
        `hook_event_name`, etc.
    stdout on success: JSON
        {"hookSpecificOutput":
            {"hookEventName": "UserPromptSubmit",
             "additionalContext": "<pack>"}}
    exit 0 always — fail open. An unreachable AtoCore must never block
    the user's prompt.

Environment variables:
    ATOCORE_URL               base URL (default http://dalidou:8100)
    ATOCORE_CONTEXT_DISABLED  set to "1" to disable injection
    ATOCORE_CONTEXT_BUDGET    max chars of injected pack (default 4000)
    ATOCORE_CONTEXT_TIMEOUT   HTTP timeout in seconds (default 5)

Usage in ~/.claude/settings.json:
    "UserPromptSubmit": [{
        "matcher": "",
        "hooks": [{
            "type": "command",
            "command": "python /path/to/inject_context.py",
            "timeout": 10
        }]
    }]
"""
from __future__ import annotations

import json
import os
import sys
import urllib.error
import urllib.request

ATOCORE_URL = os.environ.get("ATOCORE_URL", "http://dalidou:8100")
CONTEXT_TIMEOUT = float(os.environ.get("ATOCORE_CONTEXT_TIMEOUT", "5"))
CONTEXT_BUDGET = int(os.environ.get("ATOCORE_CONTEXT_BUDGET", "4000"))

# Don't spend an API call on trivial acks or slash commands.
MIN_PROMPT_LENGTH = 15

# Project inference table — kept in sync with capture_stop.py so both
# hooks agree on what project a Claude Code session belongs to.
_VAULT = "C:\\Users\\antoi\\antoine\\My Libraries\\Antoine Brain Extension"
_PROJECT_PATH_MAP: dict[str, str] = {
    f"{_VAULT}\\2-Projects\\P04-GigaBIT-M1": "p04-gigabit",
    f"{_VAULT}\\2-Projects\\P10-Interferometer": "p05-interferometer",
    f"{_VAULT}\\2-Projects\\P11-Polisher-Fullum": "p06-polisher",
    f"{_VAULT}\\2-Projects\\P08-ABB-Space-Mirror": "abb-space",
    f"{_VAULT}\\2-Projects\\I01-Atomizer": "atomizer-v2",
    f"{_VAULT}\\2-Projects\\I02-AtoCore": "atocore",
    "C:\\Users\\antoi\\ATOCore": "atocore",
    "C:\\Users\\antoi\\Polisher-Sim": "p06-polisher",
    "C:\\Users\\antoi\\Fullum-Interferometer": "p05-interferometer",
    "C:\\Users\\antoi\\Atomizer-V2": "atomizer-v2",
}


def _infer_project(cwd: str) -> str:
    if not cwd:
        return ""
    norm = os.path.normpath(cwd).lower()
    for path_prefix, project_id in _PROJECT_PATH_MAP.items():
        if norm.startswith(os.path.normpath(path_prefix).lower()):
            return project_id
    return ""


def _emit_empty() -> None:
    """Exit 0 with no additionalContext — equivalent to a no-op."""
    sys.exit(0)


def _emit_context(pack: str) -> None:
    """Write the hook output JSON and exit 0."""
    out = {
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": pack,
        }
    }
    sys.stdout.write(json.dumps(out))
    sys.exit(0)


def main() -> None:
    if os.environ.get("ATOCORE_CONTEXT_DISABLED") == "1":
        _emit_empty()
    try:
        raw = sys.stdin.read()
        if not raw.strip():
            _emit_empty()
        hook_data = json.loads(raw)
    except Exception as exc:
        # Bad stdin → nothing to do
        print(f"inject_context: bad stdin: {exc}", file=sys.stderr)
        _emit_empty()

    prompt = (hook_data.get("prompt") or "").strip()
    cwd = hook_data.get("cwd", "")
    if len(prompt) < MIN_PROMPT_LENGTH:
        _emit_empty()
    # Skip meta / system prompts that start with '<' (XML tags etc.)
    if prompt.startswith("<"):
        _emit_empty()

    project = _infer_project(cwd)
    body = json.dumps({
        "prompt": prompt,
        "project": project,
        "char_budget": CONTEXT_BUDGET,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{ATOCORE_URL}/context/build",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=CONTEXT_TIMEOUT) as resp:
            data = json.loads(resp.read().decode("utf-8"))
    except urllib.error.URLError as exc:
        # AtoCore unreachable — fail open
        print(f"inject_context: atocore unreachable: {exc}", file=sys.stderr)
        _emit_empty()
    except Exception as exc:
        print(f"inject_context: request failed: {exc}", file=sys.stderr)
        _emit_empty()

    pack = (data.get("formatted_context") or "").strip()
    if not pack:
        _emit_empty()
    # Safety truncate. /context/build respects the budget we sent, but
    # be defensive in case of a regression.
    if len(pack) > CONTEXT_BUDGET + 500:
        pack = pack[:CONTEXT_BUDGET] + "\n\n[context truncated]"
    # Wrap so the LLM knows this is injected grounding, not user text.
    wrapped = (
        "---\n"
        "AtoCore-injected context for this prompt "
        f"(project={project or '(none)'}):\n\n"
        f"{pack}\n"
        "---"
    )
    print(
        f"inject_context: injected {len(pack)} chars "
        f"(project={project or 'none'}, prompt_chars={len(prompt)})",
        file=sys.stderr,
    )
    _emit_context(wrapped)


if __name__ == "__main__":
    main()