Claude + Canvas Integration V2
The Vision
Side-by-side LLM + Canvas where:
- Claude talks → Canvas updates in real-time (user sees nodes appear/change)
- User tweaks Canvas → Claude sees changes (bi-directional sync)
- Full Claude Code-level power through the dashboard chat
- Interview-driven study creation entirely through chat
The user can:
- Describe what they want in natural language
- Watch the canvas build itself
- Make quick manual tweaks
- Continue the conversation with Claude seeing their changes
- Have Claude execute protocols, create files, run optimizations
Current State vs Target
What We Have Now
┌──────────────────┐ ┌──────────────────┐
│ Chat Panel │ │ Canvas │
│ (Power Mode) │ │ (SpecRenderer) │
├──────────────────┤ ├──────────────────┤
│ - Anthropic API │ │ - Loads spec │
│ - Write tools │ │ - User edits │
│ - spec_modified │--->│ - Auto-refresh │
│ events │ │ on event │
└──────────────────┘ └──────────────────┘
│ │
│ No real-time │
│ canvas state │
│ in Claude context │
└──────────────────────┘
Gaps:
- Claude doesn't see current canvas state in real-time
- No interview engine for guided study creation
- Limited tool set (no file ops, no protocol execution)
- No streaming for tool calls
- Mode switching requires reconnection
What We Want
┌───────────────────────────────────────────────────────────────────┐
│ ATOMIZER DASHBOARD │
├────────────────────────────┬──────────────────────────────────────┤
│ │ │
│ CHAT PANEL │ CANVAS │
│ (Atomizer Assistant) │ (SpecRenderer) │
│ │ │
│ ┌──────────────────────┐ │ ┌────────────────────────────────┐ │
│ │ "Create a bracket │ │ │ │ │
│ │ optimization with │ │ │ [DV: thickness] │ │
│ │ mass and stiffness" │ │ │ │ │ │
│ └──────────────────────┘ │ │ ▼ │ │
│ │ │ │ [Model Node] │ │
│ ▼ │ │ │ │ │
│ ┌──────────────────────┐ │ │ ▼ │ │
│ │ 🔧 Adding thickness │ │ │ [Ext: mass]──>[Obj: min] │ │
│ │ 🔧 Adding mass ext │◄─┼──┤ [Ext: disp]──>[Obj: min] │ │
│ │ 🔧 Adding objective │ │ │ │ │
│ │ │ │ │ (nodes appear in real-time) │ │
│ │ ✓ Study configured! │ │ │ │ │
│ └──────────────────────┘ │ └────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────┐ │ User can click any node to edit │
│ │ Claude sees the │ │ Claude sees user's edits │
│ │ canvas state and │◄─┼──────────────────────────────────────│
│ │ user's manual edits │ │ │
│ └──────────────────────┘ │ │
└────────────────────────────┴──────────────────────────────────────┘
Architecture
1. WebSocket Hub (Bi-directional Sync)
┌─────────────────────┐
│ WebSocket Hub │
│ (Single Connection)│
└─────────┬───────────┘
│
┌────────────────────┼────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Chat Panel │ │ Canvas │ │ Spec Store │
│ │ │ │ │ │
│ - Send messages │ │ - User edits │ │ - Single source │
│ - Receive text │ │ - Node add/del │ │ of truth │
│ - See tool calls│ │ - Edge changes │ │ - Validates │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Message Types:
Client → Server:
{ type: "message", content: "..." } # Chat message
{ type: "canvas_edit", patch: {...} } # User made canvas change
{ type: "set_study", study_id: "..." } # Switch study
{ type: "ping" } # Heartbeat
Server → Client:
{ type: "text", content: "...", done: false } # Streaming text
{ type: "tool_start", tool: "...", input: {...} }
{ type: "tool_result", tool: "...", result: "..." }
{ type: "spec_updated", spec: {...} } # Full spec after change
{ type: "canvas_patch", patch: {...} } # Incremental update
{ type: "done" } # Response complete
{ type: "pong" } # Heartbeat response
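A minimal sketch of how the hub might route these client frames. The handler registry and handler names here are hypothetical, not the real endpoint code:

```python
import json

def route_client_message(raw: str, handlers: dict) -> dict:
    """Dispatch one Client -> Server frame and return the immediate reply."""
    msg = json.loads(raw)
    msg_type = msg.get("type")
    if msg_type == "ping":
        return {"type": "pong"}  # Heartbeat is answered inline
    handler = handlers.get(msg_type)
    if handler is None:
        return {"type": "error", "detail": f"unknown message type: {msg_type}"}
    # "message", "canvas_edit", and "set_study" each get their own handler
    return handler(msg)
```

In the real endpoint, "message" would stream multiple Server frames back rather than returning a single reply; this sketch only shows the dispatch shape.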
2. Enhanced Claude Agent
The AtomizerClaudeAgent needs to be more like Claude Code:
```python
from typing import AsyncGenerator, Dict, List, Optional

import anthropic


class AtomizerClaudeAgent:
    """Full-power Claude agent with Claude Code-like capabilities"""

    def __init__(self, study_id: Optional[str] = None):
        self.client = anthropic.Anthropic()
        self.study_id = study_id
        self.spec_store = SpecStore(study_id)  # Real-time spec access
        self.interview_state = None            # For guided creation
        self.tools = self._define_full_tools()

    async def chat_stream(
        self,
        message: str,
        conversation: List[Dict],
        canvas_state: Optional[Dict] = None,  # Current canvas from frontend
    ) -> AsyncGenerator[Dict, None]:
        """Stream responses with tool calls"""
        # Build context with current canvas state
        system = self._build_system_prompt(canvas_state)

        # Stream the response
        with self.client.messages.stream(
            model="claude-sonnet-4-20250514",
            max_tokens=8192,
            system=system,
            messages=conversation + [{"role": "user", "content": message}],
            tools=self.tools,
        ) as stream:
            for event in stream:
                if event.type == "content_block_delta":
                    if event.delta.type == "text_delta":
                        yield {"type": "text", "content": event.delta.text}
                elif event.type == "content_block_start":
                    if event.content_block.type == "tool_use":
                        yield {
                            "type": "tool_start",
                            "tool": event.content_block.name,
                            "input": {},  # Will be completed
                        }

            # Handle tool calls after the stream finishes
            response = stream.get_final_message()

        for block in response.content:
            if block.type == "tool_use":
                result = await self._execute_tool(block.name, block.input)
                yield {
                    "type": "tool_result",
                    "tool": block.name,
                    "result": result["result"],
                    "spec_changed": result.get("spec_changed", False),
                }
                # If spec changed, send the updated spec
                if result.get("spec_changed"):
                    yield {
                        "type": "spec_updated",
                        "spec": self.spec_store.get_dict(),
                    }
```
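The WebSocket layer's job is then just to forward every yielded event and append a done frame. A minimal sketch; fake_chat_stream is a stand-in for the agent, included only so the relay shape is runnable:

```python
import asyncio

async def fake_chat_stream():
    """Stand-in for chat_stream(), yielding the same event shapes."""
    yield {"type": "text", "content": "Adding thickness"}
    yield {"type": "tool_result", "tool": "add_design_variable",
           "result": "ok", "spec_changed": True}
    yield {"type": "spec_updated", "spec": {"design_variables": [{"id": "thickness"}]}}

async def relay(stream, send):
    """Forward each agent event to the client, then close with a done frame."""
    async for event in stream:
        await send(event)
    await send({"type": "done"})
```

In the real endpoint, send would be the WebSocket's JSON-send; the point is that the agent, not the transport, decides when spec_updated frames go out.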
3. Full Tool Set
Claude needs a broader tool set to match Claude Code's power:
```python
FULL_TOOLS = [
    # === READ TOOLS ===
    "read_study_config",        # Read atomizer_spec.json
    "query_trials",             # Query optimization database
    "list_studies",             # List available studies
    "read_file",                # Read any file in study
    "list_files",               # List files in study directory
    "read_nx_expressions",      # Get NX model expressions

    # === WRITE TOOLS (Spec Modification) ===
    "add_design_variable",      # Add DV to spec
    "add_extractor",            # Add extractor (built-in or custom)
    "add_objective",            # Add objective
    "add_constraint",           # Add constraint
    "update_spec_field",        # Update any spec field by path
    "remove_node",              # Remove any node by ID
    "update_canvas_layout",     # Reposition nodes for better layout

    # === STUDY MANAGEMENT ===
    "create_study",             # Create new study directory + spec
    "clone_study",              # Clone existing study
    "validate_spec",            # Validate current spec
    "migrate_config",           # Migrate legacy config to spec v2

    # === OPTIMIZATION CONTROL ===
    "start_optimization",       # Start optimization run
    "stop_optimization",        # Stop running optimization
    "get_optimization_status",  # Check if running, trial count

    # === FILE OPERATIONS ===
    "write_file",               # Write file to study directory
    "create_directory",         # Create directory in study

    # === NX INTEGRATION ===
    "introspect_model",         # Get model info (expressions, features)
    "suggest_design_vars",      # AI-suggest design variables from model

    # === INTERVIEW/GUIDED CREATION ===
    "start_interview",          # Begin guided study creation
    "process_answer",           # Process user's interview answer
    "get_interview_state",      # Get current interview progress
]
```
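The contract these tools share can be sketched as a small async dispatcher: every handler returns a dict with "result" plus an optional "spec_changed" flag the streaming loop inspects. The registry and handler below are illustrative, not the real implementation:

```python
import asyncio

async def execute_tool(name: str, params: dict, registry: dict) -> dict:
    """Look up the named tool and run it; unknown names fail soft."""
    handler = registry.get(name)
    if handler is None:
        return {"result": f"unknown tool: {name}", "spec_changed": False}
    return await handler(params)

async def add_design_variable(params: dict) -> dict:
    # A write tool reports spec_changed=True so the hub pushes spec_updated
    return {"result": f"added DV {params['name']}", "spec_changed": True}
```

Read tools would return spec_changed=False (or omit it), so only genuine spec mutations trigger a canvas push.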
4. Interview Engine Integration
The interview happens through chat, not a separate UI:
```python
from typing import Dict


class InterviewEngine:
    """Guided study creation through conversation"""

    PHASES = [
        ("model", "Let's set up your model. What's the path to your NX simulation file?"),
        ("objectives", "What do you want to optimize? (e.g., minimize mass, minimize displacement)"),
        ("design_vars", "Which parameters can I vary? I can suggest some based on your model."),
        ("constraints", "Any constraints to respect? (e.g., max stress, min frequency)"),
        ("method", "I recommend {method} for this problem. Should I configure it?"),
        ("review", "Here's the complete configuration. Ready to create the study?"),
    ]

    def __init__(self, spec_store: SpecStore):
        self.spec_store = spec_store
        self.current_phase = 0
        self.collected_data = {}

    def get_current_question(self) -> str:
        phase_name, question = self.PHASES[self.current_phase]
        # Customize question based on collected data
        if phase_name == "method":
            method = self._recommend_method()
            question = question.format(method=method)
        return question

    def process_answer(self, answer: str) -> Dict:
        """Process answer and build spec incrementally"""
        phase_name, _ = self.PHASES[self.current_phase]

        # Extract structured data from answer
        extracted = self._extract_for_phase(phase_name, answer)
        self.collected_data[phase_name] = extracted

        # Update spec with extracted data
        spec_update = self._apply_to_spec(phase_name, extracted)

        # Advance to next phase
        self.current_phase += 1

        return {
            "phase": phase_name,
            "extracted": extracted,
            "spec_update": spec_update,
            "next_question": (
                self.get_current_question()
                if self.current_phase < len(self.PHASES) else None
            ),
            "complete": self.current_phase >= len(self.PHASES),
        }
```
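The phase-advance mechanics can be exercised in isolation. This stripped-down tracker stubs out extraction and spec application, since those need the real SpecStore; class and field names are illustrative:

```python
# Phase order mirrors InterviewEngine.PHASES above
PHASES = ["model", "objectives", "design_vars", "constraints", "method", "review"]

class PhaseTracker:
    def __init__(self):
        self.current_phase = 0
        self.collected_data = {}

    def process_answer(self, answer: str) -> dict:
        phase = PHASES[self.current_phase]
        self.collected_data[phase] = answer  # Stub for _extract_for_phase()
        self.current_phase += 1
        return {
            "phase": phase,
            "next_question": (
                PHASES[self.current_phase]
                if self.current_phase < len(PHASES) else None
            ),
            "complete": self.current_phase >= len(PHASES),
        }
```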
Claude uses the interview through tools:
```python
async def _tool_start_interview(self, params: Dict) -> Dict:
    """Start guided study creation"""
    self.interview_state = InterviewEngine(self.spec_store)
    return {
        "status": "started",
        "first_question": self.interview_state.get_current_question(),
    }

async def _tool_process_answer(self, params: Dict) -> Dict:
    """Process user's answer in interview"""
    if not self.interview_state:
        return {"error": "No interview in progress"}

    result = self.interview_state.process_answer(params["answer"])
    if result["spec_update"]:
        # Spec was updated - this will trigger a canvas update
        return {
            "status": "updated",
            "spec_changed": True,
            "next_question": result["next_question"],
            "complete": result["complete"],
        }
    return result
```
Frontend Implementation
1. Unified WebSocket Hook
```typescript
// hooks/useAtomizerSocket.ts
import { useState, useRef, useEffect, useCallback } from 'react';

export function useAtomizerSocket(studyId: string | undefined) {
  const [spec, setSpec] = useState<AtomizerSpec | null>(null);
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [isThinking, setIsThinking] = useState(false);
  const [currentTool, setCurrentTool] = useState<string | null>(null);
  const ws = useRef<WebSocket | null>(null);

  // Single WebSocket connection for everything
  useEffect(() => {
    const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
    const host = import.meta.env.DEV ? 'localhost:8001' : window.location.host;
    ws.current = new WebSocket(`${protocol}//${host}/api/atomizer/ws`);

    ws.current.onmessage = (event) => {
      const data = JSON.parse(event.data);
      switch (data.type) {
        case 'text':
          // Streaming text from Claude
          setMessages(prev => {
            const last = prev[prev.length - 1];
            if (last?.role === 'assistant' && !last.complete) {
              return [...prev.slice(0, -1), {
                ...last,
                content: last.content + data.content
              }];
            }
            return [...prev, {
              id: Date.now().toString(),
              role: 'assistant',
              content: data.content,
              complete: false
            }];
          });
          break;

        case 'tool_start':
          setCurrentTool(data.tool);
          // Add tool indicator to chat
          setMessages(prev => [...prev, {
            id: Date.now().toString(),
            role: 'tool',
            tool: data.tool,
            status: 'running'
          }]);
          break;

        case 'tool_result':
          setCurrentTool(null);
          // Update tool message with result
          setMessages(prev => prev.map(m =>
            m.role === 'tool' && m.tool === data.tool && m.status === 'running'
              ? { ...m, status: 'complete', result: data.result }
              : m
          ));
          break;

        case 'spec_updated':
          // Canvas gets the new spec - this is the magic!
          setSpec(data.spec);
          break;

        case 'done':
          setIsThinking(false);
          // Mark last message as complete
          setMessages(prev => prev.map((m, i) =>
            i === prev.length - 1 ? { ...m, complete: true } : m
          ));
          break;
      }
    };

    // Set study context once the socket opens
    if (studyId) {
      ws.current.onopen = () => {
        ws.current?.send(JSON.stringify({
          type: 'set_study',
          study_id: studyId
        }));
      };
    }

    return () => ws.current?.close();
  }, [studyId]);

  // Send a chat message
  const sendMessage = useCallback((content: string) => {
    if (!ws.current) return;
    setIsThinking(true);
    setMessages(prev => [...prev, {
      id: Date.now().toString(),
      role: 'user',
      content
    }]);
    ws.current.send(JSON.stringify({
      type: 'message',
      content
    }));
  }, []);

  // Notify Claude about canvas edits
  const notifyCanvasEdit = useCallback((patch: any) => {
    ws.current?.send(JSON.stringify({
      type: 'canvas_edit',
      patch
    }));
  }, []);

  return {
    spec,
    messages,
    isThinking,
    currentTool,
    sendMessage,
    notifyCanvasEdit
  };
}
```
2. Integrated Canvas View
```typescript
// pages/CanvasView.tsx (revised)
import { useCallback } from 'react';
import { useParams } from 'react-router-dom';

export function CanvasView() {
  const { '*': studyId } = useParams();

  // Single hook manages everything
  const {
    spec,
    messages,
    isThinking,
    currentTool,
    sendMessage,
    notifyCanvasEdit
  } = useAtomizerSocket(studyId);

  // When user edits canvas, notify Claude
  const handleSpecChange = useCallback((newSpec: AtomizerSpec) => {
    // This is called by SpecRenderer when user makes edits
    notifyCanvasEdit({
      type: 'spec_replace',
      spec: newSpec
    });
  }, [notifyCanvasEdit]);

  return (
    <div className="h-screen flex">
      {/* Canvas - receives spec from WebSocket */}
      <div className="flex-1">
        <SpecRenderer
          spec={spec}
          onChange={handleSpecChange} // User edits flow back
          highlightNode={currentTool ? getAffectedNode(currentTool) : undefined}
        />
      </div>

      {/* Chat Panel */}
      <div className="w-96 border-l">
        <ChatPanel
          messages={messages}
          isThinking={isThinking}
          currentTool={currentTool}
          onSend={sendMessage}
        />
      </div>
    </div>
  );
}
```
3. Visual Feedback for Tool Calls
When Claude calls a tool, the canvas shows visual feedback:
```typescript
// components/canvas/SpecRenderer.tsx
function SpecRenderer({ spec, highlightNode, onChange }) {
  // Tracks node IDs from the previous spec so new nodes can be detected
  const prevIds = useRef<Set<string>>(new Set());

  // When a tool is targeting a node, highlight it
  const getNodeStyle = (nodeId: string) => {
    if (highlightNode === nodeId) {
      return {
        boxShadow: '0 0 0 3px #f59e0b', // Amber glow
        animation: 'pulse 1s infinite'
      };
    }
    return {};
  };

  // When new nodes are added, animate them
  const [newNodes, setNewNodes] = useState<Set<string>>(new Set());

  useEffect(() => {
    if (spec) {
      const currentIds = new Set([
        ...spec.design_variables.map(d => d.id),
        ...spec.extractors.map(e => e.id),
        ...spec.objectives.map(o => o.id),
        ...spec.constraints.map(c => c.id)
      ]);

      // Find truly new nodes
      const added = [...currentIds].filter(id => !prevIds.current.has(id));
      if (added.length > 0) {
        setNewNodes(new Set(added));
        setTimeout(() => setNewNodes(new Set()), 1000); // Clear animation
      }
      prevIds.current = currentIds;
    }
  }, [spec]);

  // Render with animations (nodes are derived from spec elsewhere in the component)
  return (
    <ReactFlow nodes={nodes.map(n => ({
      ...n,
      style: {
        ...getNodeStyle(n.id),
        ...(newNodes.has(n.id) ? { animation: 'slideIn 0.5s ease-out' } : {})
      }
    }))} />
  );
}
```
User Experience Flow
Flow 1: Create Study Through Chat
User: "Create a bracket optimization. I want to minimize mass while keeping
stress below 200 MPa. The thickness can vary from 2mm to 10mm."
Claude: [Internal: Parse intent, no interview needed - user gave full spec]
🔧 Creating study "bracket_optimization"...
🔧 Adding design variable: thickness [2mm - 10mm]
🔧 Adding extractor: mass (BDF mass extraction)
🔧 Adding extractor: max_stress (von Mises stress)
🔧 Adding objective: minimize mass
🔧 Adding constraint: stress <= 200 MPa
✓ Study created! I've configured:
- 1 design variable (thickness)
- 2 extractors (mass, stress)
- 1 objective (minimize mass)
- 1 constraint (max stress)
The canvas shows your optimization setup. You can:
- Click any node to adjust settings
- Ask me to add more variables or constraints
- Say "run" to start optimization
[Canvas shows: DV→Model→Extractors→Objectives/Constraints, all connected]
Flow 2: Interview-Guided Creation
User: "Help me set up a new optimization"
Claude: Let's set up your optimization step by step.
First, what's the path to your NX simulation file?
(You can drag & drop or type the path)
User: "studies/M1_Mirror/model.sim"
Claude: 🔧 Loading model...
[tool: introspect_model]
Found your model! It has 15 expressions. I noticed these look like
good design variables:
- rib_thickness (current: 8mm)
- mirror_depth (current: 50mm)
- support_angle (current: 45°)
What are you trying to optimize?
- Minimize mass?
- Minimize wavefront error (WFE)?
- Both? (I can set up multi-objective)
[Canvas shows: Model node appears]
User: "minimize WFE and mass"
Claude: 🔧 Adding Zernike WFE extractor
🔧 Adding mass extractor
🔧 Adding objectives (NSGA-II multi-objective)
Got it - multi-objective optimization for WFE and mass.
Which parameters should I vary?
I suggest: rib_thickness, mirror_depth, support_angle
[Canvas shows: Model→WFE Extractor→Obj, Model→Mass Extractor→Obj]
User: "Yes, use those, but keep support_angle fixed"
Claude: 🔧 Adding design variable: rib_thickness [4mm - 16mm]
🔧 Adding design variable: mirror_depth [30mm - 80mm]
Perfect. Any constraints?
- Maximum mass?
- Maximum stress?
- Minimum frequency?
[Canvas shows: 2 DVs connected to Model]
User: "max stress 150 MPa"
Claude: 🔧 Adding stress extractor
🔧 Adding constraint: stress <= 150 MPa
✓ Configuration complete! Ready to optimize?
[Canvas shows: Full optimization flow]
Flow 3: User Edits Canvas, Claude Sees
[User clicks thickness DV node, changes max from 10mm to 15mm]
[WebSocket sends: { type: "canvas_edit", patch: { path: "design_variables.0.bounds.max", value: 15 } }]
Claude: [Receives canvas edit notification in context]
User: "Is this range reasonable?"
Claude: I see you changed thickness max to 15mm. That's quite thick for a
bracket - typical ranges are 2-10mm.
With 15mm max, you might see:
- Higher mass designs explored
- Potentially better stiffness
- Longer optimization time (larger search space)
Want me to adjust or keep it?
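Applying the dotted-path patch from Flow 3 to the spec dict is mechanical; a sketch (the real SpecStore would also validate the result):

```python
def apply_patch(spec: dict, path: str, value):
    """Apply a canvas_edit patch like path='design_variables.0.bounds.max'."""
    parts = path.split(".")
    target = spec
    for part in parts[:-1]:
        # Numeric segments index into lists, others into dicts
        target = target[int(part)] if part.isdigit() else target[part]
    last = parts[-1]
    if last.isdigit():
        target[int(last)] = value
    else:
        target[last] = value
    return spec
```

The same patched spec is what lands in Claude's context, which is how it can answer "Is this range reasonable?" with the user's 15mm value in hand.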
Implementation Plan
Phase 1: Unified WebSocket (1 week)
- Backend: Create /api/atomizer/ws endpoint
  - Single connection for chat + spec sync
  - Streaming response support
  - Canvas edit notifications
- Frontend: Create useAtomizerSocket hook
  - Replaces useChat + useSpecWebSocket
  - Single source of truth for spec state
- Integration: Wire SpecRenderer to socket
  - Receive spec updates from Claude's tools
  - Send edit notifications back
Phase 2: Enhanced Tools (1 week)
- Add remaining write tools
- Implement introspect_model for NX expression discovery
- Add create_study for new study creation
- Add file operation tools
Phase 3: Interview Engine (1 week)
- Implement InterviewEngine class
- Test guided creation flow
- Add smart defaults and recommendations
Phase 4: Polish (1 week)
- Visual feedback for tool calls
- Node highlight during modification
- Animation for new nodes
- Error recovery and reconnection
- Performance optimization
Success Metrics
- Creation Time: User can create complete study in <3 minutes through chat
- Edit Latency: Canvas updates within 200ms of Claude's tool call
- Sync Reliability: 100% of user edits reflected in Claude's context
- Interview Success: 90% of studies created through interview are valid
Key Differences from Current Implementation
| Current | Target |
|---|---|
| Separate chat/canvas WebSockets | Single unified WebSocket |
| Claude doesn't see canvas state | Real-time canvas state in context |
| Manual spec refresh | Automatic spec push on changes |
| No interview engine | Guided creation through chat |
| Limited tools | Full Claude Code-like tool set |
| Mode switching breaks connection | Seamless power mode |
This is the architecture that makes Atomizer truly powerful: Claude and Canvas working together as one system.