# Claude + Canvas Integration Project

## Project Overview

**Project Name**: Unified Claude + Canvas Integration

**Goal**: Transform Atomizer's dashboard into a bi-directional Claude + Canvas experience where Claude and the user co-edit the same `atomizer_spec.json` in real-time.

**Core Principle**: `atomizer_spec.json` is the single source of truth. Both Claude (via tools) and the user (via canvas UI) read and write to it, with changes instantly reflected on both sides.

---

## Current State

### What Exists (✅ Working)

| Component | File | Status |
|-----------|------|--------|
| Power Mode WebSocket | `backend/api/routes/claude.py` | ✅ `/ws/power` endpoint |
| Write Tools | `backend/api/services/claude_agent.py` | ✅ 6 tools implemented |
| `spec_modified` Events | `backend/api/routes/claude.py:480-487` | ✅ Sent on tool use |
| Canvas Reload | `frontend/src/hooks/useChat.ts:247-262` | ✅ Triggers `reloadSpec()` |
| Spec Store | `frontend/src/hooks/useSpecStore.ts` | ✅ Manages spec state |
| Spec Renderer | `frontend/src/components/canvas/SpecRenderer.tsx` | ✅ Renders spec as nodes |

### What's Missing (❌ To Build)

| Feature | Priority | Effort |
|---------|----------|--------|
| Canvas state in Claude's context | P0 | Easy |
| Full spec in `spec_updated` payload | P0 | Easy |
| User edits notify Claude | P0 | Easy |
| Streaming responses | P1 | Medium |
| `create_study` tool | P1 | Medium |
| Interview Engine | P2 | Medium |
| Tool call UI indicators | P2 | Easy |
| Node animations | P3 | Easy |

---

## Architecture

### Target Data Flow

```
┌─────────────────────────────────────────────────────────────────────┐
│                         atomizer_spec.json                          │
│                      (Single Source of Truth)                       │
└───────────────────────────────┬─────────────────────────────────────┘
                                │
              ┌─────────────────┴─────────────────┐
              │                                   │
              ▼                                   ▼
┌───────────────────────────┐       ┌───────────────────────────┐
│       Claude Agent        │       │        Canvas UI          │
│   (AtomizerClaudeAgent)   │       │      (SpecRenderer)       │
├───────────────────────────┤       ├───────────────────────────┤
│ • Reads spec for context  │       │ • Renders spec as nodes   │
│ • Writes via tools        │       │ • User edits nodes        │
│ • Sees user's edits       │       │ • Receives Claude's edits │
└───────────────────────────┘       └───────────────────────────┘
              │                                   │
              │             WebSocket             │
              │         (bi-directional)          │
              │                                   │
              └─────────────────┬─────────────────┘
                                │
                    ┌───────────┴───────────┐
                    │ /api/claude/ws/power  │
                    │  (Enhanced Endpoint)  │
                    └───────────────────────┘
```

### Message Protocol (Enhanced)

**Client → Server:**
```typescript
// Chat message
{ type: "message", content: "Add a thickness variable 2-10mm" }

// User edited canvas (NEW)
{ type: "canvas_edit", spec: { /* full spec */ } }

// Switch study
{ type: "set_study", study_id: "bracket_v1" }

// Heartbeat
{ type: "ping" }
```

**Server → Client:**
```typescript
// Streaming text (NEW - replaces single "text" message)
{ type: "text_delta", content: "Adding" }
{ type: "text_delta", content: " thickness" }
{ type: "text_delta", content: " variable..." }

// Tool started (NEW)
{ type: "tool_start", tool: "add_design_variable", input: {...} }

// Tool completed
{ type: "tool_result", tool: "add_design_variable", result: "✓ Added..." }

// Spec updated - NOW INCLUDES FULL SPEC (CHANGED)
{ type: "spec_updated", spec: { /* full atomizer_spec.json */ } }

// Response complete
{ type: "done" }

// Error
{ type: "error", message: "..." }

// Heartbeat response
{ type: "pong" }
```

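The server→client events above are designed so a client can fold a whole streamed turn into one assistant message plus the latest spec. A minimal sketch of that folding logic (the function name is illustrative, not part of the codebase):

```python
from typing import Any, Dict, List, Optional, Tuple

def fold_stream_events(
    events: List[Dict[str, Any]],
) -> Tuple[str, Optional[Dict[str, Any]]]:
    """Collapse a stream of server events into (final_text, latest_spec)."""
    text_parts: List[str] = []
    latest_spec: Optional[Dict[str, Any]] = None
    for event in events:
        kind = event.get("type")
        if kind == "text_delta":
            text_parts.append(event.get("content", ""))
        elif kind == "spec_updated":
            # Later spec_updated events supersede earlier ones
            latest_spec = event.get("spec")
        elif kind == "done":
            break
    return "".join(text_parts), latest_spec
```

This mirrors what the frontend handlers in Phase 2 do incrementally with React state.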
---

## Implementation Tasks

### Phase 1: Core Bi-directional Sync (P0)

#### Task 1.1: Add Canvas State to Claude's Context
**File**: `backend/api/services/claude_agent.py`

**Current** (`_build_system_prompt`):
```python
if self.study_id and self.study_dir and self.study_dir.exists():
    context = self._get_study_context()
    base_prompt += f"\n## Current Study: {self.study_id}\n{context}\n"
```

**Change**: Add method to format current spec as context, call it from WebSocket handler.

```python
def set_canvas_state(self, spec: Dict[str, Any]) -> None:
    """Update the current canvas state for context"""
    self.canvas_state = spec

def _format_canvas_context(self) -> str:
    """Format current canvas state for Claude's system prompt"""
    if not self.canvas_state:
        return ""

    spec = self.canvas_state
    lines = ["\n## Current Canvas State\n"]
    lines.append("The user can see this canvas. When you modify it, they see changes in real-time.\n")

    # Design Variables
    dvs = spec.get('design_variables', [])
    if dvs:
        lines.append(f"**Design Variables ({len(dvs)}):**")
        for dv in dvs:
            bounds = dv.get('bounds', {})
            lines.append(f" - `{dv.get('id')}`: {dv.get('name')} [{bounds.get('min')}, {bounds.get('max')}]")

    # Extractors
    exts = spec.get('extractors', [])
    if exts:
        lines.append(f"\n**Extractors ({len(exts)}):**")
        for ext in exts:
            lines.append(f" - `{ext.get('id')}`: {ext.get('name')} ({ext.get('type')})")

    # Objectives
    objs = spec.get('objectives', [])
    if objs:
        lines.append(f"\n**Objectives ({len(objs)}):**")
        for obj in objs:
            lines.append(f" - `{obj.get('id')}`: {obj.get('name')} ({obj.get('direction')})")

    # Constraints
    cons = spec.get('constraints', [])
    if cons:
        lines.append(f"\n**Constraints ({len(cons)}):**")
        for con in cons:
            lines.append(f" - `{con.get('id')}`: {con.get('name')} {con.get('operator')} {con.get('threshold')}")

    # Model
    model = spec.get('model', {})
    if model.get('sim', {}).get('path'):
        lines.append(f"\n**Model**: {model['sim']['path']}")

    return "\n".join(lines)
```

**Update `_build_system_prompt`**:
```python
def _build_system_prompt(self) -> str:
    base_prompt = """..."""  # existing prompt

    # Add study context
    if self.study_id and self.study_dir and self.study_dir.exists():
        context = self._get_study_context()
        base_prompt += f"\n## Current Study: {self.study_id}\n{context}\n"

    # Add canvas state (NEW)
    canvas_context = self._format_canvas_context()
    if canvas_context:
        base_prompt += canvas_context

    return base_prompt
```

---

#### Task 1.2: Send Full Spec in `spec_updated`
**File**: `backend/api/routes/claude.py`

**Current** (line 483-487):
```python
await websocket.send_json({
    "type": "spec_modified",
    "tool": tool_call["tool"],
    "changes": tool_call["result_preview"],
})
```

**Change**: Send full spec instead of just changes.

```python
# After any write tool completes, send full spec
if tool_call["tool"] in ["add_design_variable", "add_extractor",
                         "add_objective", "add_constraint",
                         "update_spec_field", "remove_node"]:
    # Load the updated spec
    spec = agent.load_current_spec()
    await websocket.send_json({
        "type": "spec_updated",
        "tool": tool_call["tool"],
        "spec": spec,  # Full spec!
    })
```

**Add to `AtomizerClaudeAgent`**:
```python
def load_current_spec(self) -> Optional[Dict[str, Any]]:
    """Load the current atomizer_spec.json"""
    if not self.study_dir:
        return None
    spec_path = self.study_dir / "atomizer_spec.json"
    if not spec_path.exists():
        return None
    with open(spec_path, 'r', encoding='utf-8') as f:
        return json.load(f)
```

---

#### Task 1.3: Handle User Canvas Edits
**File**: `backend/api/routes/claude.py`

**Add to `power_mode_websocket`** (after line 524):
```python
elif data.get("type") == "canvas_edit":
    # User made a manual edit to the canvas
    spec = data.get("spec")
    if spec:
        # Update agent's canvas state so Claude sees the change
        agent.set_canvas_state(spec)
        # Persistence: either save the spec to disk here, or keep
        # letting the frontend handle saving (as it does today)
        await websocket.send_json({
            "type": "canvas_edit_received",
            "acknowledged": True
        })
```

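Because this handler accepts a full spec from the client, a cheap structural check before trusting it is worthwhile. A minimal sketch (the key list reflects the spec structure used throughout this plan; the function name is illustrative):

```python
from typing import Any

# Top-level keys every atomizer_spec.json in this plan carries
REQUIRED_SPEC_KEYS = (
    "meta", "model", "design_variables",
    "extractors", "objectives", "constraints",
)

def is_plausible_spec(spec: Any) -> bool:
    """Cheap structural check before accepting a client-supplied spec."""
    if not isinstance(spec, dict):
        return False
    if any(key not in spec for key in REQUIRED_SPEC_KEYS):
        return False
    # List-valued sections must actually be lists
    return all(
        isinstance(spec[key], list)
        for key in ("design_variables", "extractors", "objectives", "constraints")
    )
```

If the check fails, the handler could reply with the existing `error` message type instead of acknowledging.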
---

#### Task 1.4: Frontend - Send Canvas Edits to Claude
**File**: `frontend/src/hooks/useChat.ts`

**Add to state/hook**:
```typescript
// Add to sendMessage or create new function
const notifyCanvasEdit = useCallback((spec: AtomizerSpec) => {
  if (wsRef.current?.readyState === WebSocket.OPEN) {
    wsRef.current.send(JSON.stringify({
      type: 'canvas_edit',
      spec: spec
    }));
  }
}, []);
```

**Return from hook**:
```typescript
return {
  // ... existing
  notifyCanvasEdit, // NEW
};
```

---

#### Task 1.5: Frontend - Use Full Spec from `spec_updated`
**File**: `frontend/src/hooks/useChat.ts`

**Current** (line 247-262):
```typescript
case 'spec_modified':
  console.log('[useChat] Spec was modified by assistant:', data.tool, data.changes);
  if (onCanvasModification) {
    onCanvasModification({
      action: 'add_node',
      data: { _refresh: true, tool: data.tool, changes: data.changes },
    });
  }
  break;
```

**Change**: Use the spec directly instead of triggering reload.

```typescript
case 'spec_updated':
  console.log('[useChat] Spec updated by assistant:', data.tool);
  // Directly update spec store instead of triggering HTTP reload
  if (data.spec && onSpecUpdated) {
    onSpecUpdated(data.spec);
  }
  break;
```

**Add callback to hook options**:
```typescript
interface UseChatOptions {
  // ... existing
  onSpecUpdated?: (spec: AtomizerSpec) => void; // NEW
}
```

---

#### Task 1.6: Wire Canvas to Use Direct Spec Updates
**File**: `frontend/src/pages/CanvasView.tsx`

**Current** (line 57-64):
```typescript
onCanvasModification: chatPowerMode ? (modification) => {
  console.log('Canvas modification from Claude:', modification);
  showNotification(`Claude: ${modification.action}...`);
  reloadSpec();
} : undefined,
```

**Change**: Use `onSpecUpdated` callback.

```typescript
const { setSpec } = useSpecStore();

// In useChat options:
onSpecUpdated: chatPowerMode ? (spec) => {
  console.log('Spec updated by Claude');
  setSpec(spec); // Direct update, no HTTP reload
  showNotification('Canvas updated by Claude');
} : undefined,
```

---

### Phase 2: Streaming Responses (P1)

#### Task 2.1: Implement Streaming in Claude Agent
**File**: `backend/api/services/claude_agent.py`

**Add new method**:
```python
async def chat_stream(
    self,
    message: str,
    conversation_history: List[Dict[str, Any]]
) -> AsyncGenerator[Dict[str, Any], None]:
    """
    Stream chat response with tool calls.
    Yields events: text_delta, tool_start, tool_result, spec_updated, done
    """
    # Rebuild system prompt with current canvas state
    self.system_prompt = self._build_system_prompt()

    messages = conversation_history + [{"role": "user", "content": message}]

    # Use streaming API
    # NOTE: a sync client blocks the event loop while streaming; if this
    # runs inside the FastAPI WebSocket handler, prefer anthropic.AsyncAnthropic
    # with `async with` / `async for` here.
    with self.client.messages.stream(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        system=self.system_prompt,
        messages=messages,
        tools=self.tools
    ) as stream:
        for event in stream:
            if event.type == "content_block_delta":
                if hasattr(event.delta, "text"):
                    yield {"type": "text_delta", "content": event.delta.text}

        # Get final response for tool handling
        response = stream.get_final_message()

    # Process tool calls
    for block in response.content:
        if block.type == "tool_use":
            yield {"type": "tool_start", "tool": block.name, "input": block.input}

            # Execute tool
            result = self._execute_tool_sync(block.name, block.input)

            yield {"type": "tool_result", "tool": block.name, "result": result}

            # If spec changed, yield updated spec
            if block.name in ["add_design_variable", "add_extractor",
                              "add_objective", "add_constraint",
                              "update_spec_field", "remove_node"]:
                spec = self.load_current_spec()
                if spec:
                    yield {"type": "spec_updated", "spec": spec}

    yield {"type": "done"}
```

---

#### Task 2.2: Use Streaming in WebSocket Handler
**File**: `backend/api/routes/claude.py`

**Change `power_mode_websocket`** (line 464-501):
```python
if data.get("type") == "message":
    content = data.get("content", "")
    if not content:
        continue

    try:
        # Stream the response
        async for event in agent.chat_stream(content, conversation_history):
            await websocket.send_json(event)

        # Update conversation history
        # (need to track this differently with streaming)

    except Exception as e:
        import traceback
        traceback.print_exc()
        await websocket.send_json({
            "type": "error",
            "message": str(e),
        })
```

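One way to close the "(need to track this differently with streaming)" gap above: collect the events as they pass through the handler, then fold them into a plain history entry once the turn completes. A hedged sketch under the event shapes defined in this plan (the helper name is illustrative):

```python
from typing import Any, Dict, List

def append_turn_from_events(
    history: List[Dict[str, Any]],
    user_message: str,
    events: List[Dict[str, Any]],
) -> None:
    """Fold one streamed turn's events into conversation_history."""
    assistant_text = "".join(
        e.get("content", "") for e in events if e.get("type") == "text_delta"
    )
    # Record tool activity inline so a plain-text history survives streaming
    tool_notes = [
        f"[used {e['tool']}]" for e in events if e.get("type") == "tool_result"
    ]
    history.append({"role": "user", "content": user_message})
    history.append({
        "role": "assistant",
        "content": " ".join([assistant_text, *tool_notes]).strip(),
    })
```

In the handler, this would mean buffering each event into a list before `send_json`, then calling the helper after the `async for` loop finishes.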
---

#### Task 2.3: Frontend - Handle Streaming Text
**File**: `frontend/src/hooks/useChat.ts`

**Add state for streaming**:
```typescript
const [streamingText, setStreamingText] = useState<string>('');
```

**Handle `text_delta`**:
```typescript
case 'text_delta':
  setStreamingText(prev => prev + data.content);
  break;

case 'done':
  // Finalize the streaming message
  // Caveat: if the onmessage handler is registered only once, `streamingText`
  // here is a stale closure - read the accumulated text from a ref instead
  if (streamingText) {
    setState(prev => ({
      ...prev,
      messages: [...prev.messages, {
        id: Date.now().toString(),
        role: 'assistant',
        content: streamingText
      }]
    }));
    setStreamingText('');
  }
  setState(prev => ({ ...prev, isThinking: false }));
  break;
```

### Phase 3: Study Creation (P1)

#### Task 3.1: Add `create_study` Tool
**File**: `backend/api/services/claude_agent.py`

**Add to `_define_tools()`**:
```python
{
    "name": "create_study",
    "description": "Create a new optimization study with directory structure and atomizer_spec.json. Use this when the user wants to start a new optimization from scratch.",
    "input_schema": {
        "type": "object",
        "properties": {
            "name": {
                "type": "string",
                "description": "Study name in snake_case (e.g., 'bracket_mass_v1')"
            },
            "category": {
                "type": "string",
                "description": "Category folder (e.g., 'Simple_Bracket', 'M1_Mirror'). Optional."
            },
            "model_path": {
                "type": "string",
                "description": "Path to NX simulation file (.sim). Optional, can be set later."
            },
            "description": {
                "type": "string",
                "description": "Brief description of the optimization goal"
            }
        },
        "required": ["name"]
    }
},
```

**Add implementation**:
```python
def _tool_create_study(self, params: Dict[str, Any]) -> str:
    """Create a new study with directory structure and atomizer_spec.json"""
    # Requires `from datetime import datetime` at module level
    study_name = params['name']
    category = params.get('category', '')
    model_path = params.get('model_path', '')
    description = params.get('description', '')

    # Build study path
    if category:
        study_dir = STUDIES_DIR / category / study_name
    else:
        study_dir = STUDIES_DIR / study_name

    # Check if exists
    if study_dir.exists():
        return f"✗ Study '{study_name}' already exists at {study_dir}"

    # Create directory structure
    study_dir.mkdir(parents=True, exist_ok=True)
    (study_dir / "1_setup").mkdir(exist_ok=True)
    (study_dir / "2_iterations").mkdir(exist_ok=True)
    (study_dir / "3_results").mkdir(exist_ok=True)

    # Create initial spec
    spec = {
        "meta": {
            "version": "2.0",
            "study_name": study_name,
            "description": description,
            "created_at": datetime.now().isoformat(),
            "created_by": "claude_agent"
        },
        "model": {
            "sim": {
                "path": model_path,
                "solver": "nastran"
            }
        },
        "design_variables": [],
        "extractors": [],
        "objectives": [],
        "constraints": [],
        "optimization": {
            "algorithm": {"type": "TPE"},
            "budget": {"max_trials": 100}
        },
        "canvas": {
            "edges": [],
            "layout_version": "2.0"
        }
    }

    # Save spec
    spec_path = study_dir / "atomizer_spec.json"
    with open(spec_path, 'w', encoding='utf-8') as f:
        json.dump(spec, f, indent=2)

    # Update agent context
    self.study_id = f"{category}/{study_name}" if category else study_name
    self.study_dir = study_dir
    self.canvas_state = spec

    return f"✓ Created study '{study_name}' at {study_dir}\n\nThe canvas is now showing this empty study. You can start adding design variables, extractors, and objectives."
```

**Add to tool dispatcher**:
```python
elif tool_name == "create_study":
    return self._tool_create_study(tool_input)
```

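The schema asks for snake_case but nothing enforces it, and the name is interpolated into a filesystem path. A small guard at the top of `_tool_create_study` would also block path-traversal names. A hedged sketch (the exact regex is an assumption, not from the codebase):

```python
import re

# snake_case: starts with a lowercase letter, no path separators or dots
_STUDY_NAME_RE = re.compile(r"^[a-z][a-z0-9_]*$")

def is_valid_study_name(name: str) -> bool:
    """Reject names that could escape STUDIES_DIR or break tooling."""
    return bool(_STUDY_NAME_RE.match(name))
```

On failure, the tool could return a `✗` message asking Claude to retry with a snake_case name.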
---
### Phase 4: Interview Engine (P2)

#### Task 4.1: Create Interview Engine Class
**File**: `backend/api/services/interview_engine.py` (NEW)

```python
"""
Interview Engine for guided study creation.

Walks the user through creating an optimization study step-by-step,
building the atomizer_spec.json incrementally.
"""

from typing import Dict, List, Optional, Any
from dataclasses import dataclass, field
from enum import Enum
import re


class InterviewPhase(Enum):
    WELCOME = "welcome"
    MODEL = "model"
    OBJECTIVES = "objectives"
    DESIGN_VARS = "design_vars"
    CONSTRAINTS = "constraints"
    METHOD = "method"
    REVIEW = "review"
    COMPLETE = "complete"


@dataclass
class InterviewState:
    phase: InterviewPhase = InterviewPhase.WELCOME
    collected: Dict[str, Any] = field(default_factory=dict)
    spec: Dict[str, Any] = field(default_factory=dict)
    model_expressions: List[Dict] = field(default_factory=list)


class InterviewEngine:
    """Guided study creation through conversation"""

    PHASE_QUESTIONS = {
        InterviewPhase.WELCOME: "What kind of optimization do you want to set up? (e.g., minimize mass of a bracket, reduce wavefront error of a mirror)",
        InterviewPhase.MODEL: "What's the path to your NX simulation file (.sim)?\n(You can type the path or I can help you find it)",
        InterviewPhase.OBJECTIVES: "What do you want to optimize?\n\nCommon objectives:\n- Minimize mass/weight\n- Minimize displacement (maximize stiffness)\n- Minimize stress\n- Minimize wavefront error (WFE)\n\nYou can have multiple objectives (multi-objective optimization).",
        InterviewPhase.DESIGN_VARS: "Which parameters should vary during optimization?\n\n{suggestions}",
        InterviewPhase.CONSTRAINTS: "Any constraints to respect?\n\nExamples:\n- Maximum stress ≤ 200 MPa\n- Minimum frequency ≥ 50 Hz\n- Maximum mass ≤ 5 kg\n\n(Say 'none' if no constraints)",
        InterviewPhase.METHOD: "Based on your setup, I recommend **{method}**.\n\nReason: {reason}\n\nShould I use this method?",
        InterviewPhase.REVIEW: "Here's your configuration:\n\n{summary}\n\nReady to create the study? (yes/no)",
    }

    def __init__(self):
        self.state = InterviewState()
        self._init_spec()

    def _init_spec(self):
        """Initialize empty spec structure"""
        self.state.spec = {
            "meta": {"version": "2.0"},
            "model": {"sim": {"path": "", "solver": "nastran"}},
            "design_variables": [],
            "extractors": [],
            "objectives": [],
            "constraints": [],
            "optimization": {
                "algorithm": {"type": "TPE"},
                "budget": {"max_trials": 100}
            },
            "canvas": {"edges": [], "layout_version": "2.0"}
        }
    def get_current_question(self) -> str:
        """Get the question for the current phase"""
        question = self.PHASE_QUESTIONS.get(self.state.phase, "")

        # Dynamic substitutions
        if self.state.phase == InterviewPhase.DESIGN_VARS:
            suggestions = self._format_dv_suggestions()
            question = question.format(suggestions=suggestions)
        elif self.state.phase == InterviewPhase.METHOD:
            method, reason = self._recommend_method()
            question = question.format(method=method, reason=reason)
        elif self.state.phase == InterviewPhase.REVIEW:
            summary = self._format_summary()
            question = question.format(summary=summary)

        return question

    def process_answer(self, answer: str) -> Dict[str, Any]:
        """Process user's answer and advance interview"""
        phase = self.state.phase
        result = {
            "phase": phase.value,
            "spec_changes": [],
            "next_phase": None,
            "question": None,
            "complete": False,
            "error": None
        }

        try:
            if phase == InterviewPhase.WELCOME:
                self._process_welcome(answer)
            elif phase == InterviewPhase.MODEL:
                self._process_model(answer, result)
            elif phase == InterviewPhase.OBJECTIVES:
                self._process_objectives(answer, result)
            elif phase == InterviewPhase.DESIGN_VARS:
                self._process_design_vars(answer, result)
            elif phase == InterviewPhase.CONSTRAINTS:
                self._process_constraints(answer, result)
            elif phase == InterviewPhase.METHOD:
                self._process_method(answer, result)
            elif phase == InterviewPhase.REVIEW:
                self._process_review(answer, result)

            # Advance to next phase
            self._advance_phase()

            if self.state.phase == InterviewPhase.COMPLETE:
                result["complete"] = True
            else:
                result["next_phase"] = self.state.phase.value
                result["question"] = self.get_current_question()

        except Exception as e:
            result["error"] = str(e)

        return result
    def _advance_phase(self):
        """Move to next phase"""
        phases = list(InterviewPhase)
        current_idx = phases.index(self.state.phase)
        if current_idx < len(phases) - 1:
            self.state.phase = phases[current_idx + 1]

    def _process_welcome(self, answer: str):
        """Extract optimization type from welcome"""
        self.state.collected["goal"] = answer

        # Try to infer study name
        words = answer.lower().split()
        if "bracket" in words:
            self.state.collected["geometry_type"] = "bracket"
        elif "mirror" in words:
            self.state.collected["geometry_type"] = "mirror"
        elif "beam" in words:
            self.state.collected["geometry_type"] = "beam"

    def _process_model(self, answer: str, result: Dict):
        """Extract model path"""
        # Extract path from answer
        path = answer.strip().strip('"').strip("'")
        self.state.spec["model"]["sim"]["path"] = path
        self.state.collected["model_path"] = path
        result["spec_changes"].append(f"Set model path: {path}")
    def _process_objectives(self, answer: str, result: Dict):
        """Extract objectives from natural language"""
        answer_lower = answer.lower()
        objectives = []
        extractors = []

        # Mass/weight
        if any(w in answer_lower for w in ["mass", "weight", "light"]):
            extractors.append({
                "id": "ext_mass",
                "name": "Mass",
                "type": "bdf_mass",
                "enabled": True
            })
            objectives.append({
                "id": "obj_mass",
                "name": "Mass",
                "direction": "minimize",
                "source": {"extractor_id": "ext_mass", "output_key": "mass"},
                "enabled": True
            })
            result["spec_changes"].append("Added objective: minimize mass")

        # Displacement/stiffness
        if any(w in answer_lower for w in ["displacement", "stiff", "deflection"]):
            extractors.append({
                "id": "ext_disp",
                "name": "Max Displacement",
                "type": "displacement",
                "config": {"node_set": "all", "direction": "magnitude"},
                "enabled": True
            })
            objectives.append({
                "id": "obj_disp",
                "name": "Max Displacement",
                "direction": "minimize",
                "source": {"extractor_id": "ext_disp", "output_key": "max_displacement"},
                "enabled": True
            })
            result["spec_changes"].append("Added objective: minimize displacement")

        # Stress
        if "stress" in answer_lower and "constraint" not in answer_lower:
            extractors.append({
                "id": "ext_stress",
                "name": "Max Stress",
                "type": "stress",
                "config": {"stress_type": "von_mises"},
                "enabled": True
            })
            objectives.append({
                "id": "obj_stress",
                "name": "Max Stress",
                "direction": "minimize",
                "source": {"extractor_id": "ext_stress", "output_key": "max_stress"},
                "enabled": True
            })
            result["spec_changes"].append("Added objective: minimize stress")

        # WFE/wavefront
        if any(w in answer_lower for w in ["wfe", "wavefront", "optical", "zernike"]):
            extractors.append({
                "id": "ext_wfe",
                "name": "Wavefront Error",
                "type": "zernike",
                "config": {"terms": [4, 5, 6, 7, 8, 9, 10, 11]},
                "enabled": True
            })
            objectives.append({
                "id": "obj_wfe",
                "name": "WFE RMS",
                "direction": "minimize",
                "source": {"extractor_id": "ext_wfe", "output_key": "wfe_rms"},
                "enabled": True
            })
            result["spec_changes"].append("Added objective: minimize wavefront error")

        self.state.spec["extractors"].extend(extractors)
        self.state.spec["objectives"].extend(objectives)
        self.state.collected["objectives"] = [o["name"] for o in objectives]
    def _process_design_vars(self, answer: str, result: Dict):
        """Extract design variables"""
        answer_lower = answer.lower()
        dvs = []

        # Parse patterns like "thickness 2-10mm" or "thickness from 2 to 10"
        # Pattern: name [range], separator "-", "–", or "to"
        patterns = [
            r'(\w+)\s+(\d+(?:\.\d+)?)\s*(?:-|–|to)\s*(\d+(?:\.\d+)?)\s*(mm|deg|°)?',
            r'(\w+)\s+\[(\d+(?:\.\d+)?),?\s*(\d+(?:\.\d+)?)\]',
        ]

        for pattern in patterns:
            matches = re.findall(pattern, answer_lower)
            for match in matches:
                name = match[0]
                min_val = float(match[1])
                max_val = float(match[2])
                unit = match[3] if len(match) > 3 else ""

                dv = {
                    "id": f"dv_{name}",
                    "name": name.replace("_", " ").title(),
                    "expression_name": name,
                    "type": "continuous",
                    "bounds": {"min": min_val, "max": max_val},
                    "baseline": (min_val + max_val) / 2,
                    "enabled": True
                }
                if unit:
                    dv["units"] = unit
                dvs.append(dv)
                result["spec_changes"].append(f"Added design variable: {name} [{min_val}, {max_val}]")

        # If no pattern matched, try to use suggestions
        if not dvs and self.state.model_expressions:
            # User might have said "yes" or named expressions without ranges
            for expr in self.state.model_expressions[:3]:  # Use top 3
                if expr["name"].lower() in answer_lower or "yes" in answer_lower or "all" in answer_lower:
                    val = expr["value"]
                    dv = {
                        "id": f"dv_{expr['name']}",
                        "name": expr["name"],
                        "expression_name": expr["name"],
                        "type": "continuous",
                        "bounds": {"min": val * 0.5, "max": val * 1.5},
                        "baseline": val,
                        "enabled": True
                    }
                    dvs.append(dv)
                    result["spec_changes"].append(f"Added design variable: {expr['name']} [{val*0.5:.2f}, {val*1.5:.2f}]")

        self.state.spec["design_variables"].extend(dvs)
        self.state.collected["design_vars"] = [dv["name"] for dv in dvs]
def _process_constraints(self, answer: str, result: Dict):
|
|||
|
|
"""Extract constraints"""
|
|||
|
|
answer_lower = answer.lower()
|
|||
|
|
|
|||
|
|
if answer_lower in ["none", "no", "skip", "n/a"]:
|
|||
|
|
return
|
|||
|
|
|
|||
|
|
constraints = []
|
|||
|
|
|
|||
|
|
# Pattern: "stress < 200" or "stress <= 200 MPa"
|
|||
|
|
stress_match = re.search(r'stress\s*([<>=≤≥]+)\s*(\d+(?:\.\d+)?)', answer_lower)
|
|||
|
|
if stress_match:
|
|||
|
|
op = stress_match.group(1).replace("≤", "<=").replace("≥", ">=")
|
|||
|
|
val = float(stress_match.group(2))
|
|||
|
|
|
|||
|
|
# Add extractor if not exists
|
|||
|
|
if not any(e["type"] == "stress" for e in self.state.spec["extractors"]):
|
|||
|
|
self.state.spec["extractors"].append({
|
|||
|
|
"id": "ext_stress_con",
|
|||
|
|
"name": "Stress for Constraint",
|
|||
|
|
"type": "stress",
|
|||
|
|
"config": {"stress_type": "von_mises"},
|
|||
|
|
"enabled": True
|
|||
|
|
})
|
|||
|
|
|
|||
|
|
constraints.append({
|
|||
|
|
"id": "con_stress",
|
|||
|
|
"name": "Max Stress",
|
|||
|
|
"operator": op if op in ["<=", ">=", "<", ">", "=="] else "<=",
|
|||
|
|
"threshold": val,
|
|||
|
|
"source": {"extractor_id": "ext_stress_con", "output_key": "max_stress"},
|
|||
|
|
"enabled": True
|
|||
|
|
})
|
|||
|
|
result["spec_changes"].append(f"Added constraint: stress {op} {val}")
|
|||
|
|
|
|||
|
|
# Similar patterns for frequency, mass, displacement...
|
|||
|
|
|
|||
|
|
self.state.spec["constraints"].extend(constraints)
|
|||
|
|
|
|||
|
|
def _process_method(self, answer: str, result: Dict):
|
|||
|
|
"""Confirm or change optimization method"""
|
|||
|
|
answer_lower = answer.lower()
|
|||
|
|
|
|||
|
|
if any(w in answer_lower for w in ["yes", "ok", "sure", "good", "proceed"]):
|
|||
|
|
# Keep recommended method
|
|||
|
|
pass
|
|||
|
|
elif "nsga" in answer_lower:
|
|||
|
|
self.state.spec["optimization"]["algorithm"]["type"] = "NSGA-II"
|
|||
|
|
elif "tpe" in answer_lower:
|
|||
|
|
self.state.spec["optimization"]["algorithm"]["type"] = "TPE"
|
|||
|
|
elif "cma" in answer_lower:
|
|||
|
|
self.state.spec["optimization"]["algorithm"]["type"] = "CMA-ES"
|
|||
|
|
|
|||
|
|
result["spec_changes"].append(f"Set method: {self.state.spec['optimization']['algorithm']['type']}")
|
|||
|
|
|
|||
|
|
def _process_review(self, answer: str, result: Dict):
|
|||
|
|
"""Confirm or revise"""
|
|||
|
|
answer_lower = answer.lower()
|
|||
|
|
|
|||
|
|
if any(w in answer_lower for w in ["yes", "ok", "create", "proceed", "looks good"]):
|
|||
|
|
result["complete"] = True
|
|||
|
|
else:
|
|||
|
|
# User wants changes - stay in review or go back
|
|||
|
|
result["error"] = "What would you like to change?"
|
|||
|
|
|
|||
|
|
def _recommend_method(self) -> tuple:
|
|||
|
|
"""Recommend optimization method based on problem"""
|
|||
|
|
n_obj = len(self.state.spec["objectives"])
|
|||
|
|
n_dv = len(self.state.spec["design_variables"])
|
|||
|
|
|
|||
|
|
if n_obj > 1:
|
|||
|
|
return "NSGA-II", f"You have {n_obj} objectives, which requires multi-objective optimization"
|
|||
|
|
elif n_dv > 10:
|
|||
|
|
return "CMA-ES", f"With {n_dv} design variables, CMA-ES handles high dimensions well"
|
|||
|
|
else:
|
|||
|
|
return "TPE", "TPE (Bayesian optimization) is efficient for single-objective problems"
|
|||
|
|
|
|||
|
|
def _format_dv_suggestions(self) -> str:
|
|||
|
|
"""Format design variable suggestions"""
|
|||
|
|
if self.state.model_expressions:
|
|||
|
|
lines = ["I found these expressions in your model:"]
|
|||
|
|
for expr in self.state.model_expressions[:5]:
|
|||
|
|
lines.append(f" - {expr['name']} = {expr['value']}")
|
|||
|
|
lines.append("\nWhich ones should vary? (or describe your own)")
|
|||
|
|
return "\n".join(lines)
|
|||
|
|
return "Describe the parameters and their ranges (e.g., 'thickness 2-10mm, width 5-20mm')"
|
|||
|
|
|
|||
|
|
def _format_summary(self) -> str:
|
|||
|
|
"""Format configuration summary"""
|
|||
|
|
spec = self.state.spec
|
|||
|
|
lines = []
|
|||
|
|
|
|||
|
|
lines.append(f"**Model**: {spec['model']['sim']['path'] or 'Not set'}")
|
|||
|
|
|
|||
|
|
lines.append(f"\n**Design Variables ({len(spec['design_variables'])}):**")
|
|||
|
|
for dv in spec["design_variables"]:
|
|||
|
|
b = dv["bounds"]
|
|||
|
|
lines.append(f" - {dv['name']}: [{b['min']}, {b['max']}]")
|
|||
|
|
|
|||
|
|
lines.append(f"\n**Objectives ({len(spec['objectives'])}):**")
|
|||
|
|
for obj in spec["objectives"]:
|
|||
|
|
lines.append(f" - {obj['direction']} {obj['name']}")
|
|||
|
|
|
|||
|
|
lines.append(f"\n**Constraints ({len(spec['constraints'])}):**")
|
|||
|
|
if spec["constraints"]:
|
|||
|
|
for con in spec["constraints"]:
|
|||
|
|
lines.append(f" - {con['name']} {con['operator']} {con['threshold']}")
|
|||
|
|
else:
|
|||
|
|
lines.append(" - None")
|
|||
|
|
|
|||
|
|
lines.append(f"\n**Method**: {spec['optimization']['algorithm']['type']}")
|
|||
|
|
|
|||
|
|
return "\n".join(lines)
|
|||
|
|
|
|||
|
|
def get_spec(self) -> Dict[str, Any]:
|
|||
|
|
"""Get the built spec"""
|
|||
|
|
return self.state.spec
|
|||
|
|
|
|||
|
|
def set_model_expressions(self, expressions: List[Dict]):
|
|||
|
|
"""Set model expressions for DV suggestions"""
|
|||
|
|
self.state.model_expressions = expressions
|
|||
|
|
```
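To see what `_process_design_vars` actually extracts, here is a standalone sketch of the two range regexes from above run against a sample answer (the answer string is hypothetical; the patterns are copied verbatim from the engine):

```python
import re

# The two range patterns used by _process_design_vars (copied from above).
patterns = [
    r'(\w+)\s+(\d+(?:\.\d+)?)\s*[-–to]+\s*(\d+(?:\.\d+)?)\s*(mm|deg|°)?',
    r'(\w+)\s+\[(\d+(?:\.\d+)?),?\s*(\d+(?:\.\d+)?)\]',
]

answer = "thickness 2-10mm, width [5, 20]"  # hypothetical user answer
found = []
for pattern in patterns:
    for match in re.findall(pattern, answer.lower()):
        # match[0] = name, match[1]/match[2] = bounds (unit only in pattern 1)
        found.append((match[0], float(match[1]), float(match[2])))

print(found)  # [('thickness', 2.0, 10.0), ('width', 5.0, 20.0)]
```

Note the character class `[-–to]+` is what lets "2-10", "2–10", and "2 to 10" all parse with a single pattern; the trade-off is that stray `t`/`o` characters between the numbers would also match.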

---

#### Task 4.2: Add Interview Tools to Claude Agent
**File**: `backend/api/services/claude_agent.py`

**Add tools**:
```python
{
    "name": "start_interview",
    "description": "Start a guided interview to create a new optimization study. Use this when the user wants help setting up an optimization but hasn't provided full details.",
    "input_schema": {
        "type": "object",
        "properties": {},
        "required": []
    }
},
{
    "name": "interview_answer",
    "description": "Process the user's answer during an interview. Extract relevant information and advance the interview.",
    "input_schema": {
        "type": "object",
        "properties": {
            "answer": {
                "type": "string",
                "description": "The user's answer to the current interview question"
            }
        },
        "required": ["answer"]
    }
},
```
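As a quick sanity check on these schemas, a hand-rolled required-field validation is a sketch of what the agent could do before dispatching a tool call (illustrative only; the real agent may rely on the Anthropic API's own schema enforcement):

```python
# Required-key check against the interview_answer schema above.
# Hand-rolled for illustration; not the agent's actual validation path.
interview_answer_schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}

def missing_required(payload: dict, schema: dict) -> list:
    """Return the required keys absent from payload (empty list = valid)."""
    return [key for key in schema.get("required", []) if key not in payload]

print(missing_required({"answer": "minimize mass"}, interview_answer_schema))  # []
print(missing_required({}, interview_answer_schema))  # ['answer']
```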

**Add state and implementations**:
```python
def __init__(self, study_id: Optional[str] = None):
    # ... existing
    self.interview: Optional[InterviewEngine] = None

def _tool_start_interview(self, params: Dict[str, Any]) -> str:
    """Start guided study creation"""
    from api.services.interview_engine import InterviewEngine
    self.interview = InterviewEngine()
    question = self.interview.get_current_question()
    return f"Let's set up your optimization step by step.\n\n{question}"

def _tool_interview_answer(self, params: Dict[str, Any]) -> str:
    """Process interview answer"""
    if not self.interview:
        return "No interview in progress. Use start_interview first."

    result = self.interview.process_answer(params["answer"])

    response_parts = []

    # Show what was extracted
    if result["spec_changes"]:
        response_parts.append("**Updated:**")
        for change in result["spec_changes"]:
            response_parts.append(f"  ✓ {change}")

    if result["error"]:
        response_parts.append(f"\n{result['error']}")
    elif result["complete"]:
        # Create the study
        spec = self.interview.get_spec()
        # ... create study directory and save spec
        response_parts.append("\n✓ **Interview complete!** Creating your study...")
        self.canvas_state = spec
    elif result["question"]:
        response_parts.append(f"\n{result['question']}")

    return "\n".join(response_parts)
```
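The control flow end to end can be sketched with a stub engine. The questions below are hypothetical; only the shape of the `result` dict (spec_changes, error, complete, question) mirrors what `_tool_interview_answer` consumes above:

```python
# Stub engine illustrating the start_interview -> interview_answer loop.
class StubInterview:
    def __init__(self):
        self.questions = ["What should we optimize?", "Any constraints?"]
        self.step = 0

    def get_current_question(self) -> str:
        return self.questions[self.step]

    def process_answer(self, answer: str) -> dict:
        self.step += 1
        done = self.step >= len(self.questions)
        return {
            "spec_changes": [f"Recorded: {answer}"],
            "error": None,
            "complete": done,
            "question": None if done else self.questions[self.step],
        }

interview = StubInterview()
first = interview.get_current_question()        # shown by _tool_start_interview
r1 = interview.process_answer("minimize mass")  # advances to the next question
r2 = interview.process_answer("stress <= 200")  # final answer completes it
print(first, "->", r1["question"], "->", r2["complete"])
```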

---

### Phase 5: UI Polish (P2/P3)

#### Task 5.1: Tool Call Indicators
**File**: `frontend/src/components/chat/ToolIndicator.tsx` (NEW)

```typescript
import React from 'react';
import { Loader2, Check, Variable, Cpu, Target, Lock, FolderPlus, Search, Wrench } from 'lucide-react';

interface ToolIndicatorProps {
  tool: string;
  status: 'running' | 'complete';
  result?: string;
}

const TOOL_ICONS: Record<string, React.ComponentType<any>> = {
  add_design_variable: Variable,
  add_extractor: Cpu,
  add_objective: Target,
  add_constraint: Lock,
  create_study: FolderPlus,
  introspect_model: Search,
};

const TOOL_LABELS: Record<string, string> = {
  add_design_variable: 'Adding design variable',
  add_extractor: 'Adding extractor',
  add_objective: 'Adding objective',
  add_constraint: 'Adding constraint',
  create_study: 'Creating study',
  update_spec_field: 'Updating configuration',
  remove_node: 'Removing node',
};

export function ToolIndicator({ tool, status, result }: ToolIndicatorProps) {
  const Icon = TOOL_ICONS[tool] || Wrench;
  const label = TOOL_LABELS[tool] || tool;

  return (
    <div className={`flex items-center gap-2 px-3 py-2 rounded-lg text-sm ${
      status === 'running'
        ? 'bg-amber-500/10 text-amber-400 border border-amber-500/20'
        : 'bg-green-500/10 text-green-400 border border-green-500/20'
    }`}>
      {status === 'running' ? (
        <Loader2 className="w-4 h-4 animate-spin" />
      ) : (
        <Check className="w-4 h-4" />
      )}
      <Icon className="w-4 h-4" />
      <span className="font-medium">{label}</span>
      {status === 'complete' && result && (
        <span className="text-xs opacity-75 ml-2">{result}</span>
      )}
    </div>
  );
}
```

---

#### Task 5.2: Display Tool Calls in Chat
**File**: `frontend/src/components/chat/ChatMessage.tsx`

**Add tool call rendering**:
```typescript
import { ToolIndicator } from './ToolIndicator';

// In message rendering:
{message.toolCalls?.map((tc, idx) => (
  <ToolIndicator
    key={idx}
    tool={tc.tool}
    status={tc.status}
    result={tc.result}
  />
))}
```

---

## File Summary

### Files to Modify

| File | Changes |
|------|---------|
| `backend/api/services/claude_agent.py` | Add canvas context, streaming, create_study, interview tools |
| `backend/api/routes/claude.py` | Send full spec, handle canvas_edit, use streaming |
| `frontend/src/hooks/useChat.ts` | Add notifyCanvasEdit, handle streaming, onSpecUpdated |
| `frontend/src/pages/CanvasView.tsx` | Wire onSpecUpdated, pass notifyCanvasEdit |
| `frontend/src/components/canvas/SpecRenderer.tsx` | Call notifyCanvasEdit on user edits |

### Files to Create

| File | Purpose |
|------|---------|
| `backend/api/services/interview_engine.py` | Guided study creation |
| `frontend/src/components/chat/ToolIndicator.tsx` | Tool call UI |

---

## Testing Checklist

### Phase 1 Tests
- [ ] Claude mentions current canvas state in response
- [ ] Canvas updates without HTTP reload when Claude modifies spec
- [ ] User edits canvas → ask Claude about it → Claude knows the change

### Phase 2 Tests
- [ ] Text streams in as Claude types
- [ ] Tool calls show "running" then "complete" status

### Phase 3 Tests
- [ ] "Create a new study called bracket_v2" creates directory + spec
- [ ] Canvas shows new empty study

### Phase 4 Tests
- [ ] "Help me set up an optimization" starts interview
- [ ] Each answer updates canvas incrementally
- [ ] Interview completes with valid spec

---

## Ralph Loop Execution Notes

1. **Start with Phase 1** - It's the foundation for everything else
2. **Test each task individually** before moving to the next
3. **The key insight**: `atomizer_spec.json` is the single source of truth
4. **Don't break existing functionality** - Power mode should still work

### Quick Wins (Do First)
1. Task 1.1 - Add canvas state to Claude context (easy, high value)
2. Task 1.2 - Send full spec in response (easy, removes HTTP reload)
3. Task 1.5 - Frontend uses spec directly (easy, faster updates)

### Recommended Order
```
1.1 → 1.2 → 1.5 → 1.6 → 1.3 → 1.4 → 2.1 → 2.2 → 2.3 → 3.1 → 4.1 → 4.2 → 5.1 → 5.2
```

---

## Success Criteria

The project is complete when:

1. **Bi-directional sync works**: Claude modifies → Canvas updates instantly. User edits → Claude sees them in the next message.

2. **Streaming works**: Text appears as Claude types, and tool calls show in real time.

3. **Study creation works**: User can say "create bracket optimization with mass objective" and see it built on the canvas.

4. **Interview works**: User can say "help me set up an optimization" and be guided through the process.

5. **No HTTP reloads**: Canvas updates purely through WebSocket.