docs: Major documentation overhaul - restructure folders, update tagline, add Getting Started guide
- Restructure docs/ folder (remove numeric prefixes):
  - 04_USER_GUIDES -> guides/
  - 05_API_REFERENCE -> api/
  - 06_PHYSICS -> physics/
  - 07_DEVELOPMENT -> development/
  - 08_ARCHIVE -> archive/
  - 09_DIAGRAMS -> diagrams/
- Replace tagline 'Talk, don't click' with 'LLM-driven optimization framework' in 9 files
- Create comprehensive docs/GETTING_STARTED.md:
  - Prerequisites and quick setup
  - Project structure overview
  - First study tutorial (Claude or manual)
  - Dashboard usage guide
  - Neural acceleration introduction
- Rewrite docs/00_INDEX.md with correct paths and modern structure
- Archive obsolete files:
  - 01_PROTOCOLS.md -> archive/historical/01_PROTOCOLS_legacy.md
  - 03_GETTING_STARTED.md -> archive/historical/
  - ATOMIZER_PODCAST_BRIEFING.md -> archive/marketing/
- Update timestamps to 2026-01-20 across all key files
- Update .gitignore to exclude docs/generated/
- Version bump: ATOMIZER_CONTEXT v1.8 -> v2.0
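The folder restructure described above can be expressed as `git mv` operations so history follows the files. This is a hedged sketch replayed in a throwaway repo, not necessarily how the commit was produced; only three of the six renames are shown, and the `.keep` placeholder files exist only to make the directories trackable.

```shell
# Replay three of the renames from the commit message in a scratch repo
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
mkdir -p docs/04_USER_GUIDES docs/05_API_REFERENCE docs/06_PHYSICS
touch docs/04_USER_GUIDES/.keep docs/05_API_REFERENCE/.keep docs/06_PHYSICS/.keep
git add -A
git -c user.email=dev@example.com -c user.name=dev commit -qm "before restructure"
# git mv stages each rename in a single step
git mv docs/04_USER_GUIDES docs/guides
git mv docs/05_API_REFERENCE docs/api
git mv docs/06_PHYSICS docs/physics
ls docs
```

`git status` after these commands reports the moves as renames, which keeps `git log --follow` working for the relocated docs.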
@@ -10,7 +10,7 @@ Load this FIRST on every new session, then route to specific protocols.

 **Atomizer** is an LLM-first FEA (Finite Element Analysis) optimization framework. Users describe optimization problems in natural language, and Claude orchestrates the entire workflow: model introspection, config generation, optimization execution, and results analysis.

-**Philosophy**: Talk, don't click. Engineers describe what they want; AI handles the rest.
+**Philosophy**: LLM-driven optimization. Engineers describe what they want; AI handles the rest.

 ---
@@ -501,7 +501,8 @@ The `DashboardDB` class creates Optuna-compatible schema for dashboard integrati

 | Component | Version | Last Updated |
 |-----------|---------|--------------|
-| ATOMIZER_CONTEXT | 1.8 | 2025-12-28 |
+| ATOMIZER_CONTEXT | 2.0 | 2026-01-20 |
+| Documentation Structure | 2.0 | 2026-01-20 |
 | BaseOptimizationRunner | 1.0 | 2025-12-07 |
 | GenericSurrogate | 1.0 | 2025-12-07 |
 | Study State Detector | 1.0 | 2025-12-07 |
@@ -520,4 +521,4 @@ The `DashboardDB` class creates Optuna-compatible schema for dashboard integrati

 ---

-*Atomizer: Where engineers talk, AI optimizes.*
+*Atomizer: LLM-driven structural optimization for engineering.*
@@ -24,7 +24,7 @@ requires_skills: []

 **Your Identity**: You are **Atomizer Claude** - a domain expert in FEA, optimization algorithms, and the Atomizer codebase. Not a generic assistant.

-**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
+**Core Philosophy**: LLM-driven optimization. Users describe what they want; you configure and execute.

 **NEW in v3.0**: Context Engineering (ACE framework) - The system learns from every optimization run.
@@ -21,7 +21,7 @@ requires_skills: []

 **Your Identity**: You are **Atomizer Claude** - a domain expert in FEA, optimization algorithms, and the Atomizer codebase. Not a generic assistant.

-**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
+**Core Philosophy**: LLM-driven optimization. Users describe what they want; you configure and execute.

 ---
@@ -112,7 +112,7 @@ python -m optimization_engine.insights generate studies/my_mirror --type zernike

 | 1-10 µm | **RECOMMENDED**: Use OPD method |
 | < 1 µm | Both methods equivalent |

-**Related Documentation**: [ZERNIKE_OPD_METHOD.md](../../../docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md)
+**Related Documentation**: [ZERNIKE_OPD_METHOD.md](../../../docs/physics/ZERNIKE_OPD_METHOD.md)

 ---
@@ -284,6 +284,6 @@ python -m optimization_engine.insights recommend studies/my_study

 ## Related Documentation

 - **Protocol Specification**: `docs/protocols/system/SYS_17_STUDY_INSIGHTS.md`
-- **OPD Method Physics**: `docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md`
+- **OPD Method Physics**: `docs/physics/ZERNIKE_OPD_METHOD.md`
 - **Zernike Integration**: `docs/ZERNIKE_INTEGRATION.md`
 - **Extractor Catalog**: `.claude/skills/modules/extractors-catalog.md`
3 .gitignore (vendored)
@@ -109,3 +109,6 @@ _dat_run*.dat

 # Claude session temp files
 .claude-mcp-*.json
 .claude-prompt-*.md
+
+# Auto-generated documentation (regenerate with: python -m optimization_engine.auto_doc all)
+docs/generated/
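The new ignore rule can be sanity-checked with `git check-ignore`. A minimal sketch against a throwaway repo (the regeneration command itself comes from the comment in the hunk above; the `api.md` filename is just an illustrative placeholder):

```shell
# Scratch repo containing only the rule added in this hunk
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf 'docs/generated/\n' > .gitignore
mkdir -p docs/generated
touch docs/generated/api.md
# check-ignore exits 0 and echoes the path when the file matches an ignore rule
git check-ignore docs/generated/api.md   # -> docs/generated/api.md
```

The trailing slash in `docs/generated/` restricts the pattern to directories, so a stray file literally named `docs/generated` would not be ignored.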
||||
@@ -68,7 +68,7 @@ This file provides:
|
||||
|
||||
## Core Philosophy
|
||||
|
||||
**Talk, don't click.** Users describe what they want in plain language. You interpret, configure, execute, and explain.
|
||||
**LLM-driven optimization framework.** Users describe what they want in plain language. You interpret, configure, execute, and explain.
|
||||
|
||||
## Context Loading Layers
|
||||
|
||||
@@ -207,9 +207,9 @@ Atomizer/

 | Feature | Documentation |
 |---------|--------------|
-| **Canvas Builder** | `docs/04_USER_GUIDES/CANVAS.md` |
-| **Dashboard Overview** | `docs/04_USER_GUIDES/DASHBOARD.md` |
-| **Implementation Status** | `docs/04_USER_GUIDES/DASHBOARD_IMPLEMENTATION_STATUS.md` |
+| **Canvas Builder** | `docs/guides/CANVAS.md` |
+| **Dashboard Overview** | `docs/guides/DASHBOARD.md` |
+| **Implementation Status** | `docs/guides/DASHBOARD_IMPLEMENTATION_STATUS.md` |

 **Canvas V3.1 Features (AtomizerSpec v2.0):**
 - **AtomizerSpec v2.0**: Unified JSON configuration format
16 README.md
@@ -1,11 +1,11 @@

 # Atomizer

-> **Talk, don't click.** AI-native structural optimization for Siemens NX with neural network acceleration.
+> **LLM-driven structural optimization framework** for Siemens NX with neural network acceleration.

 [](https://www.python.org/downloads/)
 [](https://www.plm.automation.siemens.com/global/en/products/nx/)
 [](LICENSE)
-[](docs/06_PHYSICS/)
+[](docs/physics/)

 ---
@@ -280,7 +280,7 @@ Atomizer/

 ├── studies/              # Optimization studies by geometry
 ├── docs/                 # Documentation
 │   ├── protocols/        # Protocol specifications
-│   └── 06_PHYSICS/       # Physics domain docs
+│   └── physics/          # Physics domain docs
 ├── knowledge_base/       # LAC persistent learning
 │   └── lac/              # Session insights, failures, patterns
 └── nx_journals/          # NX Open automation scripts
@@ -306,12 +306,12 @@ Atomizer/

 | [CLAUDE.md](CLAUDE.md) | System instructions for Claude |
 | [.claude/ATOMIZER_CONTEXT.md](.claude/ATOMIZER_CONTEXT.md) | Session context loader |
 | [docs/protocols/](docs/protocols/) | Protocol specifications |
-| [docs/06_PHYSICS/](docs/06_PHYSICS/) | Physics domain documentation |
+| [docs/physics/](docs/physics/) | Physics domain documentation |

 ### Physics Documentation

-- [ZERNIKE_FUNDAMENTALS.md](docs/06_PHYSICS/ZERNIKE_FUNDAMENTALS.md) - Zernike polynomial basics
-- [ZERNIKE_OPD_METHOD.md](docs/06_PHYSICS/ZERNIKE_OPD_METHOD.md) - OPD method for lateral displacement
+- [ZERNIKE_FUNDAMENTALS.md](docs/physics/ZERNIKE_FUNDAMENTALS.md) - Zernike polynomial basics
+- [ZERNIKE_OPD_METHOD.md](docs/physics/ZERNIKE_OPD_METHOD.md) - OPD method for lateral displacement

 ---
@@ -356,8 +356,8 @@ Python and dependencies are pre-configured. Do not install additional packages.

 ## License

-Proprietary - Atomaste 2025
+Proprietary - Atomaste 2026

 ---

-*Atomizer: Where engineers talk, AI optimizes.*
+*Atomizer: LLM-driven structural optimization for engineering.*
445 docs/00_INDEX.md
@@ -1,9 +1,7 @@

 # Atomizer Documentation Index

 **Welcome to the Atomizer documentation!** This index provides a structured navigation hub for all documentation resources.

-**Last Updated**: 2025-11-25
-**Project Version**: 0.95.0 (95% complete - Neural Integration Complete!)
+**Last Updated**: 2026-01-20
+**Project Version**: 1.0.0 (AtomizerSpec v2.0 - Full LLM Integration)

 ---
@@ -11,361 +9,212 @@

 New to Atomizer? Start here:

-1. **[README.md](../README.md)** - Project overview, philosophy, and quick start guide
-2. **[Getting Started Tutorial](HOW_TO_EXTEND_OPTIMIZATION.md)** - Create your first optimization study
-3. **[Neural Features Guide](NEURAL_FEATURES_COMPLETE.md)** - Neural network acceleration (NEW!)
-4. **[Example Studies](../studies/)** - Working examples (UAV arm with neural, bracket)
+1. **[README.md](../README.md)** - Project overview and philosophy
+2. **[Getting Started](GETTING_STARTED.md)** - Installation, first study, dashboard
+3. **[Protocol System](protocols/README.md)** - How Atomizer is organized
+4. **[Example Studies](../studies/)** - Working examples

 ---
 ## Documentation Structure

-### 🧠 Neural Network Acceleration (NEW!)
+### Core Documentation

-**Core Neural Documentation**:
-- **[NEURAL_FEATURES_COMPLETE.md](NEURAL_FEATURES_COMPLETE.md)** - Complete guide to all neural features
-- **[NEURAL_WORKFLOW_TUTORIAL.md](NEURAL_WORKFLOW_TUTORIAL.md)** - Step-by-step: data → training → optimization
-- **[GNN_ARCHITECTURE.md](GNN_ARCHITECTURE.md)** - Technical deep-dive into GNN models
-- **[PHYSICS_LOSS_GUIDE.md](PHYSICS_LOSS_GUIDE.md)** - Loss function selection guide
+| Document | Purpose |
+|----------|---------|
+| **[GETTING_STARTED.md](GETTING_STARTED.md)** | Setup, first study, dashboard basics |
+| **[ARCHITECTURE.md](ARCHITECTURE.md)** | System architecture, hooks, data flow |
+| **[protocols/README.md](protocols/README.md)** | Protocol Operating System overview |

-**Integration Documentation**:
-- **[ATOMIZER_FIELD_INTEGRATION_PLAN.md](ATOMIZER_FIELD_INTEGRATION_PLAN.md)** - Integration roadmap (COMPLETE)
-- **[ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md](ATOMIZER_FIELD_NEURAL_OPTIMIZATION_GUIDE.md)** - API reference
+### Protocol System

-**Quick Commands**:
-```bash
-# Run neural-accelerated optimization
-python run_optimization.py --trials 5000 --use-neural
+The Protocol Operating System (POS) provides structured workflows:

-# Train new model
-cd atomizer-field && python train_parametric.py --epochs 200
-```
+```
+protocols/
+├── README.md                 # Protocol system overview
+├── operations/               # How-to guides (OP_01-08)
+│   ├── OP_01_CREATE_STUDY.md
+│   ├── OP_02_RUN_OPTIMIZATION.md
+│   ├── OP_03_MONITOR_PROGRESS.md
+│   ├── OP_04_ANALYZE_RESULTS.md
+│   ├── OP_05_EXPORT_TRAINING_DATA.md
+│   ├── OP_06_TROUBLESHOOT.md
+│   ├── OP_07_DISK_OPTIMIZATION.md
+│   └── OP_08_GENERATE_REPORT.md
+├── system/                   # Technical specifications (SYS_10-18)
+│   ├── SYS_10_IMSO.md        # Intelligent optimization
+│   ├── SYS_11_MULTI_OBJECTIVE.md
+│   ├── SYS_12_EXTRACTOR_LIBRARY.md
+│   ├── SYS_13_DASHBOARD_TRACKING.md
+│   ├── SYS_14_NEURAL_ACCELERATION.md
+│   ├── SYS_15_METHOD_SELECTOR.md
+│   ├── SYS_16_SELF_AWARE_TURBO.md
+│   ├── SYS_17_STUDY_INSIGHTS.md
+│   └── SYS_18_CONTEXT_ENGINEERING.md
+└── extensions/               # Extensibility (EXT_01-04)
+    ├── EXT_01_CREATE_EXTRACTOR.md
+    ├── EXT_02_CREATE_HOOK.md
+    ├── EXT_03_CREATE_PROTOCOL.md
+    └── EXT_04_CREATE_SKILL.md
+```
-### 📋 01. Core Specifications
+### User Guides

-**[PROTOCOLS.md](PROTOCOLS.md)** - Master protocol specifications (ALL PROTOCOLS IN ONE PLACE)
-- Protocol 10: Intelligent Multi-Strategy Optimization (IMSO)
-- Protocol 11: Multi-Objective Support (MANDATORY for all components)
-- Protocol 13: Real-Time Dashboard Tracking
+Located in `guides/`:

-**Individual Protocol Documents** (detailed specifications):
-- [PROTOCOL_10_IMSO.md](PROTOCOL_10_IMSO.md) - Adaptive characterization, landscape analysis
-- [PROTOCOL_10_V2_IMPLEMENTATION.md](PROTOCOL_10_V2_IMPLEMENTATION.md) - Implementation summary
-- [PROTOCOL_10_V2_FIXES.md](PROTOCOL_10_V2_FIXES.md) - Bug fixes and improvements
-- [PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md](PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md) - Multi-objective requirements
-- [FIX_SUMMARY_PROTOCOL_11.md](FIX_SUMMARY_PROTOCOL_11.md) - Protocol 11 bug fixes
-- [PROTOCOL_13_DASHBOARD.md](PROTOCOL_13_DASHBOARD.md) - Dashboard implementation complete spec
+| Guide | Purpose |
+|-------|---------|
+| **[CANVAS.md](guides/CANVAS.md)** | Visual study builder (AtomizerSpec v2.0) |
+| **[DASHBOARD.md](guides/DASHBOARD.md)** | Dashboard overview and features |
+| **[NEURAL_FEATURES_COMPLETE.md](guides/NEURAL_FEATURES_COMPLETE.md)** | Neural acceleration guide |
+| **[NEURAL_WORKFLOW_TUTORIAL.md](guides/NEURAL_WORKFLOW_TUTORIAL.md)** | Data → Training → Optimization |
+| **[hybrid_mode.md](guides/hybrid_mode.md)** | Hybrid FEA/NN optimization |
+| **[TRAINING_DATA_EXPORT_GUIDE.md](guides/TRAINING_DATA_EXPORT_GUIDE.md)** | Exporting data for neural training |
-### 🏗️ 02. Architecture & Design
+### API Reference

-**Visual Architecture** (🆕 Comprehensive Diagrams):
-- [**Architecture Overview**](09_DIAGRAMS/architecture_overview.md) - Complete system architecture with Mermaid diagrams
-  - High-level system architecture
-  - Component interactions
-  - Data flow diagrams
-  - Philosophy and design principles
-  - Technology stack
-- [**Protocol Workflows**](09_DIAGRAMS/protocol_workflows.md) - Detailed protocol execution flows
-  - Protocol 10: IMSO workflow
-  - Protocol 11: Multi-objective decision trees
-  - Protocol 13: Real-time tracking
-  - LLM-assisted workflow (Hybrid Mode)
-  - All protocols integrated
+Located in `api/`:

-**System Architecture**:
-- [HOOK_ARCHITECTURE.md](HOOK_ARCHITECTURE.md) - Plugin system and lifecycle hooks
-- [NX_SESSION_MANAGEMENT.md](NX_SESSION_MANAGEMENT.md) - NX Nastran integration details
-- [SYSTEM_CONFIGURATION.md](SYSTEM_CONFIGURATION.md) - Configuration format and options
+| Document | Purpose |
+|----------|---------|
+| **[system_configuration.md](api/system_configuration.md)** | Configuration format reference |
+| **[nx_integration.md](api/nx_integration.md)** | NX Open API integration |
+| **[GNN_ARCHITECTURE.md](api/GNN_ARCHITECTURE.md)** | Graph Neural Network details |
+| **[NXOPEN_RESOURCES.md](api/NXOPEN_RESOURCES.md)** | NX Open documentation resources |
-**Extractors & Data Flow**:
-- [HOW_TO_EXTEND_OPTIMIZATION.md](HOW_TO_EXTEND_OPTIMIZATION.md) - Unified extractor library (Protocol 12)
+### Physics Documentation

-### 📊 03. Dashboard
+Located in `physics/`:

-**Dashboard Documentation**:
-- [DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md) - Complete 3-page architecture blueprint
-- [DASHBOARD_REACT_IMPLEMENTATION.md](DASHBOARD_REACT_IMPLEMENTATION.md) - React frontend implementation guide
-- [DASHBOARD_IMPLEMENTATION_STATUS.md](DASHBOARD_IMPLEMENTATION_STATUS.md) - Current progress and testing
-- [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md) - Features and usage summary
+| Document | Purpose |
+|----------|---------|
+| **[ZERNIKE_FUNDAMENTALS.md](physics/ZERNIKE_FUNDAMENTALS.md)** | Zernike polynomial basics |
+| **[ZERNIKE_OPD_METHOD.md](physics/ZERNIKE_OPD_METHOD.md)** | OPD method for mirror optimization |
-**Quick Commands**:
-```bash
-# Start backend (port 8000)
-cd atomizer-dashboard/backend && python -m uvicorn api.main:app --reload --port 8000
+### Development

-# Start frontend (port 3001)
-cd atomizer-dashboard/frontend && npm run dev
-```
+Located in `development/`:

-### 🔧 04. Development
+| Document | Purpose |
+|----------|---------|
+| **[DEVELOPMENT_GUIDANCE.md](development/DEVELOPMENT_GUIDANCE.md)** | Development guidelines |
+| **[DEVELOPMENT_ROADMAP.md](development/DEVELOPMENT_ROADMAP.md)** | Future plans |
+| **[Philosophy.md](development/Philosophy.md)** | Design philosophy |

-**For Contributors**:
-- [../DEVELOPMENT.md](../DEVELOPMENT.md) - Development guide, workflow, testing
-- [TODAY_PLAN_NOV18.md](TODAY_PLAN_NOV18.md) - Daily development planning example
-- [LESSONS_LEARNED.md](LESSONS_LEARNED.md) - Lessons from development sessions
+### Diagrams

-**Phase Planning**:
-- [PHASE_3_1_COMPLETION_SUMMARY.md](PHASE_3_1_COMPLETION_SUMMARY.md) - Phase 3.1 completion
-- [PHASE_3_2_INTEGRATION_PLAN.md](PHASE_3_2_INTEGRATION_PLAN.md) - Current phase plan
+Located in `diagrams/`:
-### 📖 05. User Guides
-
-**Creating & Running Studies**:
-- [HOW_TO_EXTEND_OPTIMIZATION.md](HOW_TO_EXTEND_OPTIMIZATION.md) - Complete guide
-  - Creating custom extractors
-  - Defining objectives
-  - Setting up design variables
-  - Configuring constraints
-
-**Using the Dashboard**:
-- Start with [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md)
-- See [DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md) for full capabilities
-
-**Multi-Objective Optimization**:
-- Read [PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md](PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md)
-- Check example: `studies/bracket_stiffness_optimization_V3/`
-
-### 🔬 06. Advanced Topics
-
-**Intelligent Optimization (Protocol 10)**:
-- [PROTOCOL_10_IMSO.md](PROTOCOL_10_IMSO.md) - How it works
-- [PROTOCOL_10_V2_IMPLEMENTATION.md](PROTOCOL_10_V2_IMPLEMENTATION.md) - Implementation details
-- [PROTOCOL_10_V2_FIXES.md](PROTOCOL_10_V2_FIXES.md) - Bug fixes and improvements
-
-**LLM Integration** (Hybrid Mode):
-- [HYBRID_MODE_GUIDE.md](HYBRID_MODE_GUIDE.md) - Using LLM-assisted workflows
-
-**NX Integration**:
-- [NX_SESSION_MANAGEMENT.md](NX_SESSION_MANAGEMENT.md) - Session handling, solving, extraction
-- [NASTRAN_VISUALIZATION_RESEARCH.md](NASTRAN_VISUALIZATION_RESEARCH.md) - Visualizing OP2/BDF results with pyNastran + PyVista
-
-### 📚 07. Session Summaries & Historical
-
-**Recent Sessions** (Nov 2025):
-- [GOOD_MORNING_NOV18.md](GOOD_MORNING_NOV18.md) - Morning summary Nov 18
-- [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md) - Dashboard completion
-- [PROTOCOL_13_DASHBOARD.md](PROTOCOL_13_DASHBOARD.md) - Protocol 13 summary
-
-**Historical Documents** (archived for reference):
-- Various session summaries in docs/ folder
-- Phase completion documents
-
-### 🎨 09. Visual Diagrams
-
-**Comprehensive Visual Documentation**:
-- [**Diagram Index**](09_DIAGRAMS/00_INDEX.md) - All visual documentation hub
-- [**Architecture Overview**](09_DIAGRAMS/architecture_overview.md) - System architecture diagrams
-- [**Protocol Workflows**](09_DIAGRAMS/protocol_workflows.md) - Protocol execution flows
-
-**Viewing Diagrams**:
-- Render automatically in GitHub and VS Code (with Markdown Preview Mermaid extension)
-- Copy to https://mermaid.live/ for online viewing
-- Supported by MkDocs, Docusaurus, and most documentation generators
+| Document | Purpose |
+|----------|---------|
+| **[architecture_overview.md](diagrams/architecture_overview.md)** | System architecture diagrams |
+| **[protocol_workflows.md](diagrams/protocol_workflows.md)** | Protocol execution flows |
 ---

-## Documentation by Role
+## Documentation by Task

-### For New Users
+### Creating Studies

-Start here for a guided learning path:
-1. Read [../README.md](../README.md) - Understand the project
-2. Review [PROTOCOLS.md](PROTOCOLS.md) - Learn about the architecture
-3. Try [HOW_TO_EXTEND_OPTIMIZATION.md](HOW_TO_EXTEND_OPTIMIZATION.md) - Build your first study
-4. Explore dashboard with [DASHBOARD_SESSION_SUMMARY.md](DASHBOARD_SESSION_SUMMARY.md)
+1. **[GETTING_STARTED.md](GETTING_STARTED.md)** - First study tutorial
+2. **[OP_01_CREATE_STUDY.md](protocols/operations/OP_01_CREATE_STUDY.md)** - Detailed creation protocol
+3. **[guides/CANVAS.md](guides/CANVAS.md)** - Visual study builder

-### For Developers
+### Running Optimizations

-Contributing to Atomizer:
-1. [../DEVELOPMENT.md](../DEVELOPMENT.md) - Development workflow and guidelines
-2. [PROTOCOLS.md](PROTOCOLS.md) - Understand protocol-based architecture
-3. [HOOK_ARCHITECTURE.md](HOOK_ARCHITECTURE.md) - Plugin system internals
-4. [HOW_TO_EXTEND_OPTIMIZATION.md](HOW_TO_EXTEND_OPTIMIZATION.md) - Extractor library details
+1. **[OP_02_RUN_OPTIMIZATION.md](protocols/operations/OP_02_RUN_OPTIMIZATION.md)** - Run protocol
+2. **[SYS_10_IMSO.md](protocols/system/SYS_10_IMSO.md)** - Intelligent optimization
+3. **[SYS_15_METHOD_SELECTOR.md](protocols/system/SYS_15_METHOD_SELECTOR.md)** - Method selection

-### For Researchers
+### Neural Acceleration

-Using Atomizer for research:
-1. [PROTOCOL_10_IMSO.md](PROTOCOL_10_IMSO.md) - Intelligent optimization algorithms
-2. [PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md](PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md) - Multi-objective capabilities
-3. [DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md) - Visualization and analysis tools
-4. Example studies in `studies/` folder
+1. **[guides/NEURAL_FEATURES_COMPLETE.md](guides/NEURAL_FEATURES_COMPLETE.md)** - Overview
+2. **[SYS_14_NEURAL_ACCELERATION.md](protocols/system/SYS_14_NEURAL_ACCELERATION.md)** - Technical spec
+3. **[guides/NEURAL_WORKFLOW_TUTORIAL.md](guides/NEURAL_WORKFLOW_TUTORIAL.md)** - Step-by-step

+### Analyzing Results
+
+1. **[OP_04_ANALYZE_RESULTS.md](protocols/operations/OP_04_ANALYZE_RESULTS.md)** - Analysis protocol
+2. **[SYS_17_STUDY_INSIGHTS.md](protocols/system/SYS_17_STUDY_INSIGHTS.md)** - Physics visualizations
+
+### Troubleshooting
+
+1. **[OP_06_TROUBLESHOOT.md](protocols/operations/OP_06_TROUBLESHOOT.md)** - Troubleshooting guide
+2. **[GETTING_STARTED.md#troubleshooting](GETTING_STARTED.md#troubleshooting)** - Common issues

 ---
-## Protocol Quick Reference
+## Quick Reference

-| Protocol | Name | Status | Priority | Version |
-|----------|------|--------|----------|---------|
-| **10** | Intelligent Multi-Strategy Optimization | ✅ Complete | P0 | v2.1 |
-| **11** | Multi-Objective Support | ✅ Complete | P0 (MANDATORY) | v1.0 |
-| **13** | Real-Time Dashboard Tracking | ✅ Complete | P1 | v1.0 |
-| **Neural** | GNN Acceleration (AtomizerField) | ✅ Complete | P0 | v1.0 |
+### Protocol Summary

-**See [PROTOCOLS.md](PROTOCOLS.md) for complete specifications.**
+| Protocol | Name | Purpose |
+|----------|------|---------|
+| **OP_01** | Create Study | Study creation workflow |
+| **OP_02** | Run Optimization | Execution workflow |
+| **OP_03** | Monitor Progress | Real-time monitoring |
+| **OP_04** | Analyze Results | Results analysis |
+| **OP_06** | Troubleshoot | Debugging issues |
+| **SYS_10** | IMSO | Intelligent optimization |
+| **SYS_12** | Extractors | Physics extraction library |
+| **SYS_14** | Neural | Neural network acceleration |

-## Neural Features Quick Reference
-
-| Feature | Status | Performance |
-|---------|--------|-------------|
-| **Parametric GNN** | ✅ Production | 4.5ms inference, 2,200x speedup |
-| **Field Predictor GNN** | ✅ Production | 50ms inference, full field output |
-| **Physics-Informed Loss** | ✅ Production | <5% prediction error |
-| **Hybrid Optimization** | ✅ Production | 97% NN usage rate |
-| **Uncertainty Quantification** | ✅ Production | Ensemble-based confidence |
-| **Training Pipeline** | ✅ Production | BDF/OP2 → GNN → Deploy |
-
-**See [NEURAL_FEATURES_COMPLETE.md](NEURAL_FEATURES_COMPLETE.md) for details.**
 ---

 ## Common Tasks

-### Running an Optimization
+### Essential Commands

 ```bash
-# Navigate to study
-cd studies/my_study
+# Activate environment
+conda activate atomizer

 # Run optimization
-python run_optimization.py --trials 50
+python run_optimization.py --start --trials 50

-# View in dashboard
-# Open http://localhost:3001 and select study
+# Resume interrupted run
+python run_optimization.py --start --resume
+
+# Start dashboard
+python launch_dashboard.py
+
+# Neural turbo mode
+python run_nn_optimization.py --turbo --nn-trials 5000
 ```
-### Creating a New Study
+### Key Extractors

-```bash
-# Use template (recommended)
-python create_study.py --name my_study --model path/to/model.prt
+| ID | Physics | Function |
+|----|---------|----------|
+| E1 | Displacement | `extract_displacement()` |
+| E2 | Frequency | `extract_frequency()` |
+| E3 | Stress | `extract_solid_stress()` |
+| E4 | Mass (BDF) | `extract_mass_from_bdf()` |
+| E5 | Mass (CAD) | `extract_mass_from_expression()` |
+| E22 | Zernike OPD | `extract_zernike_opd()` |

-# Or manually
-mkdir -p studies/my_study/1_setup/model
-# Copy model files
-# Edit optimization_config.json
-# Create run_optimization.py
-```
-
-### Checking Protocol 10 Intelligence Reports
-
-```bash
-# View characterization progress
-cat studies/my_study/2_results/intelligent_optimizer/characterization_progress.json
-
-# View final intelligence report
-cat studies/my_study/2_results/intelligent_optimizer/intelligence_report.json
-
-# View strategy transitions
-cat studies/my_study/2_results/intelligent_optimizer/strategy_transitions.json
-```
+Full catalog: [SYS_12_EXTRACTOR_LIBRARY.md](protocols/system/SYS_12_EXTRACTOR_LIBRARY.md)
 ---

-## File Organization
+## Archive

-```
-Atomizer/
-├── README.md                          # Project overview
-├── DEVELOPMENT.md                     # Development guide
-├── docs/
-│   ├── 00_INDEX.md                    # THIS FILE - Documentation hub
-│   ├── PROTOCOLS.md                   # Master protocol specifications
-│   ├── PROTOCOL_10_*.md               # Protocol 10 detailed docs
-│   ├── PROTOCOL_11_*.md               # Protocol 11 detailed docs
-│   ├── PROTOCOL_13_*.md               # Protocol 13 detailed docs
-│   ├── DASHBOARD_*.md                 # Dashboard documentation
-│   ├── HOOK_ARCHITECTURE.md           # Plugin system
-│   ├── NX_SESSION_MANAGEMENT.md       # NX integration
-│   ├── HOW_TO_EXTEND_OPTIMIZATION.md  # User guide
-│   └── [session summaries]            # Historical documents
-├── optimization_engine/               # Core optimization code
-├── atomizer-dashboard/                # Dashboard frontend & backend
-├── studies/                           # Optimization studies
-└── examples/                          # Example models
-```
+Historical documents are preserved in `archive/`:
+
+- `archive/historical/` - Legacy documents, old protocols
+- `archive/marketing/` - Briefings, presentations
+- `archive/session_summaries/` - Past development sessions
 ---

-## Getting Help
+## LLM Resources

-### Documentation Issues
+For Claude/AI integration:

-- **Missing information?** Check [PROTOCOLS.md](PROTOCOLS.md) for comprehensive specs
-- **Protocol questions?** See individual protocol docs (PROTOCOL_XX_*.md)
-- **Dashboard issues?** Check [DASHBOARD_IMPLEMENTATION_STATUS.md](DASHBOARD_IMPLEMENTATION_STATUS.md)
-
-### Technical Issues
-
-- **NX integration problems?** See [NX_SESSION_MANAGEMENT.md](NX_SESSION_MANAGEMENT.md)
-- **Multi-objective errors?** Check [PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md](PROTOCOL_11_MULTI_OBJECTIVE_SUPPORT.md)
-- **Protocol 10 not working?** See [PROTOCOL_10_V2_FIXES.md](PROTOCOL_10_V2_FIXES.md)
-
-### Community
-
-- **GitHub Issues**: https://github.com/yourusername/Atomizer/issues
-- **Discussions**: https://github.com/yourusername/Atomizer/discussions
-- **Email**: your.email@example.com
+| Resource | Purpose |
+|----------|---------|
+| **[../CLAUDE.md](../CLAUDE.md)** | System instructions |
+| **[../.claude/ATOMIZER_CONTEXT.md](../.claude/ATOMIZER_CONTEXT.md)** | Session context |
+| **[../.claude/skills/](../.claude/skills/)** | Skill modules |
 ---

-## Document Conventions
-
-### Naming System
-
-Documentation files use numbered prefixes for organization:
-- `00_*` - Index and navigation files
-- `01_*` - Core specifications (protocols)
-- `02_*` - Architecture documentation
-- `03_*` - User guides
-- Individual protocol docs use descriptive names (PROTOCOL_XX_NAME.md)
-
-### Status Indicators
-
-- ✅ Complete - Fully implemented and tested
-- 🔨 In Progress - Active development
-- 📋 Planned - Design phase
-- ⏳ Pending - Not yet started
-
-### Version Format
-
-- **Major.Minor.Patch** (e.g., v2.1.0)
-- **Major**: Breaking changes or architectural redesign
-- **Minor**: New features, backward compatible
-- **Patch**: Bug fixes
-
----
-
-## Contributing to Documentation
-
-### Updating Documentation
-
-1. Keep [00_INDEX.md](00_INDEX.md) (this file) up to date with new docs
-2. Update [PROTOCOLS.md](PROTOCOLS.md) when adding/modifying protocols
-3. Maintain [../DEVELOPMENT.md](../DEVELOPMENT.md) with current status
-4. Add session summaries for major development sessions
-
-### Documentation Style
-
-- Use clear, concise language
-- Include code examples
-- Add diagrams for complex concepts
-- Follow Markdown best practices
-- Keep table of contents updated
-
-### Review Process
-
-1. Create pull request with documentation changes
-2. Ensure cross-references are valid
-3. Update index files (this file, PROTOCOLS.md)
-4. Check for broken links
-
----

-**Last Updated**: 2025-11-21
-**Maintained By**: Atomizer Development Team
-**Next Review**: When new protocols or major features are added
-
-For questions about this documentation structure, open an issue on GitHub.
+**Last Updated**: 2026-01-20
+**Maintained By**: Antoine / Atomaste
401 docs/GETTING_STARTED.md (new file)
@@ -0,0 +1,401 @@
|
||||
# Getting Started with Atomizer
|
||||
|
||||
**Last Updated**: 2026-01-20
|
||||
|
||||
This guide walks you through setting up Atomizer and running your first optimization study.
|
||||
|
||||
---
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Prerequisites](#prerequisites)
|
||||
2. [Quick Setup](#quick-setup)
|
||||
3. [Project Structure](#project-structure)
|
||||
4. [Your First Optimization Study](#your-first-optimization-study)
|
||||
5. [Using the Dashboard](#using-the-dashboard)
|
||||
6. [Neural Acceleration (Optional)](#neural-acceleration-optional)
|
||||
7. [Next Steps](#next-steps)
|
||||
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
||||
### Required Software
|
||||
|
||||
| Software | Version | Purpose |
|
||||
|----------|---------|---------|
|
||||
| **Siemens NX** | 2506+ | CAD/FEA modeling |
|
||||
| **NX Nastran** | Included with NX | FEA solver |
|
||||
| **Python** | 3.10+ | Core engine |
|
||||
| **Anaconda** | Latest | Environment management |
|
||||
| **Git** | Latest | Version control |
|
||||
|
||||
### Hardware Recommendations
|
||||
|
||||
- **RAM**: 16GB minimum, 32GB recommended
|
||||
- **Storage**: SSD with 50GB+ free space (FEA files are large)
|
||||
- **CPU**: Multi-core for parallel FEA runs
|
||||
|
||||
---
|
||||
|
||||
## Quick Setup
|
||||
|
||||
### 1. Clone the Repository
|
||||
|
||||
```bash
|
||||
git clone http://192.168.86.50:3000/Antoine/Atomizer.git
|
||||
cd Atomizer
|
||||
```
|
||||
|
||||
### 2. Activate the Conda Environment
|
||||
|
||||
The `atomizer` environment is pre-configured with all dependencies.
|
||||
|
||||
```bash
|
||||
conda activate atomizer
|
||||
```
|
||||
|
||||
**Important**: Always use this environment. Do not install additional packages.
|
||||
|
||||
### 3. Verify Installation
|
||||
|
||||
```bash
|
||||
# Check Python path
|
||||
python --version # Should show Python 3.10+
|
||||
|
||||
# Verify core imports
|
||||
python -c "from optimization_engine.core.runner import OptimizationRunner; print('OK')"
|
||||
```
|
||||
|
||||
### 4. Configure NX Path (if needed)
|
||||
|
||||
The default NX installation path is `C:\Program Files\Siemens\NX2506\`. If yours differs, update it in your study's `optimization_config.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"nx_settings": {
|
||||
"nx_install_path": "C:\\Program Files\\Siemens\\NX2506"
|
||||
}
|
||||
}
|
||||
```
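If you script around this setting, a small helper can read it with a fallback to the default path. A minimal sketch using only the standard library; the config shape matches the snippet above, and `DEFAULT_NX_PATH` simply mirrors the documented default:

```python
import json

# Default documented above; used when the config does not override it.
DEFAULT_NX_PATH = "C:\\Program Files\\Siemens\\NX2506"

def resolve_nx_path(config):
    """Return nx_install_path from a parsed config dict, or the default."""
    return config.get("nx_settings", {}).get("nx_install_path", DEFAULT_NX_PATH)

# Example: a config that overrides the default install location
config = json.loads('{"nx_settings": {"nx_install_path": "D:\\\\NX2506"}}')
print(resolve_nx_path(config))  # D:\NX2506
print(resolve_nx_path({}))      # falls back to the default
```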

---

## Project Structure

```
Atomizer/
├── CLAUDE.md                 # AI assistant instructions
├── README.md                 # Project overview
├── .claude/                  # LLM configuration
│   ├── ATOMIZER_CONTEXT.md   # Session context
│   └── skills/               # Claude skill modules
├── optimization_engine/      # Core Python package
│   ├── core/                 # Optimization runners
│   ├── extractors/           # Physics extraction (20+)
│   ├── nx/                   # NX/Nastran integration
│   ├── gnn/                  # Neural network surrogates
│   └── study/                # Study management
├── atomizer-dashboard/       # Web dashboard
│   ├── backend/              # FastAPI server
│   └── frontend/             # React UI
├── studies/                  # Your optimization studies
│   ├── M1_Mirror/            # Mirror studies
│   ├── Simple_Bracket/       # Bracket studies
│   └── ...
├── docs/                     # Documentation
│   ├── protocols/            # Protocol Operating System
│   ├── guides/               # User guides
│   └── physics/              # Physics documentation
└── knowledge_base/           # Learning system (LAC)
```

---

## Your First Optimization Study

### Option A: Using Claude (Recommended)

The easiest way to create a study is through natural language:

```
You: "Create a new study to optimize my bracket for minimum mass
      with stress under 200 MPa. The model is at C:\Models\bracket.prt"

Claude: [Analyzes model, creates study, generates configuration]
```

### Option B: Manual Creation

#### Step 1: Create Study Directory

```bash
# Create study under appropriate geometry type
mkdir -p studies/Simple_Bracket/my_first_study/1_setup/model
mkdir -p studies/Simple_Bracket/my_first_study/2_iterations
mkdir -p studies/Simple_Bracket/my_first_study/3_results
```

#### Step 2: Copy NX Files

Copy your NX model files to the study:
- `Model.prt` - Geometry part
- `Model_fem1.fem` - FEM file
- `Model_sim1.sim` - Simulation file
- `Model_fem1_i.prt` - Idealized part (IMPORTANT!)

```bash
cp /path/to/your/model/* studies/Simple_Bracket/my_first_study/1_setup/model/
```

#### Step 3: Create Configuration

Create `optimization_config.json` in your study root:

```json
{
  "study_name": "my_first_study",
  "description": "Bracket mass optimization with stress constraint",

  "design_variables": [
    {
      "name": "thickness",
      "expression_name": "web_thickness",
      "min": 2.0,
      "max": 10.0,
      "initial": 5.0
    }
  ],

  "objectives": [
    {
      "name": "mass",
      "type": "minimize",
      "extractor": "extract_mass_from_bdf"
    }
  ],

  "constraints": [
    {
      "name": "max_stress",
      "type": "less_than",
      "value": 200.0,
      "extractor": "extract_solid_stress",
      "extractor_args": {"element_type": "ctetra"}
    }
  ],

  "optimization": {
    "method": "TPE",
    "n_trials": 50
  },

  "nx_settings": {
    "nx_install_path": "C:\\Program Files\\Siemens\\NX2506",
    "simulation_timeout_s": 600
  }
}
```
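Before launching a run, it can be worth sanity-checking that the file parses and carries the fields you expect. A minimal standard-library sketch; the required-key list is an assumption based on the example above, not the engine's actual schema:

```python
import json

# Assumed minimum set of top-level keys, based on the example config above.
REQUIRED_KEYS = {"study_name", "design_variables", "objectives", "optimization"}

def check_config(text):
    """Return a list of problems found in an optimization_config.json string."""
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - cfg.keys())]
    for dv in cfg.get("design_variables", []):
        # Bounds must form a non-empty interval
        if dv.get("min", 0) >= dv.get("max", 0):
            problems.append(f"design variable {dv.get('name')}: min must be < max")
    return problems

print(check_config('{"study_name": "demo"}'))
# ['missing key: design_variables', 'missing key: objectives', 'missing key: optimization']
```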

#### Step 4: Run the Optimization

```bash
cd studies/Simple_Bracket/my_first_study
python run_optimization.py --start --trials 50
```

### Understanding the Output

During optimization:
- **Trial folders** are created in `2_iterations/` (trial_0001, trial_0002, ...)
- **Results** are logged to `3_results/study.db` (Optuna database)
- **Progress** is printed to console and logged to `optimization.log`

```
Trial 15/50: mass=2.34 kg, stress=185.2 MPa [FEASIBLE]
Best so far: mass=2.12 kg (trial #12)
```
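If you want to post-process these console lines yourself (for quick plotting, say), a small parser is enough. A sketch that assumes the exact line format shown above:

```python
import re

# Matches lines like: "Trial 15/50: mass=2.34 kg, stress=185.2 MPa [FEASIBLE]"
TRIAL_RE = re.compile(
    r"Trial (\d+)/(\d+): mass=([\d.]+) kg, stress=([\d.]+) MPa \[(\w+)\]"
)

def parse_trial_line(line):
    """Parse one progress line into a dict, or return None if it doesn't match."""
    m = TRIAL_RE.match(line.strip())
    if not m:
        return None
    trial, total, mass, stress, status = m.groups()
    return {
        "trial": int(trial),
        "total": int(total),
        "mass_kg": float(mass),
        "stress_mpa": float(stress),
        "feasible": status == "FEASIBLE",
    }

print(parse_trial_line("Trial 15/50: mass=2.34 kg, stress=185.2 MPa [FEASIBLE]"))
```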

---

## Using the Dashboard

### Starting the Dashboard

```bash
# From project root
python launch_dashboard.py
```

This starts:
- **Backend**: FastAPI at http://localhost:8000
- **Frontend**: React at http://localhost:3003

### Dashboard Features

| Tab | Purpose |
|-----|---------|
| **Home** | Study selection, creation |
| **Canvas** | Visual study builder (AtomizerSpec v2.0) |
| **Dashboard** | Real-time monitoring, convergence plots |
| **Analysis** | Pareto fronts, parallel coordinates |
| **Insights** | Physics visualizations (Zernike, stress fields) |

### Canvas Builder

The Canvas provides a visual, node-based interface:

1. **Add Model Node** - Select your .sim file
2. **Add Design Variables** - Link to NX expressions
3. **Add Extractors** - Choose physics to extract
4. **Add Objectives** - Define what to optimize
5. **Connect Nodes** - Create the optimization flow
6. **Execute** - Generate and run the study
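Conceptually, the steps above edit a node graph that the spec file serializes: every node the builder adds must be reachable through valid edges before execution. The sketch below models that idea with plain dicts and checks for dangling edges; the `nodes`/`edges` field names are illustrative, not the actual AtomizerSpec v2.0 schema:

```python
# Hypothetical, simplified spec shape -- illustrative only, not the real schema.
spec = {
    "nodes": [
        {"id": "dv_thickness", "kind": "design_variable"},
        {"id": "model", "kind": "model"},
        {"id": "ext_mass", "kind": "extractor"},
        {"id": "obj_min_mass", "kind": "objective"},
    ],
    "edges": [
        ("dv_thickness", "model"),
        ("model", "ext_mass"),
        ("ext_mass", "obj_min_mass"),
    ],
}

def dangling_edges(spec):
    """Return edges whose source or target is not a declared node."""
    ids = {n["id"] for n in spec["nodes"]}
    return [e for e in spec["edges"] if e[0] not in ids or e[1] not in ids]

print(dangling_edges(spec))  # [] -- every edge points at a known node
```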

---

## Neural Acceleration (Optional)

For studies with 50+ completed FEA trials, you can train a neural surrogate for 2000x+ speedup.

### When to Use Neural Acceleration

| Scenario | Use Neural? |
|----------|-------------|
| < 30 trials needed | No - FEA is fine |
| 30-100 trials | Maybe - depends on FEA time |
| > 100 trials | Yes - significant time savings |
| Exploratory optimization | Yes - explore more designs |

### Training a Surrogate

```bash
cd studies/M1_Mirror/m1_mirror_adaptive_V15

# Train on existing FEA data
python -m optimization_engine.gnn.train_zernike_gnn V15 --epochs 200
```

### Running Turbo Mode

```bash
# Run 5000 GNN predictions, validate top candidates with FEA
python run_nn_optimization.py --turbo --nn-trials 5000
```

### Performance Comparison

| Method | Time per Evaluation | 100 Trials |
|--------|---------------------|------------|
| FEA only | 10-30 minutes | 17-50 hours |
| GNN Turbo | 4.5 milliseconds | ~30 seconds |

---

## Next Steps

### Learn the Protocol System

Atomizer uses a layered protocol system:

| Layer | Location | Purpose |
|-------|----------|---------|
| **Operations** | `docs/protocols/operations/` | How to create, run, analyze |
| **System** | `docs/protocols/system/` | Technical specifications |
| **Extensions** | `docs/protocols/extensions/` | How to extend Atomizer |

Key protocols to read:
- **OP_01**: Creating studies
- **OP_02**: Running optimizations
- **SYS_12**: Available extractors
- **SYS_14**: Neural acceleration

### Explore Available Extractors

Atomizer includes 20+ physics extractors:

| Category | Examples |
|----------|----------|
| **Mechanical** | Displacement, stress, strain energy |
| **Modal** | Frequency, mode shapes |
| **Thermal** | Temperature, heat flux |
| **Mass** | BDF mass, CAD mass |
| **Optical** | Zernike wavefront error |

Full catalog: `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md`

### Use Claude for Complex Tasks

For complex optimizations, describe your goals naturally:

```
"Set up a multi-objective optimization for my UAV arm:
 - Minimize mass
 - Maximize first natural frequency
 - Keep stress under 150 MPa
 Use NSGA-II with 100 trials"
```

Claude will:
1. Analyze your model
2. Suggest appropriate extractors
3. Configure the optimization
4. Generate all necessary files

---

## Troubleshooting

### Common Issues

| Issue | Solution |
|-------|----------|
| "NX not found" | Check `nx_install_path` in config |
| "Mesh not updating" | Ensure `*_i.prt` (idealized part) is copied |
| "Solver timeout" | Increase `simulation_timeout_s` |
| "Import error" | Verify `conda activate atomizer` |

### Getting Help

1. Check `docs/protocols/operations/OP_06_TROUBLESHOOT.md`
2. Ask Claude: "Why is my optimization failing?"
3. Review `3_results/optimization.log`

---

## Quick Reference

### Essential Commands

```bash
# Activate environment
conda activate atomizer

# Run optimization
python run_optimization.py --start --trials 50

# Resume interrupted run
python run_optimization.py --start --resume

# Test single trial
python run_optimization.py --test

# Start dashboard
python launch_dashboard.py

# Check study status
python -c "from optimization_engine.study.state import get_study_status; print(get_study_status('.'))"
```

### Key Files

| File | Purpose |
|------|---------|
| `optimization_config.json` | Study configuration |
| `atomizer_spec.json` | AtomizerSpec v2.0 (Canvas) |
| `run_optimization.py` | FEA optimization script |
| `3_results/study.db` | Optuna database |

---

*Ready to optimize? Start with a simple study, then explore advanced features like neural acceleration and multi-objective optimization.*

@@ -1,7 +1,7 @@
# Atomizer Protocol Specifications

**Last Updated**: 2025-11-21
**Status**: Active
**Last Updated**: 2026-01-20
**Status**: ARCHIVED - See docs/protocols/ for current protocol documentation
**Applies To**: All Atomizer optimization systems

---
@@ -43,7 +43,7 @@ graph LR
    style Atomizer fill:#e1f5fe
```

**Core Philosophy**: "Talk, don't click."
**Core Philosophy**: LLM-driven FEA optimization.

---

@@ -750,7 +750,7 @@ The system prompt is structured to maximize KV-cache hits:
```
[SECTION 1: STABLE - Never changes]
- Atomizer identity and capabilities
- Core principles (talk don't click)
- Core principles (LLM-driven optimization)
- Tool schemas and definitions
- Base protocol routing table

@@ -780,7 +780,7 @@ You are assisting with **Atomizer**, an LLM-first FEA optimization framework.
- Neural acceleration (600-1000x speedup)

## Principles
1. Talk, don't click - users describe goals in plain language
1. LLM-driven - users describe goals in plain language
2. Never modify master models - work on copies
3. Always validate before running
4. Document everything

@@ -50,7 +50,7 @@ Implement an **Interview Mode** that systematically gathers engineering requirem

| Principle | Implementation |
|-----------|----------------|
| **Talk, don't click** | Natural conversation, not forms |
| **Conversational interface** | Natural conversation, not forms |
| **Intelligence first** | Auto-detect what's possible, ask about intent |
| **No assumptions** | Ask instead of guessing on critical decisions |
| **Adaptive depth** | Simple studies = fewer questions |

737	docs/plans/CLAUDE_CANVAS_INTEGRATION_V2.md	Normal file
@@ -0,0 +1,737 @@
|
||||
# Claude + Canvas Integration V2
|
||||
|
||||
## The Vision
|
||||
|
||||
**Side-by-side LLM + Canvas** where:
|
||||
1. **Claude talks → Canvas updates in real-time** (user sees nodes appear/change)
|
||||
2. **User tweaks Canvas → Claude sees changes** (bi-directional sync)
|
||||
3. **Full Claude Code-level power** through the dashboard chat
|
||||
4. **Interview-driven study creation** entirely through chat
|
||||
|
||||
The user can:
|
||||
- Describe what they want in natural language
|
||||
- Watch the canvas build itself
|
||||
- Make quick manual tweaks
|
||||
- Continue the conversation with Claude seeing their changes
|
||||
- Have Claude execute protocols, create files, run optimizations
|
||||
|
||||
---
|
||||
|
||||
## Current State vs Target
|
||||
|
||||
### What We Have Now
|
||||
|
||||
```
|
||||
┌──────────────────┐ ┌──────────────────┐
|
||||
│ Chat Panel │ │ Canvas │
|
||||
│ (Power Mode) │ │ (SpecRenderer) │
|
||||
├──────────────────┤ ├──────────────────┤
|
||||
│ - Anthropic API │ │ - Loads spec │
|
||||
│ - Write tools │ │ - User edits │
|
||||
│ - spec_modified │--->│ - Auto-refresh │
|
||||
│ events │ │ on event │
|
||||
└──────────────────┘ └──────────────────┘
|
||||
│ │
|
||||
│ No real-time │
|
||||
│ canvas state │
|
||||
│ in Claude context │
|
||||
└──────────────────────┘
|
||||
```
|
||||
|
||||
**Gaps:**
|
||||
1. Claude doesn't see current canvas state in real-time
|
||||
2. No interview engine for guided study creation
|
||||
3. Limited tool set (no file ops, no protocol execution)
|
||||
4. No streaming for tool calls
|
||||
5. Mode switching requires reconnection
|
||||
|
||||
### What We Want
|
||||
|
||||
```
|
||||
┌───────────────────────────────────────────────────────────────────┐
|
||||
│ ATOMIZER DASHBOARD │
|
||||
├────────────────────────────┬──────────────────────────────────────┤
|
||||
│ │ │
|
||||
│ CHAT PANEL │ CANVAS │
|
||||
│ (Atomizer Assistant) │ (SpecRenderer) │
|
||||
│ │ │
|
||||
│ ┌──────────────────────┐ │ ┌────────────────────────────────┐ │
|
||||
│ │ "Create a bracket │ │ │ │ │
|
||||
│ │ optimization with │ │ │ [DV: thickness] │ │
|
||||
│ │ mass and stiffness" │ │ │ │ │ │
|
||||
│ └──────────────────────┘ │ │ ▼ │ │
|
||||
│ │ │ │ [Model Node] │ │
|
||||
│ ▼ │ │ │ │ │
|
||||
│ ┌──────────────────────┐ │ │ ▼ │ │
|
||||
│ │ 🔧 Adding thickness │ │ │ [Ext: mass]──>[Obj: min] │ │
|
||||
│ │ 🔧 Adding mass ext │◄─┼──┤ [Ext: disp]──>[Obj: min] │ │
|
||||
│ │ 🔧 Adding objective │ │ │ │ │
|
||||
│ │ │ │ │ (nodes appear in real-time) │ │
|
||||
│ │ ✓ Study configured! │ │ │ │ │
|
||||
│ └──────────────────────┘ │ └────────────────────────────────┘ │
|
||||
│ │ │
|
||||
│ ┌──────────────────────┐ │ User can click any node to edit │
|
||||
│ │ Claude sees the │ │ Claude sees user's edits │
|
||||
│ │ canvas state and │◄─┼──────────────────────────────────────│
|
||||
│ │ user's manual edits │ │ │
|
||||
│ └──────────────────────┘ │ │
|
||||
└────────────────────────────┴──────────────────────────────────────┘
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Architecture
|
||||
|
||||
### 1. WebSocket Hub (Bi-directional Sync)
|
||||
|
||||
```
|
||||
┌─────────────────────┐
|
||||
│ WebSocket Hub │
|
||||
│ (Single Connection)│
|
||||
└─────────┬───────────┘
|
||||
│
|
||||
┌────────────────────┼────────────────────┐
|
||||
│ │ │
|
||||
▼ ▼ ▼
|
||||
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
|
||||
│ Chat Panel │ │ Canvas │ │ Spec Store │
|
||||
│ │ │ │ │ │
|
||||
│ - Send messages │ │ - User edits │ │ - Single source │
|
||||
│ - Receive text │ │ - Node add/del │ │ of truth │
|
||||
│ - See tool calls│ │ - Edge changes │ │ - Validates │
|
||||
└─────────────────┘ └─────────────────┘ └─────────────────┘
|
||||
|
||||
Message Types:
|
||||
Client → Server:
|
||||
{ type: "message", content: "..." } # Chat message
|
||||
{ type: "canvas_edit", patch: {...} } # User made canvas change
|
||||
{ type: "set_study", study_id: "..." } # Switch study
|
||||
{ type: "ping" } # Heartbeat
|
||||
|
||||
Server → Client:
|
||||
{ type: "text", content: "...", done: false } # Streaming text
|
||||
{ type: "tool_start", tool: "...", input: {...} }
|
||||
{ type: "tool_result", tool: "...", result: "..." }
|
||||
{ type: "spec_updated", spec: {...} } # Full spec after change
|
||||
{ type: "canvas_patch", patch: {...} } # Incremental update
|
||||
{ type: "done" } # Response complete
|
||||
{ type: "pong" } # Heartbeat response
|
||||
```
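On either end of the socket, handling these frames reduces to routing on `type`. A minimal Python sketch of a client-side dispatcher for the server messages listed above; the handler registry and names are illustrative, not part of the actual backend:

```python
import json

def route_server_message(raw, handlers):
    """Decode one server frame and invoke the handler registered for its type."""
    msg = json.loads(raw)
    kind = msg.get("type")
    handler = handlers.get(kind)
    if handler is None:
        return f"unhandled: {kind}"
    handler(msg)
    return kind

# Illustrative handlers: collect streamed text and mark completion.
received = []
handlers = {
    "text": lambda m: received.append(m["content"]),
    "spec_updated": lambda m: received.append("<spec replaced>"),
    "done": lambda m: received.append("<done>"),
}

route_server_message('{"type": "text", "content": "Adding thickness...", "done": false}', handlers)
route_server_message('{"type": "done"}', handlers)
print(received)  # ['Adding thickness...', '<done>']
```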

### 2. Enhanced Claude Agent

The `AtomizerClaudeAgent` needs to be more like **Claude Code**:

```python
class AtomizerClaudeAgent:
    """Full-power Claude agent with Claude Code-like capabilities"""

    def __init__(self, study_id: Optional[str] = None):
        self.client = anthropic.Anthropic()
        self.study_id = study_id
        self.spec_store = SpecStore(study_id)  # Real-time spec access
        self.interview_state = None            # For guided creation
        self.tools = self._define_full_tools()

    async def chat_stream(
        self,
        message: str,
        conversation: List[Dict],
        canvas_state: Optional[Dict] = None  # Current canvas from frontend
    ) -> AsyncGenerator[Dict, None]:
        """Stream responses with tool calls"""

        # Build context with current canvas state
        system = self._build_system_prompt(canvas_state)

        # Stream the response
        with self.client.messages.stream(
            model="claude-sonnet-4-20250514",
            max_tokens=8192,
            system=system,
            messages=conversation + [{"role": "user", "content": message}],
            tools=self.tools
        ) as stream:
            for event in stream:
                if event.type == "content_block_delta":
                    if event.delta.type == "text_delta":
                        yield {"type": "text", "content": event.delta.text}

                elif event.type == "content_block_start":
                    if event.content_block.type == "tool_use":
                        yield {
                            "type": "tool_start",
                            "tool": event.content_block.name,
                            "input": {}  # Will be completed
                        }

            # Handle tool calls after stream
            response = stream.get_final_message()

        for block in response.content:
            if block.type == "tool_use":
                result = await self._execute_tool(block.name, block.input)
                yield {
                    "type": "tool_result",
                    "tool": block.name,
                    "result": result["result"],
                    "spec_changed": result.get("spec_changed", False)
                }

                # If spec changed, send the updated spec
                if result.get("spec_changed"):
                    yield {
                        "type": "spec_updated",
                        "spec": self.spec_store.get_dict()
                    }
```

### 3. Full Tool Set

Claude needs more tools to match Claude Code power:

```python
FULL_TOOLS = [
    # === READ TOOLS ===
    "read_study_config",       # Read atomizer_spec.json
    "query_trials",            # Query optimization database
    "list_studies",            # List available studies
    "read_file",               # Read any file in study
    "list_files",              # List files in study directory
    "read_nx_expressions",     # Get NX model expressions

    # === WRITE TOOLS (Spec Modification) ===
    "add_design_variable",     # Add DV to spec
    "add_extractor",           # Add extractor (built-in or custom)
    "add_objective",           # Add objective
    "add_constraint",          # Add constraint
    "update_spec_field",       # Update any spec field by path
    "remove_node",             # Remove any node by ID
    "update_canvas_layout",    # Reposition nodes for better layout

    # === STUDY MANAGEMENT ===
    "create_study",            # Create new study directory + spec
    "clone_study",             # Clone existing study
    "validate_spec",           # Validate current spec
    "migrate_config",          # Migrate legacy config to spec v2

    # === OPTIMIZATION CONTROL ===
    "start_optimization",      # Start optimization run
    "stop_optimization",       # Stop running optimization
    "get_optimization_status", # Check if running, trial count

    # === FILE OPERATIONS ===
    "write_file",              # Write file to study directory
    "create_directory",        # Create directory in study

    # === NX INTEGRATION ===
    "introspect_model",        # Get model info (expressions, features)
    "suggest_design_vars",     # AI-suggest design variables from model

    # === INTERVIEW/GUIDED CREATION ===
    "start_interview",         # Begin guided study creation
    "process_answer",          # Process user's interview answer
    "get_interview_state",     # Get current interview progress
]
```

### 4. Interview Engine Integration

The interview happens **through chat**, not a separate UI:

```python
class InterviewEngine:
    """Guided study creation through conversation"""

    PHASES = [
        ("model", "Let's set up your model. What's the path to your NX simulation file?"),
        ("objectives", "What do you want to optimize? (e.g., minimize mass, minimize displacement)"),
        ("design_vars", "Which parameters can I vary? I can suggest some based on your model."),
        ("constraints", "Any constraints to respect? (e.g., max stress, min frequency)"),
        ("method", "I recommend {method} for this problem. Should I configure it?"),
        ("review", "Here's the complete configuration. Ready to create the study?"),
    ]

    def __init__(self, spec_store: SpecStore):
        self.spec_store = spec_store
        self.current_phase = 0
        self.collected_data = {}

    def get_current_question(self) -> str:
        phase_name, question = self.PHASES[self.current_phase]
        # Customize question based on collected data
        if phase_name == "method":
            method = self._recommend_method()
            question = question.format(method=method)
        return question

    def process_answer(self, answer: str) -> Dict:
        """Process answer and build spec incrementally"""
        phase_name, _ = self.PHASES[self.current_phase]

        # Extract structured data from answer
        extracted = self._extract_for_phase(phase_name, answer)
        self.collected_data[phase_name] = extracted

        # Update spec with extracted data
        spec_update = self._apply_to_spec(phase_name, extracted)

        # Advance to next phase
        self.current_phase += 1

        return {
            "phase": phase_name,
            "extracted": extracted,
            "spec_update": spec_update,
            "next_question": self.get_current_question() if self.current_phase < len(self.PHASES) else None,
            "complete": self.current_phase >= len(self.PHASES)
        }
```

Claude uses the interview through tools:

```python
async def _tool_start_interview(self, params: Dict) -> Dict:
    """Start guided study creation"""
    self.interview_state = InterviewEngine(self.spec_store)
    return {
        "status": "started",
        "first_question": self.interview_state.get_current_question()
    }

async def _tool_process_answer(self, params: Dict) -> Dict:
    """Process user's answer in interview"""
    if not self.interview_state:
        return {"error": "No interview in progress"}

    result = self.interview_state.process_answer(params["answer"])

    if result["spec_update"]:
        # Spec was updated - this will trigger canvas update
        return {
            "status": "updated",
            "spec_changed": True,
            "next_question": result["next_question"],
            "complete": result["complete"]
        }

    return result
```

---

## Frontend Implementation

### 1. Unified WebSocket Hook

```typescript
// hooks/useAtomizerSocket.ts
export function useAtomizerSocket(studyId: string | undefined) {
  const [spec, setSpec] = useState<AtomizerSpec | null>(null);
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [isThinking, setIsThinking] = useState(false);
  const [currentTool, setCurrentTool] = useState<string | null>(null);

  const ws = useRef<WebSocket | null>(null);

  // Single WebSocket connection for everything
  useEffect(() => {
    const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
    const host = import.meta.env.DEV ? 'localhost:8001' : window.location.host;
    ws.current = new WebSocket(`${protocol}//${host}/api/atomizer/ws`);

    ws.current.onmessage = (event) => {
      const data = JSON.parse(event.data);

      switch (data.type) {
        case 'text':
          // Streaming text from Claude
          setMessages(prev => {
            const last = prev[prev.length - 1];
            if (last?.role === 'assistant' && !last.complete) {
              return [...prev.slice(0, -1), {
                ...last,
                content: last.content + data.content
              }];
            }
            return [...prev, {
              id: Date.now().toString(),
              role: 'assistant',
              content: data.content,
              complete: false
            }];
          });
          break;

        case 'tool_start':
          setCurrentTool(data.tool);
          // Add tool indicator to chat
          setMessages(prev => [...prev, {
            id: Date.now().toString(),
            role: 'tool',
            tool: data.tool,
            status: 'running'
          }]);
          break;

        case 'tool_result':
          setCurrentTool(null);
          // Update tool message with result
          setMessages(prev => prev.map(m =>
            m.role === 'tool' && m.tool === data.tool && m.status === 'running'
              ? { ...m, status: 'complete', result: data.result }
              : m
          ));
          break;

        case 'spec_updated':
          // Canvas gets the new spec - this is the magic!
          setSpec(data.spec);
          break;

        case 'done':
          setIsThinking(false);
          // Mark last message as complete
          setMessages(prev => prev.map((m, i) =>
            i === prev.length - 1 ? { ...m, complete: true } : m
          ));
          break;
      }
    };

    // Set study context
    if (studyId) {
      ws.current.onopen = () => {
        ws.current?.send(JSON.stringify({
          type: 'set_study',
          study_id: studyId
        }));
      };
    }

    return () => ws.current?.close();
  }, [studyId]);

  // Send message
  const sendMessage = useCallback((content: string) => {
    if (!ws.current) return;

    setIsThinking(true);
    setMessages(prev => [...prev, {
      id: Date.now().toString(),
      role: 'user',
      content
    }]);

    ws.current.send(JSON.stringify({
      type: 'message',
      content
    }));
  }, []);

  // Notify Claude about canvas edits
  const notifyCanvasEdit = useCallback((patch: any) => {
    ws.current?.send(JSON.stringify({
      type: 'canvas_edit',
      patch
    }));
  }, []);

  return {
    spec,
    messages,
    isThinking,
    currentTool,
    sendMessage,
    notifyCanvasEdit
  };
}
```

### 2. Integrated Canvas View

```typescript
// pages/CanvasView.tsx (revised)
export function CanvasView() {
  const { '*': studyId } = useParams();

  // Single hook manages everything
  const {
    spec,
    messages,
    isThinking,
    currentTool,
    sendMessage,
    notifyCanvasEdit
  } = useAtomizerSocket(studyId);

  // When user edits canvas, notify Claude
  const handleSpecChange = useCallback((newSpec: AtomizerSpec) => {
    // This is called by SpecRenderer when user makes edits
    notifyCanvasEdit({
      type: 'spec_replace',
      spec: newSpec
    });
  }, [notifyCanvasEdit]);

  return (
    <div className="h-screen flex">
      {/* Canvas - receives spec from WebSocket */}
      <div className="flex-1">
        <SpecRenderer
          spec={spec}
          onChange={handleSpecChange}  // User edits flow back
          highlightNode={currentTool ? getAffectedNode(currentTool) : undefined}
        />
      </div>

      {/* Chat Panel */}
      <div className="w-96 border-l">
        <ChatPanel
          messages={messages}
          isThinking={isThinking}
          currentTool={currentTool}
          onSend={sendMessage}
        />
      </div>
    </div>
  );
}
```

### 3. Visual Feedback for Tool Calls

When Claude calls a tool, the canvas shows visual feedback:

```typescript
// components/canvas/SpecRenderer.tsx
function SpecRenderer({ spec, highlightNode, onChange }) {
  // When a tool is targeting a node, highlight it
  const getNodeStyle = (nodeId: string) => {
    if (highlightNode === nodeId) {
      return {
        boxShadow: '0 0 0 3px #f59e0b',  // Amber glow
        animation: 'pulse 1s infinite'
      };
    }
    return {};
  };

  // When new nodes are added, animate them
  const [newNodes, setNewNodes] = useState<Set<string>>(new Set());
  const prevIds = useRef<Set<string>>(new Set());

  useEffect(() => {
    if (spec) {
      const currentIds = new Set([
        ...spec.design_variables.map(d => d.id),
        ...spec.extractors.map(e => e.id),
        ...spec.objectives.map(o => o.id),
        ...spec.constraints.map(c => c.id)
      ]);

      // Find truly new nodes
      const added = [...currentIds].filter(id => !prevIds.current.has(id));
      if (added.length > 0) {
        setNewNodes(new Set(added));
        setTimeout(() => setNewNodes(new Set()), 1000);  // Clear animation
      }
      prevIds.current = currentIds;
    }
  }, [spec]);

  // Render with animations
  return (
    <ReactFlow nodes={nodes.map(n => ({
      ...n,
      style: {
        ...getNodeStyle(n.id),
        ...(newNodes.has(n.id) ? { animation: 'slideIn 0.5s ease-out' } : {})
      }
    }))} />
  );
}
```
|
||||
|
||||
---

## User Experience Flow

### Flow 1: Create Study Through Chat

```
User: "Create a bracket optimization. I want to minimize mass while keeping
       stress below 200 MPa. The thickness can vary from 2mm to 10mm."

Claude: [Internal: parse intent; no interview needed - user gave a full spec]

🔧 Creating study "bracket_optimization"...
🔧 Adding design variable: thickness [2mm - 10mm]
🔧 Adding extractor: mass (BDF mass extraction)
🔧 Adding extractor: max_stress (von Mises stress)
🔧 Adding objective: minimize mass
🔧 Adding constraint: stress <= 200 MPa

✓ Study created! I've configured:
- 1 design variable (thickness)
- 2 extractors (mass, stress)
- 1 objective (minimize mass)
- 1 constraint (max stress)

The canvas shows your optimization setup. You can:
- Click any node to adjust settings
- Ask me to add more variables or constraints
- Say "run" to start optimization

[Canvas shows: DV→Model→Extractors→Objectives/Constraints, all connected]
```

### Flow 2: Interview-Guided Creation

```
User: "Help me set up a new optimization"

Claude: Let's set up your optimization step by step.

        First, what's the path to your NX simulation file?
        (You can drag & drop or type the path)

User: "studies/M1_Mirror/model.sim"

Claude: 🔧 Loading model...
        [tool: introspect_model]

        Found your model! It has 15 expressions. I noticed these look like
        good design variables:
        - rib_thickness (current: 8mm)
        - mirror_depth (current: 50mm)
        - support_angle (current: 45°)

        What are you trying to optimize?
        - Minimize mass?
        - Minimize wavefront error (WFE)?
        - Both? (I can set up multi-objective)

[Canvas shows: Model node appears]

User: "minimize WFE and mass"

Claude: 🔧 Adding Zernike WFE extractor
        🔧 Adding mass extractor
        🔧 Adding objectives (NSGA-II multi-objective)

        Got it - multi-objective optimization for WFE and mass.

        Which parameters should I vary?
        I suggest: rib_thickness, mirror_depth, support_angle

[Canvas shows: Model→WFE Extractor→Obj, Model→Mass Extractor→Obj]

User: "Yes, use those, but keep support_angle fixed"

Claude: 🔧 Adding design variable: rib_thickness [4mm - 16mm]
        🔧 Adding design variable: mirror_depth [30mm - 80mm]

        Perfect. Any constraints?
        - Maximum mass?
        - Maximum stress?
        - Minimum frequency?

[Canvas shows: 2 DVs connected to Model]

User: "max stress 150 MPa"

Claude: 🔧 Adding stress extractor
        🔧 Adding constraint: stress <= 150 MPa

        ✓ Configuration complete! Ready to optimize?

[Canvas shows: Full optimization flow]
```

### Flow 3: User Edits Canvas, Claude Sees

```
[User clicks the thickness DV node, changes max from 10mm to 15mm]

[WebSocket sends: { type: "canvas_edit", patch: { path: "design_variables.0.bounds.max", value: 15 } }]

Claude: [Receives canvas edit notification in context]

User: "Is this range reasonable?"

Claude: I see you changed the thickness max to 15mm. That's quite thick for a
        bracket - typical ranges are 2-10mm.

        With 15mm max, you might see:
        - Higher-mass designs explored
        - Potentially better stiffness
        - Longer optimization time (larger search space)

        Want me to adjust or keep it?
```

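The `canvas_edit` patch above addresses a field by a dot-separated path. A minimal sketch of how a receiver might apply such a patch to a spec object follows; the path grammar and the `applyPatch` helper are assumptions for illustration, not the actual Atomizer wire format.

```typescript
// Apply a canvas_edit patch like
// { path: "design_variables.0.bounds.max", value: 15 } to a spec object.
// Dot-separated segments; numeric segments index into arrays.
function applyPatch(spec: any, path: string, value: unknown): any {
  const keys = path.split('.');
  const copy = structuredClone(spec); // do not mutate the original spec
  let node: any = copy;
  for (const key of keys.slice(0, -1)) {
    node = node[key]; // walk down to the parent of the target field
  }
  node[keys[keys.length - 1]] = value;
  return copy;
}

const spec = { design_variables: [{ id: 'thickness', bounds: { min: 2, max: 10 } }] };
const updated = applyPatch(spec, 'design_variables.0.bounds.max', 15);
console.log(updated.design_variables[0].bounds.max); // 15
console.log(spec.design_variables[0].bounds.max);    // 10 (original untouched)
```

Returning a fresh copy keeps the previous spec available for diffing, which is what lets the UI decide which nodes are "new" or changed.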
---

## Implementation Plan

### Phase 1: Unified WebSocket (1 week)

1. **Backend**: Create the `/api/atomizer/ws` endpoint
   - Single connection for chat + spec sync
   - Streaming response support
   - Canvas edit notifications

2. **Frontend**: Create the `useAtomizerSocket` hook
   - Replaces `useChat` + `useSpecWebSocket`
   - Single source of truth for spec state

3. **Integration**: Wire SpecRenderer to the socket
   - Receive spec updates from Claude's tools
   - Send edit notifications back

### Phase 2: Enhanced Tools (1 week)

1. Add the remaining write tools
2. Implement `introspect_model` for NX expression discovery
3. Add `create_study` for new study creation
4. Add file operation tools

### Phase 3: Interview Engine (1 week)

1. Implement the `InterviewEngine` class
2. Add interview tools to Claude
3. Test the guided creation flow
4. Add smart defaults and recommendations

### Phase 4: Polish (1 week)

1. Visual feedback for tool calls
2. Node highlighting during modification
3. Animation for new nodes
4. Error recovery and reconnection
5. Performance optimization

---

## Success Metrics

1. **Creation Time**: A user can create a complete study in under 3 minutes through chat
2. **Edit Latency**: Canvas updates within 200ms of Claude's tool call
3. **Sync Reliability**: 100% of user edits reflected in Claude's context
4. **Interview Success**: 90% of studies created through the interview are valid

---

## Key Differences from Current Implementation

| Current | Target |
|---------|--------|
| Separate chat/canvas WebSockets | Single unified WebSocket |
| Claude doesn't see canvas state | Real-time canvas state in context |
| Manual spec refresh | Automatic spec push on changes |
| No interview engine | Guided creation through chat |
| Limited tools | Full Claude Code-like tool set |
| Mode switching breaks the connection | Seamless power mode |

---

*This is the architecture that makes Atomizer truly powerful - Claude and Canvas working together as one system.*

docs/plans/CLAUDE_CANVAS_PROJECT.md (new file, 1239 lines; diff suppressed because it is too large)

docs/plans/DASHBOARD_CLAUDE_CODE_INTEGRATION.md (new file, 693 lines)
@@ -0,0 +1,693 @@
# Dashboard Claude Code Integration Plan

**Date**: January 16, 2026
**Status**: 🟢 IMPLEMENTED
**Priority**: CRITICAL
**Implemented**: January 16, 2026

---

## Problem Statement

The dashboard chat assistant is **fundamentally underpowered** compared to the Claude Code CLI. Users expect the same level of intelligence, proactivity, and capability from the dashboard as they get from the terminal.

### Current Experience (Terminal - Claude Code CLI)

```
User: "Add 10 new design variables to the M1 mirror study"

Claude Code:
1. Reads optimization_config.json
2. Understands the current structure
3. Adds 10 variables with intelligent defaults
4. ACTUALLY MODIFIES the file
5. Shows the diff
6. Can immediately run/test
```

### Current Experience (Dashboard Chat)

```
User: "Add 10 new design variables"

Dashboard Chat:
1. Calls MCP tool canvas_add_node
2. Returns a JSON instruction
3. The frontend SHOULD apply it but doesn't
4. Nothing visible happens
5. User is frustrated
```

---

## Root Cause Analysis

### Issue 1: MCP Tools Don't Actually Modify Anything

The current MCP tools (`canvas_add_node`, etc.) just return instructions like:

```json
{
  "success": true,
  "modification": {
    "action": "add_node",
    "nodeType": "designVar",
    "data": {...}
  }
}
```

The **frontend is supposed to receive and apply these**, but:
- WebSocket message handling may not process tool results
- Modifications are never applied automatically
- The user sees a "success" message, but nothing changes

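The missing step is a message handler that actually applies the returned `modification`. A sketch of what that handler could look like is below; the message shape mirrors the JSON above, and `addNode` is a hypothetical canvas-store action, not the actual store API.

```typescript
// Apply (rather than merely acknowledge) a tool-result modification.
type Modification = { action: string; nodeType?: string; data?: Record<string, unknown> };

function handleToolResult(
  msg: { success: boolean; modification?: Modification },
  addNode: (type: string, data: Record<string, unknown>) => void
): boolean {
  if (!msg.success || !msg.modification) return false;
  if (msg.modification.action === 'add_node' && msg.modification.nodeType) {
    addNode(msg.modification.nodeType, msg.modification.data ?? {}); // mutate canvas state
    return true; // modification applied, not just reported
  }
  return false; // unknown action: surface it to the user instead of silently dropping it
}
```

Returning a boolean lets the chat UI distinguish "Claude changed the canvas" from "Claude replied but nothing happened", which is exactly the failure mode described above.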
### Issue 2: Claude API vs Claude Code CLI

| Capability | Claude API (Dashboard) | Claude Code CLI (Terminal) |
|------------|------------------------|----------------------------|
| Read files | Via MCP tool | Native |
| Write files | Via MCP tool (limited) | Native |
| Run commands | Via MCP tool (limited) | Native |
| Edit in place | NO | YES |
| Git operations | NO | YES |
| Multi-step reasoning | Limited | Full |
| Tool chaining | Awkward | Natural |
| Context window | 200k | Effectively unlimited (summarization) |

### Issue 3: Model Capability Gap

The dashboard uses the Claude API (likely Sonnet or Haiku for cost). The terminal uses **Opus 4.5** with full Claude Code capabilities.

---

## Proposed Solution: Claude Code CLI Backend

Instead of MCP tools calling Python scripts, **spawn actual Claude Code CLI sessions** in the backend that have full power.

### Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                       DASHBOARD FRONTEND                        │
├─────────────────────────────────────────────────────────────────┤
│  Canvas Builder  │  Chat Panel  │  Study Views  │  Results      │
└────────┬────────────────┬─────────────────────────────────────┬─┘
         │                │                                     │
         │  WebSocket     │  REST API                           │
         ▼                ▼                                     │
┌─────────────────────────────────────────────────────────────────┐
│                        BACKEND (FastAPI)                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │           CLAUDE CODE SESSION MANAGER                   │    │
│  │                                                         │    │
│  │  - Spawns claude CLI processes                          │    │
│  │  - Maintains conversation context                       │    │
│  │  - Streams output back to frontend                      │    │
│  │  - Has FULL Atomizer codebase access                    │    │
│  │  - Uses Opus 4.5 model                                  │    │
│  │  - Can edit files, run commands, modify studies         │    │
│  │                                                         │    │
│  └─────────────────────────────────────────────────────────┘    │
│                            │                                    │
│                            ▼                                    │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │                  ATOMIZER CODEBASE                      │    │
│  │                                                         │    │
│  │  studies/                    optimization_engine/       │    │
│  │    M1_Mirror/                  extractors/              │    │
│  │      optimization_config.json  runner.py                │    │
│  │      run_optimization.py       ...                      │    │
│  │                                                         │    │
│  └─────────────────────────────────────────────────────────┘    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

### Key Changes

1. **Backend spawns the Claude Code CLI** instead of calling the Claude API
2. **Full file system access** - Claude can read/write any file
3. **Full command execution** - run Python, git, npm, etc.
4. **Opus 4.5 model** - the same intelligence as the terminal
5. **Streamed output** - real-time feedback to the user
6. **Canvas sync** - after Claude modifies files, the canvas reloads from the config

---

## Implementation Plan

### Phase 1: Claude Code CLI Session Manager

**File**: `atomizer-dashboard/backend/api/services/claude_code_session.py`

```python
"""
Claude Code CLI Session Manager

Spawns actual Claude Code CLI processes with full Atomizer access.
This gives dashboard users the same power as terminal users.
"""

import asyncio
import json
import os
from pathlib import Path
from typing import AsyncGenerator, Dict, Optional

ATOMIZER_ROOT = Path(__file__).parent.parent.parent.parent.parent


class ClaudeCodeSession:
    """
    Manages a Claude Code CLI session.

    Unlike MCP tools, this spawns the actual claude CLI, which has:
    - Full file system access
    - Full command execution
    - Opus 4.5 model
    - All Claude Code capabilities
    """

    def __init__(self, session_id: str, study_id: Optional[str] = None):
        self.session_id = session_id
        self.study_id = study_id
        self.canvas_state: Optional[Dict] = None  # Current canvas state from frontend
        self.working_dir = ATOMIZER_ROOT
        if study_id:
            study_path = ATOMIZER_ROOT / "studies" / study_id
            if study_path.exists():
                self.working_dir = study_path

    def set_canvas_state(self, canvas_state: Dict):
        """Update canvas state from the frontend."""
        self.canvas_state = canvas_state

    async def send_message(self, message: str) -> AsyncGenerator[str, None]:
        """
        Send a message to the Claude Code CLI and yield the response.

        Uses the claude CLI with:
        - --print for output
        - --dangerously-skip-permissions for full access (controlled environment)
        - Runs from the Atomizer root to pick up CLAUDE.md context automatically
        - Study-specific context injected into the prompt
        """
        # Build study-specific context
        study_context = self._build_study_context() if self.study_id else ""

        # The user's message with study context prepended
        full_message = f"""## Current Study Context
{study_context}

## User Request
{message}

Remember: You have FULL power to edit files. Make the actual changes, don't just describe them."""

        # Write the prompt to a temp file (better than stdin for complex prompts)
        prompt_file = ATOMIZER_ROOT / f".claude-prompt-{self.session_id}.md"
        prompt_file.write_text(full_message)

        try:
            # Spawn the claude CLI from ATOMIZER_ROOT so it picks up CLAUDE.md,
            # which gives it full Atomizer context automatically
            process = await asyncio.create_subprocess_exec(
                "claude",
                "--print",
                "--dangerously-skip-permissions",  # Full access in controlled env
                "-p", str(prompt_file),  # Read prompt from file
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE,
                cwd=str(ATOMIZER_ROOT),  # Run from root to get CLAUDE.md
                env={
                    **os.environ,
                    "ATOMIZER_STUDY": self.study_id or "",
                    "ATOMIZER_STUDY_PATH": str(self.working_dir),
                },
            )

            # Collect output (communicate() waits for completion; true
            # incremental streaming would read process.stdout line by line)
            stdout, stderr = await process.communicate()

            if stdout:
                yield stdout.decode()
            if stderr and process.returncode != 0:
                yield f"\n[Error]: {stderr.decode()}"

        finally:
            # Clean up the prompt file
            if prompt_file.exists():
                prompt_file.unlink()

    def _build_system_prompt(self) -> str:
        """Build an Atomizer-aware system prompt with full context."""

        # Load CLAUDE.md for Atomizer system instructions
        claude_md_path = ATOMIZER_ROOT / "CLAUDE.md"
        claude_md_content = ""
        if claude_md_path.exists():
            # Truncate if too long
            claude_md_content = claude_md_path.read_text()[:8000]

        # Load study-specific context
        study_context = ""
        if self.study_id:
            study_context = self._build_study_context()

        prompt = f"""# Atomizer Dashboard Assistant

You are running as the Atomizer Dashboard Assistant with FULL Claude Code CLI capabilities.
You have the same power as a terminal Claude Code session.

## Atomizer System Instructions
{claude_md_content}

## Your Capabilities

You can and MUST:
- Read and EDIT any file in the codebase
- Modify optimization_config.json directly
- Update run_optimization.py
- Run Python scripts
- Execute git commands
- Create new studies
- Modify existing studies

When the user asks to add design variables, objectives, or other config changes:
1. Read the current config file
2. Make the actual modifications using the Edit tool
3. Save the file
4. Report what you changed with a diff

DO NOT just return instructions - ACTUALLY MAKE THE CHANGES.

## Current Context

**Atomizer Root**: {ATOMIZER_ROOT}
**Working Directory**: {self.working_dir}

{study_context}

## Important Paths

- Studies: {ATOMIZER_ROOT / 'studies'}
- Extractors: {ATOMIZER_ROOT / 'optimization_engine' / 'extractors'}
- Protocols: {ATOMIZER_ROOT / 'docs' / 'protocols'}

## After Making Changes

After modifying any study files:
1. Confirm the changes were saved
2. Show the relevant diff
3. The dashboard canvas will auto-refresh to reflect your changes
"""
        return prompt

    def _build_study_context(self) -> str:
        """Build detailed context for the active study."""
        context = f"## Active Study: {self.study_id}\n\n"

        # Find and read optimization_config.json
        config_path = self.working_dir / "1_setup" / "optimization_config.json"
        if not config_path.exists():
            config_path = self.working_dir / "optimization_config.json"

        if config_path.exists():
            try:
                config = json.loads(config_path.read_text())
                context += f"**Config File**: `{config_path}`\n\n"

                # Design variables summary
                dvs = config.get("design_variables", [])
                if dvs:
                    context += "### Design Variables\n\n"
                    context += "| Name | Min | Max | Baseline | Unit |\n"
                    context += "|------|-----|-----|----------|------|\n"
                    for dv in dvs[:15]:
                        name = dv.get("name", dv.get("expression_name", "?"))
                        min_v = dv.get("min", dv.get("lower", "?"))
                        max_v = dv.get("max", dv.get("upper", "?"))
                        baseline = dv.get("baseline", "-")
                        unit = dv.get("units", dv.get("unit", "-"))
                        context += f"| {name} | {min_v} | {max_v} | {baseline} | {unit} |\n"
                    if len(dvs) > 15:
                        context += f"\n*... and {len(dvs) - 15} more*\n"
                    context += "\n"

                # Objectives
                objs = config.get("objectives", [])
                if objs:
                    context += "### Objectives\n\n"
                    for obj in objs:
                        name = obj.get("name", "?")
                        direction = obj.get("direction", "minimize")
                        weight = obj.get("weight", 1)
                        context += f"- **{name}**: {direction} (weight: {weight})\n"
                    context += "\n"

                # Extraction method (for Zernike)
                ext_method = config.get("extraction_method", {})
                if ext_method:
                    context += "### Extraction Method\n\n"
                    context += f"- Type: {ext_method.get('type', '?')}\n"
                    context += f"- Class: {ext_method.get('class', '?')}\n"
                    if ext_method.get("inner_radius"):
                        context += f"- Inner Radius: {ext_method.get('inner_radius')}\n"
                    context += "\n"

                # Zernike settings
                zernike = config.get("zernike_settings", {})
                if zernike:
                    context += "### Zernike Settings\n\n"
                    context += f"- Modes: {zernike.get('n_modes', '?')}\n"
                    context += f"- Filter Low Orders: {zernike.get('filter_low_orders', '?')}\n"
                    context += f"- Subcases: {zernike.get('subcases', [])}\n"
                    context += "\n"

                # Algorithm
                method = config.get("method", config.get("optimization", {}).get("sampler", "TPE"))
                max_trials = config.get("max_trials", config.get("optimization", {}).get("n_trials", 100))
                context += "### Algorithm\n\n"
                context += f"- Method: {method}\n"
                context += f"- Max Trials: {max_trials}\n\n"

            except Exception as e:
                context += f"*Error reading config: {e}*\n\n"
        else:
            context += "*No optimization_config.json found*\n\n"

        # Check for run_optimization.py
        run_opt_path = self.working_dir / "run_optimization.py"
        if run_opt_path.exists():
            context += f"**Run Script**: `{run_opt_path}` (exists)\n\n"

        # Check results
        db_path = self.working_dir / "3_results" / "study.db"
        if db_path.exists():
            context += "**Results Database**: exists\n"
            # Could query the trial count here
        else:
            context += "**Results Database**: not found (no optimization run yet)\n"

        return context
```

### Phase 2: WebSocket Handler for Claude Code

**File**: `atomizer-dashboard/backend/api/routes/claude_code.py`

```python
"""
Claude Code WebSocket Routes

Provides a WebSocket endpoint that connects to the actual Claude Code CLI.
"""

import uuid
from typing import Dict, Optional

from fastapi import APIRouter, WebSocket, WebSocketDisconnect

from api.services.claude_code_session import ClaudeCodeSession

router = APIRouter()

# Active sessions
sessions: Dict[str, ClaudeCodeSession] = {}


@router.websocket("/ws/{study_id}")
async def claude_code_websocket(websocket: WebSocket, study_id: Optional[str] = None):
    """
    WebSocket for full Claude Code CLI access.

    This gives dashboard users the SAME power as terminal users.
    """
    await websocket.accept()

    session_id = str(uuid.uuid4())[:8]
    session = ClaudeCodeSession(session_id, study_id)
    sessions[session_id] = session

    try:
        while True:
            data = await websocket.receive_json()

            if data.get("type") == "message":
                content = data.get("content", "")

                # Stream the response from the Claude Code CLI
                async for chunk in session.send_message(content):
                    await websocket.send_json({
                        "type": "text",
                        "content": chunk,
                    })

                await websocket.send_json({"type": "done"})

                # After the response, trigger a canvas refresh
                await websocket.send_json({
                    "type": "refresh_canvas",
                    "study_id": study_id,
                })

    except WebSocketDisconnect:
        sessions.pop(session_id, None)
```

### Phase 3: Frontend - Use the Claude Code Endpoint

**File**: `atomizer-dashboard/frontend/src/hooks/useClaudeCode.ts`

```typescript
/**
 * Hook for Claude Code CLI integration.
 *
 * Connects to a backend that spawns actual Claude Code CLI processes.
 * This gives full power: file editing, command execution, etc.
 */

export function useClaudeCode(studyId?: string) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [isThinking, setIsThinking] = useState(false);
  const wsRef = useRef<WebSocket | null>(null);

  // Reload the canvas after Claude makes changes
  const { loadFromConfig } = useCanvasStore();

  useEffect(() => {
    // Connect to the Claude Code WebSocket
    const ws = new WebSocket(`ws://${location.host}/api/claude-code/ws/${studyId || ''}`);

    ws.onmessage = (event) => {
      const data = JSON.parse(event.data);

      if (data.type === 'text') {
        // Stream Claude's response
        appendToLastMessage(data.content);
      }
      else if (data.type === 'done') {
        setIsThinking(false);
      }
      else if (data.type === 'refresh_canvas') {
        // Claude made file changes - reload the canvas from the config
        reloadCanvasFromStudy(data.study_id);
      }
    };

    wsRef.current = ws;
    return () => ws.close();
  }, [studyId]);

  const sendMessage = async (content: string) => {
    setIsThinking(true);
    addMessage({ role: 'user', content });
    addMessage({ role: 'assistant', content: '', isStreaming: true });

    wsRef.current?.send(JSON.stringify({
      type: 'message',
      content,
    }));
  };

  return { messages, isThinking, sendMessage };
}
```

### Phase 4: Canvas Auto-Refresh

When Claude modifies `optimization_config.json`, the canvas should automatically reload:

```typescript
// In AtomizerCanvas.tsx or useCanvasChat.ts

const reloadCanvasFromStudy = async (studyId: string) => {
  // Fetch the fresh config from the backend
  const response = await fetch(`/api/studies/${studyId}/config`);
  const config = await response.json();

  // Reload the canvas
  loadFromConfig(config);

  // Notify the user
  showNotification('Canvas updated with Claude\'s changes');
};
```

### Phase 5: Smart Prompting for Canvas Context

When the user sends a message from the canvas view, include the canvas state:

```typescript
const sendCanvasMessage = (userMessage: string) => {
  const canvasContext = generateCanvasMarkdown();

  const enrichedMessage = `
## Current Canvas State
${canvasContext}

## User Request
${userMessage}

When making changes, modify the actual optimization_config.json file.
After changes, the canvas will auto-refresh.
`;

  sendMessage(enrichedMessage);
};
```

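`generateCanvasMarkdown()` is referenced above without a definition. One plausible sketch is below; the `CanvasNode` shape is an assumption for illustration, not the actual canvas-store type.

```typescript
// Hypothetical serializer: turn canvas nodes into the markdown table
// embedded in each chat message as "Current Canvas State".
interface CanvasNode {
  id: string;
  type: string;   // e.g. 'designVar', 'extractor', 'objective'
  label: string;  // human-readable summary shown on the node
}

function generateCanvasMarkdown(nodes: CanvasNode[]): string {
  const lines = ['| ID | Type | Label |', '|----|------|-------|'];
  for (const n of nodes) {
    lines.push(`| ${n.id} | ${n.type} | ${n.label} |`);
  }
  return lines.join('\n');
}

const md = generateCanvasMarkdown([
  { id: 'dv1', type: 'designVar', label: 'thickness [2-10mm]' },
]);
// md is a markdown table the enriched prompt can embed verbatim
```

A compact table keeps the canvas context cheap in tokens while still letting Claude reference nodes by ID.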
---

## Expected Behavior After Implementation

### Example 1: Add Design Variables
```
User: "Add 10 new design variables for hole diameters, range 5-25mm"

Claude Code (in dashboard):
1. Reads studies/M1_Mirror/.../optimization_config.json
2. Adds 10 entries to the design_variables array:
   - hole_diameter_1: [5, 25] mm
   - hole_diameter_2: [5, 25] mm
   - ... (10 total)
3. WRITES the file
4. Reports: "Added 10 design variables to optimization_config.json"
5. Frontend receives the "refresh_canvas" signal
6. Canvas reloads and shows 10 new nodes
7. User sees actual changes
```

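The edit Claude performs in step 2 can be sketched as a small config mutation. The field names (`name`/`min`/`max`/`units`) follow the design-variable summary table earlier in this plan; the real schema may differ, so treat this as an assumption.

```python
"""Sketch: append N hole-diameter design variables to optimization_config.json."""
import json
from pathlib import Path


def add_hole_diameters(config_path: Path, n: int = 10,
                       lo: float = 5.0, hi: float = 25.0) -> list[str]:
    config = json.loads(config_path.read_text())
    dvs = config.setdefault("design_variables", [])
    added = []
    for i in range(1, n + 1):
        name = f"hole_diameter_{i}"
        dvs.append({"name": name, "min": lo, "max": hi, "units": "mm"})
        added.append(name)
    config_path.write_text(json.dumps(config, indent=2))  # write back in place
    return added
```

Writing the file in place (rather than returning an instruction) is the whole point of the CLI-backed approach: the subsequent `refresh_canvas` signal then only needs to re-read the config.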
### Example 2: Modify Optimization
```
User: "Change the algorithm to CMA-ES with 500 trials and add a stress constraint < 200 MPa"

Claude Code (in dashboard):
1. Reads the config
2. Changes method: "TPE" -> "CMA-ES"
3. Changes max_trials: 100 -> 500
4. Adds constraint: {name: "stress_limit", operator: "<=", value: 200, unit: "MPa"}
5. WRITES the file
6. Reports the changes
7. Canvas refreshes with the updated algorithm node and the new constraint node
```

### Example 3: Complex Multi-File Changes
```
User: "Add a new Zernike extractor for the secondary mirror and connect it to a new objective"

Claude Code (in dashboard):
1. Reads the config
2. Adds the extractor to the extractors array
3. Adds an objective connected to the extractor
4. If needed, modifies run_optimization.py to import the new extractor
5. WRITES all modified files
6. Canvas refreshes with the new extractor and objective nodes, properly connected
```

---

## Implementation Checklist

### Phase 1: Backend Claude Code Session
- [x] Create `claude_code_session.py` with the session manager
- [x] Implement `send_message()` with CLI spawning
- [x] Build the Atomizer-aware system prompt
- [x] Handle study context (working directory)
- [x] Stream output properly

### Phase 2: WebSocket Route
- [x] Create the `/api/claude-code/ws/{study_id}` endpoint
- [x] Handle message routing
- [x] Implement the `refresh_canvas` signal
- [x] Session cleanup on disconnect

### Phase 3: Frontend Hook
- [x] Create the `useClaudeCode.ts` hook
- [x] Connect to the Claude Code WebSocket
- [x] Handle streaming responses
- [x] Handle canvas refresh signals

### Phase 4: Canvas Auto-Refresh
- [x] Add the `reloadCanvasFromStudy()` function
- [x] Wire the refresh signal to the canvas store
- [x] Add a loading state during refresh
- [x] Show a notification on refresh (system message)

### Phase 5: Chat Panel Integration
- [x] Update ChatPanel to use `useClaudeCode`
- [x] Include canvas context in messages
- [x] Add a "Claude Code" indicator in the UI (mode toggle)
- [x] Show when Claude is editing files

### Phase 6: Testing
- [ ] Test adding design variables
- [ ] Test modifying objectives
- [ ] Test complex multi-file changes
- [ ] Test canvas refresh after changes
- [ ] Test error handling

## Implementation Notes

### Files Created/Modified

**Backend:**
- `atomizer-dashboard/backend/api/services/claude_code_session.py` - new session manager
- `atomizer-dashboard/backend/api/routes/claude_code.py` - new WebSocket routes
- `atomizer-dashboard/backend/api/main.py` - added the claude_code router

**Frontend:**
- `atomizer-dashboard/frontend/src/hooks/useClaudeCode.ts` - new hook for the Claude Code CLI
- `atomizer-dashboard/frontend/src/components/canvas/AtomizerCanvas.tsx` - added mode toggle
- `atomizer-dashboard/frontend/src/components/chat/ChatMessage.tsx` - added system message support

---

## Security Considerations

The Claude Code CLI with `--dangerously-skip-permissions` has full system access. Mitigations:

1. **Sandboxed environment**: The dashboard runs on the user's machine, not a public server
2. **Study-scoped working directory**: Claude starts in the study folder
3. **Audit logging**: Log all file modifications
4. **User confirmation**: Option to require approval for destructive operations

---

## Cost Considerations

Using Opus 4.5 via the Claude Code CLI is more expensive than the Sonnet API. Options:

1. **Default to Sonnet, upgrade on request**: a "Use full power mode" button
2. **Per-session token budget**: Warn the user when approaching the limit
3. **Cache common operations**: Pre-generate responses for common tasks

---

## Success Criteria

1. **Parity with the terminal**: Dashboard chat can do everything the Claude Code CLI can
2. **Real modifications**: Files actually change; no instruction-only responses
3. **Canvas sync**: The canvas reflects file changes immediately
4. **Intelligent defaults**: Claude makes smart choices without asking clarifying questions
5. **Proactive behavior**: Claude anticipates needs and handles edge cases

---

*This document is to be implemented by Claude Code CLI.*

docs/plans/SAAS_ATOMIZER_ROADMAP.md (new file, 863 lines)
@@ -0,0 +1,863 @@
# SaaS-Level Atomizer Roadmap (Revised)
|
||||
|
||||
## Executive Summary
|
||||
|
||||
This roadmap transforms Atomizer into a **SaaS-grade, LLM-assisted structural optimization platform** with the core innovation being **side-by-side Claude + Canvas** integration where:
|
||||
|
||||
1. **Claude talks → Canvas updates in real-time** (user sees nodes appear/change)
|
||||
2. **User tweaks Canvas → Claude sees changes** (bi-directional sync)
|
||||
3. **Full Claude Code-level power** through the dashboard chat
|
||||
4. **Interview-driven study creation** entirely through conversation
|
||||
|
||||
**Vision**: An engineer opens Atomizer, describes their optimization goal, watches the canvas build itself, makes quick tweaks, and starts optimization—all through natural conversation with full visual feedback.
|
||||
|
||||
---
## The Core Innovation: Unified Claude + Canvas

The power of Atomizer is the **side-by-side experience**:

```
┌───────────────────────────────────────────────────────────────────┐
│                        ATOMIZER DASHBOARD                         │
├────────────────────────────┬──────────────────────────────────────┤
│  CHAT PANEL                │  CANVAS                              │
│  (Atomizer Assistant)      │  (SpecRenderer)                      │
│                            │                                      │
│  "Create a bracket         │  [DV: thickness]                     │
│   optimization with        │        │                             │
│   mass and stiffness"      │        ▼                             │
│                            │  [Model Node]                        │
│        ▼                   │        │                             │
│  🔧 Adding thickness       │        ▼                             │
│  🔧 Adding mass ext    ◄───┼──►[Ext: mass]──>[Obj: min mass]      │
│  🔧 Adding stiffness   ◄───┼──►[Ext: disp]──>[Obj: min disp]      │
│  ✓ Study configured!       │                                      │
│                            │  (nodes appear in real-time)         │
│  User clicks a node  ──────┼──► Claude sees the edit              │
│                            │                                      │
└────────────────────────────┴──────────────────────────────────────┘
```
---
## Current State vs Target

### What We Have

| Component | Status | Notes |
|-----------|--------|-------|
| Power Mode WebSocket | ✅ Implemented | `/ws/power` endpoint with write tools |
| Write Tools | ✅ Implemented | add_design_variable, add_extractor, etc. |
| spec_modified Events | ✅ Implemented | Frontend receives notifications |
| Canvas Auto-reload | ✅ Implemented | Triggers on spec_modified |
| Streaming Responses | ❌ Missing | Currently waits for full response |
| Canvas State → Claude | ❌ Missing | Claude doesn't see current canvas |
| Interview Engine | ❌ Missing | No guided creation |
| Unified WebSocket | ❌ Missing | Separate connections for chat/spec |

### Target Architecture

```
                ┌─────────────────────┐
                │  Unified WebSocket  │
                │  /api/atomizer/ws   │
                └─────────┬───────────┘
                          │
                 Bi-directional Sync
                          │
     ┌────────────────────┼────────────────────┐
     │                    │                    │
     ▼                    ▼                    ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│   Chat Panel    │ │     Canvas      │ │   Spec Store    │
│                 │ │                 │ │                 │
│ Send messages   │ │ User edits →    │ │ Single source   │
│ Stream text     │ │ Notify Claude   │ │ of truth        │
│ See tool calls  │ │ Receive updates │ │ Validates all   │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
---
## Phase 1: Unified WebSocket Hub (1-2 weeks)

### Goal: Single connection for chat + canvas + spec sync

### 1.1 Backend: Unified WebSocket Endpoint

```python
# atomizer-dashboard/backend/api/routes/atomizer_ws.py

@router.websocket("/api/atomizer/ws")
async def atomizer_websocket(websocket: WebSocket):
    """
    Unified WebSocket for the Atomizer Dashboard.

    Handles:
    - Chat messages with streaming responses
    - Spec modifications with real-time canvas updates
    - Canvas edit notifications from the user
    - Study context switching
    """
    await websocket.accept()

    agent = AtomizerClaudeAgent()
    conversation: List[Dict] = []
    current_spec: Optional[Dict] = None

    try:
        while True:
            data = await websocket.receive_json()

            if data["type"] == "message":
                # Chat message - stream the response
                async for event in agent.chat_stream(
                    message=data["content"],
                    conversation=conversation,
                    canvas_state=current_spec  # Claude sees the current canvas!
                ):
                    await websocket.send_json(event)

                    # If the spec changed, update our local copy
                    if event.get("type") == "spec_updated":
                        current_spec = event["spec"]

            elif data["type"] == "canvas_edit":
                # User made a manual edit - update the spec and tell Claude
                current_spec = apply_patch(current_spec, data["patch"])
                # The next message to Claude will include the updated spec

            elif data["type"] == "set_study":
                # Switch study context
                agent.set_study(data["study_id"])
                current_spec = agent.load_spec()
                await websocket.send_json({
                    "type": "spec_updated",
                    "spec": current_spec
                })

    except WebSocketDisconnect:
        pass
```
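The `canvas_edit` branch relies on an `apply_patch` helper that the plan does not define. A minimal sketch, assuming dot-separated paths with optional `[i]` list indices (the path grammar is an assumption):

```python
import copy
import re

def apply_patch(spec: dict, patch: dict) -> dict:
    """Return a copy of `spec` with patch["value"] set at patch["path"].

    Example path: "design_variables[0].bounds.max"
    """
    result = copy.deepcopy(spec)
    # Tokenize into dict keys and list indices
    tokens = re.findall(r"([A-Za-z_]\w*)|\[(\d+)\]", patch["path"])
    keys = [name if name else int(index) for name, index in tokens]
    target = result
    for key in keys[:-1]:
        target = target[key]
    target[keys[-1]] = patch["value"]
    return result
```

Returning a fresh copy keeps the previous spec intact, which makes conflict detection and undo easier to bolt on later.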
### 1.2 Enhanced Claude Agent with Streaming

```python
class AtomizerClaudeAgent:
    """Full-power Claude agent with Claude Code-like capabilities"""

    async def chat_stream(
        self,
        message: str,
        conversation: List[Dict],
        canvas_state: Optional[Dict] = None
    ) -> AsyncGenerator[Dict, None]:
        """Stream responses with tool calls"""

        # Build the system prompt with the current canvas state
        system = self._build_system_prompt()
        if canvas_state:
            system += self._format_canvas_context(canvas_state)

        messages = conversation + [{"role": "user", "content": message}]

        # Use the streaming API
        with self.client.messages.stream(
            model="claude-sonnet-4-20250514",
            max_tokens=8192,
            system=system,
            messages=messages,
            tools=self.tools
        ) as stream:
            current_text = ""

            for event in stream:
                if event.type == "content_block_delta":
                    if hasattr(event.delta, "text"):
                        current_text += event.delta.text
                        yield {"type": "text", "content": event.delta.text, "done": False}

            # Get the final message for tool calls
            response = stream.get_final_message()

            # Process tool calls
            for block in response.content:
                if block.type == "tool_use":
                    yield {"type": "tool_start", "tool": block.name, "input": block.input}

                    result = await self._execute_tool(block.name, block.input)

                    yield {"type": "tool_result", "tool": block.name, "result": result["preview"]}

                    if result.get("spec_changed"):
                        yield {"type": "spec_updated", "spec": self.spec_store.get()}

        yield {"type": "done"}

    def _format_canvas_context(self, spec: Dict) -> str:
        """Format the current canvas state for Claude's context"""
        lines = ["\n## Current Canvas State\n"]

        if spec.get("design_variables"):
            lines.append(f"**Design Variables ({len(spec['design_variables'])}):**")
            for dv in spec["design_variables"]:
                lines.append(f"  - {dv['name']}: [{dv['bounds']['min']}, {dv['bounds']['max']}]")

        if spec.get("extractors"):
            lines.append(f"\n**Extractors ({len(spec['extractors'])}):**")
            for ext in spec["extractors"]:
                lines.append(f"  - {ext['name']} ({ext['type']})")

        if spec.get("objectives"):
            lines.append(f"\n**Objectives ({len(spec['objectives'])}):**")
            for obj in spec["objectives"]:
                lines.append(f"  - {obj['name']} ({obj['direction']})")

        if spec.get("constraints"):
            lines.append(f"\n**Constraints ({len(spec['constraints'])}):**")
            for con in spec["constraints"]:
                lines.append(f"  - {con['name']} {con['operator']} {con['threshold']}")

        lines.append("\nThe user can see this canvas. When you modify it, they see changes in real-time.")

        return "\n".join(lines)
```
### 1.3 Frontend: Unified Hook

```typescript
// hooks/useAtomizerSocket.ts
export function useAtomizerSocket(studyId?: string) {
  const [spec, setSpec] = useState<AtomizerSpec | null>(null);
  const [messages, setMessages] = useState<Message[]>([]);
  const [isThinking, setIsThinking] = useState(false);
  const [streamingText, setStreamingText] = useState("");
  const [activeTool, setActiveTool] = useState<string | null>(null);

  const ws = useRef<WebSocket | null>(null);
  // Mirror of streamingText so the onmessage closure never reads a stale value
  const streamingRef = useRef("");

  useEffect(() => {
    const url = `ws://localhost:8001/api/atomizer/ws`;
    ws.current = new WebSocket(url);

    ws.current.onopen = () => {
      if (studyId) {
        ws.current?.send(JSON.stringify({ type: "set_study", study_id: studyId }));
      }
    };

    ws.current.onmessage = (event) => {
      const data = JSON.parse(event.data);

      switch (data.type) {
        case "text":
          streamingRef.current += data.content;
          setStreamingText(streamingRef.current);
          break;

        case "tool_start":
          setActiveTool(data.tool);
          break;

        case "tool_result":
          setActiveTool(null);
          break;

        case "spec_updated":
          setSpec(data.spec); // Canvas updates automatically!
          break;

        case "done":
          // Finalize the streamed message (read from the ref, not stale state)
          setMessages(prev => [...prev, {
            role: "assistant",
            content: streamingRef.current
          }]);
          streamingRef.current = "";
          setStreamingText("");
          setIsThinking(false);
          break;
      }
    };

    return () => ws.current?.close();
  }, [studyId]);

  const sendMessage = useCallback((content: string) => {
    setIsThinking(true);
    setMessages(prev => [...prev, { role: "user", content }]);
    ws.current?.send(JSON.stringify({ type: "message", content }));
  }, []);

  const notifyCanvasEdit = useCallback((path: string, value: any) => {
    ws.current?.send(JSON.stringify({
      type: "canvas_edit",
      patch: { path, value }
    }));
  }, []);

  return {
    spec,
    messages,
    streamingText,
    isThinking,
    activeTool,
    sendMessage,
    notifyCanvasEdit
  };
}
```
---

## Phase 2: Full Tool Set (1-2 weeks)

### Goal: Claude Code-level power through dashboard

### 2.1 Tool Categories

```python
ATOMIZER_TOOLS = {
    # === SPEC MODIFICATION (Already Implemented) ===
    "add_design_variable": "Add a design variable to the optimization",
    "add_extractor": "Add a physics extractor (mass, stress, displacement, custom)",
    "add_objective": "Add an optimization objective",
    "add_constraint": "Add a constraint",
    "update_spec_field": "Update any field in the spec by JSON path",
    "remove_node": "Remove a node from the spec",

    # === READ/QUERY ===
    "read_study_config": "Read the full atomizer_spec.json",
    "query_trials": "Query optimization trial data",
    "list_studies": "List all available studies",
    "get_optimization_status": "Check if optimization is running",

    # === STUDY MANAGEMENT (New) ===
    "create_study": "Create a new study directory with atomizer_spec.json",
    "clone_study": "Clone an existing study as a starting point",
    "validate_spec": "Validate the current spec for errors",

    # === NX INTEGRATION (New) ===
    "introspect_model": "Analyze NX model for expressions and features",
    "suggest_design_vars": "AI-suggest design variables from model",
    "list_model_expressions": "List all expressions in the NX model",

    # === OPTIMIZATION CONTROL (New) ===
    "start_optimization": "Start the optimization run",
    "stop_optimization": "Stop a running optimization",

    # === INTERVIEW (New) ===
    "start_interview": "Begin guided study creation interview",
    "get_interview_progress": "Get current interview state",
}
```
### 2.2 Create Study Tool

```python
async def _tool_create_study(self, params: Dict) -> Dict:
    """Create a new study with atomizer_spec.json"""
    study_name = params["name"]
    study_dir = STUDIES_DIR / study_name

    # Create the directory structure
    study_dir.mkdir(parents=True, exist_ok=True)
    (study_dir / "1_setup").mkdir(exist_ok=True)
    (study_dir / "2_iterations").mkdir(exist_ok=True)
    (study_dir / "3_results").mkdir(exist_ok=True)

    # Create the initial spec
    spec = {
        "meta": {
            "version": "2.0",
            "study_name": study_name,
            "created_at": datetime.now().isoformat(),
            "created_by": "claude_agent"
        },
        "model": {
            "sim": {"path": params.get("model_path", ""), "solver": "nastran"}
        },
        "design_variables": [],
        "extractors": [],
        "objectives": [],
        "constraints": [],
        "optimization": {
            "algorithm": {"type": "TPE"},
            "budget": {"max_trials": 100}
        },
        "canvas": {"edges": [], "layout_version": "2.0"}
    }

    # Save the spec
    spec_path = study_dir / "atomizer_spec.json"
    with open(spec_path, "w") as f:
        json.dump(spec, f, indent=2)

    # Update the agent's study context
    self.study_id = study_name
    self.study_dir = study_dir

    return {
        "preview": f"✓ Created study '{study_name}' at {study_dir}",
        "spec_changed": True,
        "study_id": study_name
    }
```
### 2.3 NX Introspection Tool

```python
async def _tool_introspect_model(self, params: Dict) -> Dict:
    """Analyze the NX model for design variable candidates"""
    model_path = params.get("model_path") or self._find_model_path()

    if not model_path or not Path(model_path).exists():
        return {"preview": "✗ Model file not found", "spec_changed": False}

    # Use the NX session to get expressions
    expressions = await self._get_nx_expressions(model_path)

    # Classify expressions as potential DVs
    candidates = []
    for expr in expressions:
        score = self._score_dv_candidate(expr)
        if score > 0.5:
            candidates.append({
                "name": expr["name"],
                "value": expr["value"],
                "formula": expr.get("formula", ""),
                "score": score,
                "suggested_bounds": self._suggest_bounds(expr)
            })

    # Sort by score
    candidates.sort(key=lambda x: x["score"], reverse=True)

    return {
        "preview": f"Found {len(expressions)} expressions, {len(candidates)} are DV candidates",
        "expressions": expressions,
        "candidates": candidates[:10],  # Top 10
        "spec_changed": False
    }

def _score_dv_candidate(self, expr: Dict) -> float:
    """Score an expression as a design variable candidate"""
    score = 0.0
    name = expr["name"].lower()

    # Geometric parameters score high
    if any(kw in name for kw in ["thickness", "width", "height", "radius", "diameter", "depth"]):
        score += 0.4

    # Numeric with a reasonable value
    if isinstance(expr["value"], (int, float)) and expr["value"] > 0:
        score += 0.2

    # Not a formula (pure number)
    if not expr.get("formula") or expr["formula"] == str(expr["value"]):
        score += 0.2

    # Common design parameter names
    if any(kw in name for kw in ["rib", "web", "flange", "support", "angle"]):
        score += 0.2

    return min(score, 1.0)
```
---

## Phase 3: Interview Engine (1-2 weeks)

### Goal: Guided study creation through conversation

### 3.1 Interview Engine

```python
class InterviewEngine:
    """Guided study creation through conversation"""

    PHASES = [
        ("welcome", "What kind of optimization do you want to set up?"),
        ("model", "What's the path to your NX simulation file (.sim)?"),
        ("objectives", "What do you want to optimize? (e.g., minimize mass, minimize displacement)"),
        ("design_vars", "Which parameters should vary? I can suggest some from your model."),
        ("constraints", "Any constraints to respect? (e.g., max stress ≤ 200 MPa)"),
        ("method", "I recommend {method} for this. Sound good?"),
        ("review", "Here's your configuration. Ready to create the study?"),
    ]

    def __init__(self):
        self.phase_index = 0
        self.collected = {}
        self.spec_builder = SpecBuilder()

    def get_current_question(self) -> str:
        phase, question = self.PHASES[self.phase_index]

        # Dynamic question customization
        if phase == "method":
            method = self._recommend_method()
            question = question.format(method=method)
        elif phase == "design_vars" and self.collected.get("model_expressions"):
            candidates = self.collected["model_expressions"][:5]
            question += f"\n\nI found these candidates: {', '.join(c['name'] for c in candidates)}"

        return question

    def process_answer(self, answer: str) -> Dict:
        """Process the user's answer and advance the interview"""
        phase, _ = self.PHASES[self.phase_index]

        # Extract structured data based on the phase
        extracted = self._extract_for_phase(phase, answer)
        self.collected[phase] = extracted

        # Build the spec incrementally
        spec_changes = self.spec_builder.apply(phase, extracted)

        # Advance
        self.phase_index += 1
        complete = self.phase_index >= len(self.PHASES)

        return {
            "phase_completed": phase,
            "extracted": extracted,
            "spec_changes": spec_changes,
            "next_question": None if complete else self.get_current_question(),
            "complete": complete,
            "spec": self.spec_builder.get_spec() if complete else None
        }

    def _extract_for_phase(self, phase: str, answer: str) -> Dict:
        """Extract structured data from a natural language answer"""
        if phase == "model":
            # Extract the file path
            return {"path": self._extract_path(answer)}

        elif phase == "objectives":
            # Extract objectives
            objectives = []
            if "mass" in answer.lower() or "weight" in answer.lower():
                # Default to minimize unless the user explicitly asks to maximize
                direction = "maximize" if "maximize" in answer.lower() else "minimize"
                objectives.append({"name": "mass", "direction": direction})
            if "displacement" in answer.lower() or "stiff" in answer.lower():
                objectives.append({"name": "max_displacement", "direction": "minimize"})
            if "stress" in answer.lower():
                objectives.append({"name": "max_stress", "direction": "minimize"})
            if "wfe" in answer.lower() or "wavefront" in answer.lower():
                objectives.append({"name": "wfe", "direction": "minimize"})
            return {"objectives": objectives}

        elif phase == "constraints":
            # Extract constraints
            constraints = []
            import re
            # Pattern: "stress < 200 MPa" or "max stress <= 200"
            stress_match = re.search(r'stress[^0-9]*([<>=]+)\s*(\d+)', answer.lower())
            if stress_match:
                constraints.append({
                    "name": "max_stress",
                    "operator": stress_match.group(1),
                    "threshold": float(stress_match.group(2))
                })
            return {"constraints": constraints}

        return {"raw": answer}

    def _recommend_method(self) -> str:
        """Recommend an optimization method based on collected info"""
        objectives = self.collected.get("objectives", {}).get("objectives", [])
        if len(objectives) > 1:
            return "NSGA-II (multi-objective)"
        return "TPE (Bayesian optimization)"
```
### 3.2 Interview Tool Integration

```python
async def _tool_start_interview(self, params: Dict) -> Dict:
    """Start guided study creation"""
    self.interview = InterviewEngine()
    question = self.interview.get_current_question()

    return {
        "preview": f"Starting interview.\n\n{question}",
        "interview_started": True,
        "spec_changed": False
    }

async def _tool_interview_answer(self, params: Dict) -> Dict:
    """Process an interview answer"""
    if not self.interview:
        return {"preview": "No interview in progress", "spec_changed": False}

    result = self.interview.process_answer(params["answer"])

    response = f"Got it: {result['phase_completed']}\n\n"

    if result["spec_changes"]:
        response += "Updated configuration:\n"
        for change in result["spec_changes"]:
            response += f"  ✓ {change}\n"

    if result["next_question"]:
        response += f"\n{result['next_question']}"
    elif result["complete"]:
        response += "\n✓ Interview complete! Creating study..."
        # Auto-create the study
        self.spec_store.set(result["spec"])

    return {
        "preview": response,
        "spec_changed": result["complete"],
        "complete": result["complete"]
    }
```
---

## Phase 4: Visual Polish (1 week)

### Goal: Beautiful, responsive canvas updates

### 4.1 Tool Call Visualization

```typescript
// components/chat/ToolCallIndicator.tsx
function ToolCallIndicator({ tool, status }: { tool: string; status: 'running' | 'complete' }) {
  const icons: Record<string, JSX.Element> = {
    add_design_variable: <Variable className="w-4 h-4" />,
    add_extractor: <Cpu className="w-4 h-4" />,
    add_objective: <Target className="w-4 h-4" />,
    add_constraint: <Lock className="w-4 h-4" />,
    create_study: <FolderPlus className="w-4 h-4" />,
    introspect_model: <Search className="w-4 h-4" />,
  };

  return (
    <div className={`flex items-center gap-2 px-3 py-2 rounded-lg ${
      status === 'running'
        ? 'bg-amber-500/10 text-amber-400 border border-amber-500/20'
        : 'bg-green-500/10 text-green-400 border border-green-500/20'
    }`}>
      {status === 'running' ? (
        <Loader2 className="w-4 h-4 animate-spin" />
      ) : (
        <Check className="w-4 h-4" />
      )}
      {icons[tool] || <Wrench className="w-4 h-4" />}
      <span className="text-sm font-medium">
        {formatToolName(tool)}
      </span>
    </div>
  );
}
```
### 4.2 Canvas Node Animation

```typescript
// components/canvas/AnimatedNode.tsx
function AnimatedNode({ data, isNew, isHighlighted }) {
  return (
    <motion.div
      initial={isNew ? { scale: 0, opacity: 0 } : false}
      animate={{
        scale: 1,
        opacity: 1,
        boxShadow: isHighlighted
          ? '0 0 0 3px rgba(245, 158, 11, 0.5)'
          : 'none'
      }}
      transition={{ type: 'spring', stiffness: 500, damping: 30 }}
      className="node-container"
    >
      {/* Node content */}
    </motion.div>
  );
}
```

### 4.3 Connection Line Animation

```typescript
// Draw an animated line when Claude adds an edge
function AnimatedEdge({ source, target, isNew }) {
  return (
    <motion.path
      d={getBezierPath({ source, target })}
      initial={isNew ? { pathLength: 0, opacity: 0 } : false}
      animate={{ pathLength: 1, opacity: 1 }}
      transition={{ duration: 0.5, ease: 'easeOut' }}
      stroke="currentColor"
      strokeWidth={2}
      fill="none"
    />
  );
}
```
---

## User Experience Flows

### Flow 1: Quick Creation (Experienced User)

```
User: "Create bracket optimization, minimize mass, thickness 2-10mm, max stress 200 MPa"

Claude: [Parses complete intent]
        🔧 Creating study "bracket_optimization"
        🔧 Adding design variable: thickness [2mm - 10mm]
        🔧 Adding extractor: mass
        🔧 Adding extractor: max_stress
        🔧 Adding objective: minimize mass
        🔧 Adding constraint: stress ≤ 200 MPa

        ✓ Created! Canvas shows your setup. Click any node to adjust.

[Canvas animates: DV appears → Model → Extractors → Objectives/Constraints]
[Total time: ~5 seconds]
```

### Flow 2: Guided Interview (New User)

```
User: "Help me set up an optimization"

Claude: What kind of optimization do you want to set up?
        (e.g., bracket stiffness, mirror WFE, beam stress)

User: "Mirror surface quality optimization"

Claude: Got it - mirror optimization!

        What's the path to your NX simulation file?
        (You can drag & drop or paste the path)

User: "studies/M1_Mirror/model.sim"

Claude: 🔧 Loading model...

        Found your model with 15 expressions. These look like good design variables:
        - rib_thickness (8mm)
        - mirror_depth (50mm)
        - support_angle (45°)

        What are you trying to optimize?

[Canvas: Model node appears]

User: "Minimize wavefront error and mass"

Claude: 🔧 Adding Zernike WFE extractor
        🔧 Adding mass extractor
        🔧 Adding objectives (multi-objective)

        Multi-objective optimization! I recommend NSGA-II.

        Which parameters should vary?

[Canvas: Extractors and objectives appear, connected to model]

User: "Use rib_thickness and mirror_depth, 50% range on each"

Claude: 🔧 Adding rib_thickness [4mm - 12mm]
        🔧 Adding mirror_depth [25mm - 75mm]

        Any constraints?

[Canvas: DVs appear, connected to model]

User: "Keep stress under 150 MPa"

Claude: 🔧 Adding stress extractor
        🔧 Adding constraint: stress ≤ 150 MPa

        ✓ Configuration complete!

        Your setup:
        - 2 design variables
        - 3 extractors (WFE, mass, stress)
        - 2 objectives (multi-objective)
        - 1 constraint

        Ready to run? Say "start" or adjust the canvas first.

[Canvas: Full optimization flow visible]
```

### Flow 3: User Edits, Claude Adapts

```
[User clicks rib_thickness node, changes max from 12mm to 20mm]

User: "Is this a good range?"

Claude: I see you expanded rib_thickness to [4mm - 20mm].

        That's a wide range! Considerations:
        - Thicker ribs = more mass, more stiffness
        - 20mm might exceed typical manufacturing limits
        - Wider range = more trials to explore

        If you're exploring, this is fine. For production,
        I'd suggest [4mm - 15mm].

        Want me to adjust, or keep your change?
```
---

## Implementation Priority

### Week 1-2: Unified WebSocket
- [ ] Create `/api/atomizer/ws` endpoint
- [ ] Implement streaming in `AtomizerClaudeAgent`
- [ ] Create `useAtomizerSocket` hook
- [ ] Wire canvas to receive spec updates
- [ ] Add canvas edit notifications

### Week 3-4: Tools & Interview
- [ ] Add `create_study` tool
- [ ] Add `introspect_model` tool
- [ ] Implement `InterviewEngine`
- [ ] Add interview tools
- [ ] Test guided creation flow

### Week 5: Polish
- [ ] Tool call indicators in chat
- [ ] Node appear/highlight animations
- [ ] Edge draw animations
- [ ] Error recovery & reconnection
- [ ] Performance optimization
---

## Success Metrics

| Metric | Target |
|--------|--------|
| Study creation time (experienced) | < 30 seconds |
| Study creation time (interview) | < 3 minutes |
| Canvas update latency | < 200ms |
| User edit → Claude context | < 100ms |
| Interview completion rate | > 90% |

---

## Key Files to Modify

### Backend
- `atomizer-dashboard/backend/api/routes/atomizer_ws.py` (new)
- `atomizer-dashboard/backend/api/services/claude_agent.py` (enhance)
- `atomizer-dashboard/backend/api/services/interview_engine.py` (new)

### Frontend
- `atomizer-dashboard/frontend/src/hooks/useAtomizerSocket.ts` (new)
- `atomizer-dashboard/frontend/src/pages/CanvasView.tsx` (update)
- `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx` (update)
- `atomizer-dashboard/frontend/src/components/chat/ToolCallIndicator.tsx` (new)

---

*This architecture makes Atomizer uniquely powerful: natural language + visual feedback + full control, all in one seamless experience.*
1697 docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md Normal file
File diff suppressed because it is too large
495 docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md Normal file
@@ -0,0 +1,495 @@
|
||||
# Unified Configuration Architecture - Execution Plan
|
||||
|
||||
**Project**: AtomizerSpec v2.0 Implementation
|
||||
**Reference Document**: `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md`
|
||||
**Schema Definition**: `optimization_engine/schemas/atomizer_spec_v2.json`

**Status**: Ready for Implementation

---

## Project Overview

Transform Atomizer's fragmented configuration system into a unified architecture where:
- One JSON schema (AtomizerSpec v2.0) is the single source of truth
- Canvas, Backend, Claude, and Optimization Engine all use the same spec
- Real-time WebSocket sync keeps all clients updated
- Claude can dynamically modify specs and add custom functions

---

## Phase Structure

| Phase | Name | Duration | Focus |
|-------|------|----------|-------|
| 1 | Foundation | Week 1-3 | Backend SpecManager, REST API, Migration |
| 2 | Frontend | Week 4-6 | SpecRenderer, WebSocket Sync, Store |
| 3 | Claude Integration | Week 7-9 | MCP Tools, Custom Functions |
| 4 | Polish & Testing | Week 10-12 | Migration, Testing, Documentation |

---

## Implementation Order (Critical Path)

```
1. Schema & Types (P1.1-P1.3)
   └── 2. SpecManager Service (P1.4-P1.7)
       └── 3. REST Endpoints (P1.8-P1.12)
           ├── 4. Migration Script (P1.13-P1.16)
           └── 5. Frontend Store (P2.1-P2.4)
               └── 6. SpecRenderer (P2.5-P2.10)
                   └── 7. WebSocket Sync (P2.11-P2.15)
                       └── 8. MCP Tools (P3.1-P3.8)
                           └── 9. Custom Functions (P3.9-P3.14)
                               └── 10. Testing & Polish (P4.1-P4.12)
```

---

## PHASE 1: Foundation (Backend)

### P1.1 - Create TypeScript types from JSON Schema
- **File**: `atomizer-dashboard/frontend/src/types/atomizer-spec.ts`
- **Action**: Generate TypeScript interfaces matching `atomizer_spec_v2.json`
- **Reference**: Schema at `optimization_engine/schemas/atomizer_spec_v2.json`
- **Acceptance**: Types compile, cover all schema definitions

### P1.2 - Create Python Pydantic models from JSON Schema
- **File**: `optimization_engine/config/spec_models.py`
- **Action**: Create Pydantic models for AtomizerSpec validation
- **Reference**: Schema at `optimization_engine/schemas/atomizer_spec_v2.json`
- **Acceptance**: Models validate example specs correctly

### P1.3 - Create spec validation utility
- **File**: `optimization_engine/config/spec_validator.py`
- **Action**: JSON Schema validation + semantic validation (bounds, references)
- **Dependencies**: P1.2
- **Acceptance**: Validates good specs, rejects invalid specs with clear errors
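
The semantic layer of P1.3 covers checks a JSON Schema cannot express, such as bound ordering and cross-references between nodes. A minimal sketch of that idea, assuming hypothetical spec field names (`design_variables`, `bounds`, `depends_on`) that may differ from the real v2 schema:

```python
def validate_semantics(spec: dict) -> list[str]:
    """Collect semantic errors a structural JSON Schema check would miss.
    Field names here are illustrative, not taken from the real schema."""
    errors = []
    var_ids = {v["id"] for v in spec.get("design_variables", [])}
    for v in spec.get("design_variables", []):
        b = v.get("bounds", {})
        if b.get("min") is not None and b.get("max") is not None and b["min"] >= b["max"]:
            errors.append(f"{v['id']}: bounds.min must be < bounds.max")
    for obj in spec.get("objectives", []):
        # Every reference must point at an existing design variable.
        for ref in obj.get("depends_on", []):
            if ref not in var_ids:
                errors.append(f"{obj['id']}: unknown reference {ref!r}")
    return errors
```

Returning a list of human-readable errors (rather than raising on the first one) matches the acceptance criterion of rejecting invalid specs *with clear errors*.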

### P1.4 - Create SpecManager core class
- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py`
- **Action**: Implement `load()`, `save()`, `_validate()`, `_compute_hash()`
- **Reference**: Design in UNIFIED_CONFIGURATION_ARCHITECTURE.md Section 5.1
- **Dependencies**: P1.3
- **Acceptance**: Can load/save/validate atomizer_spec.json files

### P1.5 - Add SpecManager patch functionality
- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py`
- **Action**: Implement `patch()` method for JSONPath-style updates
- **Dependencies**: P1.4
- **Acceptance**: Can update nested fields with conflict detection
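
The patch-with-conflict-detection idea behind P1.4/P1.5 can be sketched as follows. Dotted paths stand in for the JSONPath-style addressing the plan calls for, and the hash format is an assumption:

```python
import copy
import hashlib
import json

def compute_hash(spec: dict) -> str:
    # Stable content hash over canonical JSON; key order must not matter.
    return hashlib.sha256(json.dumps(spec, sort_keys=True).encode()).hexdigest()[:12]

def patch(spec: dict, path: str, value, expected_hash: str) -> dict:
    """Apply a dotted-path update, rejecting the patch when the caller's
    base hash no longer matches (another client saved in between)."""
    if compute_hash(spec) != expected_hash:
        raise ValueError("spec changed since client loaded it (conflict)")
    updated = copy.deepcopy(spec)
    *parents, leaf = path.split(".")
    node = updated
    for key in parents:
        node = node[key]
    node[leaf] = value
    return updated
```

Returning a new dict (rather than mutating in place) keeps the old version around for rollback, which the frontend's optimistic updates (P2.4) also need.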

### P1.6 - Add SpecManager node operations
- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py`
- **Action**: Implement `add_node()`, `remove_node()`, `_generate_id()`, `_auto_position()`
- **Dependencies**: P1.5
- **Acceptance**: Can add/remove design vars, extractors, objectives, constraints

### P1.7 - Add SpecManager custom function support
- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py`
- **Action**: Implement `add_custom_function()` with Python syntax validation
- **Dependencies**: P1.6
- **Acceptance**: Can add custom extractors with validated Python code
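
The syntax validation in P1.7 can lean on the standard-library `ast` module: parse the submitted source, then whitelist what may appear at top level. The whitelist below is a sketch, not the project's actual policy:

```python
import ast

# Illustrative policy: only imports and function definitions at top level.
ALLOWED_TOP_LEVEL = (ast.FunctionDef, ast.Import, ast.ImportFrom)

def check_custom_function(source: str, expected_name: str) -> None:
    """Reject code that does not parse, that lacks the expected extractor
    function, or that contains other top-level statements."""
    tree = ast.parse(source)  # raises SyntaxError on invalid code
    names = [n.name for n in tree.body if isinstance(n, ast.FunctionDef)]
    if expected_name not in names:
        raise ValueError(f"missing function {expected_name!r}")
    for node in tree.body:
        if not isinstance(node, ALLOWED_TOP_LEVEL):
            raise ValueError(f"disallowed top-level statement: {type(node).__name__}")
```

Note that `ast.parse` only proves the code is syntactically valid Python; the Risk Mitigation section's sandboxing still applies at execution time.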

### P1.8 - Create spec REST router
- **File**: `atomizer-dashboard/backend/api/routes/spec.py`
- **Action**: Create FastAPI router for spec endpoints
- **Dependencies**: P1.7
- **Acceptance**: Router imports and mounts correctly

### P1.9 - Implement GET /studies/{study_id}/spec
- **File**: `atomizer-dashboard/backend/api/routes/spec.py`
- **Action**: Return full AtomizerSpec for a study
- **Dependencies**: P1.8
- **Acceptance**: Returns valid spec JSON, 404 for missing studies
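
The handler logic behind the GET endpoint is small enough to sketch framework-free; the real version would wrap this in a FastAPI route. The `atomizer_spec.json` file name comes from P1.4's acceptance criterion, while the `studies/<id>/` layout is an assumption:

```python
import json
from pathlib import Path

def get_spec(studies_root: Path, study_id: str) -> tuple[int, dict]:
    """Return (status_code, body) for GET /studies/{study_id}/spec."""
    spec_file = studies_root / study_id / "atomizer_spec.json"
    if not spec_file.is_file():
        # 404 for missing studies, per the acceptance criterion.
        return 404, {"detail": f"study {study_id!r} not found"}
    return 200, json.loads(spec_file.read_text())
```

Keeping the lookup logic separate from the router also makes P4.5's integration tests easier to write.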

### P1.10 - Implement PUT /studies/{study_id}/spec
- **File**: `atomizer-dashboard/backend/api/routes/spec.py`
- **Action**: Replace entire spec with validation
- **Dependencies**: P1.9
- **Acceptance**: Validates, saves, returns new hash

### P1.11 - Implement PATCH /studies/{study_id}/spec
- **File**: `atomizer-dashboard/backend/api/routes/spec.py`
- **Action**: Partial update with JSONPath
- **Dependencies**: P1.10
- **Acceptance**: Updates specific fields, broadcasts change

### P1.12 - Implement POST /studies/{study_id}/spec/validate
- **File**: `atomizer-dashboard/backend/api/routes/spec.py`
- **Action**: Validate spec and return detailed report
- **Dependencies**: P1.11
- **Acceptance**: Returns errors, warnings, summary

### P1.13 - Create config migration base
- **File**: `optimization_engine/config/migrator.py`
- **Action**: Create SpecMigrator class with field mapping
- **Reference**: Design in UNIFIED_CONFIGURATION_ARCHITECTURE.md Section 8
- **Acceptance**: Class structure ready for migration logic

### P1.14 - Implement design variable migration
- **File**: `optimization_engine/config/migrator.py`
- **Action**: Migrate `bounds[]` → `bounds.min/max`, `parameter` → `expression_name`
- **Dependencies**: P1.13
- **Acceptance**: All DV formats convert correctly
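
The field mapping in P1.14 can be sketched directly from the arrows in the Action line. The old-side field names follow legacy configs as described here; any not named in the plan (such as `name` as an `id` fallback) are assumptions:

```python
def migrate_design_variable(old: dict) -> dict:
    """Convert one legacy design variable to the v2 shape:
    bounds[] -> bounds.{min,max}, parameter -> expression_name."""
    new = {
        "id": old.get("id") or old.get("name"),  # fallback is an assumption
        "expression_name": old.get("expression_name") or old.get("parameter"),
    }
    bounds = old.get("bounds")
    if isinstance(bounds, (list, tuple)):   # legacy: [min, max]
        new["bounds"] = {"min": bounds[0], "max": bounds[1]}
    elif isinstance(bounds, dict):          # already v2-shaped, pass through
        new["bounds"] = bounds
    return new
```

Accepting both shapes on input is what lets the migrator be idempotent, so re-running it on an already-migrated study is harmless.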

### P1.15 - Implement objective/constraint migration
- **File**: `optimization_engine/config/migrator.py`
- **Action**: Migrate `goal` → `direction`, extraction configs to new format
- **Dependencies**: P1.14
- **Acceptance**: Objectives and constraints convert correctly

### P1.16 - Implement full config migration
- **File**: `optimization_engine/config/migrator.py`
- **Action**: Complete migration including canvas positions, extractors inference
- **Dependencies**: P1.15
- **Acceptance**: Can migrate any existing optimization_config.json to AtomizerSpec

### P1.17 - Create migration CLI tool
- **File**: `tools/migrate_to_spec_v2.py`
- **Action**: CLI for batch migration with dry-run support
- **Dependencies**: P1.16
- **Acceptance**: `python tools/migrate_to_spec_v2.py --dry-run studies/*`
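
The CLI surface implied by the acceptance command is just positional study paths plus a `--dry-run` flag; a minimal `argparse` sketch:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(
        description="Migrate legacy optimization_config.json files to AtomizerSpec v2")
    p.add_argument("paths", nargs="+",
                   help="study directories (shell globs like studies/* expand here)")
    p.add_argument("--dry-run", action="store_true",
                   help="report what would change without writing files")
    return p
```

The shell expands `studies/*` before the script runs, so the tool only ever sees concrete paths.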

---

## PHASE 2: Frontend Integration

### P2.1 - Create useSpecStore hook
- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts`
- **Action**: Zustand store for AtomizerSpec state management
- **Dependencies**: P1.1
- **Acceptance**: Store holds spec, provides typed accessors

### P2.2 - Add spec loading to useSpecStore
- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts`
- **Action**: Implement `loadSpec(studyId)` fetching from API
- **Dependencies**: P2.1, P1.9
- **Acceptance**: Loads spec from backend, updates store

### P2.3 - Add spec modification to useSpecStore
- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts`
- **Action**: Implement `patchSpec()`, `addNode()`, `removeNode()` calling API
- **Dependencies**: P2.2, P1.11
- **Acceptance**: Modifications persist to backend

### P2.4 - Add optimistic updates to useSpecStore
- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts`
- **Action**: Update local state immediately, rollback on error
- **Dependencies**: P2.3
- **Acceptance**: UI feels instant, handles errors gracefully

### P2.5 - Create specToNodes converter
- **File**: `atomizer-dashboard/frontend/src/lib/spec/converter.ts`
- **Action**: Convert AtomizerSpec to ReactFlow nodes array
- **Reference**: Design in UNIFIED_CONFIGURATION_ARCHITECTURE.md Section 5.2
- **Dependencies**: P1.1
- **Acceptance**: All node types render correctly

### P2.6 - Create specToEdges converter
- **File**: `atomizer-dashboard/frontend/src/lib/spec/converter.ts`
- **Action**: Convert spec.canvas.edges to ReactFlow edges
- **Dependencies**: P2.5
- **Acceptance**: All connections render correctly

### P2.7 - Create SpecRenderer component
- **File**: `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx`
- **Action**: ReactFlow component that renders from useSpecStore
- **Dependencies**: P2.5, P2.6, P2.4
- **Acceptance**: Canvas displays spec correctly

### P2.8 - Wire node editing to spec updates
- **File**: `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx`
- **Action**: Node data changes call useSpecStore.patchSpec()
- **Dependencies**: P2.7
- **Acceptance**: Editing node properties persists to spec

### P2.9 - Wire node position to spec updates
- **File**: `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx`
- **Action**: Drag-drop updates canvas_position in spec
- **Dependencies**: P2.8
- **Acceptance**: Layout persists across reloads

### P2.10 - Update node panels for full spec fields
- **Files**: `atomizer-dashboard/frontend/src/components/canvas/panels/*.tsx`
- **Action**: Update all node config panels to show/edit full spec fields
- **Dependencies**: P2.8
- **Acceptance**: All spec fields are editable in UI

### P2.11 - Create WebSocket connection hook
- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecSync.ts`
- **Action**: WebSocket connection to `/api/studies/{id}/sync`
- **Dependencies**: P2.4
- **Acceptance**: Connects, handles reconnection

### P2.12 - Create WebSocket backend endpoint
- **File**: `atomizer-dashboard/backend/api/routes/spec.py`
- **Action**: WebSocket endpoint for spec sync
- **Dependencies**: P1.12
- **Acceptance**: Accepts connections, tracks subscribers

### P2.13 - Implement spec_updated broadcast
- **File**: `atomizer-dashboard/backend/api/services/spec_manager.py`
- **Action**: SpecManager broadcasts to all subscribers on save
- **Dependencies**: P2.12
- **Acceptance**: All connected clients receive updates
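
The subscriber-tracking and fan-out logic of P2.12/P2.13 can be sketched independently of FastAPI. In the real backend each subscriber would be a WebSocket connection; here it is anything with an async `send(text)` method, and the message shape is an assumption:

```python
import asyncio
import json

class SpecBroadcaster:
    """Tracks subscribers per study and fans out spec_updated events."""

    def __init__(self):
        self._subs: dict[str, list] = {}

    def subscribe(self, study_id: str, client) -> None:
        self._subs.setdefault(study_id, []).append(client)

    async def broadcast(self, study_id: str, new_hash: str) -> None:
        # Hypothetical payload: clients compare new_hash to detect conflicts.
        msg = json.dumps({"type": "spec_updated",
                          "study_id": study_id, "hash": new_hash})
        for client in self._subs.get(study_id, []):
            await client.send(msg)
```

Including the new spec hash in the event is what lets the frontend's conflict detection (P2.15) work without an extra round trip.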

### P2.14 - Handle spec_updated in frontend
- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecSync.ts`
- **Action**: On spec_updated message, refresh spec in the store
- **Dependencies**: P2.11, P2.13
- **Acceptance**: Changes from other clients appear in real-time

### P2.15 - Add conflict detection
- **File**: `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts`
- **Action**: Compare hashes, warn on conflict, offer merge/overwrite
- **Dependencies**: P2.14
- **Acceptance**: Concurrent edits don't silently overwrite

### P2.16 - Replace CanvasView with SpecRenderer
- **File**: `atomizer-dashboard/frontend/src/pages/CanvasView.tsx`
- **Action**: Switch from useCanvasStore to useSpecStore + SpecRenderer
- **Dependencies**: P2.10, P2.15
- **Acceptance**: Canvas page uses new spec-based system

---

## PHASE 3: Claude Integration

### P3.1 - Create spec_get MCP tool
- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts`
- **Action**: Tool to retrieve full AtomizerSpec
- **Reference**: Design in UNIFIED_CONFIGURATION_ARCHITECTURE.md Section 5.3
- **Dependencies**: P1.9
- **Acceptance**: Claude can read spec via MCP

### P3.2 - Create spec_modify MCP tool
- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts`
- **Action**: Tool to apply modifications (set, add, remove operations)
- **Dependencies**: P3.1, P1.11
- **Acceptance**: Claude can modify spec fields

### P3.3 - Create spec_add_node MCP tool
- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts`
- **Action**: Tool to add design vars, extractors, objectives, constraints
- **Dependencies**: P3.2
- **Acceptance**: Claude can add nodes to canvas

### P3.4 - Create spec_remove_node MCP tool
- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts`
- **Action**: Tool to remove nodes (with edge cleanup)
- **Dependencies**: P3.3
- **Acceptance**: Claude can remove nodes

### P3.5 - Create spec_validate MCP tool
- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts`
- **Action**: Tool to validate spec and return report
- **Dependencies**: P1.12
- **Acceptance**: Claude can check spec validity

### P3.6 - Create spec_add_custom_extractor MCP tool
- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts`
- **Action**: Tool to add custom Python functions as extractors
- **Dependencies**: P1.7, P3.3
- **Acceptance**: Claude can add custom extraction logic

### P3.7 - Register spec tools in MCP server
- **File**: `mcp-server/atomizer-tools/src/index.ts`
- **Action**: Import and register all spec_* tools
- **Dependencies**: P3.1-P3.6
- **Acceptance**: Tools appear in MCP tool list

### P3.8 - Update ContextBuilder for spec awareness
- **File**: `atomizer-dashboard/backend/api/services/context_builder.py`
- **Action**: Include spec summary in Claude context, mention spec tools
- **Dependencies**: P3.7
- **Acceptance**: Claude knows about spec tools in context

### P3.9 - Create custom extractor runtime loader
- **File**: `optimization_engine/extractors/custom_loader.py`
- **Action**: Dynamically load and execute custom functions from spec
- **Dependencies**: P1.7
- **Acceptance**: Custom functions execute during optimization
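
The runtime loader in P3.9 can execute already-validated source in a minimal namespace and hand back the named function. Restricting builtins as below is illustrative rather than a real sandbox; the Risk Mitigation section's stricter validation still applies upstream:

```python
def load_custom_extractor(source: str, name: str):
    """Execute validated source and return the named extractor function.
    The builtins whitelist is a sketch, not a security boundary."""
    safe_builtins = {"abs": abs, "min": min, "max": max, "sum": sum, "len": len}
    namespace = {"__builtins__": safe_builtins}
    exec(source, namespace)
    fn = namespace.get(name)
    if not callable(fn):
        raise ValueError(f"source did not define callable {name!r}")
    return fn
```

The returned function is an ordinary callable, so the optimization runner (P3.10) can invoke it exactly like a built-in extractor.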

### P3.10 - Integrate custom extractors into optimization runner
- **File**: `optimization_engine/core/runner.py`
- **Action**: Check spec for custom extractors, load and use them
- **Dependencies**: P3.9
- **Acceptance**: Optimization uses custom extractors defined in spec

### P3.11 - Add custom extractor node type to canvas
- **File**: `atomizer-dashboard/frontend/src/components/canvas/nodes/CustomExtractorNode.tsx`
- **Action**: New node type showing custom function with code preview
- **Dependencies**: P2.10
- **Acceptance**: Custom extractors display distinctly in canvas

### P3.12 - Add code editor for custom extractors
- **File**: `atomizer-dashboard/frontend/src/components/canvas/panels/CustomExtractorPanel.tsx`
- **Action**: Monaco editor for viewing/editing custom Python code
- **Dependencies**: P3.11
- **Acceptance**: Users can view and edit custom function code

### P3.13 - Create spec_create_from_description MCP tool
- **File**: `mcp-server/atomizer-tools/src/tools/spec_tools.ts`
- **Action**: Tool to create new spec from natural language + model path
- **Dependencies**: P3.6, P1.16
- **Acceptance**: Claude can create studies from descriptions

### P3.14 - Update Claude prompts for spec workflow
- **File**: `atomizer-dashboard/backend/api/services/context_builder.py`
- **Action**: Update system prompts to guide Claude on using spec tools
- **Dependencies**: P3.13
- **Acceptance**: Claude naturally uses spec tools for modifications

---

## PHASE 4: Polish & Testing

### P4.1 - Migrate m1_mirror studies
- **Action**: Run migration on all m1_mirror_* studies
- **Dependencies**: P1.17
- **Acceptance**: All studies have valid atomizer_spec.json

### P4.2 - Migrate drone_gimbal study
- **Action**: Run migration on drone_gimbal study
- **Dependencies**: P1.17
- **Acceptance**: Study has valid atomizer_spec.json

### P4.3 - Migrate all remaining studies
- **Action**: Run migration on all studies in studies/
- **Dependencies**: P4.1, P4.2
- **Acceptance**: All studies migrated, validated

### P4.4 - Create spec unit tests
- **File**: `tests/test_spec_manager.py`
- **Action**: Unit tests for SpecManager operations
- **Dependencies**: P1.7
- **Acceptance**: All SpecManager methods tested

### P4.5 - Create spec API integration tests
- **File**: `tests/test_spec_api.py`
- **Action**: Integration tests for REST endpoints
- **Dependencies**: P1.12
- **Acceptance**: All endpoints tested

### P4.6 - Create migration tests
- **File**: `tests/test_migrator.py`
- **Action**: Test migration with various config formats
- **Dependencies**: P1.16
- **Acceptance**: All config variants migrate correctly

### P4.7 - Create frontend component tests
- **File**: `atomizer-dashboard/frontend/src/__tests__/SpecRenderer.test.tsx`
- **Action**: Test SpecRenderer with various specs
- **Dependencies**: P2.16
- **Acceptance**: Canvas renders correctly for all spec types

### P4.8 - Create WebSocket sync tests
- **File**: `tests/test_spec_sync.py`
- **Action**: Test real-time sync between multiple clients
- **Dependencies**: P2.15
- **Acceptance**: Changes propagate correctly

### P4.9 - Create MCP tools tests
- **File**: `mcp-server/atomizer-tools/src/__tests__/spec_tools.test.ts`
- **Action**: Test all spec_* MCP tools
- **Dependencies**: P3.7
- **Acceptance**: All tools work correctly

### P4.10 - End-to-end testing
- **Action**: Full workflow test: create study in canvas, modify via Claude, run optimization
- **Dependencies**: P4.1-P4.9
- **Acceptance**: Complete workflow works

### P4.11 - Update documentation
- **Files**: `docs/guides/CANVAS.md`, `docs/guides/DASHBOARD.md`
- **Action**: Document new spec-based workflow
- **Dependencies**: P4.10
- **Acceptance**: Documentation reflects new system

### P4.12 - Update CLAUDE.md
- **File**: `CLAUDE.md`
- **Action**: Add spec tools documentation, update context loading
- **Dependencies**: P4.11
- **Acceptance**: Claude Code sessions know about spec system

---

## File Summary

### New Files to Create

| File | Phase | Purpose |
|------|-------|---------|
| `atomizer-dashboard/frontend/src/types/atomizer-spec.ts` | P1 | TypeScript types |
| `optimization_engine/config/spec_models.py` | P1 | Pydantic models |
| `optimization_engine/config/spec_validator.py` | P1 | Validation logic |
| `atomizer-dashboard/backend/api/services/spec_manager.py` | P1 | Core spec service |
| `atomizer-dashboard/backend/api/routes/spec.py` | P1 | REST endpoints |
| `optimization_engine/config/migrator.py` | P1 | Config migration |
| `tools/migrate_to_spec_v2.py` | P1 | Migration CLI |
| `atomizer-dashboard/frontend/src/hooks/useSpecStore.ts` | P2 | Spec state store |
| `atomizer-dashboard/frontend/src/hooks/useSpecSync.ts` | P2 | WebSocket sync |
| `atomizer-dashboard/frontend/src/lib/spec/converter.ts` | P2 | Spec ↔ ReactFlow |
| `atomizer-dashboard/frontend/src/components/canvas/SpecRenderer.tsx` | P2 | New canvas component |
| `mcp-server/atomizer-tools/src/tools/spec_tools.ts` | P3 | MCP tools |
| `optimization_engine/extractors/custom_loader.py` | P3 | Custom function loader |
| `atomizer-dashboard/frontend/src/components/canvas/nodes/CustomExtractorNode.tsx` | P3 | Custom node type |
| `atomizer-dashboard/frontend/src/components/canvas/panels/CustomExtractorPanel.tsx` | P3 | Code editor panel |

### Files to Modify

| File | Phase | Changes |
|------|-------|---------|
| `atomizer-dashboard/backend/api/main.py` | P1 | Mount spec router |
| `mcp-server/atomizer-tools/src/index.ts` | P3 | Register spec tools |
| `atomizer-dashboard/backend/api/services/context_builder.py` | P3 | Update Claude context |
| `optimization_engine/core/runner.py` | P3 | Custom extractor support |
| `atomizer-dashboard/frontend/src/pages/CanvasView.tsx` | P2 | Use SpecRenderer |
| `CLAUDE.md` | P4 | Document spec system |

---

## Success Criteria

### Phase 1 Complete When:
- [ ] SpecManager can load/save/validate specs
- [ ] All REST endpoints return correct responses
- [ ] Migration tool converts existing configs

### Phase 2 Complete When:
- [ ] Canvas renders from AtomizerSpec
- [ ] Edits persist to spec file
- [ ] WebSocket sync works between clients

### Phase 3 Complete When:
- [ ] Claude can read and modify specs via MCP
- [ ] Custom extractors work in optimization
- [ ] Claude can create studies from descriptions

### Phase 4 Complete When:
- [ ] All existing studies migrated
- [ ] All tests pass
- [ ] Documentation updated

---

## Risk Mitigation

| Risk | Mitigation |
|------|------------|
| Existing studies break | Migration tool with dry-run, keep old configs as backup |
| WebSocket complexity | Start with polling, add WebSocket as enhancement |
| Custom code security | Sandbox execution, syntax validation, no imports |
| Performance with large specs | Lazy loading, incremental updates |

---

## Quick Reference

**Master Design**: `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md`
**Schema**: `optimization_engine/schemas/atomizer_spec_v2.json`
**This Plan**: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md`

---

*Execute tasks in order. Each task has clear acceptance criteria. Reference the master design document for detailed specifications.*

92	docs/plans/UNIFIED_CONFIG_QUICKSTART.md	Normal file
@@ -0,0 +1,92 @@

# AtomizerSpec v2.0 - Quick Start Guide

## TL;DR

**Goal**: Replace 4+ config formats with one unified `atomizer_spec.json` that Canvas, Backend, Claude, and Optimization Engine all use.

**Effort**: ~50 tasks across 4 phases (~12 weeks)

**Key Files**:
- Design: `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md`
- Tasks: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md`
- Schema: `optimization_engine/schemas/atomizer_spec_v2.json`
- Prompts: `docs/plans/UNIFIED_CONFIG_RALPH_PROMPT.md`

---

## Start Autonomous Implementation

### 1. Open Claude CLI
```bash
claude --dangerously-skip-permissions
```

### 2. Paste Initial Prompt
```
You are implementing the AtomizerSpec v2.0 Unified Configuration Architecture for Atomizer.

Read these documents before starting:
1. `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` - Master design
2. `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` - Task list (50+ tasks)
3. `optimization_engine/schemas/atomizer_spec_v2.json` - JSON Schema

Implement tasks in order (P1.1 → P4.12). Use TodoWrite to track progress. Commit after each logical group. Reference the design document for code examples.

Begin by reading the documents and loading tasks into TodoWrite. Start with P1.1.
```

### 3. Let It Run
Claude will work through the task list autonomously, committing as it goes.

---

## Phase Summary

| Phase | Tasks | Focus | Key Deliverables |
|-------|-------|-------|------------------|
| **1** | P1.1-P1.17 | Backend | SpecManager, REST API, Migration |
| **2** | P2.1-P2.16 | Frontend | SpecRenderer, WebSocket Sync |
| **3** | P3.1-P3.14 | Claude | MCP Tools, Custom Functions |
| **4** | P4.1-P4.12 | Polish | Tests, Migration, Docs |

---

## Key Concepts

**AtomizerSpec**: Single JSON file containing everything needed for optimization AND canvas display.

**SpecManager**: Backend service that validates, saves, and broadcasts spec changes.

**SpecRenderer**: Frontend component that renders the canvas directly from the spec.

**Real-time Sync**: WebSocket broadcasts changes to all connected clients.

**Custom Functions**: Python code stored in the spec, loaded dynamically during optimization.

---

## Success Metrics

- [ ] Canvas loads/saves without data loss
- [ ] Claude can modify specs via MCP tools
- [ ] All existing studies migrated
- [ ] WebSocket sync works between clients
- [ ] Custom extractors execute in optimization
- [ ] All tests pass

---

## If Resuming

```
Continue implementing AtomizerSpec v2.0.

Reference:
- `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md`

Check current todo list, find last completed task, continue from there.
```

---

*Full details in the referenced documents.*

286	docs/plans/UNIFIED_CONFIG_RALPH_PROMPT.md	Normal file
@@ -0,0 +1,286 @@

# Ralph Loop Prompt: AtomizerSpec v2.0 Implementation

**Copy this prompt to start the autonomous implementation loop.**

---

## Claude CLI Command

```bash
claude --dangerously-skip-permissions
```

---

## Initial Prompt (Copy This)

```
You are implementing the AtomizerSpec v2.0 Unified Configuration Architecture for Atomizer.

## Project Context

Read these documents before starting:
1. `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` - Master design document with architecture, schemas, and component designs
2. `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` - Detailed task list with 50+ tasks across 4 phases
3. `optimization_engine/schemas/atomizer_spec_v2.json` - The JSON Schema definition

## Your Mission

Implement the AtomizerSpec v2.0 system following the execution plan. Work through tasks in order (P1.1 → P1.2 → ... → P4.12).

## Rules

1. **Follow the plan**: Execute tasks in the order specified in UNIFIED_CONFIG_EXECUTION_PLAN.md
2. **Check dependencies**: Don't start a task until its dependencies are complete
3. **Use TodoWrite**: Track all tasks, mark in_progress when starting, completed when done
4. **Test as you go**: Verify each task meets its acceptance criteria before moving on
5. **Commit regularly**: Commit after completing each logical group of tasks (e.g., after P1.7, after P1.12)
6. **Reference the design**: The master document has code examples - use them
7. **Ask if blocked**: If you encounter a blocker, explain what's wrong and what you need

## Starting Point

1. Read the three reference documents listed above
2. Load the current task list into TodoWrite
3. Start with P1.1 (Create TypeScript types from JSON Schema)
4. Work through each task, marking completion as you go

## Commit Message Format

Use conventional commits:
- `feat(spec): Add SpecManager core class (P1.4)`
- `feat(spec-api): Implement GET /studies/{id}/spec (P1.9)`
- `feat(frontend): Create useSpecStore hook (P2.1)`
- `test(spec): Add SpecManager unit tests (P4.4)`

## Progress Tracking

After completing each phase, summarize:
- Tasks completed
- Files created/modified
- Any deviations from plan
- Blockers encountered

Begin by reading the reference documents and loading tasks into TodoWrite.
```

---

## Continuation Prompt (When Resuming)

```
Continue implementing the AtomizerSpec v2.0 system.

Reference documents:
- `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md` - Master design
- `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` - Task list

Check where we left off:
1. Read the current todo list
2. Find the last completed task
3. Continue with the next task in sequence

If you need context on what was done, check recent git commits.

Resume implementation.
```

---

## Phase-Specific Prompts

### Start Phase 1 (Foundation)
```
Begin Phase 1 of AtomizerSpec implementation - Foundation/Backend.

Reference: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` section "PHASE 1: Foundation"

Tasks P1.1 through P1.17 focus on:
- TypeScript and Python types from JSON Schema
- SpecManager service (load, save, validate, patch, nodes)
- REST API endpoints
- Migration tool

Start with P1.1: Create TypeScript types from the JSON Schema at `optimization_engine/schemas/atomizer_spec_v2.json`.

Work through each task in order, tracking with TodoWrite.
```

### Start Phase 2 (Frontend)
```
Begin Phase 2 of AtomizerSpec implementation - Frontend Integration.

Reference: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` section "PHASE 2: Frontend Integration"

Tasks P2.1 through P2.16 focus on:
- useSpecStore hook (Zustand state management)
- Spec ↔ ReactFlow conversion
- SpecRenderer component
- WebSocket real-time sync
- Replacing old canvas with spec-based canvas

Prerequisites: Phase 1 must be complete (P1.1-P1.17).

Start with P2.1: Create useSpecStore hook.

Work through each task in order, tracking with TodoWrite.
```

### Start Phase 3 (Claude Integration)
```
Begin Phase 3 of AtomizerSpec implementation - Claude Integration.

Reference: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` section "PHASE 3: Claude Integration"

Tasks P3.1 through P3.14 focus on:
- MCP tools for spec operations (spec_get, spec_modify, spec_add_node, etc.)
- Custom extractor support (add Python functions via Claude)
- Runtime loading of custom functions
- Natural language study creation

Prerequisites: Phases 1 and 2 must be complete.

Start with P3.1: Create spec_get MCP tool.

Work through each task in order, tracking with TodoWrite.
```

### Start Phase 4 (Polish & Testing)
```
Begin Phase 4 of AtomizerSpec implementation - Polish & Testing.

Reference: `docs/plans/UNIFIED_CONFIG_EXECUTION_PLAN.md` section "PHASE 4: Polish & Testing"

Tasks P4.1 through P4.12 focus on:
- Migrating all existing studies to AtomizerSpec v2
- Unit tests, integration tests, e2e tests
- Documentation updates

Prerequisites: Phases 1, 2, and 3 must be complete.

Start with P4.1: Migrate m1_mirror studies.

Work through each task in order, tracking with TodoWrite.
```

---

## Troubleshooting Prompts

### If Stuck on a Task
```
I'm blocked on task [TASK_ID].

The issue is: [describe the problem]

What I've tried: [list attempts]

Please help me resolve this so I can continue with the execution plan.
```

### If Tests Fail
```
Tests are failing for [component].

Error: [paste error]

This is blocking task [TASK_ID]. Help me fix the tests so I can mark the task complete.
```

### If Design Needs Clarification
```
Task [TASK_ID] requires [specific thing] but the design document doesn't specify [what's unclear].

Options I see:
1. [option A]
2. [option B]

Which approach should I take? Or should I update the design document?
```

---

## Full Task List for TodoWrite

Copy this to initialize the todo list:

```
P1.1 - Create TypeScript types from JSON Schema
P1.2 - Create Python Pydantic models from JSON Schema
P1.3 - Create spec validation utility
P1.4 - Create SpecManager core class
P1.5 - Add SpecManager patch functionality
P1.6 - Add SpecManager node operations
P1.7 - Add SpecManager custom function support
P1.8 - Create spec REST router
P1.9 - Implement GET /studies/{study_id}/spec
P1.10 - Implement PUT /studies/{study_id}/spec
P1.11 - Implement PATCH /studies/{study_id}/spec
P1.12 - Implement POST /studies/{study_id}/spec/validate
P1.13 - Create config migration base
P1.14 - Implement design variable migration
P1.15 - Implement objective/constraint migration
P1.16 - Implement full config migration
P1.17 - Create migration CLI tool
P2.1 - Create useSpecStore hook
P2.2 - Add spec loading to useSpecStore
P2.3 - Add spec modification to useSpecStore
P2.4 - Add optimistic updates to useSpecStore
P2.5 - Create specToNodes converter
P2.6 - Create specToEdges converter
P2.7 - Create SpecRenderer component
P2.8 - Wire node editing to spec updates
P2.9 - Wire node position to spec updates
P2.10 - Update node panels for full spec fields
P2.11 - Create WebSocket connection hook
P2.12 - Create WebSocket backend endpoint
P2.13 - Implement spec_updated broadcast
P2.14 - Handle spec_updated in frontend
P2.15 - Add conflict detection
P2.16 - Replace CanvasView with SpecRenderer
P3.1 - Create spec_get MCP tool
P3.2 - Create spec_modify MCP tool
P3.3 - Create spec_add_node MCP tool
P3.4 - Create spec_remove_node MCP tool
P3.5 - Create spec_validate MCP tool
P3.6 - Create spec_add_custom_extractor MCP tool
P3.7 - Register spec tools in MCP server
P3.8 - Update ContextBuilder for spec awareness
P3.9 - Create custom extractor runtime loader
P3.10 - Integrate custom extractors into optimization runner
P3.11 - Add custom extractor node type to canvas
|
||||
P3.12 - Add code editor for custom extractors
|
||||
P3.13 - Create spec_create_from_description MCP tool
|
||||
P3.14 - Update Claude prompts for spec workflow
|
||||
P4.1 - Migrate m1_mirror studies
|
||||
P4.2 - Migrate drone_gimbal study
|
||||
P4.3 - Migrate all remaining studies
|
||||
P4.4 - Create spec unit tests
|
||||
P4.5 - Create spec API integration tests
|
||||
P4.6 - Create migration tests
|
||||
P4.7 - Create frontend component tests
|
||||
P4.8 - Create WebSocket sync tests
|
||||
P4.9 - Create MCP tools tests
|
||||
P4.10 - End-to-end testing
|
||||
P4.11 - Update documentation
|
||||
P4.12 - Update CLAUDE.md
|
||||
```
|
||||
|
||||
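If you want to seed a tracker programmatically rather than pasting the list by hand, the `P{phase}.{number} - {name}` lines parse with a short regex. A minimal illustrative sketch (not part of the Atomizer codebase):

```python
import re

TASK_RE = re.compile(r"^P(\d+)\.(\d+) - (.+)$")

def parse_tasks(text: str) -> list[dict]:
    """Parse 'P1.1 - Task name' lines into structured entries."""
    tasks = []
    for line in text.splitlines():
        m = TASK_RE.match(line.strip())
        if m:
            tasks.append({"phase": int(m.group(1)),
                          "number": int(m.group(2)),
                          "name": m.group(3)})
    return tasks
```

Lines that do not match the `P{X}.{Y} - name` shape are simply skipped, so the whole fenced block above can be fed in as-is.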
---

## Expected Outcomes

After successful completion:

1. **Single Source of Truth**: `atomizer_spec.json` used everywhere
2. **Bidirectional Sync**: Canvas ↔ Spec with no data loss
3. **Real-time Updates**: WebSocket keeps all clients in sync
4. **Claude Integration**: Claude can read/modify/create specs via MCP
5. **Custom Functions**: Users can add Python extractors through UI/Claude
6. **All Studies Migrated**: Existing configs converted to v2 format
7. **Full Test Coverage**: Unit, integration, and e2e tests passing
8. **Updated Documentation**: User guides reflect new workflow

---

*This prompt file enables autonomous implementation of the AtomizerSpec v2.0 system using the Ralph Loop pattern.*
197
docs/reference/DEEP_INVESTIGATION_PROMPT.md
Normal file
@@ -0,0 +1,197 @@
# Deep Investigation & Architecture Analysis Prompt

**Purpose**: Reusable instructions for conducting thorough system investigations and producing actionable design documents.

---

## Full Investigation Prompt

Use this when you need a comprehensive analysis of a system, architecture, or problem:

```markdown
Please conduct a comprehensive investigation and produce a master design document.

### Investigation Phase

1. **Multi-Agent Exploration**: Launch parallel exploration agents to examine:
   - Frontend components and state management
   - Backend services, routes, and data flow
   - Core engine/library code and schemas
   - Configuration files and existing patterns
   - Any MCP tools, APIs, or integration points

2. **Source of Truth Analysis**: Identify ALL places where the same concept is represented:
   - Data schemas/types (JSON, TypeScript, Python)
   - Configuration formats and their variants
   - State management across components
   - Document inconsistencies and naming conflicts

3. **Data Flow Mapping**: Trace how data moves through the system:
   - Entry points → Processing → Storage → Display
   - Identify lossy conversions and sync gaps
   - Note missing endpoints or broken connections

### Documentation Phase

4. **Current State Report**: Document what exists with:
   - Architecture diagrams (text-based)
   - Comparison tables showing format inconsistencies
   - Data flow diagrams showing where information is lost
   - Clear "What's Wrong / What's OK" assessment

5. **Problem Statement**: Articulate:
   - Core issues with severity ratings
   - User pain points with concrete scenarios
   - Technical debt and architectural gaps

### Design Phase

6. **Proposed Architecture**: Design the solution with:
   - Single source of truth principle
   - Complete schema definition (JSON Schema if applicable)
   - Component responsibilities and interfaces
   - API/endpoint specifications
   - Real-time sync strategy if needed

7. **Integration Design**: Show how components connect:
   - Frontend ↔ Backend ↔ Engine data contracts
   - Bidirectional sync mechanisms
   - AI/Assistant integration points
   - Extensibility patterns (plugins, custom functions)

### Planning Phase

8. **Implementation Roadmap**: Break into phases with:
   - Clear deliverables per phase
   - Priority (P0/P1/P2) and effort estimates
   - Dependencies between tasks
   - Migration strategy for existing data

9. **Appendices**: Include:
   - Glossary of terms
   - File location references
   - Comparison tables (before/after)
   - Code examples for key components

### Output Format

Produce:
1. **Master Document** (Markdown): Complete design document with all sections
2. **Schema Files**: Actual JSON Schema or type definitions ready to use
3. **Executive Summary**: Key findings, what's broken, proposed solution, timeline

### Quality Standards

- Read actual source files, don't assume
- Use tables for comparisons (makes inconsistencies obvious)
- Include text-based diagrams for architecture and data flow
- Provide concrete code examples, not just descriptions
- Make it actionable: someone should be able to implement from this document
```

---

## Quick Version

For faster investigations or when you already have context:

```markdown
Deep investigation with master document output:

1. **Explore**: Parallel agents examine frontend, backend, engine, schemas
2. **Map**: Trace data flow, identify all representations of same concept
3. **Compare**: Tables showing format inconsistencies and naming conflicts
4. **Diagram**: Architecture and data flow (text-based)
5. **Assess**: What's wrong (severity) / What's OK / User pain points
6. **Design**: Single source of truth, complete schema, API specs
7. **Plan**: Phased roadmap with priorities and effort estimates
8. **Deliver**: Master document + actual schema files + executive summary

Standards: Read actual code, use comparison tables, include diagrams, make it actionable.
```

---

## Trigger Phrases

Use these phrases to invoke deep analysis behavior:

| Phrase | Effect |
|--------|--------|
| "Make a very deep and thoughtful research on..." | Triggers comprehensive multi-agent exploration |
| "Produce a master document that will lead to implementation" | Ensures actionable output with roadmap |
| "Investigate with multi-agent exploration" | Parallel exploration of different system areas |
| "Map all representations and identify inconsistencies" | Source of truth analysis with comparison tables |
| "Design a single source of truth architecture" | Unified schema/format design |
| "Include comparison tables and data flow diagrams" | Visual documentation requirements |
| "Make it actionable with implementation roadmap" | Phased planning with priorities |

---

## Example Usage

### For Architecture Overhaul
```
Please conduct a deep investigation on how [Component A] and [Component B]
share data. I need a master document that:
- Maps all data representations across the system
- Identifies inconsistencies with comparison tables
- Proposes a unified architecture
- Includes implementation roadmap

Make it actionable - someone should be able to implement from this document.
```

### For Problem Diagnosis
```
Deep investigation needed: [describe the problem]

Explore the codebase to understand:
- Where this data/logic currently lives
- How it flows through the system
- What's broken and why

Produce a report with:
- Root cause analysis
- Data flow diagrams
- Proposed fix with implementation steps
```

### For New Feature Design
```
I want to add [feature]. Before implementing, conduct a deep analysis:

1. How does similar functionality work in the codebase?
2. What components would be affected?
3. What's the cleanest integration approach?

Produce a design document with architecture, API specs, and phased roadmap.
```

---

## Output Checklist

A good deep investigation should produce:

- [ ] **Architecture diagram** (text-based, showing component relationships)
- [ ] **Data flow diagram** (showing how data moves, where it's transformed)
- [ ] **Comparison tables** (format inconsistencies, naming conflicts)
- [ ] **Problem severity matrix** (Critical/High/Medium/Low ratings)
- [ ] **User pain points** (concrete scenarios, not abstract)
- [ ] **Proposed schema/types** (actual JSON Schema or TypeScript)
- [ ] **API specifications** (endpoints, request/response formats)
- [ ] **Implementation roadmap** (phased, with effort estimates)
- [ ] **Migration strategy** (for existing data/code)
- [ ] **Code examples** (for key components)

---

## Related Documents

- Example output: `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md`
- Example schema: `optimization_engine/schemas/atomizer_spec_v2.json`

---

*This prompt template was developed during the Atomizer Unified Configuration Architecture investigation (January 2026).*
322
docs/reference/EXECUTION_PLAN_GENERATOR_PROMPT.md
Normal file
@@ -0,0 +1,322 @@
# Execution Plan & Ralph Loop Generator

**Purpose**: Instructions for generating comprehensive execution plans with autonomous implementation prompts from any design document.

---

## When to Use

After completing a deep investigation that produced:
- A master design document (architecture, schemas, component designs)
- Clear understanding of what needs to be built

Use this to generate:
- Detailed task list with dependencies
- File-level implementation instructions
- Autonomous execution prompts (Ralph Loop)
- Quick reference guides

---

## Generator Prompt

```markdown
Based on the design document at [PATH_TO_DESIGN_DOC], create a comprehensive execution plan for autonomous implementation.

### Task Breakdown Requirements

1. **Granularity**: Break work into tasks that take 15-60 minutes each
2. **Atomic**: Each task should produce a testable/verifiable output
3. **Sequential IDs**: Use format P{phase}.{number} (e.g., P1.1, P1.2, P2.1)
4. **Dependencies**: Explicitly list which tasks must complete first

### For Each Task, Specify:

- **Task ID & Name**: P1.1 - Create TypeScript types from JSON Schema
- **File(s)**: Exact file path(s) to create or modify
- **Action**: Specific implementation instructions
- **Reference**: Section of design doc with details/examples
- **Dependencies**: List of prerequisite task IDs
- **Acceptance Criteria**: How to verify task is complete

### Structure the Plan As:

1. **Project Overview**: Goal, reference docs, success metrics
2. **Phase Structure**: Table showing phases, duration, focus areas
3. **Implementation Order**: Critical path diagram (text-based)
4. **Phase Sections**: Detailed tasks grouped by phase
5. **File Summary**: Tables of files to create and modify
6. **Success Criteria**: Checkboxes for each phase completion
7. **Risk Mitigation**: Known risks and how to handle them

### Also Generate:

1. **Ralph Loop Prompts**:
   - Initial prompt to start autonomous execution
   - Continuation prompt for resuming
   - Phase-specific prompts
   - Troubleshooting prompts
   - Full task list for TodoWrite initialization

2. **Quick Start Guide**:
   - TL;DR summary
   - Copy-paste commands to start
   - Phase summary table
   - Key concepts glossary
   - Success metrics checklist

### Output Files:

- `{PROJECT}_EXECUTION_PLAN.md` - Detailed task list
- `{PROJECT}_RALPH_PROMPT.md` - Autonomous execution prompts
- `{PROJECT}_QUICKSTART.md` - Quick reference guide
```

---

## Example Usage

### Input
```
Based on the design document at `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md`,
create a comprehensive execution plan for autonomous implementation.

The project involves:
- Backend services (Python/FastAPI)
- Frontend components (React/TypeScript)
- MCP tools integration
- Database migrations

Generate the execution plan, Ralph Loop prompts, and quick start guide.
```

### Output Structure
```
docs/plans/
├── {PROJECT}_EXECUTION_PLAN.md   # 50+ detailed tasks
├── {PROJECT}_RALPH_PROMPT.md     # Autonomous prompts
└── {PROJECT}_QUICKSTART.md       # TL;DR guide
```

---

## Task Template

Use this template for each task:

```markdown
### P{X}.{Y} - {Task Name}
- **File**: `path/to/file.ext`
- **Action**: {What to implement}
- **Reference**: {Design doc section} Section X.Y
- **Dependencies**: P{X}.{Z}, P{X}.{W}
- **Acceptance**: {How to verify completion}
```

---

## Phase Template

```markdown
## PHASE {N}: {Phase Name}

### P{N}.1 - {First Task}
- **File**: `path/to/file`
- **Action**: {Description}
- **Dependencies**: {None or list}
- **Acceptance**: {Criteria}

### P{N}.2 - {Second Task}
...
```

---

## Ralph Prompt Template

```markdown
# Ralph Loop Prompt: {Project Name}

## Claude CLI Command
\`\`\`bash
claude --dangerously-skip-permissions
\`\`\`

## Initial Prompt (Copy This)
\`\`\`
You are implementing {Project Description}.

## Project Context

Read these documents before starting:
1. `{path/to/design_doc}` - Master design document
2. `{path/to/execution_plan}` - Detailed task list
3. `{path/to/schema_or_types}` - Type definitions (if applicable)

## Your Mission

Implement the system following the execution plan. Work through tasks in order (P1.1 → P1.2 → ... → P{N}.{M}).

## Rules

1. **Follow the plan**: Execute tasks in order specified
2. **Check dependencies**: Don't start until prerequisites complete
3. **Use TodoWrite**: Track all tasks, mark progress
4. **Test as you go**: Verify acceptance criteria
5. **Commit regularly**: After each logical group
6. **Reference the design**: Use code examples from docs
7. **Ask if blocked**: Explain blockers clearly

Begin by reading the reference documents and loading tasks into TodoWrite.
\`\`\`

## Continuation Prompt
\`\`\`
Continue implementing {Project Name}.

Reference: `{path/to/execution_plan}`

Check current todo list, find last completed task, continue from there.
\`\`\`

## Full Task List for TodoWrite
\`\`\`
P1.1 - {Task name}
P1.2 - {Task name}
...
\`\`\`
```

---

## Quick Start Template

```markdown
# {Project Name} - Quick Start Guide

## TL;DR

**Goal**: {One sentence description}

**Effort**: ~{N} tasks across {M} phases (~{W} weeks)

**Key Files**:
- Design: `{path}`
- Tasks: `{path}`
- Prompts: `{path}`

---

## Start Autonomous Implementation

### 1. Open Claude CLI
\`\`\`bash
claude --dangerously-skip-permissions
\`\`\`

### 2. Paste Initial Prompt
\`\`\`
{Condensed version of initial prompt}
\`\`\`

### 3. Let It Run

---

## Phase Summary

| Phase | Tasks | Focus | Deliverables |
|-------|-------|-------|--------------|
| 1 | P1.1-P1.{N} | {Focus} | {Key outputs} |
| 2 | P2.1-P2.{N} | {Focus} | {Key outputs} |
...

---

## Success Metrics

- [ ] {Metric 1}
- [ ] {Metric 2}
...
```

---

## Quality Checklist

Before finalizing the execution plan, verify:

- [ ] Every task has a specific file path
- [ ] Every task has clear acceptance criteria
- [ ] Dependencies form a valid DAG (no cycles)
- [ ] Tasks are ordered so dependencies come first
- [ ] Phase boundaries make logical sense
- [ ] Commit points are identified (after groups of related tasks)
- [ ] Risk mitigations are documented
- [ ] Ralph prompts reference all necessary documents
- [ ] Quick start is actually quick (fits on one screen)
- [ ] Task list for TodoWrite is complete and copy-pasteable
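The DAG check in the list above is mechanical, and Python's standard library can perform it. A minimal sketch with hypothetical task data (not from any real plan):

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical dependency map: task ID -> prerequisite task IDs.
deps = {
    "P1.1": [],
    "P1.2": ["P1.1"],
    "P2.1": ["P1.1", "P1.2"],
}

try:
    # static_order() raises CycleError if the dependencies contain a cycle.
    order = list(TopologicalSorter(deps).static_order())
    print("Valid DAG, execution order:", order)
except CycleError as exc:
    print("Dependency cycle detected:", exc.args[1])
```

This simultaneously verifies the "dependencies come first" checkbox: `static_order()` always yields prerequisites before their dependents.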
---

## Complexity Guidelines

| Project Size | Tasks | Phases | Typical Duration |
|--------------|-------|--------|------------------|
| Small | 10-20 | 2 | 1-2 weeks |
| Medium | 20-50 | 3-4 | 3-6 weeks |
| Large | 50-100 | 4-6 | 6-12 weeks |
| Epic | 100+ | 6+ | 12+ weeks |

---

## Common Patterns

### Backend-First Pattern
```
Phase 1: Data models, validation, core service
Phase 2: REST API endpoints
Phase 3: Frontend integration
Phase 4: Testing & polish
```

### Full-Stack Feature Pattern
```
Phase 1: Schema/types, backend service, API
Phase 2: Frontend components, state management
Phase 3: Integration, real-time sync
Phase 4: Testing, documentation
```

### Migration Pattern
```
Phase 1: New system alongside old
Phase 2: Migration tooling
Phase 3: Gradual cutover
Phase 4: Deprecate old system
```

### Integration Pattern
```
Phase 1: Define contracts/interfaces
Phase 2: Implement adapters
Phase 3: Wire up connections
Phase 4: Testing, error handling
```

---

## Tips for Better Plans

1. **Start with the end**: Define success criteria first, then work backward
2. **Identify the critical path**: What's the minimum to get something working?
3. **Group related tasks**: Makes commits logical and rollback easier
4. **Front-load risky tasks**: Discover blockers early
5. **Include buffer**: Things always take longer than expected
6. **Make tasks testable**: "It works" is not acceptance criteria
7. **Reference existing code**: Point to similar patterns in codebase
8. **Consider parallelism**: Some phases can overlap if teams split work

---

*Use this template to generate execution plans for any project following a design document.*
730
docs/reviews/ARCHITECTURE_REVIEW.md
Normal file
@@ -0,0 +1,730 @@
# Atomizer Architecture Review

**Date**: January 2026
**Version**: 2.0 (AtomizerSpec unified configuration)
**Author**: Architecture Review

---

## Executive Summary

Atomizer is a structural optimization platform that enables engineers to optimize FEA (Finite Element Analysis) designs through a visual canvas interface and AI-powered assistance. The architecture follows a **single source of truth** pattern where all configuration flows through `atomizer_spec.json`.

### Key Strengths
- **Unified Configuration**: One JSON file defines the entire optimization study
- **Type Safety**: Pydantic validation at every modification point
- **Real-time Collaboration**: WebSocket-based sync between all clients
- **Responsive UI**: Optimistic updates with background synchronization
- **AI Integration**: Claude can modify configurations in Power Mode

### Architecture Quality Score: **8.5/10**

| Aspect | Score | Notes |
|--------|-------|-------|
| Data Integrity | 9/10 | Single source of truth, hash-based conflict detection |
| Type Safety | 9/10 | Pydantic models throughout backend |
| Extensibility | 8/10 | Custom extractors, algorithms supported |
| Performance | 8/10 | Optimistic updates, WebSocket streaming |
| Maintainability | 8/10 | Clear separation of concerns |
| Documentation | 7/10 | Good inline docs, needs more high-level guides |

---

## 1. Configuration Layer

### 1.1 AtomizerSpec v2.0 - The Single Source of Truth

**Location**: `studies/{study_name}/atomizer_spec.json`

The AtomizerSpec is the heart of Atomizer's configuration. Every component reads from and writes to this single file.

```
atomizer_spec.json
├── meta                  # Study metadata
│   ├── version: "2.0"
│   ├── study_name
│   ├── created_by        # canvas | claude | api | migration
│   └── modified_at
├── model                 # NX model files
│   ├── sim: { path, solver }
│   ├── nx_part: { path }
│   └── fem: { path }
├── design_variables[]    # Parameters to optimize
│   ├── id: "dv_001"
│   ├── name, expression_name
│   ├── type: continuous | discrete
│   ├── bounds: { min, max }
│   └── canvas_position
├── extractors[]          # Physics result extractors
│   ├── id: "ext_001"
│   ├── type: mass | displacement | stress | zernike | custom
│   ├── config: {}
│   └── outputs: [{ name, metric }]
├── objectives[]          # Optimization goals
│   ├── id: "obj_001"
│   ├── direction: minimize | maximize
│   ├── weight
│   └── source: { extractor_id, output_key }
├── constraints[]         # Hard/soft constraints
│   ├── id: "con_001"
│   ├── operator: <= | >= | ==
│   ├── threshold
│   └── source: { extractor_id, output_key }
├── optimization          # Algorithm settings
│   ├── algorithm: { type, config }
│   ├── budget: { max_trials }
│   └── surrogate: { enabled, type }
└── canvas                # UI layout state
    └── edges[]
```

### 1.2 Node ID Convention

All configurable elements use unique IDs with prefixes:

| Prefix | Element Type | Example |
|--------|--------------|---------|
| `dv_` | Design Variable | `dv_001`, `dv_002` |
| `ext_` | Extractor | `ext_001`, `ext_002` |
| `obj_` | Objective | `obj_001`, `obj_002` |
| `con_` | Constraint | `con_001`, `con_002` |

IDs are auto-generated: `{prefix}{max_existing + 1:03d}`
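The formula above can be sketched in a few lines of Python (an illustration of the convention, not the actual SpecManager code):

```python
def next_node_id(prefix: str, existing_ids: list[str]) -> str:
    """Next ID for a prefix, following '{prefix}{max_existing + 1:03d}'."""
    numbers = [
        int(i.removeprefix(prefix))
        for i in existing_ids
        if i.startswith(prefix) and i.removeprefix(prefix).isdigit()
    ]
    return f"{prefix}{max(numbers, default=0) + 1:03d}"
```

For example, `next_node_id("dv_", ["dv_001", "dv_002", "ext_001"])` yields `"dv_003"`; other prefixes in the list are ignored.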
### 1.3 Legacy Configuration

**File**: `optimization_config.json` (deprecated)

Legacy studies may have this file. The `SpecMigrator` automatically converts them to AtomizerSpec v2.0 on load.

```python
from optimization_engine.config.migrator import SpecMigrator
migrator = SpecMigrator(study_dir)
spec = migrator.migrate_file(legacy_path, spec_path)
```

---

## 2. Frontend Architecture

### 2.1 Technology Stack

| Layer | Technology | Purpose |
|-------|------------|---------|
| Build | Vite + TypeScript | Fast bundling, type safety |
| UI | React 18 + TailwindCSS | Component framework |
| State | Zustand | Lightweight global state |
| Canvas | ReactFlow | Graph visualization |
| Communication | Fetch + WebSocket | API + real-time sync |

### 2.2 Directory Structure

```
atomizer-dashboard/frontend/src/
├── components/
│   ├── canvas/              # Canvas components
│   │   ├── AtomizerCanvas   # Main wrapper
│   │   ├── SpecRenderer     # Spec → ReactFlow
│   │   ├── nodes/           # Node type components
│   │   └── panels/          # Side panels
│   └── chat/                # Claude chat UI
├── hooks/
│   ├── useSpecStore.ts      # Central state (Zustand)
│   ├── useChat.ts           # Claude integration
│   ├── useCanvasStore.ts    # Local canvas state
│   └── useSpecWebSocket.ts  # Real-time sync
├── lib/
│   └── spec/converter.ts    # Spec ↔ ReactFlow
├── types/
│   └── atomizer-spec.ts     # TypeScript definitions
└── pages/
    ├── CanvasView.tsx       # Main canvas page
    ├── Home.tsx             # Study selection
    └── Setup.tsx            # Study wizard
```

### 2.3 State Management Architecture

```
┌────────────────────────────────────────────────────────────┐
│                   useSpecStore (Zustand)                   │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌───────────────┐  │
│  │  spec   │  │  hash   │  │ isDirty │  │ selectedNode  │  │
│  └────┬────┘  └────┬────┘  └────┬────┘  └───────┬───────┘  │
└───────┼────────────┼────────────┼───────────────┼──────────┘
        │            │            │               │
        ▼            ▼            ▼               ▼
  ┌─────────┐  ┌─────────┐  ┌─────────┐    ┌─────────────┐
  │ Canvas  │  │Conflict │  │  Save   │    │ NodeConfig  │
  │ Render  │  │Detection│  │ Button  │    │   Panel     │
  └─────────┘  └─────────┘  └─────────┘    └─────────────┘
```

**Key Actions**:
- `loadSpec(studyId)` - Fetch spec from backend
- `patchSpec(path, value)` - Update with conflict check
- `patchSpecOptimistic(path, value)` - Fire-and-forget update
- `addNode(type, data)` - Add design var/extractor/etc.
- `removeNode(nodeId)` - Delete with edge cleanup

### 2.4 Canvas Rendering Pipeline

```
AtomizerSpec JSON
      │
      ▼
specToNodes() [converter.ts]
      │
      ├──► Model Node (synthetic)
      ├──► Solver Node (synthetic)
      ├──► DesignVar Nodes × N
      ├──► Extractor Nodes × N
      ├──► Objective Nodes × N
      ├──► Constraint Nodes × N
      ├──► Algorithm Node (synthetic)
      └──► Surrogate Node (optional)
      │
      ▼
ReactFlow Component
      │
      ▼
Interactive Canvas
```

**Layout Constants**:
```
Design Variables:  x = 50
Model:             x = 280
Solver:            x = 510
Extractors:        x = 740
Objectives:        x = 1020
Constraints:       x = 1020 (offset y)
Algorithm:         x = 1300
Surrogate:         x = 1530
Row Height:        100px
```

---

## 3. Backend Architecture

### 3.1 Technology Stack

| Layer | Technology | Purpose |
|-------|------------|---------|
| Framework | FastAPI | Async REST + WebSocket |
| Validation | Pydantic | Schema enforcement |
| Database | SQLite | Trial storage (Optuna schema) |
| LLM | Anthropic Claude | AI assistance |

### 3.2 Directory Structure

```
atomizer-dashboard/backend/api/
├── main.py                  # FastAPI app
├── routes/
│   ├── spec.py              # Spec CRUD + WebSocket
│   ├── optimization.py      # Run management
│   ├── claude.py            # Chat sessions
│   └── files.py             # File operations
└── services/
    ├── spec_manager.py      # Central spec management
    ├── claude_agent.py      # Claude with tools
    ├── context_builder.py   # System prompts
    └── session_manager.py   # WebSocket sessions
```

### 3.3 SpecManager Service

**The SpecManager is the gatekeeper for all spec modifications.**

```python
class SpecManager:
    def __init__(self, study_path: Path):
        self.study_path = study_path
        self.spec_file = study_path / "atomizer_spec.json"
        self.subscribers: List[WebSocket] = []

    # Loading
    def load_spec(self) -> AtomizerSpec
    def load_raw(self) -> dict

    # Validation
    def validate(self, spec) -> ValidationReport
    def validate_semantic(self, spec) -> ValidationReport

    # Modifications
    def patch_spec(self, path, value) -> dict
    def add_node(self, type, data) -> str
    def update_node(self, id, updates) -> None
    def remove_node(self, id) -> None

    # Persistence
    def save_spec(self, spec) -> dict   # Atomic write + hash
    def compute_hash(self, spec) -> str

    # Real-time sync
    def subscribe(self, ws: WebSocket)
    def broadcast(self, message: dict)
```

**Modification Flow**:
1. Load current spec
2. Apply modification
3. Validate with Pydantic
4. Atomic write to disk
5. Compute new hash
6. Broadcast to all WebSocket subscribers
7. Return hash + timestamp
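The seven steps reduce to roughly the following (a simplified sketch: the real `patch_spec` accepts JSONPath-style paths and runs full Pydantic validation, while here the patch targets a top-level key and validation is left as a comment):

```python
import hashlib
import json
from pathlib import Path

def patch_spec(spec_file: Path, key: str, value, subscribers=()) -> dict:
    """Load -> apply -> validate -> atomic write -> hash -> broadcast -> return."""
    spec = json.loads(spec_file.read_text())            # 1. load current spec
    spec[key] = value                                   # 2. apply modification
    # 3. validation would run here, e.g. AtomizerSpec(**spec)
    tmp = spec_file.with_suffix(".tmp")
    tmp.write_text(json.dumps(spec, indent=2))          # 4. atomic write via rename
    tmp.replace(spec_file)
    new_hash = hashlib.sha256(
        json.dumps(spec, sort_keys=True).encode()
    ).hexdigest()                                       # 5. compute new hash
    for ws in subscribers:                              # 6. broadcast to subscribers
        ws.send_json({"type": "spec_updated", "hash": new_hash})
    return {"hash": new_hash}                           # 7. return hash to caller
```

The write-then-rename in step 4 is what makes the update atomic: readers see either the old file or the new one, never a half-written spec.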
### 3.4 REST API Endpoints

| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/api/studies/{id}/spec` | Load full spec |
| GET | `/api/studies/{id}/spec/hash` | Get current hash |
| PUT | `/api/studies/{id}/spec` | Replace entire spec |
| PATCH | `/api/studies/{id}/spec` | JSONPath patch |
| POST | `/api/studies/{id}/spec/nodes` | Add node |
| PATCH | `/api/studies/{id}/spec/nodes/{nid}` | Update node |
| DELETE | `/api/studies/{id}/spec/nodes/{nid}` | Remove node |
| POST | `/api/studies/{id}/spec/validate` | Validate spec |
| WS | `/api/studies/{id}/spec/sync` | Real-time sync |

### 3.5 Conflict Detection

Conflicts are detected with a SHA256 hash of the spec content:

```
Client A loads spec (hash: abc123)
Client B loads spec (hash: abc123)

Client A modifies → sends with hash abc123
Server: hash matches → apply → new hash: def456
Server: broadcast to all clients

Client B modifies → sends with hash abc123
Server: hash mismatch (expected def456)
Server: return 409 Conflict
Client B: reload latest spec
```
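
The check itself is simple to sketch. The names below (`spec_hash`, `apply_patch`, `ConflictError`) are illustrative, not the backend's API; the point is that the server compares the client-supplied hash against the current spec before applying anything, and answers 409 on mismatch.

```python
import hashlib
import json

def spec_hash(spec: dict) -> str:
    # Hash a canonical serialization so key order does not change the digest.
    return hashlib.sha256(json.dumps(spec, sort_keys=True).encode()).hexdigest()

class ConflictError(Exception):
    """Server-side analog of the 409 Conflict response."""

def apply_patch(current: dict, client_hash: str, key: str, value) -> str:
    if spec_hash(current) != client_hash:
        raise ConflictError("Spec changed since client loaded it; reload latest.")
    current[key] = value
    return spec_hash(current)  # new hash, broadcast to all clients
```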

---

## 4. Optimization Engine

### 4.1 Directory Structure

```
optimization_engine/
├── config/                        # Configuration management
│   ├── spec_models.py             # Pydantic models
│   ├── spec_validator.py          # Semantic validation
│   └── migrator.py                # Legacy migration
├── extractors/                    # Physics extractors
│   ├── extract_displacement.py
│   ├── extract_stress.py
│   ├── extract_mass_*.py
│   ├── extract_zernike*.py
│   └── custom_extractor_loader.py
├── core/                          # Optimization algorithms
│   ├── runner.py                  # Main loop
│   ├── method_selector.py         # Algorithm selection
│   └── intelligent_optimizer.py   # IMSO
├── nx/                            # NX integration
│   ├── solver.py                  # Nastran execution
│   └── updater.py                 # Parameter updates
├── study/                         # Study management
│   ├── creator.py
│   └── state.py
└── utils/
    ├── dashboard_db.py            # Optuna schema
    └── trial_manager.py           # Trial CRUD
```

### 4.2 Extractor Library

| ID    | Type         | Function             | Inputs           |
|-------|--------------|----------------------|------------------|
| E1    | Displacement | Max/RMS displacement | OP2, subcase     |
| E2    | Frequency    | Eigenvalue           | OP2, mode        |
| E3    | Stress       | Von Mises, principal | OP2, element set |
| E4    | Mass (BDF)   | Total mass           | BDF file         |
| E5    | Mass (Expr)  | NX expression        | NX session       |
| E8-10 | Zernike      | OPD polynomial fit   | OP2, grid config |

**Custom Extractor Pattern**:

```python
def extract_volume(op2_path: str) -> Dict[str, float]:
    from pyNastran.op2.op2 import OP2
    op2 = OP2()
    op2.read_op2(op2_path)
    # ... calculation
    return {"volume_mm3": calculated_volume}
```
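
The `custom_extractor_loader` module is not reproduced here, but the standard way to load a user-supplied extractor function from a Python file is `importlib`. The sketch below is an assumption about its mechanics, not its actual code; `load_custom_extractor` is an illustrative name.

```python
import importlib.util
from pathlib import Path
from typing import Callable, Dict

def load_custom_extractor(py_file: Path, func_name: str) -> Callable[..., Dict[str, float]]:
    # Build a module from the user's file and execute it in isolation.
    module_spec = importlib.util.spec_from_file_location(py_file.stem, py_file)
    module = importlib.util.module_from_spec(module_spec)
    module_spec.loader.exec_module(module)
    fn = getattr(module, func_name)
    if not callable(fn):
        raise TypeError(f"{func_name} in {py_file} is not callable")
    return fn
```

A real loader would also sandbox or at least validate the returned dict (string keys, float values) before feeding it to the objective computation.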

### 4.3 Trial Storage

**Folder Structure**:

```
studies/{study}/2_iterations/
├── trial_0001/
│   ├── params.json    # Input parameters
│   ├── results.json   # Objectives/constraints
│   ├── _meta.json     # Metadata
│   └── *.op2, *.fem   # FEA outputs
├── trial_0002/
└── ...
```

**Database Schema** (Optuna-compatible SQLite):

```sql
trials (trial_id, study_id, number, state, created_at)
trial_params (trial_id, param_name, param_value)
trial_values (trial_id, objective_id, value)
trial_user_attributes (trial_id, key, value_json)
```
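
A minimal sketch of recording one completed trial against this schema. It is deliberately simplified (no column types beyond the basics, no indexes, not Optuna's full schema), and `record_trial` is an illustrative name rather than the actual `TrialManager` API:

```python
import json
import sqlite3

# Reduced version of the schema above; the real DashboardDB adds more columns.
SCHEMA = """
CREATE TABLE IF NOT EXISTS trials (
    trial_id INTEGER PRIMARY KEY, study_id INTEGER,
    number INTEGER, state TEXT, created_at TEXT);
CREATE TABLE IF NOT EXISTS trial_params (trial_id INTEGER, param_name TEXT, param_value REAL);
CREATE TABLE IF NOT EXISTS trial_values (trial_id INTEGER, objective_id INTEGER, value REAL);
CREATE TABLE IF NOT EXISTS trial_user_attributes (trial_id INTEGER, key TEXT, value_json TEXT);
"""

def record_trial(conn, study_id, number, params, values, attrs):
    cur = conn.execute(
        "INSERT INTO trials (study_id, number, state, created_at) "
        "VALUES (?, ?, 'COMPLETE', datetime('now'))",
        (study_id, number),
    )
    tid = cur.lastrowid
    conn.executemany("INSERT INTO trial_params VALUES (?, ?, ?)",
                     [(tid, k, v) for k, v in params.items()])
    conn.executemany("INSERT INTO trial_values VALUES (?, ?, ?)",
                     [(tid, i, v) for i, v in enumerate(values)])
    conn.executemany("INSERT INTO trial_user_attributes VALUES (?, ?, ?)",
                     [(tid, k, json.dumps(v)) for k, v in attrs.items()])
    conn.commit()
    return tid
```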

---

## 5. Claude Integration

### 5.1 Two Operation Modes

| Mode      | Endpoint    | Capabilities         | Use Case            |
|-----------|-------------|----------------------|---------------------|
| **User**  | `/ws`       | Read-only, MCP tools | Safe exploration    |
| **Power** | `/ws/power` | Full write access    | Canvas modification |

### 5.2 Power Mode Tools

```python
# claude_agent.py - Direct API tools
add_design_variable(name, min, max, baseline, units)
add_extractor(name, type, config, custom_code)
add_objective(name, direction, weight, extractor_id)
add_constraint(name, operator, threshold, extractor_id)
update_spec_field(path, value)  # JSONPath update
remove_node(node_id)
```
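
A common way to wire tool calls like these to spec modifications is a name-to-handler dispatch table. The sketch below is a generic illustration of that pattern, not `claude_agent.py` itself; only `add_constraint` is shown, and the spec is a plain dict for brevity.

```python
# Registry mapping tool names (as Claude emits them) to handler functions.
TOOL_HANDLERS = {}

def tool(name):
    def register(fn):
        TOOL_HANDLERS[name] = fn
        return fn
    return register

@tool("add_constraint")
def add_constraint(spec, name, operator, threshold, extractor_id=None):
    node = {"name": name, "operator": operator,
            "threshold": threshold, "extractor_id": extractor_id}
    spec.setdefault("constraints", []).append(node)
    return node

def dispatch(spec, tool_name, **kwargs):
    # In the real agent, each handler would go through SpecManager
    # (validate, atomic save, broadcast) rather than mutate in place.
    return TOOL_HANDLERS[tool_name](spec, **kwargs)
```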

### 5.3 Context Building

The `ContextBuilder` assembles rich system prompts:

```
# Atomizer Assistant

## Current Mode: POWER (full write access)

## Current Study: bracket_optimization
- Design Variables: 3 (thickness, angle, radius)
- Extractors: 2 (Displacement, Mass)
- Objectives: 2 (Min mass, Max stiffness)
- Constraints: 1 (mass <= 0.2 kg)
- Status: 47/100 trials complete

## Canvas State
8 nodes, 11 edges
[Node list with IDs and types...]

## Available Tools
- add_design_variable: Add a new design variable
- add_extractor: Add physics extractor
- add_objective: Add optimization objective
- add_constraint: Add constraint
- update_spec_field: Update any field by JSONPath
- remove_node: Remove element by ID

**ACT IMMEDIATELY** when asked to modify things.
```
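
A reduced sketch of how such a prompt might be assembled from the spec. This is illustrative only (`build_system_prompt` is an assumed name); the real `ContextBuilder` also serializes canvas state and tool descriptions.

```python
def build_system_prompt(mode: str, study: dict) -> str:
    # Header and mode banner, mirroring the prompt shape shown above.
    lines = [
        "# Atomizer Assistant",
        "",
        f"## Current Mode: {mode.upper()}"
        + (" (full write access)" if mode == "power" else " (read-only)"),
        "",
        f"## Current Study: {study['name']}",
        f"- Design Variables: {len(study.get('design_variables', []))}",
        f"- Extractors: {len(study.get('extractors', []))}",
        f"- Objectives: {len(study.get('objectives', []))}",
        f"- Constraints: {len(study.get('constraints', []))}",
    ]
    return "\n".join(lines)
```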

---

## 6. Data Flow Diagrams

### 6.1 Canvas Edit Flow

```
User edits node property
              │
              ▼
┌─────────────────────────────┐
│ useSpecStore.patchSpec()    │
│ ┌─────────────────────────┐ │
│ │ 1. Optimistic UI update │ │
│ └───────────┬─────────────┘ │
│             │               │
│ ┌───────────▼─────────────┐ │
│ │ 2. Async PATCH request  │ │
│ └───────────┬─────────────┘ │
└─────────────┼───────────────┘
              │
              ▼
┌─────────────────────────────┐
│ Backend: SpecManager        │
│ ┌─────────────────────────┐ │
│ │ 3. JSONPath parse       │ │
│ │ 4. Apply modification   │ │
│ │ 5. Pydantic validate    │ │
│ │ 6. Atomic file write    │ │
│ │ 7. Compute new hash     │ │
│ │ 8. Broadcast to clients │ │
│ └───────────┬─────────────┘ │
└─────────────┼───────────────┘
              │
              ▼
┌─────────────────────────────┐
│ All WebSocket Clients       │
│ ┌─────────────────────────┐ │
│ │ 9. Receive spec_updated │ │
│ │ 10. Update local hash   │ │
│ │ 11. Re-render if needed │ │
│ └─────────────────────────┘ │
└─────────────────────────────┘
```

### 6.2 Optimization Run Flow

```
User clicks "Run"
              │
              ▼
┌─────────────────────────────┐
│ POST /api/optimization/start│
│ { study_id, trials, method }│
└─────────────┬───────────────┘
              │
              ▼
┌─────────────────────────────┐
│ Backend: Spawn runner       │
│ ┌─────────────────────────┐ │
│ │ 1. Load spec            │ │
│ │ 2. Initialize Optuna    │ │
│ │ 3. Create trial folders │ │
│ └───────────┬─────────────┘ │
└─────────────┼───────────────┘
              │
    ┌─────────┼─────────┐
    │         │         │
    ▼         ▼         ▼
┌───────────────────────────────────────────────────────┐
│ For each trial (1 to N):                              │
│ ┌───────────────────────────────────────────────────┐ │
│ │ 4. Optuna suggests parameters                     │ │
│ │ 5. Update NX expressions                          │ │
│ │ 6. Run Nastran simulation                         │ │
│ │ 7. Extract physics results                        │ │
│ │ 8. Compute objectives/constraints                 │ │
│ │ 9. Save to trial folder + database                │ │
│ │ 10. Send WebSocket update                         │ │
│ └───────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────┘
              │
              ▼
┌─────────────────────────────┐
│ Frontend: Real-time updates │
│ ┌─────────────────────────┐ │
│ │ Update convergence plot │ │
│ │ Update trial table      │ │
│ │ Show best design        │ │
│ └─────────────────────────┘ │
└─────────────────────────────┘
```
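
Stripped of the NX, Nastran, and Optuna specifics, the per-trial loop (steps 4-10) reduces to the shape below. All the callables are injected stand-ins for the real sampler, solver, and extractors, and the single `mass_kg` objective is an assumption for the example:

```python
def run_study(n_trials, suggest, simulate, extract, on_update):
    results = []
    for number in range(1, n_trials + 1):
        params = suggest(number)           # 4. sampler proposes parameters
        model = simulate(params)           # 5-6. update expressions, run solver
        raw = extract(model)               # 7. physics extraction
        objectives = {"mass_kg": raw["mass_kg"]}  # 8. objective computation
        results.append({"number": number, "params": params,
                        "objectives": objectives})  # 9. persist (folder + DB)
        on_update(results[-1])             # 10. push WebSocket update
    return results
```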

### 6.3 Claude Canvas Modification Flow

```
User: "Add volume extractor with constraint <= 1000"
              │
              ▼
┌─────────────────────────────┐
│ WebSocket: /ws/power        │
│ { type: 'message',          │
│   content: '...',           │
│   canvas_state: {...} }     │
└─────────────┬───────────────┘
              │
              ▼
┌─────────────────────────────┐
│ AtomizerClaudeAgent         │
│ ┌─────────────────────────┐ │
│ │ 1. Build context prompt │ │
│ │ 2. Send to Claude API   │ │
│ │ 3. Claude decides tools │ │
│ └───────────┬─────────────┘ │
└─────────────┼───────────────┘
              │
              ▼
┌─────────────────────────────┐
│ Claude Tool Calls:          │
│ ┌─────────────────────────┐ │
│ │ add_extractor(          │ │
│ │   name="Volume",        │ │
│ │   type="custom",        │ │
│ │   code="..." )          │ │
│ │                         │ │
│ │ add_constraint(         │ │
│ │   name="Max Volume",    │ │
│ │   operator="<=",        │ │
│ │   threshold=1000 )      │ │
│ └───────────┬─────────────┘ │
└─────────────┼───────────────┘
              │
              ▼
┌─────────────────────────────┐
│ Each tool modifies spec:    │
│ ┌─────────────────────────┐ │
│ │ Load → Modify → Save    │ │
│ │ Send spec_modified      │ │
│ └───────────┬─────────────┘ │
└─────────────┼───────────────┘
              │
              ▼
┌─────────────────────────────┐
│ Frontend receives events:   │
│ ┌─────────────────────────┐ │
│ │ spec_modified → reload  │ │
│ │ Canvas shows new nodes  │ │
│ └─────────────────────────┘ │
└─────────────────────────────┘
```

---

## 7. Component Relationships

### 7.1 Frontend Component Hierarchy

```
<App>
├── <CanvasView>
│   ├── <AtomizerCanvas>
│   │   ├── <ReactFlow>
│   │   │   ├── <DesignVarNode> × N
│   │   │   ├── <ExtractorNode> × N
│   │   │   ├── <ObjectiveNode> × N
│   │   │   ├── <ConstraintNode> × N
│   │   │   ├── <ModelNode>
│   │   │   ├── <SolverNode>
│   │   │   └── <AlgorithmNode>
│   │   ├── <NodePalette>
│   │   ├── <NodeConfigPanel>
│   │   ├── <ValidationPanel>
│   │   └── <ExecuteDialog>
│   └── <ChatPanel>
│       ├── <ChatMessage> × N
│       └── <ToolCallCard> × M
├── <Home>
│   └── <StudyList>
└── <Setup>
    └── <StudyWizard>
```

### 7.2 Backend Service Dependencies

```
FastAPI App
│
├── spec.py ─────────► SpecManager
│                      ├── Pydantic Models
│                      └── File I/O + Hash
│
├── claude.py ───────► AtomizerClaudeAgent
│                      ├── ContextBuilder
│                      ├── Anthropic Client
│                      └── Write Tools
│
├── optimization.py ─► Runner Process
│                      ├── TrialManager
│                      ├── NX Solver
│                      └── Extractors
│
└── WebSocket Hub ◄─── All routes broadcast
```

---

## 8. Critical Patterns

### 8.1 Modification Pattern

**Always use SpecManager for modifications:**

```python
# ❌ WRONG: direct file write bypasses validation, hashing, and broadcast
with open("atomizer_spec.json", "w") as f:
    json.dump(spec, f)

# ✅ CORRECT: go through SpecManager
manager = SpecManager(study_path)
spec = manager.load_spec()
spec.objectives[0].weight = 2.0
manager.save_spec(spec)
```

### 8.2 Optimistic Update Pattern

```typescript
// 1. Update UI immediately
setSpec(modifiedSpec);

// 2. Sync to backend asynchronously
patchSpec(path, value)
  .then(({ hash }) => setHash(hash))
  .catch(() => setSpec(originalSpec)); // Roll back on failure
```

### 8.3 WebSocket Sync Pattern

```
Client A ─────► Server ─────► Client B
    │              │              │
    │  PATCH       │              │
    ├────────────►│               │
    │              │  broadcast   │
    │              ├─────────────►│
    │              │              │
    │◄────────────┤               │
    │  ack + hash  │              │
```
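
The server side of this pattern is a plain fan-out. The sketch below is a generic illustration with assumed names (`SpecHub`, and any object with an async `send()` standing in for a WebSocket); a real implementation would use FastAPI `WebSocket` objects and drop dead connections on send failure:

```python
import asyncio

class SpecHub:
    """Fan-out hub: every subscriber receives each spec_updated message."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, ws):
        self.subscribers.append(ws)

    async def broadcast(self, message: dict):
        # Send to all subscribers concurrently.
        await asyncio.gather(*(ws.send(message) for ws in self.subscribers))
```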

---

## 9. Potential Improvements

### 9.1 Current Limitations

1. **No undo/redo**: Canvas modifications take effect immediately and cannot be reverted
2. **Single-file lock**: No distributed locking for multi-user editing
3. **Memory-only sessions**: Session state is lost on backend restart
4. **Limited offline support**: The canvas requires a live backend connection

### 9.2 Recommended Enhancements

| Priority | Enhancement | Benefit |
|----------|-------------|---------|
| High | Operation history with undo | Better UX |
| High | Persistent sessions (Redis) | Scalability |
| Medium | Spec versioning/branching | Experimentation |
| Medium | Batch operations API | Performance |
| Low | Offline canvas editing | Flexibility |

---

## 10. Conclusion

Atomizer's architecture is **well suited to its purpose**: enabling engineers to configure and run FEA optimizations through a visual interface with AI assistance.

**Strongest points**:
- A single source of truth eliminates sync issues
- Pydantic models ensure data integrity
- WebSocket sync enables real-time collaboration
- Optimistic updates keep the UI responsive

**Areas for attention**:
- Add undo/redo for canvas operations
- Adopt persistent session storage for production
- Expand test coverage for spec migrations

The architecture is **production-ready** for single-user and small-team scenarios and can be hardened for enterprise deployment.