docs: Major documentation overhaul - restructure folders, update tagline, add Getting Started guide

- Restructure docs/ folder (remove numeric prefixes):
  - 04_USER_GUIDES -> guides/
  - 05_API_REFERENCE -> api/
  - 06_PHYSICS -> physics/
  - 07_DEVELOPMENT -> development/
  - 08_ARCHIVE -> archive/
  - 09_DIAGRAMS -> diagrams/

- Replace tagline 'Talk, don't click' with 'LLM-driven optimization framework' in 9 files

- Create comprehensive docs/GETTING_STARTED.md:
  - Prerequisites and quick setup
  - Project structure overview
  - First study tutorial (Claude or manual)
  - Dashboard usage guide
  - Neural acceleration introduction

- Rewrite docs/00_INDEX.md with correct paths and modern structure

- Archive obsolete files:
  - 01_PROTOCOLS.md -> archive/historical/01_PROTOCOLS_legacy.md
  - 03_GETTING_STARTED.md -> archive/historical/
  - ATOMIZER_PODCAST_BRIEFING.md -> archive/marketing/

- Update timestamps to 2026-01-20 across all key files

- Update .gitignore to exclude docs/generated/

- Version bump: ATOMIZER_CONTEXT v1.8 -> v2.0
2026-01-20 10:03:45 -05:00
parent 37f73cc2be
commit ea437d360e
103 changed files with 8980 additions and 327 deletions


@@ -0,0 +1,700 @@
# AtomizerField Neural Optimization Guide
## 🚀 Overview
This guide explains how to use AtomizerField neural network surrogates with Atomizer to achieve **600x-500,000x speedup** in FEA-based optimization. By replacing expensive 30-minute FEA simulations with 50ms neural network predictions, you can explore 1000x more design configurations in the same time.
## Table of Contents
1. [Quick Start](#quick-start)
2. [Architecture](#architecture)
3. [Configuration](#configuration)
4. [Training Data Collection](#training-data-collection)
5. [Neural Model Training](#neural-model-training)
6. [Hybrid Optimization Strategies](#hybrid-optimization-strategies)
7. [Performance Monitoring](#performance-monitoring)
8. [Troubleshooting](#troubleshooting)
9. [Best Practices](#best-practices)
## Quick Start
### 1. Enable Neural Surrogate in Your Study
Add the following to your `workflow_config.json`:
```json
{
  "neural_surrogate": {
    "enabled": true,
    "model_checkpoint": "atomizer-field/checkpoints/your_model.pt",
    "confidence_threshold": 0.85
  }
}
```
### 2. Use Neural-Enhanced Runner
```python
# In your run_optimization.py
from pathlib import Path

from optimization_engine.runner_with_neural import create_neural_runner

# Create a neural-enhanced runner instead of the standard runner
runner = create_neural_runner(
    config_path=Path("workflow_config.json"),
    model_updater=update_nx_model,
    simulation_runner=run_simulation,
    result_extractors=extractors,
)

# Run optimization with automatic neural acceleration
study = runner.run(n_trials=1000)  # can now afford thousands of trials
```
### 3. Monitor Speedup
After optimization, you'll see:
```
============================================================
NEURAL NETWORK SPEEDUP SUMMARY
============================================================
Trials using neural network: 950/1000 (95.0%)
Average NN inference time: 0.052 seconds
Average NN confidence: 92.3%
Estimated speedup: 34,615x
Time saved: ~475.0 hours
============================================================
```
## Architecture
### Component Overview
```
┌─────────────────────────────────────────────────────────┐
│ Atomizer Framework │
├─────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────┐ ┌─────────────────────┐ │
│ │ Optimization │ │ Neural-Enhanced │ │
│ │ Runner │ ───> │ Runner │ │
│ └──────────────────┘ └─────────────────────┘ │
│ │ │ │
│ │ ▼ │
│ │ ┌─────────────────────┐ │
│ │ │ Neural Surrogate │ │
│ │ │ Manager │ │
│ │ └─────────────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌──────────────────┐ ┌─────────────────────┐ │
│ │ NX FEA Solver │ │ AtomizerField NN │ │
│ │ (30 minutes) │ │ (50 ms) │ │
│ └──────────────────┘ └─────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────┘
```
### Key Components
1. **NeuralOptimizationRunner** (`optimization_engine/runner_with_neural.py`)
   - Extends the base runner with neural capabilities
   - Manages hybrid FEA/NN decisions
   - Tracks performance metrics
2. **NeuralSurrogate** (`optimization_engine/neural_surrogate.py`)
   - Loads AtomizerField models
   - Converts design variables to graph format
   - Provides confidence-based predictions
3. **HybridOptimizer** (`optimization_engine/neural_surrogate.py`)
   - Implements smart switching strategies
   - Manages exploration/exploitation phases
   - Handles model retraining
4. **TrainingDataExporter** (`optimization_engine/training_data_exporter.py`)
   - Exports FEA results for neural training
   - Saves .dat/.op2 files with metadata
## Configuration
### Complete Neural Configuration
```json
{
  "study_name": "advanced_optimization_with_neural",
  "neural_surrogate": {
    "enabled": true,
    "model_checkpoint": "atomizer-field/checkpoints/model_v2.0/best.pt",
    "confidence_threshold": 0.85,
    "fallback_to_fea": true,
    "ensemble_models": [
      "atomizer-field/checkpoints/model_v2.0/fold_1.pt",
      "atomizer-field/checkpoints/model_v2.0/fold_2.pt",
      "atomizer-field/checkpoints/model_v2.0/fold_3.pt"
    ],
    "device": "cuda",
    "batch_size": 32,
    "cache_predictions": true,
    "cache_size": 10000
  },
  "hybrid_optimization": {
    "enabled": true,
    "exploration_trials": 30,
    "training_interval": 100,
    "validation_frequency": 20,
    "min_training_samples": 50,
    "phases": [
      {
        "name": "exploration",
        "trials": [0, 30],
        "use_nn": false,
        "description": "Initial FEA exploration"
      },
      {
        "name": "exploitation",
        "trials": [31, 950],
        "use_nn": true,
        "description": "Neural network exploitation"
      },
      {
        "name": "validation",
        "trials": [951, 1000],
        "use_nn": false,
        "description": "Final FEA validation"
      }
    ],
    "adaptive_switching": true,
    "drift_threshold": 0.15,
    "retrain_on_drift": true
  },
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/my_study",
    "include_failed_trials": false,
    "compression": "gzip"
  }
}
```
### Configuration Parameters
#### neural_surrogate
- `enabled`: Enable/disable neural surrogate
- `model_checkpoint`: Path to trained PyTorch model
- `confidence_threshold`: Minimum confidence to use NN (0.0-1.0)
- `fallback_to_fea`: Use FEA when confidence is low
- `ensemble_models`: List of models for ensemble predictions
- `device`: "cuda" or "cpu"
- `batch_size`: Batch size for neural inference
- `cache_predictions`: Cache NN predictions for repeated designs
- `cache_size`: Maximum number of cached predictions
#### hybrid_optimization
- `exploration_trials`: Number of initial FEA trials
- `training_interval`: Retrain NN every N trials
- `validation_frequency`: Validate NN with FEA every N trials
- `phases`: List of optimization phases with different strategies
- `adaptive_switching`: Dynamically adjust FEA/NN usage
- `drift_threshold`: Max prediction error before retraining
- `retrain_on_drift`: Automatically retrain when drift exceeds the threshold
- `min_training_samples`: Minimum FEA samples required before training the NN
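To illustrate how these parameters fit together, here is a minimal sketch of loading and sanity-checking the `neural_surrogate` block. The helper `load_neural_config` is hypothetical (not part of the Atomizer API); field names follow the example config in this section.

```python
import json
from pathlib import Path

def load_neural_config(path):
    """Load a workflow config and sanity-check its neural_surrogate block.

    Hypothetical helper for illustration; field names mirror the example
    configuration shown above.
    """
    cfg = json.loads(Path(path).read_text())
    ns = cfg.get("neural_surrogate", {})
    if ns.get("enabled"):
        threshold = ns.get("confidence_threshold", 0.85)
        if not 0.0 <= threshold <= 1.0:
            raise ValueError(f"confidence_threshold must be in [0, 1], got {threshold}")
        # A missing checkpoint is only fatal when there is no FEA fallback
        if not Path(ns["model_checkpoint"]).exists() and not ns.get("fallback_to_fea", True):
            raise FileNotFoundError(ns["model_checkpoint"])
    return cfg
```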
## Training Data Collection
### Automatic Export During Optimization
Training data is automatically exported when enabled:
```json
{
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/beam_study"
  }
}
```
Directory structure created:
```
atomizer_field_training_data/beam_study/
├── trial_0001/
│ ├── input/
│ │ └── model.bdf # NX Nastran input
│ ├── output/
│ │ └── model.op2 # Binary results
│ └── metadata.json # Design vars, objectives
├── trial_0002/
│ └── ...
├── study_summary.json
└── README.md
```
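Given this layout, collecting the per-trial metadata back into memory is straightforward. The helper below is a hypothetical sketch; it only assumes that each `trial_*/` folder contains the `metadata.json` shown in the tree above.

```python
import json
from pathlib import Path

def load_training_metadata(export_dir):
    """Collect per-trial metadata from the exported directory layout.

    Hypothetical helper; assumes each trial_NNNN/ folder holds a
    metadata.json with design variables and objective values.
    """
    records = []
    for trial_dir in sorted(Path(export_dir).glob("trial_*")):
        meta_file = trial_dir / "metadata.json"
        if meta_file.exists():
            records.append(json.loads(meta_file.read_text()))
    return records
```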
### Manual Export of Existing Studies
```python
from pathlib import Path

from optimization_engine.training_data_exporter import TrainingDataExporter

exporter = TrainingDataExporter(
    export_dir=Path("training_data/my_study"),
    study_name="beam_optimization",
    design_variable_names=["width", "height", "thickness"],
    objective_names=["max_stress", "mass"],
)

# Export each trial
for trial in existing_trials:
    exporter.export_trial(
        trial_number=trial.number,
        design_variables=trial.params,
        results=trial.values,
        simulation_files={
            'dat_file': Path(f"sim_{trial.number}.dat"),
            'op2_file': Path(f"sim_{trial.number}.op2"),
        },
    )

exporter.finalize()
```
## Neural Model Training
### 1. Prepare Training Data
```bash
cd atomizer-field
python batch_parser.py --data-dir ../atomizer_field_training_data/beam_study
```
This converts BDF/OP2 files to PyTorch Geometric format.
### 2. Train Neural Network
```bash
python train.py \
--data-dir training_data/parsed/ \
--epochs 200 \
--model GraphUNet \
--hidden-channels 128 \
--num-layers 4 \
--physics-loss-weight 0.3
```
### 3. Validate Model
```bash
python validate.py --checkpoint checkpoints/model_v2.0/best.pt
```
Expected output:
```
Validation Results:
- Mean Absolute Error: 2.34 MPa (1.2%)
- R² Score: 0.987
- Inference Time: 52ms ± 8ms
- Physics Constraint Violations: 0.3%
```
### 4. Deploy Model
Copy trained model to Atomizer:
```bash
cp checkpoints/model_v2.0/best.pt ../studies/my_study/neural_model.pt
```
## Hybrid Optimization Strategies
### Strategy 1: Phased Optimization
```python
# Phase 1: Exploration (FEA)
# Collect diverse training data
for trial in range(30):
    use_fea()  # always use FEA

# Phase 2: Training
# Train the neural network on the collected data
train_neural_network()

# Phase 3: Exploitation (NN)
# Use the NN for rapid optimization
for trial in range(31, 980):
    if confidence > 0.85:
        use_nn()   # fast neural network
    else:
        use_fea()  # fall back to FEA

# Phase 4: Validation (FEA)
# Validate the best designs with FEA
for trial in range(981, 1000):
    use_fea()  # final validation
```
### Strategy 2: Adaptive Switching
```python
class AdaptiveStrategy:
    def should_use_nn(self, trial_number):
        # Start with exploration
        if trial_number < 20:
            return False

        # Check recent prediction accuracy
        if self.recent_error > 0.15:
            self.retrain_model()
            return False

        # Periodic validation
        if trial_number % 50 == 0:
            return False  # validate with FEA

        # High-stakes decisions
        if self.near_optimal_region():
            return False  # use FEA for critical designs

        return True  # use the NN for everything else
```
### Strategy 3: Uncertainty-Based
```python
import random

import numpy as np

def decide_solver(design_vars, ensemble_models):
    # Get predictions from the ensemble
    predictions = [model.predict(design_vars) for model in ensemble_models]

    # Estimate uncertainty from the ensemble spread
    mean_pred = np.mean(predictions)
    std_pred = np.std(predictions)
    confidence = 1.0 - (std_pred / abs(mean_pred))

    if confidence > 0.9:
        return "neural", mean_pred
    elif confidence > 0.7:
        # Mixed strategy: use the NN with probability equal to confidence
        if random.random() < confidence:
            return "neural", mean_pred
    return "fea", None
```
## Performance Monitoring
### Real-Time Metrics
The neural runner tracks performance automatically:
```
Trial 42: Used neural network (confidence: 94.2%, time: 0.048s)
Trial 43: Neural confidence too low (72.1%), using FEA
Trial 44: Used neural network (confidence: 91.8%, time: 0.051s)
```
### Post-Optimization Analysis
```python
import numpy as np

# Access performance metrics
metrics = runner.neural_speedup_tracker

# Calculate statistics
avg_speedup = np.mean([m['speedup'] for m in metrics])
total_time_saved = sum(m['time_saved'] for m in metrics)

# Export a detailed report
runner.export_performance_report("neural_performance.json")
```
### Visualization
```python
import matplotlib.pyplot as plt
# Plot confidence over trials
plt.figure(figsize=(10, 6))
plt.plot(trials, confidences, 'b-', label='NN Confidence')
plt.axhline(y=0.85, color='r', linestyle='--', label='Threshold')
plt.xlabel('Trial Number')
plt.ylabel('Confidence')
plt.title('Neural Network Confidence During Optimization')
plt.legend()
plt.savefig('nn_confidence.png')
# Plot speedup
plt.figure(figsize=(10, 6))
plt.bar(phases, speedups, color=['red', 'yellow', 'green'])
plt.xlabel('Optimization Phase')
plt.ylabel('Speedup Factor')
plt.title('Speedup by Optimization Phase')
plt.savefig('speedup_by_phase.png')
```
## Troubleshooting
### Common Issues and Solutions
#### 1. Low Neural Network Confidence
```
WARNING: Neural confidence below threshold (65.3% < 85%)
```
**Solutions:**
- Train with more diverse data
- Reduce confidence threshold
- Use ensemble models
- Check if design is out of training distribution
#### 2. Model Loading Error
```
ERROR: Could not load model checkpoint: file not found
```
**Solutions:**
- Verify path in config
- Check file permissions
- Ensure model is compatible with current AtomizerField version
#### 3. Slow Neural Inference
```
WARNING: Neural inference taking 2.3s (expected <100ms)
```
**Solutions:**
- Use GPU acceleration (`device: "cuda"`)
- Reduce batch size
- Enable prediction caching
- Check model complexity
#### 4. Prediction Drift
```
WARNING: Neural predictions drifting from FEA (error: 18.2%)
```
**Solutions:**
- Retrain model with recent data
- Increase validation frequency
- Adjust drift threshold
- Check for distribution shift
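Drift monitoring of this kind can be sketched with a sliding window of relative NN-vs-FEA errors. The class below is illustrative only, not part of the Atomizer codebase; its 0.15 default mirrors `drift_threshold` from the configuration section above.

```python
from collections import deque

class DriftMonitor:
    """Track relative NN-vs-FEA error over a sliding window (sketch)."""

    def __init__(self, threshold=0.15, window=20):
        self.threshold = threshold
        self.errors = deque(maxlen=window)  # keep only the most recent errors

    def record(self, nn_value, fea_value):
        """Record the relative error of one NN prediction against FEA truth."""
        self.errors.append(abs(nn_value - fea_value) / abs(fea_value))

    def should_retrain(self):
        """Flag retraining when the windowed mean error exceeds the threshold."""
        if not self.errors:
            return False
        return sum(self.errors) / len(self.errors) > self.threshold
```

A runner could call `record()` on every FEA validation trial and trigger retraining when `should_retrain()` flips to true.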
### Debugging Tips
1. **Enable Verbose Logging**
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
2. **Test Neural Model Standalone**
```python
from optimization_engine.neural_surrogate import NeuralSurrogate
surrogate = NeuralSurrogate(model_path="model.pt")
test_design = {"width": 50, "height": 75, "thickness": 5}
pred, conf, used_nn = surrogate.predict(test_design)
print(f"Prediction: {pred}, Confidence: {conf}")
```
3. **Compare NN vs FEA**
```python
# Force FEA for comparison
fea_result = runner.simulation_runner(design_vars)
nn_result = runner.neural_surrogate.predict(design_vars)
error = abs(fea_result - nn_result) / fea_result * 100
print(f"Relative error: {error:.1f}%")
```
## Best Practices
### 1. Data Quality
- **Diverse Training Data**: Ensure training covers full design space
- **Quality Control**: Validate FEA results before training
- **Incremental Training**: Continuously improve model with new data
### 2. Model Selection
- **Start Simple**: Begin with smaller models, increase complexity as needed
- **Ensemble Methods**: Use 3-5 models for robust predictions
- **Physics Constraints**: Include physics loss for better generalization
### 3. Optimization Strategy
- **Conservative Start**: Use high confidence threshold initially
- **Adaptive Approach**: Adjust strategy based on performance
- **Validation**: Always validate final designs with FEA
### 4. Performance Optimization
- **GPU Acceleration**: Use CUDA for 10x faster inference
- **Batch Processing**: Process multiple designs simultaneously
- **Caching**: Cache predictions for repeated designs
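A prediction cache for repeated designs can be as simple as memoization keyed on rounded design variables. This is a hypothetical sketch (the built-in `cache_predictions` option may work differently); rounding the key avoids cache misses from floating-point jitter.

```python
class PredictionCache:
    """Memoize surrogate predictions for repeated designs (sketch)."""

    def __init__(self, precision=6):
        self.precision = precision
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def _key(self, design_vars):
        # Round values so tiny float differences map to the same key
        return tuple(sorted((k, round(v, self.precision)) for k, v in design_vars.items()))

    def get_or_predict(self, design_vars, predict_fn):
        key = self._key(design_vars)
        if key in self._cache:
            self.hits += 1
        else:
            self.misses += 1
            self._cache[key] = predict_fn(design_vars)
        return self._cache[key]
```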
### 5. Safety and Reliability
- **Fallback Mechanism**: Always have FEA fallback
- **Confidence Monitoring**: Track and log confidence levels
- **Periodic Validation**: Regularly check NN accuracy
## Example: Complete Workflow
### Step 1: Initial FEA Study
```bash
# Run initial optimization with training data export
python run_optimization.py --trials 50 --export-training-data
```
### Step 2: Train Neural Model
```bash
cd atomizer-field
python batch_parser.py --data-dir ../training_data
python train.py --epochs 200
```
### Step 3: Neural-Enhanced Optimization
```python
# Update config to use neural model
config["neural_surrogate"]["enabled"] = True
config["neural_surrogate"]["model_checkpoint"] = "model.pt"
# Run with 1000s of trials
runner = create_neural_runner(config_path, ...)
study = runner.run(n_trials=5000) # Now feasible!
```
### Step 4: Validate Results
```python
# Get top 10 designs
best_designs = study.best_trials[:10]
# Validate with FEA
for design in best_designs:
fea_result = validate_with_fea(design.params)
print(f"Design {design.number}: NN={design.value:.2f}, FEA={fea_result:.2f}")
```
## Parametric Surrogate Model (NEW)
The **ParametricSurrogate** is a design-conditioned GNN that predicts **ALL 4 optimization objectives** directly from design parameters, providing a future-proof solution for neural-accelerated optimization.
### Key Features
- **Predicts all objectives**: mass, frequency, max_displacement, max_stress
- **Design-conditioned**: Takes design variables as explicit input
- **Ultra-fast inference**: ~4.5ms per prediction (vs ~10s FEA = 2000x speedup)
- **GPU accelerated**: Uses CUDA for fast inference
### Quick Start
```python
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study

# Create a surrogate with auto-detection
surrogate = create_parametric_surrogate_for_study()

# Predict all objectives
test_params = {
    "beam_half_core_thickness": 7.0,
    "beam_face_thickness": 2.5,
    "holes_diameter": 35.0,
    "hole_count": 10.0,
}
results = surrogate.predict(test_params)

print(f"Mass: {results['mass']:.2f} g")
print(f"Frequency: {results['frequency']:.2f} Hz")
print(f"Max displacement: {results['max_displacement']:.6f} mm")
print(f"Max stress: {results['max_stress']:.2f} MPa")
print(f"Inference time: {results['inference_time_ms']:.2f} ms")
```
### Architecture
```
Design Parameters (4)
|
┌────▼────┐
│ Design │
│ Encoder │ (MLP: 4 -> 64 -> 128)
└────┬────┘
┌────▼────┐
│ GNN │ (Design-conditioned message passing)
│ Layers │ (4 layers, 128 hidden channels)
└────┬────┘
┌────▼────┐
│ Global │ (Mean + Max pooling)
│ Pool │
└────┬────┘
┌────▼────┐
│ Scalar │ (MLP: 384 -> 128 -> 64 -> 4)
│ Heads │
└────┬────┘
4 Objectives: [mass, frequency, displacement, stress]
```
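The tensor shapes in the diagram can be traced with a quick numpy sketch. This is an assumption-laden illustration, not the real model: random weights stand in for trained layers, placeholder node features stand in for the GNN message passing, and the 384-dimensional pooled vector is taken to be the concatenation of mean pool, max pool, and the design embedding (128 each). A real scalar head would also end with a linear (not ReLU) layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, dims):
    """Forward pass through random-weight linear layers with ReLU (shapes only)."""
    for d_out in dims:
        W = rng.standard_normal((x.shape[-1], d_out)) * 0.1
        x = np.maximum(x @ W, 0.0)
    return x

design = rng.standard_normal(4)           # 4 design parameters
h_design = mlp(design, [64, 128])         # design encoder: 4 -> 64 -> 128
nodes = rng.standard_normal((500, 128))   # placeholder per-node features after GNN layers
# Global pool: mean (128) + max (128) + design embedding (128) = 384
pooled = np.concatenate([nodes.mean(axis=0), nodes.max(axis=0), h_design])
objectives = mlp(pooled, [128, 64, 4])    # scalar heads: 384 -> 128 -> 64 -> 4
```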
### Training the Parametric Model
```bash
cd atomizer-field
python train_parametric.py \
--train_dir ../atomizer_field_training_data/uav_arm_train \
--val_dir ../atomizer_field_training_data/uav_arm_val \
--epochs 200 \
--output_dir runs/parametric_model
```
### Model Location
Trained models are stored in:
```
atomizer-field/runs/parametric_uav_arm_v2/
├── checkpoint_best.pt # Best validation loss model
├── config.json # Model configuration
└── training_log.csv # Training history
```
### Integration with Optimization
The ParametricSurrogate can be used as a drop-in replacement for FEA during optimization:
```python
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study

surrogate = create_parametric_surrogate_for_study()

# Use the surrogate inside your objective function
def fast_objective(design_params):
    """Use the neural network instead of FEA."""
    results = surrogate.predict(design_params)
    return results['mass'], results['frequency']
```
## Advanced Topics
### Custom Neural Architectures
See [AtomizerField documentation](https://github.com/Anto01/Atomizer-Field/docs) for implementing custom GNN architectures.
### Multi-Fidelity Optimization
Combine low-fidelity (coarse mesh) and high-fidelity (fine mesh) simulations with neural surrogates.
### Transfer Learning
Use pre-trained models from similar problems to accelerate training.
### Active Learning
Intelligently select which designs to evaluate with FEA for maximum learning.
## Summary
AtomizerField neural surrogates enable:
- **600x-500,000x speedup** over traditional FEA
- **Explore 1000x more designs** in the same time
- **Maintain accuracy** with confidence-based fallback
- **Seamless integration** with existing Atomizer workflows
- **Continuous improvement** through online learning
Start with the Quick Start section and gradually adopt more advanced features as needed.
For questions or issues, see the [AtomizerField GitHub](https://github.com/Anto01/Atomizer-Field) or [Atomizer documentation](../README.md).

docs/guides/CANVAS.md (new file)

@@ -0,0 +1,981 @@
# Atomizer Canvas - Visual Workflow Builder
**Last Updated**: January 17, 2026
**Version**: 3.1 (AtomizerSpec v2.0)
**Status**: Production
---
## Overview
The Atomizer Canvas is a visual, node-based workflow builder for designing optimization studies. It provides a drag-and-drop interface for configuring FEA optimizations that integrates with Claude to validate and execute workflows.
**New in v3.1**: The Canvas now uses **AtomizerSpec v2.0** as the unified configuration format. All studies are defined by a single `atomizer_spec.json` file that serves as the single source of truth for Canvas, Backend, Claude, and the Optimization Engine.
### Key Features
- **Visual Workflow Design**: Drag-and-drop nodes to build optimization pipelines
- **Professional Lucide Icons**: Clean, consistent iconography throughout the interface
- **AtomizerSpec v2.0**: Unified JSON spec format (`atomizer_spec.json`) for all configuration
- **Auto-Load from Studies**: Import existing studies (supports both v2.0 specs and legacy configs)
- **NX Model Introspection**: Automatically extract expressions from .prt/.sim/.fem files
- **File Browser**: Browse and select model files with type filtering
- **Expression Search**: Searchable dropdown for design variable configuration
- **One-Click Add**: Add discovered expressions as design variables instantly
- **Custom Extractors**: Define custom Python extraction functions directly in the spec
- **Claude Integration**: "Process with Claude" button for AI-assisted study creation
- **Real-time Sync**: WebSocket-based synchronization between clients
- **Responsive Layout**: Full-screen canvas that adapts to window size
### What's New in V3.1
| Feature | Description |
|---------|-------------|
| **AtomizerSpec v2.0** | Unified configuration format replacing `optimization_config.json` |
| **Spec REST API** | Full CRUD operations on spec via `/api/studies/{id}/spec` |
| **Custom Extractors** | Define custom Python functions as extractors in the spec |
| **WebSocket Sync** | Real-time spec synchronization between clients |
| **Legacy Migration** | Automatic migration of old `optimization_config.json` files |
### What's New in V3.0
| Feature | Description |
|---------|-------------|
| **File Browser** | Browse studies directory for .sim/.prt/.fem/.afem files |
| **Introspection Panel** | View discovered expressions, extractors, and dependencies |
| **One-Click Add** | Add expressions as design variables with a single click |
| **Claude Fixes** | Fixed SQL errors, WebSocket reconnection issues |
| **Health Check** | `/api/health` endpoint for database monitoring |
---
## Architecture
### Frontend Stack
| Component | Technology | Purpose |
|-----------|------------|---------|
| Flow Engine | React Flow | Node-based graph rendering |
| State Management | Zustand | Canvas state (nodes, edges, selection) |
| Icons | Lucide React | Professional icon library |
| Styling | Tailwind CSS | Dark theme (Atomaster palette) |
| Chat | WebSocket | Real-time Claude communication |
### Node Types (8)
| Node | Icon | Description | Color |
|------|------|-------------|-------|
| **Model** | `Cube` | NX model file (.prt, .sim, .fem) | Blue |
| **Solver** | `Cpu` | Nastran solution type (SOL101, SOL103, etc.) | Violet |
| **Design Variable** | `SlidersHorizontal` | Parameter to optimize with bounds | Emerald |
| **Extractor** | `FlaskConical` | Physics result extraction (E1-E10) | Cyan |
| **Objective** | `Target` | Optimization goal (minimize/maximize) | Rose |
| **Constraint** | `ShieldAlert` | Design constraint (upper/lower bounds) | Amber |
| **Algorithm** | `BrainCircuit` | Optimization method (TPE, CMA-ES, NSGA-II) | Indigo |
| **Surrogate** | `Rocket` | Neural acceleration (optional) | Pink |
### File Structure
```
atomizer-dashboard/frontend/src/
├── components/canvas/
│ ├── AtomizerCanvas.tsx # Main canvas component
│ ├── nodes/
│ │ ├── index.ts # Node type registry
│ │ ├── BaseNode.tsx # Base node with handles
│ │ ├── ModelNode.tsx # Model file node
│ │ ├── SolverNode.tsx # Solver type node
│ │ ├── DesignVarNode.tsx # Design variable node
│ │ ├── ExtractorNode.tsx # Extractor node
│ │ ├── ObjectiveNode.tsx # Objective node
│ │ ├── ConstraintNode.tsx # Constraint node
│ │ ├── AlgorithmNode.tsx # Algorithm node
│ │ ├── SurrogateNode.tsx # Surrogate node
│ │ └── CustomExtractorNode.tsx # Custom extractor node (V3.1)
│ ├── panels/
│ │ ├── NodeConfigPanel.tsx # Node configuration sidebar
│ │ ├── ValidationPanel.tsx # Validation toast display
│ │ ├── ExecuteDialog.tsx # Execute confirmation modal
│ │ ├── ChatPanel.tsx # Claude chat sidebar
│ │ ├── ConfigImporter.tsx # Study import dialog
│ │ ├── TemplateSelector.tsx # Workflow template chooser
│ │ ├── FileBrowser.tsx # File picker modal (V3)
│ │ ├── IntrospectionPanel.tsx # Model introspection results (V3)
│ │ ├── ExpressionSelector.tsx # Expression search dropdown (V3)
│ │ └── CustomExtractorPanel.tsx # Code editor for custom extractors (V3.1)
│ └── palette/
│ └── NodePalette.tsx # Draggable node palette
├── hooks/
│ ├── useCanvasStore.ts # Zustand store for canvas state
│ ├── useSpecStore.ts # Zustand store for AtomizerSpec (V3.1)
│ ├── useSpecSync.ts # WebSocket spec sync (V3.1)
│ └── useCanvasChat.ts # Claude chat integration
├── lib/
│ ├── canvas/
│ │ ├── schema.ts # TypeScript type definitions
│ │ ├── intent.ts # Intent serialization (legacy)
│ │ ├── validation.ts # Graph validation logic
│ │ └── templates.ts # Workflow templates
│ └── spec/
│ └── converter.ts # AtomizerSpec ↔ ReactFlow converter (V3.1)
├── types/
│ └── atomizer-spec.ts # AtomizerSpec TypeScript types (V3.1)
└── pages/
└── CanvasView.tsx # Canvas page (/canvas route)
```
### Backend File Structure (V3.1)
```
atomizer-dashboard/backend/api/
├── services/
│ ├── spec_manager.py # SpecManager - load/save/validate specs
│ ├── claude_agent.py # Claude API integration
│ └── context_builder.py # Context assembly with spec awareness
└── routes/
├── spec.py # AtomizerSpec REST API endpoints
└── optimization.py # Optimization endpoints
optimization_engine/
├── config/
│ ├── spec_models.py # Pydantic models for AtomizerSpec
│ ├── spec_validator.py # Semantic validation
│ └── migrator.py # Legacy config migration
├── extractors/
│ └── custom_extractor_loader.py # Runtime custom function loader
└── schemas/
└── atomizer_spec_v2.json # JSON Schema definition
```
---
## User Interface
### Layout
```
┌───────────────────────────────────────────────────────────────────┐
│ Canvas Builder Templates Import│
├──────────┬────────────────────────────────────────────┬───────────┤
│ │ │ │
│ Node │ Canvas Area │ Config │
│ Palette │ │ Panel │
│ │ ┌─────┐ ┌─────┐ ┌─────┐ │ │
│ [Model] │ │Model├──────│Solver├──────│Algo │ │ Label: │
│ [Solver]│ └─────┘ └──┬──┘ └─────┘ │ [____] │
│ [DVar] │ │ │ │
│ [Extr] │ ┌─────┐ ┌──┴──┐ ┌─────┐ │ Type: │
│ [Obj] │ │ DVar├──────│Extr ├──────│ Obj │ │ [____] │
│ [Const] │ └─────┘ └─────┘ └─────┘ │ │
│ [Algo] │ │ │
│ [Surr] │ │ │
│ │ │ │
├──────────┴────────────────────────────────────────────┴───────────┤
│ [Validate] [Process with Claude] │
└───────────────────────────────────────────────────────────────────┘
```
### Dark Theme (Atomaster Palette)
| Element | Color | Tailwind Class |
|---------|-------|----------------|
| Background | `#050A12` | `bg-dark-900` |
| Surface | `#0A1525` | `bg-dark-850` |
| Card | `#0F1E32` | `bg-dark-800` |
| Border | `#1A2F4A` | `border-dark-700` |
| Muted Text | `#5A7A9A` | `text-dark-400` |
| Primary | `#00D4E6` | `text-primary-400` |
---
## Core Workflows
### 1. Building a Workflow
1. **Drag nodes** from the left palette onto the canvas
2. **Connect nodes** by dragging from output handle to input handle
3. **Configure nodes** by clicking to open the config panel
4. **Validate** using the Validate button
5. **Process with Claude** to create the study
### 2. Importing from Existing Study
1. Click **Import** in the header
2. Select the **Load Study** tab
3. **Search** for your study by name
4. **Select** a study with an optimization_config.json
5. Click **Load Study** to populate the canvas
### 3. Using Templates
1. Click **Templates** in the header
2. Browse available workflow templates:
- **Mass Minimization**: Single-objective mass reduction
- **Multi-Objective**: Pareto optimization (mass + displacement)
- **Turbo Mode**: Neural-accelerated optimization
- **Mirror WFE**: Zernike wavefront error optimization
- **Frequency Target**: Natural frequency optimization
3. Click a template to load it
### 4. Processing with Claude
1. Build and configure your workflow
2. Click **Validate** to check for errors
3. Click **Process with Claude** to:
- Validate the configuration against Atomizer protocols
- Receive recommendations (method selection, trial count)
- Create the optimization study
---
## Node Configuration
### Model Node
| Field | Description | Example |
|-------|-------------|---------|
| File Path | Path to NX model | `models/bracket.sim` |
| File Type | prt, sim, or fem | `sim` |
When loading a `.sim` file, the system introspects to find:
- Linked `.prt` (geometry part)
- Linked `.fem` (FEM file)
- Solver type (SOL101, SOL103, etc.)
- Available expressions
### Design Variable Node
| Field | Description | Example |
|-------|-------------|---------|
| Expression Name | NX expression to vary | `thickness` |
| Min Value | Lower bound | `5.0` |
| Max Value | Upper bound | `15.0` |
| Unit | Engineering unit | `mm` |
**Expression Selector**: Click the dropdown to:
- **Search** through available expressions
- **Filter** by name
- **Refresh** to reload from model
- **Enter manually** if expression not found
### Extractor Node
| Field | Description | Options |
|-------|-------------|---------|
| Extractor ID | Protocol E1-E10 | E1 (Displacement), E2 (Frequency), etc. |
| Name | Display name | `max_displacement` |
| Config | Extractor-specific settings | Node ID, component, etc. |
**Available Extractors** (SYS_12):
| ID | Physics | Function |
|----|---------|----------|
| E1 | Displacement | `extract_displacement()` |
| E2 | Frequency | `extract_frequency()` |
| E3 | Stress | `extract_solid_stress()` |
| E4 | BDF Mass | `extract_mass_from_bdf()` |
| E5 | CAD Mass | `extract_mass_from_expression()` |
| E8 | Zernike | `extract_zernike_coefficients()` |
| E9 | Zernike | `extract_zernike_rms()` |
| E10 | Zernike | `extract_zernike_wfe()` |
### Objective Node
| Field | Description | Options |
|-------|-------------|---------|
| Name | Objective identifier | `mass`, `displacement` |
| Direction | Optimization goal | `minimize`, `maximize` |
| Weight | Multi-objective weight | `1.0` (0.0-10.0) |
### Constraint Node
| Field | Description | Example |
|-------|-------------|---------|
| Name | Constraint identifier | `max_stress` |
| Operator | Comparison type | `<=`, `>=`, `==` |
| Value | Threshold value | `250.0` |
### Algorithm Node
| Field | Description | Options |
|-------|-------------|---------|
| Method | Optimization algorithm | TPE, CMA-ES, NSGA-II, GP-BO |
| Max Trials | Number of trials | `100` |
| Timeout | Optional time limit | `3600` (seconds) |
**Method Selection** (SYS_15):
| Method | Best For | Design Vars | Objectives |
|--------|----------|-------------|------------|
| TPE | General purpose | 1-10 | 1 |
| CMA-ES | Many variables | 5-100 | 1 |
| NSGA-II | Multi-objective | 1-20 | 2-4 |
| GP-BO | Expensive evaluations | 1-10 | 1 |
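The method-selection table above can be condensed into a rule-of-thumb helper. This is a hypothetical sketch; the actual SYS_15 selection logic may weigh more factors (e.g. evaluation cost, which would favor GP-BO).

```python
def suggest_method(n_design_vars, n_objectives):
    """Rule-of-thumb optimizer selection mirroring the table above (sketch)."""
    if n_objectives >= 2:
        return "NSGA-II"       # multi-objective Pareto optimization
    if n_design_vars > 10:
        return "CMA-ES"        # scales to many design variables
    return "TPE"               # general-purpose default (GP-BO if evaluations are very expensive)
```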
### Surrogate Node
| Field | Description | Options |
|-------|-------------|---------|
| Enabled | Toggle acceleration | true/false |
| Model Type | Surrogate architecture | MLP, GNN, Auto |
| Min Trials | Trials before activation | `20` |
---
## Custom Extractors (V3.1)
Custom extractors allow you to define arbitrary Python extraction functions directly in the AtomizerSpec. These functions are validated for safety and executed during optimization.
### Creating a Custom Extractor
1. Add a **Custom Extractor** node from the palette
2. Open the **Code Editor** in the config panel
3. Write your extraction function following the required signature
4. The code is validated in real time for:
   - Correct function signature
   - Allowed imports only (numpy, pyNastran)
   - No dangerous operations (os.system, exec, eval, etc.)
### Function Signature
```python
def extract(op2_path, bdf_path=None, params=None, working_dir=None):
    """
    Custom extraction function.

    Args:
        op2_path: Path to the .op2 results file
        bdf_path: Path to the .bdf mesh file (optional)
        params: Dict of current design variable values (optional)
        working_dir: Path to the trial working directory (optional)

    Returns:
        Dict with output values, e.g., {"custom_metric": 42.0}
    """
    import numpy as np
    from pyNastran.op2.op2 import OP2

    op2 = OP2()
    op2.read_op2(op2_path)

    # Your extraction logic here
    result = np.max(op2.displacements[1].data)
    return {"custom_metric": result}
```
### Allowed Imports
| Module | Usage |
|--------|-------|
| `numpy` | Numerical operations |
| `pyNastran` | OP2/BDF file parsing |
| `math` | Basic math functions |
| `pathlib` | Path manipulation |
### Security Restrictions
The following are **NOT allowed** in custom extractor code:
- `import os`, `import subprocess`, `import sys`
- `exec()`, `eval()`, `compile()`
- `__import__()`, `open()` with write mode
- Network operations
- File system modifications outside working_dir
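Restrictions like these can be enforced statically before the code ever runs. The following is a minimal sketch of the idea using Python's `ast` module, not the actual validator; it omits the mode-aware `open()` check and network detection mentioned above for brevity.

```python
import ast

ALLOWED_MODULES = {"numpy", "pyNastran", "math", "pathlib"}
FORBIDDEN_CALLS = {"exec", "eval", "compile", "__import__"}

def check_extractor_code(code: str) -> list:
    """Return a list of violations found in custom extractor source."""
    errors = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            # Collect the top-level module name(s) being imported
            names = ([a.name for a in node.names] if isinstance(node, ast.Import)
                     else [node.module or ""])
            for name in names:
                if name.split(".")[0] not in ALLOWED_MODULES:
                    errors.append(f"forbidden import: {name}")
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                errors.append(f"forbidden call: {node.func.id}()")
    return errors
```

With this sketch, `check_extractor_code("import os")` reports a violation, while `from pyNastran.op2.op2 import OP2` passes.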
### Example: Custom Stress Ratio Extractor
```python
def extract(op2_path, bdf_path=None, params=None, working_dir=None):
"""Extract stress ratio (max_stress / allowable)."""
import numpy as np
from pyNastran.op2.op2 import OP2
op2 = OP2()
op2.read_op2(op2_path)
# Get von Mises stress from solid elements
stress_data = op2.ctetra_stress[1].data
max_von_mises = np.max(stress_data[:, :, 7]) # Column 7 is von Mises
allowable = 250.0 # MPa
stress_ratio = max_von_mises / allowable
return {
"stress_ratio": stress_ratio,
"max_stress_mpa": max_von_mises
}
```
---
## File Browser (V3)
The File Browser allows you to navigate the studies directory to select model files.
### Features
- **Directory Navigation**: Browse folder hierarchy with breadcrumbs
- **Type Filtering**: Filters to `.sim`, `.prt`, `.fem`, `.afem` by default
- **Search**: Quick search by file name
- **Single-Click Select**: Click a file to select and close
### Usage
1. Click the **Browse** button (folder icon) next to the Model file path input
2. Navigate to your study folder
3. Click a model file to select it
4. The path is automatically populated in the Model node
---
## Model Introspection (V3)
Model Introspection analyzes NX model files to discover expressions, solver type, and dependencies.
### Features
- **Expression Discovery**: Lists all expressions found in the model
- **Solver Detection**: Infers solver type from file contents (SOL101, SOL103, etc.)
- **Dependency Tracking**: Shows related .prt, .fem, .afem files
- **Extractor Suggestions**: Recommends extractors based on solver type
- **One-Click Add**: Add expressions as Design Variables instantly
### Usage
1. Configure a **Model** node with a valid file path
2. Click **Introspect Model** button
3. View discovered expressions, extractors, and files
4. Click **+** next to any expression to add as Design Variable
5. Click **+** next to any extractor to add to canvas
### Discovered Information
| Section | Contents |
|---------|----------|
| **Solver Type** | Detected solver (SOL101, SOL103, etc.) |
| **Expressions** | Name, current value, unit |
| **Extractors** | Available extractors for this solver |
| **Dependent Files** | Related .prt, .fem, .afem files |
---
## API Integration
### Backend Endpoints
#### AtomizerSpec REST API (V3.1)
The spec API provides full CRUD operations on the unified AtomizerSpec:
```
GET /api/studies/{study_id}/spec # Get full AtomizerSpec
Returns: AtomizerSpec JSON with meta, model, design_variables, extractors, objectives, constraints, optimization, canvas
PUT /api/studies/{study_id}/spec # Replace entire spec
Body: AtomizerSpec JSON
Returns: { hash: string, warnings: [] }
PATCH /api/studies/{study_id}/spec # Partial update (JSONPath)
Body: { path: "design_variables[0].bounds.max", value: 15.0, modified_by: "canvas" }
Returns: { hash: string }
POST /api/studies/{study_id}/spec/validate # Validate spec
Returns: { valid: boolean, errors: [], warnings: [] }
POST /api/studies/{study_id}/spec/nodes # Add node (design var, extractor, etc.)
Body: { type: "designVar", data: {...}, modified_by: "canvas" }
Returns: { node_id: "dv_002", hash: string }
PATCH /api/studies/{study_id}/spec/nodes/{id} # Update node
Body: { updates: {...}, modified_by: "canvas" }
Returns: { hash: string }
DELETE /api/studies/{study_id}/spec/nodes/{id} # Remove node
Query: modified_by=canvas
Returns: { hash: string }
POST /api/studies/{study_id}/spec/edges # Add edge connection
Query: source=ext_001&target=obj_001&modified_by=canvas
Returns: { hash: string }
```
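A PATCH body for these endpoints can be built and sent with any HTTP client. The helper below is a sketch (the function name is ours, not part of the API); the commented `requests` call assumes the backend is running on port 8000 as documented later in this guide.

```python
def make_spec_patch(path: str, value, modified_by: str = "canvas") -> dict:
    """Build the body for PATCH /api/studies/{study_id}/spec."""
    return {"path": path, "value": value, "modified_by": modified_by}

# Sent with any HTTP client, e.g.:
#   requests.patch(f"http://localhost:8000/api/studies/{study_id}/spec",
#                  json=make_spec_patch("design_variables[0].bounds.max", 15.0))
# The response carries the new spec hash: { "hash": "..." }
```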
#### Study Configuration
```
GET /api/studies/ # List all studies
GET /api/studies/{path}/config # Get optimization_config.json (legacy)
```
#### File Browser (V3)
```
GET /api/files/list # List files in directory
Query: path=subdir&types=.sim,.prt,.fem,.afem
Returns: { files: [{name, path, isDirectory}], path }
```
#### NX Introspection (V3)
```
POST /api/nx/introspect # Introspect NX model file
Body: { file_path: string }
Returns: {
file_path, file_type, expressions, solver_type,
dependent_files, extractors_available, warnings
}
GET /api/nx/expressions # Get expressions from model
Query: file_path=path/to/model.sim
Returns: { expressions: [{name, value, unit, type}] }
```
#### Health Check (V3)
```
GET /api/health # Check database and service health
Returns: { status: "healthy", database: "connected" }
```
### MCP Canvas Tools
The Canvas integrates with the MCP server for Claude tool use:
#### `validate_canvas_intent`
Validates an optimization intent from the Canvas.
```typescript
{
intent: OptimizationIntent // The canvas workflow as JSON
}
// Returns: { valid, errors, warnings, recommendations }
```
#### `execute_canvas_intent`
Creates an optimization study from a validated intent.
```typescript
{
intent: OptimizationIntent,
study_name: string, // snake_case name
auto_run?: boolean // Start optimization immediately
}
// Returns: { study_path, config_path, status }
```
#### `interpret_canvas_intent`
Analyzes a Canvas intent and provides recommendations.
```typescript
{
intent: OptimizationIntent
}
// Returns: {
// problem_type: "single-objective" | "multi-objective",
// complexity: "low" | "medium" | "high",
// recommended_method: string,
// recommended_trials: number,
// surrogate_recommended: boolean,
// notes: string[]
// }
```
---
## AtomizerSpec v2.0 Schema
The Canvas uses **AtomizerSpec v2.0** as the unified configuration format. This replaces the legacy `optimization_config.json` and `OptimizationIntent` with a single schema.
### Core Structure
```typescript
interface AtomizerSpec {
meta: {
version: "2.0";
created: string; // ISO timestamp
modified: string; // ISO timestamp
created_by: "canvas" | "claude" | "api" | "migration" | "manual";
modified_by: "canvas" | "claude" | "api" | "migration" | "manual";
study_name: string;
description?: string;
tags?: string[];
};
model: {
sim: { path: string; solver: string };
prt?: { path: string };
fem?: { path: string };
};
design_variables: DesignVariable[];
extractors: Extractor[];
objectives: Objective[];
constraints: Constraint[];
optimization: OptimizationConfig;
canvas: CanvasLayout;
}
```
### Design Variable
```typescript
interface DesignVariable {
id: string; // "dv_001", "dv_002", etc.
name: string; // Display name
expression_name: string; // NX expression name
type: "continuous" | "integer" | "categorical";
bounds: { min: number; max: number };
baseline?: number;
unit?: string;
enabled: boolean;
canvas_position: { x: number; y: number };
}
```
### Extractor
```typescript
interface Extractor {
id: string; // "ext_001", etc.
name: string; // Display name
type: string; // "mass", "displacement", "zernike_opd", "custom"
builtin: boolean; // true for E1-E10, false for custom
outputs: Array<{ name: string; units?: string }>;
config?: Record<string, unknown>;
custom_function?: { // For custom extractors only
code: string; // Python function code
entry_point: string; // Function name (default: "extract")
};
canvas_position: { x: number; y: number };
}
```
### Objective
```typescript
interface Objective {
id: string; // "obj_001", etc.
name: string; // Display name
direction: "minimize" | "maximize";
source: {
extractor_id: string; // Reference to extractor
output_name: string; // Which output from the extractor
};
weight?: number; // For weighted sum multi-objective
enabled: boolean;
canvas_position: { x: number; y: number };
}
```
### Canvas Layout
```typescript
interface CanvasLayout {
edges: Array<{ source: string; target: string }>;
layout_version: "2.0";
viewport?: { x: number; y: number; zoom: number };
}
```
### Example AtomizerSpec
```json
{
"meta": {
"version": "2.0",
"created": "2026-01-17T10:00:00Z",
"modified": "2026-01-17T10:30:00Z",
"created_by": "canvas",
"modified_by": "canvas",
"study_name": "bracket_mass_optimization",
"description": "Optimize bracket mass while maintaining stress limits"
},
"model": {
"sim": { "path": "bracket_sim1.sim", "solver": "nastran" }
},
"design_variables": [
{
"id": "dv_001",
"name": "Thickness",
"expression_name": "web_thickness",
"type": "continuous",
"bounds": { "min": 2.0, "max": 10.0 },
"baseline": 5.0,
"unit": "mm",
"enabled": true,
"canvas_position": { "x": 50, "y": 100 }
}
],
"extractors": [
{
"id": "ext_001",
"name": "Mass",
"type": "mass",
"builtin": true,
"outputs": [{ "name": "mass", "units": "kg" }],
"canvas_position": { "x": 740, "y": 100 }
}
],
"objectives": [
{
"id": "obj_001",
"name": "mass",
"direction": "minimize",
"source": { "extractor_id": "ext_001", "output_name": "mass" },
"enabled": true,
"canvas_position": { "x": 1020, "y": 100 }
}
],
"constraints": [],
"optimization": {
"algorithm": { "type": "TPE" },
"budget": { "max_trials": 100 }
},
"canvas": {
"edges": [
{ "source": "dv_001", "target": "model" },
{ "source": "model", "target": "solver" },
{ "source": "solver", "target": "ext_001" },
{ "source": "ext_001", "target": "obj_001" },
{ "source": "obj_001", "target": "optimization" }
],
"layout_version": "2.0"
}
}
```
### Legacy OptimizationIntent (Deprecated)
The `OptimizationIntent` format is still supported for backwards compatibility but will be automatically converted to AtomizerSpec v2.0 when saved.
```typescript
interface OptimizationIntent {
model: {
path: string;
type: 'prt' | 'sim' | 'fem';
};
solver: {
type: string; // SOL101, SOL103, etc.
};
design_variables: Array<{
name: string;
expression: string;
min: number;
max: number;
unit?: string;
}>;
extractors: Array<{
id: string; // E1, E2, etc.
name: string;
config?: Record<string, unknown>;
}>;
objectives: Array<{
name: string;
extractor: string;
direction: 'minimize' | 'maximize';
weight?: number;
}>;
constraints?: Array<{
name: string;
extractor: string;
operator: '<=' | '>=' | '==';
value: number;
}>;
optimization: {
method: string;
max_trials: number;
timeout?: number;
};
surrogate?: {
enabled: boolean;
model_type?: string;
min_trials?: number;
};
}
```
---
## Validation Rules
The Canvas validates workflows against these rules:
### Required Components
- At least 1 **Model** node
- At least 1 **Solver** node
- At least 1 **Design Variable** node
- At least 1 **Objective** node
- At least 1 **Algorithm** node
### Configuration Rules
- All nodes must be **configured** (no empty fields)
- Design variable **min < max**
- Objective must connect to an **Extractor**
- Built-in extractor IDs must be valid (E1-E10); custom extractors must pass code validation
### Connection Rules
- Model → Solver (required)
- Solver → Extractor (required for each extractor)
- Extractor → Objective (required for each objective)
- Extractor → Constraint (optional)
### Recommendations
- Multi-objective (2+ objectives) should use **NSGA-II**
- Many variables (5+) may benefit from **surrogate**
- High trial count (100+) should consider **neural acceleration**
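As a sketch, the required-component and connection checks can be expressed over an AtomizerSpec-like dict as follows. This is illustrative only; the authoritative rules live in the Canvas's `validation.ts` and the backend's Pydantic models.

```python
def check_required_components(spec: dict) -> list:
    """Return missing-component errors for an AtomizerSpec-like dict."""
    errors = []
    if not spec.get("model", {}).get("sim", {}).get("path"):
        errors.append("Model node is required")
    if not spec.get("model", {}).get("sim", {}).get("solver"):
        errors.append("Solver is required")
    if not spec.get("design_variables"):
        errors.append("At least one Design Variable is required")
    if not spec.get("objectives"):
        errors.append("At least one Objective is required")
    if not spec.get("optimization", {}).get("algorithm", {}).get("type"):
        errors.append("Algorithm node is required")
    # Connection rule: every objective must reference a defined extractor
    ext_ids = {e["id"] for e in spec.get("extractors", [])}
    for obj in spec.get("objectives", []):
        if obj.get("source", {}).get("extractor_id") not in ext_ids:
            errors.append(f"Objective {obj.get('id')} has no valid extractor source")
    return errors
```

Run against the example spec earlier in this guide, this returns an empty error list.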
---
## Templates
### Mass Minimization
Single-objective mass reduction with stress constraint.
- **Nodes**: 6 (Model, Solver, DVar, Extractor, Objective, Algorithm)
- **Objective**: Minimize mass
- **Constraint**: Max stress < limit
- **Method**: TPE (100 trials)
### Multi-Objective
Pareto optimization for mass vs. displacement trade-off.
- **Nodes**: 7
- **Objectives**: Minimize mass, Minimize displacement
- **Method**: NSGA-II (150 trials)
### Turbo Mode
Neural-accelerated optimization with surrogate.
- **Nodes**: 8 (includes Surrogate)
- **Objective**: User-defined
- **Method**: TPE + MLP Surrogate
- **Trials**: 50 FEA + 5000 surrogate
### Mirror WFE
Zernike wavefront error optimization for optics.
- **Nodes**: 7
- **Objective**: Minimize WFE (E10)
- **Method**: CMA-ES (200 trials)
### Frequency Target
Natural frequency optimization with modal analysis.
- **Nodes**: 6
- **Objective**: Target frequency (E2)
- **Method**: TPE (100 trials)
---
## Keyboard Shortcuts
| Key | Action |
|-----|--------|
| `Delete` / `Backspace` | Delete selected node |
| `Escape` | Deselect all |
| `Ctrl+Z` | Undo (future) |
| `Ctrl+Shift+Z` | Redo (future) |
| `Space` (hold) | Pan canvas |
| Scroll | Zoom in/out |
---
## Troubleshooting
### Canvas Not Visible
- Ensure you're on the `/canvas` route
- Check for JavaScript errors in browser console
- Verify React Flow is properly initialized
### Nodes Not Draggable
- Check that drag-and-drop events are being captured
- Ensure `onDragStart` sets the correct data type
### Config Panel Not Updating
- Verify Zustand store is properly connected
- Check that `updateNodeData` is being called
### Claude Chat Not Working
- Check WebSocket connection status (green indicator)
- Verify backend is running on port 8000
- Check `/api/chat/` endpoint is accessible
### Expression Dropdown Empty
- Ensure a Model node is configured with a file path
- Check `/api/nx/expressions` endpoint is working
- Try the "Refresh" button to reload expressions
---
## Development
### Running Locally
```bash
# Frontend
cd atomizer-dashboard/frontend
npm install
npm run dev
# Backend
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000
# MCP Server
cd mcp-server/atomizer-tools
npm run build
npm run dev
```
### Building for Production
```bash
# Frontend
cd atomizer-dashboard/frontend
npm run build
# MCP Server
cd mcp-server/atomizer-tools
npm run build
```
### Adding New Node Types
1. Create node component in `components/canvas/nodes/`
2. Add type to `schema.ts`
3. Register in `nodes/index.ts`
4. Add to `NodePalette.tsx`
5. Update validation rules in `validation.ts`
6. Add serialization logic to `intent.ts`
---
## References
- **AtomizerSpec v2.0 Architecture**: See `docs/plans/UNIFIED_CONFIGURATION_ARCHITECTURE.md`
- **AtomizerSpec JSON Schema**: See `optimization_engine/schemas/atomizer_spec_v2.json`
- **React Flow Documentation**: https://reactflow.dev/
- **Lucide Icons**: https://lucide.dev/icons/
- **Zustand**: https://github.com/pmndrs/zustand
- **Atomizer Protocols**: See `docs/protocols/`
- **Extractor Library**: See `SYS_12_EXTRACTOR_LIBRARY.md`
- **Method Selector**: See `SYS_15_METHOD_SELECTOR.md`
---
*Canvas Builder: Design optimizations visually, execute with AI.*
*Powered by AtomizerSpec v2.0 - the unified configuration format.*

---
*File: `docs/guides/DASHBOARD.md`*
# Atomizer Dashboard
**Last Updated**: January 17, 2026
**Version**: 3.1 (AtomizerSpec v2.0)
---
## Overview
The Atomizer Dashboard is a real-time web-based interface for monitoring and analyzing multi-objective optimization studies. Built with React, TypeScript, and Tailwind CSS, it provides comprehensive visualization and interaction capabilities for NSGA-II based structural optimization.
**New in V3.1**: All studies now use **AtomizerSpec v2.0** as the unified configuration format. The `atomizer_spec.json` file serves as the single source of truth for Canvas, Backend, Claude, and the Optimization Engine.
### Major Features
| Feature | Route | Description |
|---------|-------|-------------|
| **Canvas Builder** | `/canvas` | Visual node-based workflow designer (AtomizerSpec v2.0) |
| **Live Dashboard** | `/dashboard` | Real-time optimization monitoring |
| **Results Viewer** | `/results` | Markdown reports with charts |
| **Analytics** | `/analytics` | Cross-study comparison |
| **Claude Chat** | Global | AI-powered study creation with spec tools |
> **New in V3.1**: The [Canvas Builder](CANVAS.md) now uses AtomizerSpec v2.0 as the unified configuration format, with support for custom extractors and real-time WebSocket synchronization.
---
## Architecture
### Frontend Stack
- **Framework**: React 18 with TypeScript
- **Build Tool**: Vite
- **Styling**: Tailwind CSS with custom dark/light theme support
- **Charts**: Recharts for data visualization
- **State Management**: React hooks (useState, useEffect)
- **WebSocket**: Real-time optimization updates
### Backend Stack
- **Framework**: FastAPI (Python)
- **Database**: Optuna SQLite studies
- **API**: RESTful endpoints with WebSocket support
- **CORS**: Configured for local development
### Ports
- **Frontend**: `http://localhost:3003` (Vite dev server)
- **Backend**: `http://localhost:8000` (FastAPI/Uvicorn)
---
## Key Features
### 1. Multi-Objective Visualization
#### Pareto Front Plot
- 2D scatter plot showing trade-offs between objectives
- Color-coded by constraint satisfaction (green = feasible, red = infeasible)
- Interactive hover tooltips with trial details
- Automatically extracts Pareto-optimal solutions using NSGA-II
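For a minimize-mass / maximize-frequency study, Pareto extraction is a standard dominance filter; the sketch below shows the idea, though the dashboard reads the actual front from the Optuna database via the pareto-front endpoint.

```python
def pareto_front(trials):
    """Keep trials not dominated by any other.

    Each trial is (mass, frequency): mass is minimized, frequency maximized.
    """
    def dominates(a, b):
        # a dominates b: no worse in both objectives, strictly better in one
        return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

    return [t for t in trials
            if not any(dominates(u, t) for u in trials if u is not t)]
```

For example, (3200 g, 160 Hz) and (3400 g, 170 Hz) are mutual trade-offs and both survive, while (3500 g, 150 Hz) is dominated by both and is filtered out.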
#### Parallel Coordinates Plot
**Research-Based Multi-Dimensional Visualization**
Structure: **Design Variables → Objectives → Constraints**
Features:
- **Light Theme**: White background with high-visibility dark text and colors
- **Color-Coded Axes**:
- Blue background: Design variables
- Green background: Objectives
- Yellow background: Constraints
- **Interactive Selection**:
- Hover over lines to highlight individual trials
- Click to select/deselect trials
- Multi-select with visual feedback (orange highlight)
- **Type Badges**: Labels showing DESIGN VAR, OBJECTIVE, or CONSTRAINT
- **Units Display**: Automatic unit labeling (mm, MPa, Hz, g, etc.)
- **Min/Max Labels**: Range values displayed on each axis
- **Feasibility Coloring**:
- Green: Feasible solutions
- Red: Infeasible solutions (constraint violations)
- Blue: Hover highlight
- Orange: Selected trials
**Implementation**: [ParallelCoordinatesPlot.tsx](atomizer-dashboard/frontend/src/components/ParallelCoordinatesPlot.tsx:1)
**Line colors**:
```typescript
if (isSelected) return '#FF6B00'; // Orange for selected
if (!trial.feasible) return '#DC2626'; // Red for infeasible
if (isHovered) return '#2563EB'; // Blue for hover
return '#10B981'; // Green for feasible
```
### 2. Optimizer Strategy Panel
Displays algorithm information:
- **Algorithm**: NSGA-II, TPE, or custom
- **Type**: Single-objective or Multi-objective
- **Objectives Count**: Number of optimization objectives
- **Design Variables Count**: Number of design parameters
### 3. Convergence Plot (Enhanced)
**File**: `atomizer-dashboard/frontend/src/components/ConvergencePlot.tsx`
Advanced convergence visualization:
- **Dual-line plot**: Individual trial values + running best trajectory
- **Area fill**: Gradient under trial values curve
- **Statistics panel**: Best value, improvement %, 90% convergence trial
- **Summary footer**: First value, mean, std dev, total trials
- **Step-after interpolation**: Running best shown as step function
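The running-best trajectory is simply a cumulative best-so-far over trial values; for a minimization study it can be computed as (sketch):

```python
def running_best(values):
    """Cumulative best-so-far for a minimization objective."""
    best, out = float("inf"), []
    for v in values:
        best = min(best, v)  # step-function update: only improvements change it
        out.append(best)
    return out
```

For example, `running_best([5.0, 4.2, 4.8, 3.9])` yields `[5.0, 4.2, 4.2, 3.9]`, which is the step function the plot draws.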
### 4. Parameter Importance Chart
**File**: `atomizer-dashboard/frontend/src/components/ParameterImportanceChart.tsx`
Correlation-based parameter analysis:
- **Pearson correlation**: Calculates correlation between each parameter and objective
- **Horizontal bar chart**: Parameters ranked by absolute importance
- **Color coding**: Green (negative correlation - helps minimize), Red (positive - hurts minimize)
- **Tooltip**: Shows percentage importance and raw correlation coefficient (r)
- **Minimum 3 trials**: Required for statistical significance
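The correlation itself is plain Pearson r between each parameter's values and the objective values across trials; a minimal version of the computation (mirroring what the chart does) is:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0
```

A negative r for a minimize objective (green bars) means increasing the parameter tends to reduce the objective; the chart ranks parameters by `abs(pearson_r(...))`.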
### 5. Study Report Viewer
**File**: `atomizer-dashboard/frontend/src/components/StudyReportViewer.tsx`
Full-featured markdown report viewer:
- **Modal overlay**: Full-screen report viewing
- **Math equations**: KaTeX support for LaTeX math (`$...$` inline, `$$...$$` block)
- **GitHub-flavored markdown**: Tables, code blocks, task lists
- **Custom styling**: Dark theme with proper typography
- **Syntax highlighting**: Code blocks with language detection
- **Refresh button**: Re-fetch report for live updates
- **External link**: Open in system editor
### 6. Trial History Table
- Comprehensive list of all trials
- Sortable columns
- Status indicators (COMPLETE, PRUNED, FAIL)
- Parameter values and objective values
- User attributes (constraints)
### 7. Pruned Trials Tracking
- **Real-time count**: Fetched directly from Optuna database
- **Pruning diagnostics**: Tracks pruned trial params and causes
- **Database query**: Uses SQLite `state = 'PRUNED'` filter
### 8. Analytics Page (Cross-Study Comparisons)
**File**: `atomizer-dashboard/frontend/src/pages/Analytics.tsx`
Dedicated analytics page for comparing optimization studies:
#### Aggregate Statistics
- **Total Studies**: Count of all studies in the system
- **Running/Paused/Completed**: Status distribution breakdown
- **Total Trials**: Sum of trials across all studies
- **Avg Trials/Study**: Average trial count per study
- **Best Overall**: Best objective value across all studies with study ID
#### Study Comparison Table
- **Sortable columns**: Name, Status, Progress, Best Value
- **Status indicators**: Color-coded badges (running=green, paused=orange, completed=blue)
- **Progress bars**: Visual completion percentage with color coding
- **Quick actions**: Open button to navigate directly to a study's dashboard
- **Selected highlight**: Current study highlighted with "Selected" badge
- **Click-to-expand**: Row expansion for additional details
#### Status Distribution Chart
- Visual breakdown of studies by status
- Horizontal bar chart with percentage fill
- Color-coded: Running (green), Paused (orange), Completed (blue), Not Started (gray)
#### Top Performers Panel
- Ranking of top 5 studies by best objective value (assumes minimization)
- Medal-style numbering (gold, silver, bronze for top 3)
- Clickable rows to navigate to study
- Trial count display
**Usage**: Navigate to `/analytics` when a study is selected. Provides aggregate view across all studies.
### 9. Global Claude Terminal
**Files**:
- `atomizer-dashboard/frontend/src/components/GlobalClaudeTerminal.tsx`
- `atomizer-dashboard/frontend/src/components/ClaudeTerminal.tsx`
- `atomizer-dashboard/frontend/src/context/ClaudeTerminalContext.tsx`
Persistent AI assistant terminal:
- **Global persistence**: Terminal persists across page navigation
- **WebSocket connection**: Real-time communication with Claude Code backend
- **Context awareness**: Automatically includes current study context when available
- **New Study mode**: When no study selected, offers guided study creation wizard
- **Visual indicators**: Connection status shown in sidebar footer
- **Keyboard shortcut**: Open/close terminal from anywhere
**Modes**:
- **With Study Selected**: "Set Context" button loads study-specific context
- **No Study Selected**: "New Study" button starts guided wizard from `.claude/skills/guided-study-wizard.md`
### 10. Shared Markdown Renderer
**File**: `atomizer-dashboard/frontend/src/components/MarkdownRenderer.tsx`
Reusable markdown rendering component:
- **Syntax highlighting**: Prism-based code highlighting with `oneDark` theme
- **GitHub-flavored markdown**: Tables, task lists, strikethrough
- **LaTeX math support**: KaTeX rendering with `remark-math` and `rehype-katex`
- **Custom styling**: Dark theme typography optimized for dashboard
- **Used by**: Home page (README display), Results page (reports)
---
## Pages Structure
### Home Page (`/`)
- Study navigator and selector
- README.md display with full markdown rendering
- New study creation via Claude terminal
### Canvas Page (`/canvas`) **NEW**
- Visual node-based workflow builder
- Drag-and-drop node palette with 8 node types
- Claude integration for workflow processing
- Auto-load from existing studies
- Expression search for design variables
- See [Canvas Documentation](CANVAS.md) for details
### Dashboard Page (`/dashboard`)
- Real-time live tracker for selected study
- Convergence plot, Pareto front, parameter importance
- Trial history table
### Reports Page (`/results`)
- AI-generated optimization report viewer
- Full markdown rendering with syntax highlighting and math
- Copy and download capabilities
### Analytics Page (`/analytics`)
- Cross-study comparison and aggregate statistics
- Study ranking and status distribution
- Quick navigation to individual studies
---
## API Endpoints
### AtomizerSpec (V3.1)
#### GET `/api/studies/{study_id}/spec`
Get the full AtomizerSpec for a study.
**Response**:
```json
{
"meta": { "version": "2.0", "study_name": "..." },
"model": { "sim": { "path": "...", "solver": "nastran" } },
"design_variables": [...],
"extractors": [...],
"objectives": [...],
"constraints": [...],
"optimization": { "algorithm": { "type": "TPE" }, "budget": { "max_trials": 100 } },
"canvas": { "edges": [...], "layout_version": "2.0" }
}
```
#### PUT `/api/studies/{study_id}/spec`
Replace the entire spec with validation.
#### PATCH `/api/studies/{study_id}/spec`
Partial update using JSONPath. Body: `{ "path": "design_variables[0].bounds.max", "value": 15.0 }`
#### POST `/api/studies/{study_id}/spec/validate`
Validate spec and return detailed report. Returns: `{ "valid": true, "errors": [], "warnings": [] }`
#### POST `/api/studies/{study_id}/spec/nodes`
Add a new node (design variable, extractor, objective, constraint).
#### PATCH `/api/studies/{study_id}/spec/nodes/{node_id}`
Update an existing node's properties.
#### DELETE `/api/studies/{study_id}/spec/nodes/{node_id}`
Remove a node and clean up related edges.
### Studies
#### GET `/api/optimization/studies`
List all available optimization studies.
**Response**:
```json
[
{
"id": "drone_gimbal_arm_optimization",
"name": "drone_gimbal_arm_optimization",
"direction": ["minimize", "maximize"],
"n_trials": 100,
"best_value": [3245.67, 165.3],
"sampler": "NSGAIISampler"
}
]
```
#### GET `/api/optimization/studies/{study_id}/trials`
Get all trials for a study.
**Response**:
```json
{
"trials": [
{
"number": 0,
"values": [3456.2, 145.6],
"params": {
"beam_half_core_thickness": 7.5,
"beam_face_thickness": 2.1,
"holes_diameter": 30.0,
"hole_count": 11
},
"state": "COMPLETE",
"user_attrs": {
"max_stress": 95.3,
"max_displacement": 1.2,
"frequency": 145.6,
"mass": 3456.2,
"constraint_satisfied": true
}
}
]
}
```
#### GET `/api/optimization/studies/{study_id}/metadata`
Get study metadata including objectives and design variables.
**Response**:
```json
{
"objectives": [
{
"name": "mass",
"type": "minimize",
"unit": "g"
},
{
"name": "frequency",
"type": "maximize",
"unit": "Hz"
}
],
"design_variables": [
{
"name": "beam_half_core_thickness",
"unit": "mm",
"min": 5.0,
"max": 10.0
}
],
"sampler": "NSGAIISampler"
}
```
#### GET `/api/optimization/studies/{study_id}/pareto-front`
Get Pareto-optimal solutions for multi-objective studies.
**Response**:
```json
{
"is_multi_objective": true,
"pareto_front": [
{
"trial_number": 0,
"values": [3245.67, 165.3],
"params": {...},
"user_attrs": {...},
"constraint_satisfied": true
}
]
}
```
### WebSocket
#### WS `/ws/optimization/{study_id}`
Real-time trial updates during optimization.
**Message Format**:
```json
{
"type": "trial_complete",
"trial": {
"number": 5,
"values": [3456.2, 145.6],
"params": {...}
}
}
```
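A client-side handler for these messages might look like the sketch below; only the message parsing is shown, with the WebSocket connection itself (e.g. via the `websockets` library) left as a comment.

```python
import json

def handle_ws_message(raw):
    """Parse a dashboard WebSocket message and summarize trial completions."""
    msg = json.loads(raw)
    if msg.get("type") != "trial_complete":
        return None  # ignore other message types
    trial = msg["trial"]
    return f"trial {trial['number']} -> values {trial['values']}"

# In practice this would run inside something like:
#   async with websockets.connect(
#           f"ws://localhost:8000/ws/optimization/{study_id}") as ws:
#       async for raw in ws:
#           print(handle_ws_message(raw))
```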
---
## Running the Dashboard
### Backend
```bash
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000
```
### Frontend
```bash
cd atomizer-dashboard/frontend
npm run dev
```
Access at: `http://localhost:3003`
---
## Configuration
### Vite Proxy ([vite.config.ts](atomizer-dashboard/frontend/vite.config.ts:1))
```typescript
export default defineConfig({
plugins: [react()],
server: {
host: '0.0.0.0',
port: 3003,
proxy: {
'/api': {
target: 'http://127.0.0.1:8000',
changeOrigin: true,
secure: false,
ws: true, // WebSocket support
}
}
}
})
```
### CORS ([backend/api/main.py](atomizer-dashboard/backend/api/main.py:1))
```python
app.add_middleware(
CORSMiddleware,
allow_origins=["http://localhost:3003"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
```
---
## Component Structure
```
atomizer-dashboard/
├── frontend/
│ ├── src/
│ │ ├── components/
│ │ │ ├── canvas/ # Canvas Builder V2
│ │ │ │ ├── AtomizerCanvas.tsx # Main canvas with React Flow
│ │ │ │ ├── nodes/ # 8 node type components
│ │ │ │ │ ├── BaseNode.tsx # Base with Lucide icons
│ │ │ │ │ ├── ModelNode.tsx # Cube icon
│ │ │ │ │ ├── SolverNode.tsx # Cpu icon
│ │ │ │ │ ├── DesignVarNode.tsx # SlidersHorizontal icon
│ │ │ │ │ ├── ExtractorNode.tsx # FlaskConical icon
│ │ │ │ │ ├── ObjectiveNode.tsx # Target icon
│ │ │ │ │ ├── ConstraintNode.tsx # ShieldAlert icon
│ │ │ │ │ ├── AlgorithmNode.tsx # BrainCircuit icon
│ │ │ │ │ └── SurrogateNode.tsx # Rocket icon
│ │ │ │ ├── panels/
│ │ │ │ │ ├── NodeConfigPanel.tsx # Config sidebar
│ │ │ │ │ ├── ValidationPanel.tsx # Validation toast
│ │ │ │ │ ├── ChatPanel.tsx # Claude chat
│ │ │ │ │ ├── ConfigImporter.tsx # Study browser
│ │ │ │ │ └── TemplateSelector.tsx # Workflow templates
│ │ │ │ └── palette/
│ │ │ │ └── NodePalette.tsx # Draggable palette
│ │ │ ├── chat/ # Chat components
│ │ │ │ ├── ChatMessage.tsx # Message display
│ │ │ │ └── ThinkingIndicator.tsx # Loading indicator
│ │ │ ├── ParallelCoordinatesPlot.tsx # Multi-objective viz
│ │ │ ├── ParetoPlot.tsx # Pareto front scatter
│ │ │ ├── OptimizerPanel.tsx # Strategy info
│ │ │ ├── ConvergencePlot.tsx # Convergence chart
│ │ │ ├── ParameterImportanceChart.tsx # Parameter importance
│ │ │ ├── StudyReportViewer.tsx # Report viewer
│ │ │ ├── MarkdownRenderer.tsx # Markdown renderer
│ │ │ ├── ClaudeTerminal.tsx # Claude terminal
│ │ │ ├── GlobalClaudeTerminal.tsx # Global terminal
│ │ │ ├── common/
│ │ │ │ ├── Card.tsx # Card component
│ │ │ │ └── Button.tsx # Button component
│ │ │ ├── layout/
│ │ │ │ ├── Sidebar.tsx # Navigation
│ │ │ │ └── MainLayout.tsx # Layout wrapper
│ │ │ └── dashboard/
│ │ │ ├── MetricCard.tsx # KPI display
│ │ │ └── StudyCard.tsx # Study selector
│ │ ├── pages/
│ │ │ ├── Home.tsx # Study selection
│ │ │ ├── CanvasView.tsx # Canvas builder
│ │ │ ├── Dashboard.tsx # Live tracker
│ │ │ ├── Results.tsx # Report viewer
│ │ │ └── Analytics.tsx # Analytics
│ │ ├── hooks/
│ │ │ ├── useCanvasStore.ts # Zustand canvas state
│ │ │ ├── useCanvasChat.ts # Canvas chat
│ │ │ ├── useChat.ts # WebSocket chat
│ │ │ └── useWebSocket.ts # WebSocket base
│ │ ├── lib/canvas/
│ │ │ ├── schema.ts # Type definitions
│ │ │ ├── intent.ts # Intent serialization
│ │ │ ├── validation.ts # Graph validation
│ │ │ └── templates.ts # Workflow templates
│ │ ├── context/
│ │ │ ├── StudyContext.tsx # Study state
│ │ │ └── ClaudeTerminalContext.tsx # Terminal state
│ │ ├── api/
│ │ │ └── client.ts # API client
│ │ └── types/
│ │ └── index.ts # TypeScript types
│ └── vite.config.ts
├── backend/
│ └── api/
│ ├── main.py # FastAPI app
│ ├── services/
│ │ ├── claude_agent.py # Claude API integration
│ │ ├── session_manager.py # Session lifecycle
│ │ ├── context_builder.py # Context assembly
│ │ └── conversation_store.py # SQLite persistence
│ └── routes/
│ ├── optimization.py # Optimization endpoints
│ ├── studies.py # Study config endpoints
│ ├── nx.py # NX introspection
│ └── terminal.py # Claude WebSocket
└── mcp-server/atomizer-tools/
└── src/
├── index.ts # MCP server entry
└── tools/
├── canvas.ts # Canvas tools
├── study.ts # Study management
├── optimization.ts # Optimization control
└── analysis.ts # Analysis tools
```
## NPM Dependencies
The frontend uses these key packages:
- `react-markdown` - Markdown rendering
- `remark-gfm` - GitHub-flavored markdown support
- `remark-math` - Math equation parsing
- `rehype-katex` - KaTeX math rendering
- `recharts` - Interactive charts
---
## Data Flow
1. **Optimization Engine** runs trials and stores results in Optuna SQLite database
2. **Backend API** reads from database and exposes REST endpoints
3. **Frontend** fetches data via `/api/optimization/*` endpoints
4. **WebSocket** pushes real-time updates to connected clients
5. **React Components** render visualizations based on fetched data
---
## Troubleshooting
### Dashboard Page Crashes
**Issue**: `TypeError: Cannot read properties of undefined (reading 'split')`
**Fix**: Ensure all data is validated before rendering. ParallelCoordinatesPlot now includes:
```typescript
if (!paretoData || paretoData.length === 0) return <EmptyState />;
if (!objectives || !designVariables) return <EmptyState />;
```
### No Data Showing
1. Check backend is running: `curl http://localhost:8000/api/optimization/studies`
2. Verify study exists in Optuna database
3. Check browser console for API errors
4. Ensure WebSocket connection is established
### CORS Errors
- Backend must allow origin `http://localhost:3003`
- Frontend proxy must target `http://127.0.0.1:8000` (not `localhost`)
---
## Best Practices
### For Multi-Objective Studies
1. **Always use metadata endpoint** to get objective/variable definitions
2. **Extract constraints from user_attrs** for parallel coordinates
3. **Filter Pareto front** using `paretoData.pareto_front` array
4. **Validate constraint_satisfied** field before coloring
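Practices 2-4 above can be sketched in a few lines. This assumes the Pareto endpoint returns a `pareto_front` array of trial numbers alongside the trial list; the exact record shape is an assumption:

```python
def pareto_trials(pareto_data):
    """Keep only trials on the Pareto front, split by constraint status."""
    front_ids = set(pareto_data["pareto_front"])  # trial numbers on the front
    front = [t for t in pareto_data["trials"] if t["trial_number"] in front_ids]
    feasible = [t for t in front if t.get("constraint_satisfied", False)]
    infeasible = [t for t in front if not t.get("constraint_satisfied", False)]
    return feasible, infeasible

data = {
    "pareto_front": [1, 3],
    "trials": [
        {"trial_number": 1, "constraint_satisfied": True},
        {"trial_number": 2, "constraint_satisfied": True},
        {"trial_number": 3, "constraint_satisfied": False},
    ],
}
feasible, infeasible = pareto_trials(data)
print(len(feasible), len(infeasible))  # 1 1
```

The feasible/infeasible split is what drives the constraint-based coloring in the parallel coordinates plot.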
### For Real-Time Updates
1. **Use WebSocket** for live trial updates
2. **Debounce state updates** to avoid excessive re-renders
3. **Close WebSocket** connection on component unmount
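Debouncing (practice 2) is ultimately frontend code, but the idea is language-agnostic: coalesce rapid trial updates and re-render once per interval. A minimal sketch with an injectable clock:

```python
import time

class Debouncer:
    """Coalesce rapid updates: flush at most once per `interval` seconds."""
    def __init__(self, interval=0.25, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.pending = []
        self.last_flush = None

    def push(self, update):
        self.pending.append(update)
        now = self.clock()
        if self.last_flush is None or now - self.last_flush >= self.interval:
            batch, self.pending = self.pending, []
            self.last_flush = now
            return batch   # caller re-renders once with the whole batch
        return None        # too soon: hold the update for a later flush

fake_time = [0.0]
d = Debouncer(interval=0.25, clock=lambda: fake_time[0])
print(d.push("trial 1"))   # ['trial 1']  (first push flushes immediately)
fake_time[0] = 0.1
print(d.push("trial 2"))   # None        (within the interval)
fake_time[0] = 0.3
print(d.push("trial 3"))   # ['trial 2', 'trial 3']
```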
### For Performance
1. **Limit displayed trials** for large studies (e.g., show last 1000)
2. **Use React.memo** for expensive components
3. **Virtualize large lists** if showing >100 trials in tables
---
## Recent Updates
### January 2026 (V3.1) - AtomizerSpec v2.0
- [x] **AtomizerSpec v2.0**: Unified configuration architecture
- **Single Source of Truth**: One `atomizer_spec.json` file for Canvas, Backend, Claude, and Optimization Engine
- **Spec REST API**: Full CRUD operations at `/api/studies/{id}/spec`
- **Pydantic Validation**: Backend validation with detailed error messages
- **Legacy Migration**: Automatic migration from `optimization_config.json`
- **29 Studies Migrated**: All existing studies converted to v2.0 format
- [x] **Custom Extractors**: Define custom Python extraction functions in the spec
- **Security Validation**: Code checked for dangerous patterns
- **Runtime Loading**: Functions executed during optimization
- **Canvas UI**: Code editor panel for custom extractor nodes
- [x] **WebSocket Sync**: Real-time spec synchronization
- Multiple clients stay synchronized
- Optimistic updates with rollback on error
- Conflict detection with hash comparison
- [x] **Comprehensive Testing**: Full test coverage for Phase 4
- Unit tests for SpecManager and Pydantic models
- API integration tests for all spec endpoints
- Migration tests for legacy config formats
- E2E tests for complete workflow
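The hash-comparison conflict detection mentioned under WebSocket Sync can be sketched with a content hash over a canonicalized spec. The exact scheme the dashboard uses is not documented here, so treat this as illustrative:

```python
import hashlib
import json

def spec_hash(spec: dict) -> str:
    """Content hash of a spec; key order must not change the hash."""
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def apply_update(server_spec, client_base_hash, client_spec):
    """Reject the write if the client edited a stale copy of the spec."""
    if spec_hash(server_spec) != client_base_hash:
        return server_spec, False   # conflict: client must refetch and rebase
    return client_spec, True

base = {"study": "plate", "trials": 50}
h = spec_hash(base)
updated, ok = apply_update(base, h, {"study": "plate", "trials": 100})
print(ok, updated["trials"])  # True 100
```

A rejected write is what triggers the optimistic-update rollback on the client.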
### January 2026 (V3.0)
- [x] **Canvas Builder V3**: Major upgrade with model introspection and Claude fixes
- **File Browser**: Browse studies directory for .sim/.prt/.fem/.afem files
- **Model Introspection**: Auto-discover expressions, solver type, and dependencies
- **One-Click Add**: Add expressions as design variables, add suggested extractors
- **Claude Bug Fixes**: Fixed SQL errors, WebSocket reconnection, chat integration
- **Connection Flow Fix**: Design variables now correctly flow INTO model nodes
- **Health Check Endpoint**: `/api/health` for database status monitoring
- [x] **Canvas Builder V2**: Complete visual workflow designer with React Flow
- 8 node types with professional Lucide icons
- Drag-and-drop node palette
- Expression search dropdown for design variables
- Auto-load from existing optimization_config.json
- "Process with Claude" button for AI-assisted study creation
- MCP canvas tools (validate, execute, interpret)
- Responsive full-screen layout
- [x] **Backend Services**:
- NX introspection service (`/api/nx/introspect`, `/api/nx/expressions`)
- File browser API (`/api/files/list`)
- Claude session management with SQLite persistence
- Context builder for study-aware conversations
- [x] **Optimization Engine v2.0**: Major code reorganization
- New modular structure: `core/`, `nx/`, `study/`, `config/`, `reporting/`, `processors/`
- Backwards-compatible imports with deprecation warnings
- 120 files reorganized for better maintainability
### December 2025
- [x] **Convergence Plot**: Enhanced with running best, statistics, and gradient fill
- [x] **Parameter Importance Chart**: Correlation analysis with color-coded bars
- [x] **Study Report Viewer**: Full markdown rendering with KaTeX math support
- [x] **Pruned Trials**: Real-time count from Optuna database (not JSON file)
- [x] **Chart Data Transformation**: Fixed `values` array mapping for single/multi-objective
- [x] **Analytics Page**: Dedicated cross-study comparison and aggregate statistics view
- [x] **Global Claude Terminal**: Persistent AI terminal with study context awareness
- [x] **Shared Markdown Renderer**: Reusable component with syntax highlighting and math support
- [x] **Study Session Persistence**: localStorage-based study selection that survives page refresh
- [x] **Paused Status Support**: Full support for paused optimization status throughout UI
- [x] **Guided Study Wizard**: Interactive wizard skill for creating new studies via Claude
### Future Enhancements
- [ ] 3D Pareto front visualization for 3+ objectives
- [ ] Advanced filtering and search in trial history
- [ ] Export results to CSV/JSON
- [ ] Custom parallel coordinates brushing/filtering
- [ ] Hypervolume indicator tracking
- [ ] Interactive design variable sliders
- [ ] Constraint importance analysis
- [ ] Tauri desktop application (Phase 5)
---
## References
- **Optuna Documentation**: https://optuna.readthedocs.io/
- **NSGA-II Algorithm**: Deb et al. (2002)
- **Parallel Coordinates**: Inselberg & Dimsdale (1990)
- **React Documentation**: https://react.dev/
- **FastAPI Documentation**: https://fastapi.tiangolo.com/


@@ -0,0 +1,300 @@
# Dashboard Implementation Status
**Last Updated**: January 16, 2026
**Version**: 3.0
---
## Overview
The Atomizer Dashboard V2 is now feature-complete, including the Canvas Builder. This document tracks implementation status across all major features.
---
## Phase Summary
| Phase | Name | Status | Notes |
|-------|------|--------|-------|
| 0 | MCP Chat Foundation | COMPLETE | Claude API integration, session management |
| 1 | Canvas with React Flow | COMPLETE | 8 node types, validation, serialization |
| 2 | LLM Intelligence Layer | COMPLETE | Canvas chat hook, MCP canvas tools |
| 3 | Bidirectional Sync | COMPLETE | Session persistence, context builder |
| 4 | Templates & Polish | COMPLETE | Template selector, config importer |
| 5 | Tauri Desktop | PLANNED | Future phase |
---
## Phase 0: MCP Chat Foundation - COMPLETE
### Backend Services
| Component | File | Lines | Status |
|-----------|------|-------|--------|
| Claude Agent | `backend/api/services/claude_agent.py` | 722 | COMPLETE |
| CLI Agent | `backend/api/services/claude_cli_agent.py` | 202 | COMPLETE |
| Conversation Store | `backend/api/services/conversation_store.py` | 295 | COMPLETE |
| Session Manager | `backend/api/services/session_manager.py` | 425 | COMPLETE |
| Context Builder | `backend/api/services/context_builder.py` | 246 | COMPLETE |
### MCP Server
| Tool | Description | Status |
|------|-------------|--------|
| `list_studies` | List all studies | COMPLETE |
| `get_study_status` | Study details | COMPLETE |
| `create_study` | Create from description | COMPLETE |
| `run_optimization` | Start optimization | COMPLETE |
| `stop_optimization` | Stop optimization | COMPLETE |
| `get_trial_data` | Query trials | COMPLETE |
| `analyze_convergence` | Convergence metrics | COMPLETE |
| `compare_trials` | Side-by-side comparison | COMPLETE |
| `get_best_design` | Best design details | COMPLETE |
| `generate_report` | Markdown reports | COMPLETE |
| `export_data` | CSV/JSON export | COMPLETE |
| `explain_physics` | FEA concepts | COMPLETE |
| `recommend_method` | Algorithm recommendation | COMPLETE |
| `query_extractors` | Extractor list | COMPLETE |
---
## Phase 1: Canvas with React Flow - COMPLETE
### Core Components
| Component | Location | Status |
|-----------|----------|--------|
| Schema | `frontend/src/lib/canvas/schema.ts` | COMPLETE |
| Intent Serializer | `frontend/src/lib/canvas/intent.ts` | COMPLETE |
| Validation | `frontend/src/lib/canvas/validation.ts` | COMPLETE |
| Templates | `frontend/src/lib/canvas/templates.ts` | COMPLETE |
| Canvas Store | `frontend/src/hooks/useCanvasStore.ts` | COMPLETE |
| Main Canvas | `frontend/src/components/canvas/AtomizerCanvas.tsx` | COMPLETE |
### Node Types (8)
| Node | Icon | Color | Status |
|------|------|-------|--------|
| Model | `Cube` | Blue | COMPLETE |
| Solver | `Cpu` | Violet | COMPLETE |
| Design Variable | `SlidersHorizontal` | Emerald | COMPLETE |
| Extractor | `FlaskConical` | Cyan | COMPLETE |
| Objective | `Target` | Rose | COMPLETE |
| Constraint | `ShieldAlert` | Amber | COMPLETE |
| Algorithm | `BrainCircuit` | Indigo | COMPLETE |
| Surrogate | `Rocket` | Pink | COMPLETE |
### Panels
| Panel | Purpose | Status |
|-------|---------|--------|
| NodeConfigPanel | Configure selected node | COMPLETE |
| ValidationPanel | Display validation errors | COMPLETE |
| ExecuteDialog | Confirm study creation | COMPLETE |
| ChatPanel | Claude chat sidebar | COMPLETE |
| ConfigImporter | Load from study/JSON | COMPLETE |
| TemplateSelector | Choose workflow template | COMPLETE |
---
## Phase 2: LLM Intelligence Layer - COMPLETE
### Canvas MCP Tools
| Tool | Purpose | Status |
|------|---------|--------|
| `validate_canvas_intent` | Validate graph before execution | COMPLETE |
| `execute_canvas_intent` | Create study + optionally run | COMPLETE |
| `interpret_canvas_intent` | Get recommendations | COMPLETE |
### Canvas Chat Hook
| Hook | File | Status |
|------|------|--------|
| `useCanvasChat` | `frontend/src/hooks/useCanvasChat.ts` | COMPLETE |
Features:
- `processWithClaude(intent)` - Full processing with study creation
- `validateWithClaude(intent)` - Validation only
- `analyzeWithClaude(intent)` - Get recommendations
---
## Phase 3: Bidirectional Sync - COMPLETE
| Feature | Status |
|---------|--------|
| Session persistence (SQLite) | COMPLETE |
| Context builder | COMPLETE |
| Canvas to Chat bridge | COMPLETE |
| Study context loading | COMPLETE |
---
## Phase 4: Templates & Polish - COMPLETE
### Templates
| Template | Description | Complexity |
|----------|-------------|------------|
| Mass Minimization | Single-objective mass reduction | Simple |
| Multi-Objective | Mass + displacement Pareto | Medium |
| Turbo Mode | Neural-accelerated | Advanced |
| Mirror WFE | Zernike optimization | Advanced |
| Frequency Target | Modal analysis | Medium |
### UI Features
| Feature | Status |
|---------|--------|
| Lucide icons (no emojis) | COMPLETE |
| Dark theme (Atomaster) | COMPLETE |
| Responsive layout | COMPLETE |
| Full-screen canvas | COMPLETE |
| Floating action buttons | COMPLETE |
---
## Canvas V3 Upgrade - COMPLETE
All Canvas V2 and V3 features have been implemented:
| Feature | Status |
|---------|--------|
| Professional Lucide icons | COMPLETE |
| Responsive full-screen layout | COMPLETE |
| Auto-load from optimization_config.json | COMPLETE |
| NX model introspection endpoint | COMPLETE |
| Expression search dropdown | COMPLETE |
| "Process with Claude" button | COMPLETE |
| MCP canvas tools | COMPLETE |
| Backend study list endpoint | COMPLETE |
| File browser for model selection | COMPLETE |
| Introspection panel (expressions, extractors) | COMPLETE |
| Claude WebSocket fixes | COMPLETE |
| Health check endpoint | COMPLETE |
---
## File Inventory
### MCP Server (`mcp-server/atomizer-tools/`)
```
src/
├── index.ts # Server entry (imports canvasTools)
├── tools/
│ ├── study.ts # Study management
│ ├── optimization.ts # Optimization control
│ ├── analysis.ts # Analysis tools
│ ├── reporting.ts # Report generation
│ ├── physics.ts # Physics explanations
│ ├── canvas.ts # Canvas intent tools
│ └── admin.ts # Power mode tools
└── utils/
└── paths.ts # Path utilities
```
### Backend Services (`atomizer-dashboard/backend/api/services/`)
```
__init__.py
claude_agent.py # Full Claude API integration (722 lines)
claude_cli_agent.py # CLI-based agent (202 lines)
conversation_store.py # SQLite persistence (295 lines)
session_manager.py # Session lifecycle (425 lines)
context_builder.py # Context assembly (246 lines)
nx_introspection.py # NX model introspection (NEW)
```
### Backend Routes (`atomizer-dashboard/backend/api/routes/`)
```
__init__.py
terminal.py # Claude WebSocket endpoint
optimization.py # Optimization API
studies.py # Study configuration
files.py # File browser API (NEW)
nx.py # NX introspection API (NEW)
```
### Frontend Canvas (`atomizer-dashboard/frontend/src/components/canvas/`)
```
AtomizerCanvas.tsx # Main canvas component
nodes/
├── index.ts # Node type registry
├── BaseNode.tsx # Base with multiple handles
├── ModelNode.tsx
├── SolverNode.tsx
├── DesignVarNode.tsx
├── ExtractorNode.tsx
├── ObjectiveNode.tsx
├── ConstraintNode.tsx
├── AlgorithmNode.tsx
└── SurrogateNode.tsx
panels/
├── NodeConfigPanel.tsx # Node configuration sidebar
├── ValidationPanel.tsx # Validation toast display
├── ExecuteDialog.tsx # Execute confirmation modal
├── ChatPanel.tsx # Claude chat sidebar
├── ConfigImporter.tsx # Study import dialog
├── TemplateSelector.tsx # Workflow template chooser
├── FileBrowser.tsx # File picker for model selection (NEW)
├── IntrospectionPanel.tsx # Model introspection results (NEW)
└── ExpressionSelector.tsx # Expression search dropdown (NEW)
palette/
└── NodePalette.tsx
```
### Canvas Library (`atomizer-dashboard/frontend/src/lib/canvas/`)
```
schema.ts # Type definitions
intent.ts # Serialization (174 lines)
validation.ts # Graph validation
templates.ts # Workflow templates
index.ts # Exports
```
---
## Testing Checklist
### Build Verification
```bash
# Build MCP server
cd mcp-server/atomizer-tools
npm run build
# Expected: Compiles without errors
# Build frontend
cd atomizer-dashboard/frontend
npm run build
# Expected: Compiles without errors
```
### Functional Testing
- [ ] Navigate to `/canvas`
- [ ] Drag nodes from palette
- [ ] Connect nodes with edges
- [ ] Configure node properties
- [ ] Click "Validate"
- [ ] Click "Process with Claude"
- [ ] Chat panel responds
- [ ] Import from existing study
- [ ] Select workflow template
- [ ] Expression dropdown works
---
## References
- [Canvas Documentation](CANVAS.md) - Full Canvas Builder guide
- [Dashboard Overview](DASHBOARD.md) - Main dashboard documentation
- [RALPH_LOOP_CANVAS_V2](../plans/RALPH_LOOP_CANVAS_V2.md) - V2 upgrade prompt
---
*Implementation completed via autonomous Claude Code sessions.*


@@ -0,0 +1,902 @@
# Atomizer Dashboard - Master Plan
**Version**: 1.0
**Date**: November 21, 2025
**Status**: Planning Phase
---
## Executive Summary
A modern, real-time web dashboard for Atomizer that provides:
1. **Study Configurator** - Interactive UI + LLM chat interface for study setup
2. **Live Dashboard** - Real-time optimization monitoring with charts/graphs
3. **Results Viewer** - Rich markdown report display with interactive visualizations
---
## Architecture Overview
### Tech Stack Recommendation
#### Backend
- **FastAPI** - Modern Python web framework
- Native async support for real-time updates
- Automatic OpenAPI documentation
- WebSocket support for live streaming
- Easy integration with existing Python codebase
#### Frontend
- **React** - Component-based UI framework
- **Vite** - Fast development and build tool
- **TailwindCSS** - Utility-first styling
- **Recharts** - React charting library
- **React Markdown** - Markdown rendering with code highlighting
- **Socket.IO** (or native WebSocket) - Real-time communication
#### State Management
- **React Query (TanStack Query)** - Server state management
- Automatic caching and refetching
- Real-time updates
- Optimistic updates
#### Database (Optional Enhancement)
- **SQLite** (already using via Optuna) - Study metadata
- File-based JSON for real-time data (current approach works well)
---
## Application Structure
```
atomizer-dashboard/
├── backend/
│ ├── api/
│ │ ├── main.py # FastAPI app entry
│ │ ├── routes/
│ │ │ ├── studies.py # Study CRUD operations
│ │ │ ├── optimization.py # Start/stop/monitor optimization
│ │ │ ├── llm.py # LLM chat interface
│ │ │ └── reports.py # Report generation/viewing
│ │ ├── websocket/
│ │ │ └── optimization_stream.py # Real-time optimization updates
│ │ └── services/
│ │ ├── study_service.py # Study management logic
│ │ ├── optimization_service.py # Optimization runner
│ │ └── llm_service.py # LLM integration
│ └── requirements.txt
├── frontend/
│ ├── src/
│ │ ├── pages/
│ │ │ ├── Configurator.tsx # Study configuration page
│ │ │ ├── Dashboard.tsx # Live optimization dashboard
│ │ │ └── Results.tsx # Results viewer
│ │ ├── components/
│ │ │ ├── StudyForm.tsx # Manual study configuration
│ │ │ ├── LLMChat.tsx # Chat interface with Claude
│ │ │ ├── LiveCharts.tsx # Real-time optimization charts
│ │ │ ├── MarkdownReport.tsx # Markdown report renderer
│ │ │ └── ParameterTable.tsx # Design variables table
│ │ ├── hooks/
│ │ │ ├── useStudies.ts # Study data fetching
│ │ │ ├── useOptimization.ts # Optimization control
│ │ │ └── useWebSocket.ts # WebSocket connection
│ │ └── App.tsx
│ └── package.json
└── docs/
└── DASHBOARD_MASTER_PLAN.md (this file)
```
---
## Page 1: Study Configurator
### Purpose
Create and configure new optimization studies through:
- Manual form-based configuration
- LLM-assisted natural language setup (future)
### Layout
```
┌─────────────────────────────────────────────────────────────┐
│ Atomizer - Study Configurator [Home] [Help]│
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────┬─────────────────────────────────┐ │
│ │ Study Setup │ LLM Assistant (Future) │ │
│ │ │ ┌───────────────────────────┐ │ │
│ │ Study Name: │ │ Chat with Claude Code │ │ │
│ │ [____________] │ │ │ │ │
│ │ │ │ > "Create a study to │ │ │
│ │ Model Files: │ │ tune circular plate │ │ │
│ │ [Browse .prt] │ │ to 115 Hz" │ │ │
│ │ [Browse .sim] │ │ │ │ │
│ │ │ │ Claude: "I'll configure │ │ │
│ │ Design Variables: │ │ the study for you..." │ │ │
│ │ + Add Variable │ │ │ │ │
│ │ • diameter │ │ [Type message...] │ │ │
│ │ [50-150] mm │ └───────────────────────────┘ │ │
│ │ • thickness │ │ │
│ │ [2-10] mm │ Generated Configuration: │ │
│ │ │ ┌───────────────────────────┐ │ │
│ │ Optimization Goal: │ │ • Study: freq_tuning │ │ │
│ │ [Minimize ▼] │ │ • Target: 115.0 Hz │ │ │
│ │ │ │ • Variables: 2 │ │ │
│ │ Target Value: │ │ • Trials: 50 │ │ │
│ │ [115.0] Hz │ │ │ │ │
│ │ Tolerance: [0.1] │ │ [Apply Configuration] │ │ │
│ │ │ └───────────────────────────┘ │ │
│ │ [Advanced Options] │ │ │
│ │ │ │ │
│ │ [Create Study] │ │ │
│ └─────────────────────┴─────────────────────────────────┘ │
│ │
│ Recent Studies │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ • circular_plate_frequency_tuning [View] [Resume] │ │
│ │ • beam_deflection_minimization [View] [Resume] │ │
│ └───────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
### Features
#### Manual Configuration
- **Study metadata**: Name, description, tags
- **Model upload**: .prt, .sim, .fem files (drag-and-drop)
- **Design variables**:
- Add/remove parameters
- Set bounds (min, max, step)
- Units specification
- **Objective function**:
- Goal type (minimize, maximize, target)
- Target value + tolerance
- Multi-objective support (future)
- **Optimization settings**:
- Number of trials
- Sampler selection (TPE, CMA-ES, Random)
- Early stopping rules
- **Validation rules**: Optional constraints
#### LLM Assistant (Future Phase)
- **Chat interface**: Embedded terminal-like chat with Claude Code
- Natural language study configuration
- Example: "Create a study to tune the first natural frequency of a circular plate to exactly 115 Hz"
- **Real-time configuration generation**:
- LLM parses intent
- Generates `workflow_config.json` and `optimization_config.json`
- Shows preview of generated config
- User can review and approve
- **Iterative refinement**:
- User: "Change target to 120 Hz"
- User: "Add thickness constraint < 8mm"
- **Context awareness**: LLM has access to:
- Uploaded model files
- Available extractors
- Previous studies
- PROTOCOL.md guidelines
### API Endpoints
```python
# backend/api/routes/studies.py
@router.post("/studies")
async def create_study(study_config: StudyConfig):
"""Create new study from configuration"""
@router.get("/studies")
async def list_studies():
"""List all studies with metadata"""
@router.get("/studies/{study_id}")
async def get_study(study_id: str):
"""Get study details"""
@router.put("/studies/{study_id}")
async def update_study(study_id: str, config: StudyConfig):
"""Update study configuration"""
@router.delete("/studies/{study_id}")
async def delete_study(study_id: str):
"""Delete study"""
```
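The routes above accept a `StudyConfig` body, but this plan does not pin down its schema. A minimal stdlib sketch of what it might look like (dataclasses standing in for the Pydantic models FastAPI would normally use; all field names here are assumptions):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DesignVariable:
    name: str
    low: float
    high: float
    units: str = ""

    def __post_init__(self):
        # Reject inverted bounds at construction time.
        if self.low >= self.high:
            raise ValueError(f"{self.name}: low bound must be below high bound")

@dataclass
class StudyConfig:
    name: str
    goal: str                      # "minimize" | "maximize" | "target"
    variables: List[DesignVariable] = field(default_factory=list)
    target: Optional[float] = None
    n_trials: int = 50

cfg = StudyConfig(
    name="freq_tuning",
    goal="target",
    target=115.0,
    variables=[DesignVariable("diameter", 50, 150, "mm"),
               DesignVariable("thickness", 2, 10, "mm")],
)
print(cfg.name, len(cfg.variables))  # freq_tuning 2
```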
```python
# backend/api/routes/llm.py (Future)
@router.post("/llm/chat")
async def chat_with_llm(message: str, context: dict):
"""Send message to Claude Code, get response + generated config"""
@router.post("/llm/apply-config")
async def apply_llm_config(study_id: str, generated_config: dict):
"""Apply LLM-generated configuration to study"""
```
---
## Page 2: Live Optimization Dashboard
### Purpose
Monitor running optimizations in real-time with interactive visualizations.
### Layout
```
┌─────────────────────────────────────────────────────────────┐
│ Atomizer - Live Dashboard [Configurator] [Help]│
├─────────────────────────────────────────────────────────────┤
│ Study: circular_plate_frequency_tuning [Stop] [Pause]│
│ Status: RUNNING Progress: 23/50 trials (46%) │
│ Best: 0.185 Hz Time: 15m 32s ETA: 18m │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────┬───────────────────────────┐ │
│ │ Convergence Plot │ Parameter Space │ │
│ │ │ │ │
│ │ Objective (Hz) │ thickness │ │
│ │ ↑ │ ↑ │ │
│ │ 5 │ • │ 10│ │ │
│ │ │ • │ │ • • │ │
│ │ 3 │ • • │ 8│ • ⭐ • │ │
│ │ │ • • │ │ • • • • │ │
│ │ 1 │ ••• │ 6│ • • │ │
│ │ │ •⭐ │ │ • │ │
│ │ 0 └─────────────────→ Trial │ 4│ │ │
│ │ 0 10 20 30 │ └──────────────→ │ │
│ │ │ 50 100 150 │ │
│ │ Target: 115.0 Hz ±0.1 │ diameter │ │
│ │ Current Best: 115.185 Hz │ ⭐ = Best trial │ │
│ └─────────────────────────────┴───────────────────────────┘ │
│ │
│ ┌─────────────────────────────┬───────────────────────────┐ │
│ │ Recent Trials │ System Stats │ │
│ │ │ │ │
│ │ #23 0.234 Hz ✓ │ CPU: 45% │ │
│ │ #22 1.456 Hz ✓ │ Memory: 2.1 GB │ │
│ │ #21 0.876 Hz ✓ │ NX Sessions: 1 │ │
│ │ #20 0.185 Hz ⭐ NEW BEST │ Solver Queue: 0 │ │
│ │ #19 2.345 Hz ✓ │ │ │
│ │ #18 PRUNED ✗ │ Pruned: 3 (13%) │ │
│ │ │ Success: 20 (87%) │ │
│ └─────────────────────────────┴───────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Strategy Performance (Protocol 10) │ │
│ │ │ │
│ │ Phase: EXPLOITATION (CMA-ES) │ │
│ │ Transition at Trial #15 (confidence: 72%) │ │
│ │ │ │
│ │ TPE (Trials 1-15): Best = 0.485 Hz │ │
│ │ CMA-ES (Trials 16+): Best = 0.185 Hz │ │
│ │ │ │
│ └───────────────────────────────────────────────────────┘ │
│ │
│ [View Full Report] [Download Data] [Clone Study] │
└─────────────────────────────────────────────────────────────┘
```
### Features
#### Real-Time Updates (WebSocket)
- **Trial completion**: Instant notification when trial finishes
- **Best value updates**: Highlight new best trials
- **Progress tracking**: Current trial number, elapsed time, ETA
- **Status changes**: Running → Paused → Completed
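The ETA shown in the header can be derived from the average trial duration so far. With the mockup's numbers (23/50 trials in 15m32s) this reproduces the displayed 18-minute estimate:

```python
def eta_seconds(completed: int, total: int, elapsed: float) -> float:
    """Estimate remaining time from the average trial duration so far."""
    if completed == 0:
        return float("inf")   # no data yet: ETA is undefined
    return (total - completed) * elapsed / completed

# 23 of 50 trials done in 15m32s (932 s) -> roughly 18 minutes remaining
remaining = eta_seconds(23, 50, 932.0)
print(round(remaining / 60))  # 18
```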
#### Interactive Charts
1. **Convergence Plot**
- X-axis: Trial number
- Y-axis: Objective value
- Target line (if applicable)
- Best value trajectory
- Hover: Show trial details
2. **Parameter Space Visualization**
- 2D scatter plot (for 2D problems)
- 3D scatter plot (for 3D problems, using Three.js)
- High-D: Parallel coordinates plot
- Color-coded by objective value
- Click trial → Show details popup
3. **Parameter Importance** (Protocol 9)
- Bar chart from Optuna's fANOVA
- Shows which parameters matter most
- Updates after characterization phase
4. **Strategy Performance** (Protocol 10)
- Timeline showing strategy switches
- Performance comparison table
- Confidence metrics over time
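Until fANOVA results are available, a crude importance proxy is the absolute correlation between each parameter and the objective. This is illustrative only; Optuna's fANOVA accounts for parameter interactions that plain correlation misses:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Rank parameters by |correlation| with the objective (toy trial data).
trials = [
    {"diameter": 60, "thickness": 3, "objective": 4.1},
    {"diameter": 90, "thickness": 9, "objective": 1.2},
    {"diameter": 120, "thickness": 4, "objective": 0.6},
    {"diameter": 140, "thickness": 6, "objective": 0.2},
]
obj = [t["objective"] for t in trials]
importance = {p: abs(pearson([t[p] for t in trials], obj))
              for p in ("diameter", "thickness")}
print(max(importance, key=importance.get))  # diameter
```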
#### Trial Table
- Shows the 10 most recent trials (scroll to see all)

- Columns: Trial #, Objective, Parameters, Status, Time
- Click row → Expand details:
- Full parameter values
- Simulation time
- Solver logs (if failed)
- Pruning reason (if pruned)
#### Control Panel
- **Stop**: Gracefully stop optimization
- **Pause**: Pause after current trial
- **Resume**: Continue optimization
- **Clone**: Create new study with same config
#### Pruning Diagnostics
- Real-time pruning alerts
- Pruning breakdown (validation, simulation, OP2)
- False positive detection warnings
- Link to detailed pruning log
### API Endpoints
```python
# backend/api/routes/optimization.py
@router.post("/studies/{study_id}/start")
async def start_optimization(study_id: str):
"""Start optimization (spawns background process)"""
@router.post("/studies/{study_id}/stop")
async def stop_optimization(study_id: str):
"""Stop optimization gracefully"""
@router.post("/studies/{study_id}/pause")
async def pause_optimization(study_id: str):
"""Pause after current trial"""
@router.get("/studies/{study_id}/status")
async def get_status(study_id: str):
"""Get current optimization status"""
@router.get("/studies/{study_id}/history")
async def get_history(study_id: str, limit: int = 100):
"""Get trial history (reads optimization_history_incremental.json)"""
```
```python
# backend/api/websocket/optimization_stream.py
@router.websocket("/ws/optimization/{study_id}")
async def optimization_stream(websocket: WebSocket, study_id: str):
"""
WebSocket endpoint for real-time updates.
Watches:
- optimization_history_incremental.json (file watcher)
- pruning_history.json
- study.db (Optuna trial completion events)
Sends:
- trial_completed: { trial_number, objective, params, status }
- new_best: { trial_number, objective }
- status_change: { status: "running" | "paused" | "completed" }
- progress_update: { current, total, eta }
"""
```
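The file-watching side of this endpoint reduces to a diff against the last poll. A stdlib sketch of just that step, assuming the incremental history file has a `{"trials": [...]}` shape (an assumption; a real implementation would run inside the WebSocket loop or use a filesystem-events library):

```python
import json
import os
import tempfile

def new_trials(path, last_count):
    """Return trials appended to the history file since the last poll."""
    if not os.path.exists(path):
        return [], last_count
    with open(path) as f:
        history = json.load(f)
    trials = history.get("trials", [])
    return trials[last_count:], len(trials)

# Simulate one poll cycle against a throwaway history file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"trials": [{"trial_number": 0}, {"trial_number": 1}]}, f)
fresh, count = new_trials(f.name, last_count=1)
print(len(fresh), count)  # 1 2
```

Each `fresh` entry would be serialized into a `trial_completed` event and pushed to subscribed clients.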
---
## Page 3: Results Viewer
### Purpose
Display completed optimization reports with rich markdown rendering and interactive visualizations.
### Layout
```
┌─────────────────────────────────────────────────────────────┐
│ Atomizer - Results [Dashboard] [Configurator]│
├─────────────────────────────────────────────────────────────┤
│ Study: circular_plate_frequency_tuning │
│ Status: COMPLETED Trials: 50/50 Time: 35m 12s │
│ Best: 0.185 Hz (Trial #45) Target: 115.0 Hz │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌────────────────┬──────────────────────────────────────┐ │
│ │ Navigation │ Report Content │ │
│ │ │ │ │
│ │ • Summary │ # Optimization Report │ │
│ │ • Best Result │ **Study**: circular_plate_... │ │
│ │ • All Trials │ │ │
│ │ • Convergence │ ## Achieved Performance │ │
│ │ • Parameters │ - **First Frequency**: 115.185 Hz │ │
│ │ • Strategy │ - Target: 115.000 Hz │ │
│ │ • Pruning │ - Error: 0.185 Hz (0.16%) │ │
│ │ • Downloads │ │ │
│ │ │ ## Design Parameters │ │
│ │ [Live View] │ - **Inner Diameter**: 94.07 mm │ │
│ │ [Refresh] │ - **Plate Thickness**: 6.14 mm │ │
│ │ │ │ │
│ │ │ ## Convergence Plot │ │
│ │ │ [Interactive Chart Embedded] │ │
│ │ │ │ │
│ │ │ ## Top 10 Trials │ │
│ │ │ | Rank | Trial | Frequency | ... │ │
│ │ │ |------|-------|-----------|------- │ │
│ │ │ | 1 | #45 | 115.185 | ... │ │
│ │ │ │ │
│ └────────────────┴──────────────────────────────────────┘ │
│ │
│ Actions: │
│ [Download Report (MD)] [Download Data (JSON)] [Download │
│ Charts (PNG)] [Clone Study] [Continue Optimization] │
└─────────────────────────────────────────────────────────────┘
```
### Features
#### Markdown Report Rendering
- **Rich formatting**: Headings, tables, lists, code blocks
- **Syntax highlighting**: For code snippets (using highlight.js)
- **LaTeX support** (future): For mathematical equations
- **Auto-linking**: File references → clickable links
#### Embedded Interactive Charts
- **Static images replaced with live charts**:
- Convergence plot (Recharts)
- Design space scatter (Recharts or Plotly)
- Parameter importance (Recharts)
- Optuna visualizations (converted to Plotly/Recharts)
- **Hover tooltips**: Show trial details on hover
- **Zoom/pan**: Interactive exploration
- **Toggle series**: Show/hide data series
#### Navigation Sidebar
- **Auto-generated TOC**: From markdown headings
- **Smooth scrolling**: Click heading → scroll to section
- **Active section highlighting**: Current visible section
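Generating the TOC from markdown headings is a small transform. This minimal sketch assumes GitHub-style anchor slugs and ignores the edge case of `#` lines inside fenced code blocks:

```python
import re

def toc(markdown: str):
    """Extract (level, title, anchor) tuples from markdown headings."""
    entries = []
    for line in markdown.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m:
            title = m.group(2).strip()
            # GitHub-style slug: lowercase, strip punctuation, spaces -> dashes
            anchor = re.sub(r"[^a-z0-9 -]", "", title.lower()).replace(" ", "-")
            entries.append((len(m.group(1)), title, anchor))
    return entries

report = "# Optimization Report\n## Achieved Performance\n## Design Parameters\n"
print(toc(report))
```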
#### Live Report Mode
- **Watch for changes**: File watcher on `OPTIMIZATION_REPORT.md`
- **Auto-refresh**: When report is regenerated
- **Notification**: "Report updated - click to reload"
#### Data Downloads
- **Markdown report**: Raw `.md` file
- **Trial data**: JSON export of `optimization_history_incremental.json`
- **Charts**: High-res PNG/SVG exports
- **Full study**: Zip archive of entire study folder
### API Endpoints
```python
# backend/api/routes/reports.py
@router.get("/studies/{study_id}/report")
async def get_report(study_id: str):
"""Get markdown report content (reads 3_reports/OPTIMIZATION_REPORT.md)"""
@router.get("/studies/{study_id}/report/charts/{chart_name}")
async def get_chart(study_id: str, chart_name: str):
"""Get chart image (PNG/SVG)"""
@router.get("/studies/{study_id}/download")
async def download_study(study_id: str, format: str = "json"):
"""Download study data (JSON, CSV, or ZIP)"""
@router.post("/studies/{study_id}/report/regenerate")
async def regenerate_report(study_id: str):
"""Regenerate report from current data"""
```
---
## Implementation Phases
### Phase 1: Backend Foundation (Week 1)
**Goal**: Create FastAPI backend with basic study management
**Tasks**:
1. Set up FastAPI project structure
2. Implement study CRUD endpoints
3. Create optimization control endpoints (start/stop/status)
4. Add file upload handling
5. Integrate with existing Atomizer modules
6. Write API documentation (Swagger)
**Files to Create**:
- `backend/api/main.py`
- `backend/api/routes/studies.py`
- `backend/api/routes/optimization.py`
- `backend/api/services/study_service.py`
- `backend/requirements.txt`
**Deliverable**: Working REST API for study management
---
### Phase 2: Frontend Shell (Week 2)
**Goal**: Create React app with routing and basic UI
**Tasks**:
1. Set up Vite + React + TypeScript project
2. Configure TailwindCSS
3. Create page routing (Configurator, Dashboard, Results)
4. Build basic layout components (Header, Sidebar, Footer)
5. Implement study list view
6. Connect to backend API (React Query setup)
**Files to Create**:
- `frontend/src/App.tsx`
- `frontend/src/pages/*.tsx`
- `frontend/src/components/Layout.tsx`
- `frontend/src/hooks/useStudies.ts`
- `frontend/package.json`
**Deliverable**: Navigable UI shell with API integration
---
### Phase 3: Study Configurator Page (Week 3)
**Goal**: Functional study creation interface
**Tasks**:
1. Build study configuration form
2. Add file upload (drag-and-drop)
3. Design variable management (add/remove)
4. Optimization settings panel
5. Form validation
6. Study creation workflow
7. Recent studies list
**Files to Create**:
- `frontend/src/pages/Configurator.tsx`
- `frontend/src/components/StudyForm.tsx`
- `frontend/src/components/FileUpload.tsx`
- `frontend/src/components/VariableEditor.tsx`
**Deliverable**: Working study creation form
---
### Phase 4: Real-Time Dashboard (Week 4-5)
**Goal**: Live optimization monitoring
**Tasks**:
1. Implement WebSocket connection
2. Build real-time charts (Recharts):
- Convergence plot
- Parameter space scatter
- Parameter importance
3. Create trial table with auto-update
4. Add control panel (start/stop/pause)
5. System stats display
6. Pruning diagnostics integration
7. File watcher for `optimization_history_incremental.json`
**Files to Create**:
- `frontend/src/pages/Dashboard.tsx`
- `frontend/src/components/LiveCharts.tsx`
- `frontend/src/components/TrialTable.tsx`
- `frontend/src/hooks/useWebSocket.ts`
- `backend/api/websocket/optimization_stream.py`
**Deliverable**: Real-time optimization dashboard
---
### Phase 5: Results Viewer (Week 6)
**Goal**: Rich markdown report display
**Tasks**:
1. Markdown rendering (react-markdown)
2. Code syntax highlighting
3. Embedded interactive charts
4. Navigation sidebar (auto-generated TOC)
5. Live report mode (file watcher)
6. Data download endpoints
7. Chart export functionality
**Files to Create**:
- `frontend/src/pages/Results.tsx`
- `frontend/src/components/MarkdownReport.tsx`
- `frontend/src/components/ReportNavigation.tsx`
- `backend/api/routes/reports.py`
**Deliverable**: Complete results viewer
---
### Phase 6: LLM Integration (Future - Week 7-8)
**Goal**: Chat-based study configuration
**Tasks**:
1. Backend LLM integration:
- Claude API client
- Context management (uploaded files, PROTOCOL.md)
- Configuration generation from natural language
2. Frontend chat interface:
- Chat UI component
- Message streaming
- Configuration preview
- Apply/reject buttons
3. Iterative refinement workflow
**Files to Create**:
- `backend/api/routes/llm.py`
- `backend/api/services/llm_service.py`
- `frontend/src/components/LLMChat.tsx`
**Deliverable**: LLM-assisted study configuration
---
### Phase 7: Polish & Deployment (Week 9)
**Goal**: Production-ready deployment
**Tasks**:
1. Error handling and loading states
2. Responsive design (mobile-friendly)
3. Performance optimization
4. Security (CORS, authentication future)
5. Docker containerization
6. Deployment documentation
7. User guide
**Deliverables**:
- Docker compose setup
- Deployment guide
- User documentation
---
## Technical Specifications
### WebSocket Protocol
#### Client → Server
```json
{
"action": "subscribe",
"study_id": "circular_plate_frequency_tuning"
}
```
#### Server → Client Events
```json
// Trial completed
{
"type": "trial_completed",
"data": {
"trial_number": 23,
"objective": 0.234,
"params": { "diameter": 94.5, "thickness": 6.2 },
"status": "success",
"timestamp": "2025-11-21T10:30:45"
}
}
// New best trial
{
"type": "new_best",
"data": {
"trial_number": 20,
"objective": 0.185,
"params": { "diameter": 94.07, "thickness": 6.14 }
}
}
// Progress update
{
"type": "progress",
"data": {
"current": 23,
"total": 50,
"elapsed_seconds": 932,
"eta_seconds": 1080,
"status": "running"
}
}
// Status change
{
"type": "status_change",
"data": {
"status": "completed",
"reason": "Target achieved"
}
}
```
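A minimal Python consumer of these events can dispatch on the `type` field. The sketch below is hypothetical — the message shapes mirror the protocol above, but the `handle_event` helper and the summary strings it returns are ours, not part of the dashboard code:

```python
import json

# Hypothetical dispatcher for the server -> client events above.
# Message shapes follow the protocol; the summary strings are illustrative.
def handle_event(raw: str) -> str:
    msg = json.loads(raw)
    kind, data = msg["type"], msg["data"]
    if kind == "trial_completed":
        return f"trial {data['trial_number']}: objective {data['objective']:.3f}"
    if kind == "new_best":
        return f"new best at trial {data['trial_number']}: {data['objective']:.3f}"
    if kind == "progress":
        return f"{data['current']}/{data['total']} trials, status {data['status']}"
    if kind == "status_change":
        return f"status -> {data['status']} ({data['reason']})"
    return f"unhandled event type: {kind}"
```

In a real client this function would sit inside the WebSocket receive loop, one call per incoming frame.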
### File Watching Strategy
Use **watchdog** (Python) to monitor JSON files:
- `optimization_history_incremental.json` - Trial updates
- `pruning_history.json` - Pruning events
- `OPTIMIZATION_REPORT.md` - Report regeneration
```python
# backend/api/services/file_watcher.py
import asyncio

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class OptimizationWatcher(FileSystemEventHandler):
    def __init__(self, loop: asyncio.AbstractEventLoop, study_id: str):
        self.loop = loop          # event loop running the WebSocket server
        self.study_id = study_id

    def on_modified(self, event):
        if event.src_path.endswith('optimization_history_incremental.json'):
            # Watchdog callbacks run in a worker thread, so schedule the
            # async broadcast on the server's loop instead of awaiting here
            asyncio.run_coroutine_threadsafe(
                broadcast_update(self.study_id, event.src_path), self.loop
            )
```
---
## Security Considerations
### Authentication (Future Phase)
- **JWT tokens**: Secure API access
- **Session management**: User login/logout
- **Role-based access**: Admin vs. read-only users
### File Upload Security
- **File type validation**: Only .prt, .sim, .fem allowed
- **Size limits**: Max 100 MB per file
- **Virus scanning** (future): ClamAV integration
- **Sandboxed storage**: Isolated study folders
### API Rate Limiting
- **Per-endpoint limits**: Prevent abuse
- **WebSocket connection limits**: Max 10 concurrent per study
---
## Performance Optimization
### Backend
- **Async I/O**: All file operations async
- **Caching**: Redis for study metadata (future)
- **Pagination**: Large trial lists paginated
- **Compression**: Gzip responses
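The pagination point can be illustrated with a small helper. This is a sketch only — the response envelope keys (`page`, `per_page`, `total`, `items`) are our choice, not a fixed API contract:

```python
# Sketch of trial-list pagination; the response envelope is illustrative.
def paginate(items: list, page: int, per_page: int = 50) -> dict:
    start = (page - 1) * per_page
    return {
        "page": page,
        "per_page": per_page,
        "total": len(items),
        "items": items[start:start + per_page],  # empty past the last page
    }
```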
### Frontend
- **Code splitting**: Route-based chunks
- **Lazy loading**: Charts load on demand
- **Virtual scrolling**: Large trial tables
- **Image optimization**: Lazy load chart images
- **Service worker** (future): Offline support
---
## Deployment Options
### Option 1: Local Development Server
```bash
# Start backend
cd backend
python -m uvicorn api.main:app --reload
# Start frontend
cd frontend
npm run dev
```
### Option 2: Docker Compose (Production)
```yaml
# docker-compose.yml
version: '3.8'
services:
backend:
build: ./backend
ports:
- "8000:8000"
volumes:
- ./studies:/app/studies
environment:
- NX_PATH=/usr/local/nx2412
frontend:
build: ./frontend
ports:
- "3000:3000"
depends_on:
- backend
nginx:
image: nginx:alpine
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
depends_on:
- frontend
- backend
```
### Option 3: Cloud Deployment (Future)
- **Backend**: AWS Lambda / Google Cloud Run
- **Frontend**: Vercel / Netlify
- **Database**: AWS RDS / Google Cloud SQL
- **File storage**: AWS S3 / Google Cloud Storage
---
## Future Enhancements
### Advanced Features
1. **Multi-user collaboration**: Shared studies, comments
2. **Study comparison**: Side-by-side comparison of studies
3. **Experiment tracking**: MLflow integration
4. **Version control**: Git-like versioning for studies
5. **Automated reporting**: Scheduled report generation
6. **Email notifications**: Optimization complete alerts
7. **Mobile app**: React Native companion app
### Integrations
1. **CAD viewers**: Embed 3D model viewer (Three.js)
2. **Simulation previews**: Show mesh/results in browser
3. **Cloud solvers**: Run Nastran in cloud
4. **Jupyter notebooks**: Embedded analysis notebooks
5. **CI/CD**: Automated testing for optimization workflows
---
## Success Metrics
### User Experience
- **Study creation time**: < 5 minutes (manual), < 2 minutes (LLM)
- **Dashboard refresh rate**: < 1 second latency
- **Report load time**: < 2 seconds
### System Performance
- **WebSocket latency**: < 100ms
- **API response time**: < 200ms (p95)
- **Concurrent users**: Support 10+ simultaneous optimizations
---
## Dependencies
### Backend
```
fastapi>=0.104.0
uvicorn[standard]>=0.24.0
python-multipart>=0.0.6
watchdog>=3.0.0
optuna>=3.4.0
pynastran>=1.4.0
python-socketio>=5.10.0
aiofiles>=23.2.1
```
### Frontend
```
react>=18.2.0
react-router-dom>=6.18.0
@tanstack/react-query>=5.8.0
recharts>=2.10.0
react-markdown>=9.0.0
socket.io-client>=4.7.0
tailwindcss>=3.3.5
```
---
## Next Steps
1. **Review this plan**: Discuss architecture, tech stack, priorities
2. **Prototype Phase 1**: Build minimal FastAPI backend
3. **Design mockups**: High-fidelity UI designs (Figma)
4. **Set up development environment**: Create project structure
5. **Begin Phase 1 implementation**: Backend foundation
---
**Confirmed Decisions** ✅:
1. **Architecture**: REST + WebSocket
2. **Deployment**: Self-hosted (local/Docker)
3. **Authentication**: Future phase
4. **Design**: Desktop-first
5. **Implementation Priority**: Live Dashboard → Study Configurator → Results Viewer
---
**Status**: ✅ Approved - Implementation starting with Phase 4 (Live Dashboard)

# Dashboard React Implementation - Progress Report
**Date**: November 21, 2025
**Status**: 🔄 In Progress - Configuration Complete, Components Ready
---
## ✅ What Has Been Completed
### 1. Project Structure Created
```
atomizer-dashboard/frontend/
├── src/
│ ├── components/ # Reusable UI components ✅
│ ├── hooks/ # Custom React hooks ✅
│ ├── pages/ # Page components (empty - ready for Dashboard.tsx)
│ ├── types/ # TypeScript type definitions ✅
│ ├── utils/ # Utility functions (empty - ready for API utils)
│ ├── index.css # Tailwind CSS styles ✅
│ └── (main.tsx, App.tsx pending)
├── public/ # Static assets
├── index.html # HTML entry point ✅
├── package.json # Dependencies ✅
├── tsconfig.json # TypeScript config ✅
├── vite.config.ts # Vite config with proxy ✅
├── tailwind.config.js # Tailwind CSS config ✅
└── postcss.config.js # PostCSS config ✅
```
### 2. Configuration Files ✅
- **[package.json](../atomizer-dashboard/frontend/package.json)** - All dependencies specified:
- React 18.2.0
- React Router DOM 6.20.0
- Recharts 2.10.3
- TailwindCSS 3.3.6
- TypeScript 5.2.2
- Vite 5.0.8
- **[vite.config.ts](../atomizer-dashboard/frontend/vite.config.ts)** - Dev server on port 3000 with proxy to backend
- **[tsconfig.json](../atomizer-dashboard/frontend/tsconfig.json)** - Strict TypeScript configuration
- **[tailwind.config.js](../atomizer-dashboard/frontend/tailwind.config.js)** - Custom dark theme colors
- **[index.html](../atomizer-dashboard/frontend/index.html)** - HTML entry point
### 3. TypeScript Types ✅
**File**: [src/types/index.ts](../atomizer-dashboard/frontend/src/types/index.ts)
Complete type definitions for:
- `Study` - Study metadata
- `Trial` - Optimization trial data
- `PrunedTrial` - Pruned trial diagnostics
- `WebSocketMessage` - WebSocket message types
- `ConvergenceDataPoint` - Chart data for convergence plot
- `ParameterSpaceDataPoint` - Chart data for parameter space
- All API response types
### 4. Custom Hooks ✅
**File**: [src/hooks/useWebSocket.ts](../atomizer-dashboard/frontend/src/hooks/useWebSocket.ts)
Professional WebSocket hook with:
- Automatic connection management
- Reconnection with exponential backoff (up to 5 attempts)
- Type-safe message handling
- Callback system for different message types
- Connection status tracking
**Usage**:
```typescript
const { isConnected, lastMessage } = useWebSocket({
studyId: selectedStudyId,
onTrialCompleted: (trial) => setTrials(prev => [trial, ...prev]),
onNewBest: (trial) => showAlert('New best trial!'),
onProgress: (progress) => updateProgress(progress),
onTrialPruned: (pruned) => showWarning(pruned.pruning_cause),
});
```
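The reconnection schedule is easiest to see as a delay table. The sketch below assumes a 1 s base delay doubling per attempt with a cap — the hook's actual constants may differ:

```python
# Exponential-backoff delay schedule: base * factor**attempt, capped.
# Constants are illustrative; the hook's real values may differ.
def backoff_delays(base: float = 1.0, factor: float = 2.0,
                   attempts: int = 5, cap: float = 30.0) -> list:
    return [min(base * factor ** i, cap) for i in range(attempts)]
```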
### 5. Reusable UI Components ✅
Created professional, typed components:
**[Card.tsx](../atomizer-dashboard/frontend/src/components/Card.tsx)**
- Wrapper for content sections
- Optional title prop
- Tailwind CSS card styling
**[MetricCard.tsx](../atomizer-dashboard/frontend/src/components/MetricCard.tsx)**
- Display key metrics (total trials, best value, etc.)
- Customizable label and value color
- Responsive design
**[Badge.tsx](../atomizer-dashboard/frontend/src/components/Badge.tsx)**
- Status indicators (success, warning, error, info)
- Variant-based styling
- Used for study status display
**[StudyCard.tsx](../atomizer-dashboard/frontend/src/components/StudyCard.tsx)**
- Study list item component
- Progress bar visualization
- Active state highlighting
- Click handler for study selection
### 6. Tailwind CSS Setup ✅
**File**: [src/index.css](../atomizer-dashboard/frontend/src/index.css)
Custom utility classes:
- `.card` - Card container styling
- `.btn`, `.btn-primary`, `.btn-secondary` - Button variants
- `.input` - Form input styling
- `.badge-*` - Badge variants
- Custom scrollbar styling for dark theme
---
## 🔨 Next Steps - Manual Completion Required
Since `npm` is not available in the current environment, you'll need to complete the following steps manually:
### Step 1: Install Dependencies
```bash
cd atomizer-dashboard/frontend
npm install
```
This will install:
- React, React DOM, React Router
- Recharts for charts
- TailwindCSS, Autoprefixer, PostCSS
- Vite and TypeScript
- ESLint for code quality
### Step 2: Create Remaining Components
I've prepared the structure, but you'll need to create:
#### **src/main.tsx** (React entry point)
```typescript
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
import './index.css';
ReactDOM.createRoot(document.getElementById('root')!).render(
<React.StrictMode>
<App />
</React.StrictMode>,
);
```
#### **src/App.tsx** (Main application component)
```typescript
import { useState, useEffect } from 'react';
import Dashboard from './pages/Dashboard';
import type { Study } from './types';
function App() {
const [studies, setStudies] = useState<Study[]>([]);
const [selectedStudyId, setSelectedStudyId] = useState<string | null>(null);
useEffect(() => {
// Fetch studies from API
fetch('/api/optimization/studies')
.then(res => res.json())
.then(data => {
setStudies(data.studies);
if (data.studies.length > 0) {
setSelectedStudyId(data.studies[0].id);
}
})
.catch(err => console.error('Failed to load studies:', err));
}, []);
return (
<div className="min-h-screen bg-dark-700">
<Dashboard
studies={studies}
selectedStudyId={selectedStudyId}
onStudySelect={setSelectedStudyId}
/>
</div>
);
}
export default App;
```
#### **src/pages/Dashboard.tsx** (Main dashboard page)
This is the core component that needs to be created. It should:
- Use the `useWebSocket` hook for real-time updates
- Display study list in sidebar using `StudyCard` components
- Show metrics using `MetricCard` components
- Render convergence and parameter space charts with Recharts
- Handle trial feed display
- Show pruning alerts
**Template structure**:
```typescript
import { useState, useEffect } from 'react';
import { useWebSocket } from '../hooks/useWebSocket';
import { Card } from '../components/Card';
import { MetricCard } from '../components/MetricCard';
import { StudyCard } from '../components/StudyCard';
import type { Study, Trial } from '../types';
// Import Recharts components for charts
interface DashboardProps {
studies: Study[];
selectedStudyId: string | null;
onStudySelect: (studyId: string) => void;
}
export default function Dashboard({ studies, selectedStudyId, onStudySelect }: DashboardProps) {
const [trials, setTrials] = useState<Trial[]>([]);
const [bestValue, setBestValue] = useState<number>(Infinity);
// WebSocket connection
const { isConnected } = useWebSocket({
studyId: selectedStudyId,
    onTrialCompleted: (trial) => {
      setTrials(prev => [trial, ...prev].slice(0, 20));
      // Functional update avoids reading a stale bestValue from this closure
      setBestValue(prev => Math.min(prev, trial.objective));
    },
onNewBest: (trial) => {
console.log('New best trial:', trial);
// Show alert
},
onTrialPruned: (pruned) => {
console.log('Trial pruned:', pruned);
// Show warning
},
});
// Load initial trial history when study changes
useEffect(() => {
if (selectedStudyId) {
fetch(`/api/optimization/studies/${selectedStudyId}/history?limit=20`)
.then(res => res.json())
.then(data => {
setTrials(data.trials.reverse());
if (data.trials.length > 0) {
setBestValue(Math.min(...data.trials.map(t => t.objective)));
}
})
.catch(err => console.error('Failed to load history:', err));
}
}, [selectedStudyId]);
return (
<div className="container mx-auto p-6">
{/* Header */}
<header className="mb-8">
<h1 className="text-3xl font-bold text-primary-400">
Atomizer Dashboard
</h1>
<p className="text-dark-200 mt-2">
Real-time optimization monitoring
</p>
</header>
<div className="grid grid-cols-12 gap-6">
{/* Sidebar - Study List */}
<aside className="col-span-3">
<Card title="Active Studies">
<div className="space-y-3">
{studies.map(study => (
<StudyCard
key={study.id}
study={study}
isActive={study.id === selectedStudyId}
onClick={() => onStudySelect(study.id)}
/>
))}
</div>
</Card>
</aside>
{/* Main Content */}
<main className="col-span-9">
{/* Metrics Grid */}
<div className="grid grid-cols-4 gap-4 mb-6">
<MetricCard label="Total Trials" value={trials.length} />
<MetricCard
label="Best Value"
value={bestValue === Infinity ? '-' : bestValue.toFixed(4)}
valueColor="text-green-400"
/>
<MetricCard
label="Connection"
value={isConnected ? 'Connected' : 'Disconnected'}
valueColor={isConnected ? 'text-green-400' : 'text-red-400'}
/>
<MetricCard label="Pruned" value={0} valueColor="text-yellow-400" />
</div>
{/* Charts */}
<div className="grid grid-cols-2 gap-6 mb-6">
<Card title="Convergence Plot">
{/* Add Recharts LineChart here */}
<div className="h-64 flex items-center justify-center text-dark-300">
Convergence chart will go here
</div>
</Card>
<Card title="Parameter Space">
{/* Add Recharts ScatterChart here */}
<div className="h-64 flex items-center justify-center text-dark-300">
Parameter space chart will go here
</div>
</Card>
</div>
{/* Trial Feed */}
<Card title="Recent Trials">
<div className="space-y-2 max-h-96 overflow-y-auto">
{trials.map(trial => (
<div
key={trial.trial_number}
className={`p-3 rounded-lg ${
trial.objective === bestValue
? 'bg-green-900 border-l-4 border-green-400'
: 'bg-dark-500'
}`}
>
<div className="flex justify-between mb-1">
<span className="font-semibold text-primary-400">
Trial #{trial.trial_number}
</span>
<span className="font-mono text-green-400">
{trial.objective.toFixed(4)} Hz
</span>
</div>
<div className="text-xs text-dark-200">
{Object.entries(trial.design_variables).map(([key, val]) => (
<span key={key} className="mr-3">
{key}: {val.toFixed(2)}
</span>
))}
</div>
</div>
))}
</div>
</Card>
</main>
</div>
</div>
);
}
```
### Step 3: Add Recharts Integration
In the Dashboard component, add the convergence and parameter space charts using Recharts:
```typescript
import {
LineChart, Line, ScatterChart, Scatter,
XAxis, YAxis, CartesianGrid, Tooltip, Legend, ResponsiveContainer
} from 'recharts';
// Convergence chart
const convergenceData = trials.map((trial, idx) => ({
trial_number: trial.trial_number,
objective: trial.objective,
best_so_far: Math.min(...trials.slice(0, idx + 1).map(t => t.objective)),
}));
<ResponsiveContainer width="100%" height={250}>
<LineChart data={convergenceData}>
<CartesianGrid strokeDasharray="3 3" stroke="#334155" />
<XAxis dataKey="trial_number" stroke="#94a3b8" />
<YAxis stroke="#94a3b8" />
<Tooltip
contentStyle={{ backgroundColor: '#1e293b', border: 'none' }}
labelStyle={{ color: '#e2e8f0' }}
/>
<Legend />
<Line type="monotone" dataKey="objective" stroke="#60a5fa" name="Objective" />
<Line type="monotone" dataKey="best_so_far" stroke="#10b981" name="Best So Far" />
</LineChart>
</ResponsiveContainer>
// Parameter space chart (similar structure with ScatterChart)
```
### Step 4: Run Development Server
```bash
npm run dev
```
The React app will be available at **http://localhost:3000**
The Vite proxy will forward:
- `/api/*` → `http://localhost:8000/api/*`
- WebSocket connections automatically
---
## 📊 Architecture Summary
### Data Flow
```
Backend (FastAPI) :8000
↓ REST API
React App :3000 (Vite dev server with proxy)
↓ WebSocket
useWebSocket hook → Dashboard component → UI updates
```
### Component Hierarchy
```
App
└── Dashboard
├── Sidebar
│ └── StudyCard (multiple)
├── Metrics Grid
│ └── MetricCard (multiple)
├── Charts
│ ├── Card (Convergence)
│ │ └── LineChart (Recharts)
│ └── Card (Parameter Space)
│ └── ScatterChart (Recharts)
└── Trial Feed
└── Card
└── Trial Items
```
---
## 🚀 Benefits of React Implementation
### vs. Current HTML Dashboard
1. **Component Reusability** - UI components can be used across pages
2. **Type Safety** - TypeScript catches errors at compile time
3. **Better State Management** - React state + hooks vs. manual DOM manipulation
4. **Easier Testing** - React Testing Library for component tests
5. **Professional Architecture** - Scalable for adding Configurator and Results pages
6. **Hot Module Replacement** - Instant updates during development
7. **Better Performance** - React's virtual DOM optimizations
---
## 📁 Files Created
### Configuration (Complete)
- `package.json`
- `vite.config.ts`
- `tsconfig.json`
- `tsconfig.node.json`
- `tailwind.config.js`
- `postcss.config.js`
- `index.html`
### Source Files (Complete)
- `src/index.css`
- `src/types/index.ts`
- `src/hooks/useWebSocket.ts`
- `src/components/Card.tsx`
- `src/components/MetricCard.tsx`
- `src/components/Badge.tsx`
- `src/components/StudyCard.tsx`
### Source Files (Pending - Templates Provided Above)
- `src/main.tsx`
- `src/App.tsx`
- `src/pages/Dashboard.tsx`
---
## 🐛 Troubleshooting
### npm not found
- Install Node.js from nodejs.org
- Verify: `node --version` and `npm --version`
### Dependency installation fails
- Delete `node_modules` and `package-lock.json`
- Run `npm install` again
- Check for network/proxy issues
### TypeScript errors
- Run `npm run build` to see all errors
- Check `tsconfig.json` settings
- Ensure all imports use correct paths
### WebSocket connection fails
- Ensure backend is running on port 8000
- Check Vite proxy configuration in `vite.config.ts`
- Verify CORS settings in backend
### Charts not displaying
- Check Recharts is installed: `npm list recharts`
- Verify data format matches Recharts API
- Check browser console for errors
---
## 📚 Next Steps After Manual Completion
1. **Test the Dashboard**
- Start backend: `python -m uvicorn api.main:app --reload`
- Start frontend: `npm run dev`
- Open browser: http://localhost:3000
- Select a study and verify real-time updates
2. **Add Enhancements**
- Data export buttons
- Pruning alert toasts
- Study control buttons (future)
- Parameter importance chart (if Protocol 9 data available)
3. **Build for Production**
- `npm run build`
- Serve `dist/` folder from FastAPI static files
4. **Add More Pages**
- Study Configurator page
- Results Viewer page
- Add React Router for navigation
---
**Status**: ✅ Configuration and foundation complete. Ready for manual `npm install` and component completion.
**Next Session**: Complete Dashboard.tsx, integrate Recharts, test end-to-end.

# Dashboard Implementation - Session Summary
**Date**: November 21, 2025
**Status**: ✅ Functional Live Dashboard Complete
---
## What We Built Today
### ✅ Complete FastAPI Backend
**Location**: `atomizer-dashboard/backend/`
**Features**:
- **REST API**: Study listing, status, history, pruning data
- **WebSocket Streaming**: Real-time trial updates via file watching
- **File Watcher**: Monitors `optimization_history_incremental.json` automatically
- **CORS Configured**: Serves dashboard at http://localhost:8000
**Files Created**:
- `api/main.py` - FastAPI app with WebSocket support
- `api/routes/optimization.py` - REST endpoints
- `api/websocket/optimization_stream.py` - WebSocket + file watching
- `requirements.txt` - Dependencies
- `README.md` - Complete API documentation
### ✅ Live Dashboard (HTML)
**Location**: `atomizer-dashboard/dashboard-test.html`
**Features Working**:
- Auto-discovers all running studies
- Real-time WebSocket connection to selected study
- Live metrics (best value, trial count, average objective)
- Animated trial feed with last 20 trials
- Progress bars for each study
- Green highlighting for new best trials
- Connection status monitoring
- WebSocket message log
**Access**: http://localhost:8000
---
## How to Use
### Start the Backend
```bash
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --host 0.0.0.0 --port 8000
```
### Access Dashboard
Open browser: http://localhost:8000
### Monitor Live Optimization
1. Dashboard loads all active studies
2. Click any study in left sidebar
3. Watch real-time updates stream in
4. See new trials appear instantly
5. Best trials highlighted in green
---
## Architecture
### Backend Stack
- **FastAPI**: Async Python web framework
- **Uvicorn**: ASGI server
- **Watchdog**: File system monitoring
- **WebSockets**: Real-time bidirectional communication
### Communication Flow
```
Optimization completes trial
Updates optimization_history_incremental.json
Watchdog detects file change
OptimizationFileHandler processes update
WebSocket broadcasts to all connected clients
Dashboard JavaScript receives message
DOM updates with new trial data (animated)
```
### WebSocket Protocol
**Message Types**:
- `connected` - Initial connection confirmation
- `trial_completed` - New trial finished
- `new_best` - New best trial found
- `progress` - Progress update (X/Y trials)
- `trial_pruned` - Trial pruned with diagnostics
---
## ✅ Completed Enhancements (Option A)
### 1. Charts (Chart.js v4.4.0)
- ✅ **Convergence plot** - Line chart with objective value + "best so far" trajectory
- ✅ **Parameter space** - 2D scatter plot of first two design variables
- ⏸️ **Parameter importance** - Planned for React frontend (requires Protocol 9 data)
### 2. Pruning Alerts
- ✅ Toast notifications for pruned trials
- ✅ Pruning count in metric dashboard
- ✅ Orange warning styling for pruned trials
### 3. Data Export
- ✅ Download history as JSON
- ✅ Export to CSV
- ✅ Success alerts on export
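The JSON→CSV trial export can be sketched in a few lines. The column names below are illustrative, not the dashboard's exact schema:

```python
import csv
import io

# Flatten trial dicts to CSV text; the chosen columns are illustrative.
def trials_to_csv(trials: list) -> str:
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["trial_number", "objective"])
    for t in trials:
        writer.writerow([t["trial_number"], t["objective"]])
    return buf.getvalue()
```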
### 4. Study Details
- ✅ Show target value (in study list)
- ✅ Display progress (current/total trials)
- ✅ Best value for each study
- ⏸️ Show intelligent optimizer strategy - Planned for React frontend
---
## Future Phases
### Phase 2: React Frontend
- Full React + Vite + TypeScript app
- Professional component structure
- TailwindCSS styling
- React Query for state management
- Multiple pages (Dashboard, Configurator, Results)
### Phase 3: Study Configurator
- Create new studies via UI
- Upload model files
- Configure design variables
- LLM chat interface (future)
### Phase 4: Results Viewer
- Markdown report rendering
- Interactive charts embedded
- Data download options
---
## Files Created This Session
```
atomizer-dashboard/
├── backend/
│ ├── api/
│ │ ├── __init__.py
│ │ ├── main.py # FastAPI app ✅
│ │ ├── routes/
│ │ │ ├── __init__.py
│ │ │ └── optimization.py # REST endpoints ✅
│ │ └── websocket/
│ │ ├── __init__.py
│ │ └── optimization_stream.py # WebSocket + file watching ✅
│ ├── requirements.txt # Dependencies ✅
│ └── README.md # API docs ✅
├── dashboard-test.html # Basic live dashboard ✅
├── dashboard-enhanced.html # Enhanced with charts/export ✅
├── README.md # Dashboard overview ✅
└── docs/ (project root)
├── DASHBOARD_MASTER_PLAN.md # Full architecture plan ✅
├── DASHBOARD_IMPLEMENTATION_STATUS.md # Implementation status ✅
└── DASHBOARD_SESSION_SUMMARY.md # This file ✅
```
---
## Testing Performed
### Backend Testing
✅ REST API endpoints working
- `GET /api/optimization/studies` - Returns all studies
- `GET /api/optimization/studies/{id}/status` - Returns study details
- `GET /api/optimization/studies/{id}/history` - Returns trials
- `GET /api/optimization/studies/{id}/pruning` - Returns pruning data
✅ WebSocket connection working
- Connects successfully to study
- Receives real-time updates
- Handles disconnection gracefully
- Multiple concurrent connections supported
✅ File watching working
- Detects changes to optimization_history_incremental.json
- Broadcasts to all connected clients
- Processes trial data correctly
### Frontend Testing
✅ Study discovery working
✅ WebSocket connection established
✅ Real-time updates displaying
✅ Animations working
✅ Progress bars updating
---
## Known Limitations
1. **No study control** - Can't start/stop optimization from UI
2. **Single HTML file** - Not a full React app yet
3. **No parameter importance chart** - Requires Protocol 9 data (planned for React frontend)
---
## Performance
- **WebSocket latency**: <100ms typical
- **File watching overhead**: ~1ms per trial
- **Dashboard refresh**: Instant via WebSocket push
- **Concurrent studies**: Tested with 5+ simultaneous streams
- **Memory**: ~50MB per active study observer
---
## Success Criteria Met ✅
- [x] Backend API functional
- [x] WebSocket streaming working
- [x] Real-time updates displaying
- [x] Multiple studies supported
- [x] File watching reliable
- [x] Dashboard accessible and usable
- [x] Documentation complete
---
## Ready for Next Session
**Completed this session** (Option A):
1. Chart.js convergence plot
2. Parameter space scatter plot
3. Pruning diagnostics display
4. Data export (JSON/CSV)
**Medium term**:
5. Build full React app
6. Add study configurator
7. Add results viewer
8. Deploy with Docker
---
**Session Status**: 🎉 Enhanced live dashboard complete with charts, pruning alerts, and data export!
---
## Features Demonstrated
- ✅ Real-time WebSocket updates (<100ms latency)
- ✅ Interactive Chart.js visualizations
- ✅ Pruning diagnostics and alerts
- ✅ Data export (JSON/CSV)
- ✅ Study auto-discovery
- ✅ Connection monitoring
---
**Next Session**: Build full React + Vite + TypeScript frontend (see [DASHBOARD_MASTER_PLAN.md](DASHBOARD_MASTER_PLAN.md))

# Atomizer Neural Features - Complete Guide
**Version**: 1.0.0
**Last Updated**: 2025-11-25
**Status**: Production Ready
---
## Executive Summary
AtomizerField brings **Graph Neural Network (GNN) acceleration** to Atomizer, enabling:
| Metric | Traditional FEA | Neural Network | Improvement |
|--------|-----------------|----------------|-------------|
| Time per evaluation | 10-30 minutes | 4.5 milliseconds | **2,000-500,000x** |
| Trials per hour | 2-6 | 800,000+ | **>100,000x** |
| Design exploration | ~50 designs | ~50,000 designs | **1000x** |
This guide covers all neural network features, architectures, and integration points.
---
## Table of Contents
1. [Overview](#overview)
2. [Neural Model Types](#neural-model-types)
3. [Architecture Deep Dive](#architecture-deep-dive)
4. [Training Pipeline](#training-pipeline)
5. [Integration Layer](#integration-layer)
6. [Loss Functions](#loss-functions)
7. [Uncertainty Quantification](#uncertainty-quantification)
8. [Pre-trained Models](#pre-trained-models)
9. [Configuration Reference](#configuration-reference)
10. [Performance Benchmarks](#performance-benchmarks)
---
## Overview
### What is AtomizerField?
AtomizerField is a neural network system that learns to predict FEA simulation results. Instead of solving physics equations numerically (expensive), it uses trained neural networks to predict results instantly.
```
Traditional Workflow:
Design → NX Model → Mesh → Solve (30 min) → Results → Objective
Neural Workflow:
Design → Neural Network (4.5 ms) → Results → Objective
```
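The headline speedup follows directly from the two evaluation times. A quick check of the arithmetic, using the figures quoted in this guide:

```python
# Speedup = FEA wall time / neural inference time, in the same units.
def speedup(fea_minutes: float, nn_milliseconds: float) -> float:
    return (fea_minutes * 60_000.0) / nn_milliseconds
```

With a 30-minute solve and 4.5 ms inference, this gives 400,000x; a 10-minute solve gives roughly 133,000x.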
### Core Components
| Component | File | Purpose |
|-----------|------|---------|
| **BDF/OP2 Parser** | `neural_field_parser.py` | Converts NX Nastran files to neural format |
| **Data Validator** | `validate_parsed_data.py` | Physics and quality checks |
| **Field Predictor** | `field_predictor.py` | GNN for displacement/stress fields |
| **Parametric Predictor** | `parametric_predictor.py` | GNN for direct objective prediction |
| **Physics Loss** | `physics_losses.py` | Physics-informed training |
| **Neural Surrogate** | `neural_surrogate.py` | Integration with Atomizer |
| **Neural Runner** | `runner_with_neural.py` | Optimization with neural acceleration |
---
## Neural Model Types
### 1. Field Predictor GNN
**Purpose**: Predicts complete displacement and stress fields across the entire mesh.
**Architecture**:
```
Input Features (12D per node):
├── Node coordinates (x, y, z)
├── Material properties (E, nu, rho)
├── Boundary conditions (fixed/free per DOF)
└── Load information (force magnitude, direction)
Edge Features (5D per edge):
├── Edge length
├── Direction vector (3D)
└── Element type indicator
GNN Layers (6 message passing):
├── MeshGraphConv (custom for FEA topology)
├── Layer normalization
├── ReLU activation
└── Dropout (0.1)
Output (per node):
├── Displacement (6 DOF: Tx, Ty, Tz, Rx, Ry, Rz)
└── Von Mises stress (1 value)
```
**Parameters**: 718,221 trainable parameters
**Use Case**: When you need full field predictions (stress distribution, deformation shape).
### 2. Parametric Predictor GNN (Recommended)
**Purpose**: Predicts all 4 optimization objectives directly from design parameters.
**Architecture**:
```
Design Parameters (4D):
├── beam_half_core_thickness
├── beam_face_thickness
├── holes_diameter
└── hole_count
Design Encoder (MLP):
├── Linear(4 → 64)
├── ReLU
├── Linear(64 → 128)
└── ReLU
GNN Backbone (4 layers):
├── Design-conditioned message passing
├── Hidden channels: 128
└── Global pooling: Mean + Max
Scalar Heads (MLP):
├── Linear(384 → 128)
├── ReLU
├── Linear(128 → 64)
├── ReLU
└── Linear(64 → 4)
Output (4 objectives):
├── mass (grams)
├── frequency (Hz)
├── max_displacement (mm)
└── max_stress (MPa)
```
**Parameters**: ~500,000 trainable parameters
**Use Case**: Direct optimization objective prediction (fastest option).
### 3. Ensemble Models
**Purpose**: Uncertainty quantification through multiple model predictions.
**How it works**:
1. Train 3-5 models with different random seeds
2. At inference, run all models
3. Use mean for prediction, std for uncertainty
4. High uncertainty → trigger FEA validation
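The four steps above reduce to a few lines of array math. A minimal NumPy sketch (the confidence formula mirrors the one used in the uncertainty section later; the 0.85 threshold matches the default `confidence_threshold`; the function name is illustrative):

```python
import numpy as np

def ensemble_decision(member_predictions, threshold=0.85):
    """Combine predictions from several independently trained models.

    member_predictions: array-like of shape [n_models, n_objectives]
    Returns (mean, std, confidence, needs_fea_validation).
    """
    preds = np.asarray(member_predictions, dtype=float)
    mean = preds.mean(axis=0)
    std = preds.std(axis=0)
    # Confidence decays as the coefficient of variation (std / mean) grows
    confidence = 1.0 / (1.0 + std / np.abs(mean))
    needs_fea = bool((confidence < threshold).any())
    return mean, std, confidence, needs_fea

# Five models agree closely on mass -> high confidence, no FEA fallback
mean, std, conf, needs_fea = ensemble_decision(
    [[45.1], [45.3], [45.2], [45.0], [45.4]]
)
```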
---
## Architecture Deep Dive
### Graph Construction from FEA Mesh
The neural network treats the FEA mesh as a graph:
```
FEA Mesh → Neural Graph
─────────────────────────────────────────────
Nodes (grid points) → Graph nodes
Elements (CTETRA, CQUAD) → Graph edges
Node properties → Node features
Element connectivity → Edge connections
```
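As a concrete sketch of the mapping above (pure NumPy, with a hypothetical two-tetra mesh): each CTETRA element contributes the edges between its four nodes, and edges shared between elements are deduplicated so each undirected edge appears once.

```python
import numpy as np
from itertools import combinations

def tetra_mesh_to_edges(elements):
    """Expand CTETRA connectivity into a deduplicated undirected edge list.

    elements: iterable of 4-node-id tuples (one per tetra element).
    Returns an array of shape [n_edges, 2] of sorted node pairs.
    """
    edges = set()
    for elem in elements:
        for a, b in combinations(elem, 2):  # 6 edges per tetra
            edges.add((min(a, b), max(a, b)))
    return np.array(sorted(edges))

# Two tetras sharing the face (0, 1, 2): 12 raw edges, 9 unique
edges = tetra_mesh_to_edges([(0, 1, 2, 3), (0, 1, 2, 4)])
```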
### Message Passing
The GNN learns physics through message passing:
```python
# Simplified message passing
for layer in gnn_layers:
# Aggregate neighbor information
messages = aggregate(
node_features,
edge_features,
adjacency
)
# Update node features
node_features = update(
node_features,
messages
)
```
This is analogous to how FEA distributes forces through elements.
### Design Conditioning (Parametric GNN)
The parametric model conditions the GNN on design parameters:
```python
# Design parameters are encoded
design_encoding = design_encoder(design_params) # [batch, 128]
# Broadcast to all nodes
node_features = node_features + design_encoding.unsqueeze(1)
# GNN processes with design context
for layer in gnn_layers:
node_features = layer(node_features, edge_index, edge_attr)
```
---
## Training Pipeline
### Step 1: Collect Training Data
Run optimization with training data export:
```python
# In workflow_config.json
{
"training_data_export": {
"enabled": true,
"export_dir": "atomizer_field_training_data/my_study"
}
}
```
Output structure:
```
atomizer_field_training_data/my_study/
├── trial_0001/
│ ├── input/model.bdf # Nastran input
│ ├── output/model.op2 # Binary results
│ └── metadata.json # Design params + objectives
├── trial_0002/
│ └── ...
└── study_summary.json
```
### Step 2: Parse to Neural Format
```bash
cd atomizer-field
python batch_parser.py ../atomizer_field_training_data/my_study
```
Creates HDF5 + JSON files:
```
trial_0001/
├── neural_field_data.json # Metadata, structure
└── neural_field_data.h5 # Mesh coordinates, field results
```
### Step 3: Train Model
**Field Predictor**:
```bash
python train.py \
--train_dir ../training_data/parsed \
--epochs 200 \
--model FieldPredictorGNN \
--hidden_channels 128 \
--num_layers 6 \
--physics_loss_weight 0.3
```
**Parametric Predictor** (recommended):
```bash
python train_parametric.py \
--train_dir ../training_data/parsed \
--val_dir ../validation_data/parsed \
--epochs 200 \
--hidden_channels 128 \
--num_layers 4
```
### Step 4: Validate
```bash
python validate.py --checkpoint runs/my_model/checkpoint_best.pt
```
Expected output:
```
Validation Results:
├── Mean Absolute Error: 2.3% (mass), 1.8% (frequency)
├── R² Score: 0.987
├── Inference Time: 4.5ms ± 0.8ms
└── Physics Violations: 0.2%
```
### Step 5: Deploy
```python
# In workflow_config.json
{
"neural_surrogate": {
"enabled": true,
"model_checkpoint": "atomizer-field/runs/my_model/checkpoint_best.pt",
"confidence_threshold": 0.85
}
}
```
---
## Integration Layer
### NeuralSurrogate Class
```python
from optimization_engine.neural_surrogate import NeuralSurrogate
# Load trained model
surrogate = NeuralSurrogate(
model_path="atomizer-field/runs/model/checkpoint_best.pt",
device="cuda",
confidence_threshold=0.85
)
# Predict
results, confidence, used_nn = surrogate.predict(
design_variables={"thickness": 5.0, "width": 10.0},
bdf_template="model.bdf"
)
if used_nn:
print(f"Predicted with {confidence:.1%} confidence")
else:
print("Fell back to FEA (low confidence)")
```
### ParametricSurrogate Class (Recommended)
```python
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study
# Auto-detect and load
surrogate = create_parametric_surrogate_for_study()
# Predict all 4 objectives
results = surrogate.predict({
"beam_half_core_thickness": 7.0,
"beam_face_thickness": 2.5,
"holes_diameter": 35.0,
"hole_count": 10.0
})
print(f"Mass: {results['mass']:.2f} g")
print(f"Frequency: {results['frequency']:.2f} Hz")
print(f"Max displacement: {results['max_displacement']:.6f} mm")
print(f"Max stress: {results['max_stress']:.2f} MPa")
print(f"Inference time: {results['inference_time_ms']:.2f} ms")
```
### HybridOptimizer Class
```python
from optimization_engine.neural_surrogate import create_hybrid_optimizer_from_config
# Smart FEA/NN switching
hybrid = create_hybrid_optimizer_from_config(config_path)
for trial in range(1000):
if hybrid.should_use_nn(trial):
result = hybrid.predict_with_nn(design_vars)
else:
result = hybrid.run_fea(design_vars)
hybrid.add_training_sample(design_vars, result)
```
---
## Loss Functions
### 1. MSE Loss (Standard)
```python
loss = mean_squared_error(predicted, target)
```
Equal weighting of all outputs. Simple but effective.
### 2. Relative Loss
```python
loss = mean(|predicted - target| / |target|)
```
Better for multi-scale outputs (stress in MPa, displacement in mm).
### 3. Physics-Informed Loss
```python
loss = mse_loss + lambda_physics * physics_loss
physics_loss = (
equilibrium_violation + # F = ma
constitutive_violation + # σ = Eε
boundary_condition_violation # u = 0 at supports
)
```
Enforces physical laws during training. Improves generalization.
### 4. Max Error Loss
```python
loss = max(|predicted - target|)
```
Penalizes worst predictions. Critical for safety-critical applications.
### When to Use Each
| Loss Function | Use Case |
|--------------|----------|
| MSE | General training, balanced errors |
| Relative | Multi-scale outputs |
| Physics | Better generalization, extrapolation |
| Max Error | Safety-critical, avoid outliers |
---
## Uncertainty Quantification
### Ensemble Method
```python
from atomizer_field.neural_models.uncertainty import EnsemblePredictor
# Load 5 models
ensemble = EnsemblePredictor([
"model_fold_1.pt",
"model_fold_2.pt",
"model_fold_3.pt",
"model_fold_4.pt",
"model_fold_5.pt"
])
# Predict with uncertainty
mean_pred, std_pred = ensemble.predict_with_uncertainty(design_vars)
confidence = 1.0 / (1.0 + std_pred / mean_pred)
if confidence < 0.85:
# Trigger FEA validation
fea_result = run_fea(design_vars)
```
### Monte Carlo Dropout
```python
# Calling model.train() at inference time keeps dropout layers active
model.train()
predictions = []
with torch.no_grad():
    for _ in range(10):
        predictions.append(model(input_data))
stacked = torch.stack(predictions)   # [10, ...]
mean_pred = stacked.mean(dim=0)
std_pred = stacked.std(dim=0)
```
---
## Pre-trained Models
### Available Models
| Model | Location | Design Variables | Objectives |
|-------|----------|------------------|------------|
| UAV Arm (Parametric) | `runs/parametric_uav_arm_v2/` | 4 | 4 |
| UAV Arm (Field) | `runs/uav_arm_model/` | 4 | 2 fields |
### Using Pre-trained Models
```python
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study
# Auto-detects model in atomizer-field/runs/
surrogate = create_parametric_surrogate_for_study()
# Immediate predictions - no training needed!
result = surrogate.predict({
"beam_half_core_thickness": 7.0,
"beam_face_thickness": 2.5,
"holes_diameter": 35.0,
"hole_count": 10.0
})
```
---
## Configuration Reference
### Complete workflow_config.json
```json
{
"study_name": "neural_optimization_study",
"neural_surrogate": {
"enabled": true,
"model_checkpoint": "atomizer-field/runs/parametric_uav_arm_v2/checkpoint_best.pt",
"confidence_threshold": 0.85,
"device": "cuda",
"cache_predictions": true,
"cache_size": 10000
},
"hybrid_optimization": {
"enabled": true,
"exploration_trials": 30,
"validation_frequency": 20,
"retrain_frequency": 100,
"drift_threshold": 0.15,
"retrain_on_drift": true
},
"training_data_export": {
"enabled": true,
"export_dir": "atomizer_field_training_data/my_study",
"include_failed_trials": false
}
}
```
### Parameter Reference
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `enabled` | bool | false | Enable neural surrogate |
| `model_checkpoint` | str | - | Path to trained model |
| `confidence_threshold` | float | 0.85 | Min confidence for NN |
| `device` | str | "cuda" | "cuda" or "cpu" |
| `cache_predictions` | bool | true | Cache repeated designs |
| `exploration_trials` | int | 30 | Initial FEA trials |
| `validation_frequency` | int | 20 | FEA validation interval |
| `retrain_frequency` | int | 100 | Retrain interval |
| `drift_threshold` | float | 0.15 | Max error before retrain |
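A small helper for applying the defaults from the table when a key is absent from `workflow_config.json` (a sketch; the actual config loader may differ):

```python
import json

# Defaults taken from the parameter reference table above
NEURAL_DEFAULTS = {
    "enabled": False,
    "confidence_threshold": 0.85,
    "device": "cuda",
    "cache_predictions": True,
}

def load_neural_config(config_text):
    """Merge user-supplied neural_surrogate settings over the documented defaults."""
    user = json.loads(config_text).get("neural_surrogate", {})
    return {**NEURAL_DEFAULTS, **user}

cfg = load_neural_config('{"neural_surrogate": {"enabled": true, "device": "cpu"}}')
```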
---
## Performance Benchmarks
### UAV Arm Study (4 design variables, 4 objectives)
| Metric | FEA Only | Neural Only | Hybrid |
|--------|----------|-------------|--------|
| Time per trial | 10.2s | 4.5ms | 0.5s avg |
| Total time (1000 trials) | 2.8 hours | 4.5 seconds | 8 minutes |
| Prediction error | - | 2.3% | 1.8% |
| Speedup | 1x | 2,267x | 21x |
### Accuracy by Objective
| Objective | MAE | MAPE | R² |
|-----------|-----|------|-----|
| Mass | 0.5g | 0.8% | 0.998 |
| Frequency | 2.1 Hz | 1.2% | 0.995 |
| Max Displacement | 0.001mm | 2.8% | 0.987 |
| Max Stress | 3.2 MPa | 3.5% | 0.981 |
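The MAE, MAPE, and R² columns in these tables can be reproduced from raw predictions with the standard formulas; a NumPy sketch using illustrative numbers:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (MAE, MAPE in percent, R^2) for one objective."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_pred - y_true))
    mape = 100.0 * np.mean(np.abs(y_pred - y_true) / np.abs(y_true))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mae, mape, r2

# Illustrative values only (not from the benchmark above)
mae, mape, r2 = regression_metrics([50.0, 60.0, 40.0], [50.5, 59.5, 40.0])
```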
### GPU vs CPU
| Device | Inference Time | Throughput |
|--------|---------------|------------|
| CPU (i7-12700) | 45ms | 22/sec |
| GPU (RTX 3080) | 4.5ms | 220/sec |
| Speedup | 10x | 10x |
---
## Quick Reference
### Files and Locations
```
atomizer-field/
├── neural_field_parser.py # Parse BDF/OP2
├── batch_parser.py # Batch processing
├── validate_parsed_data.py # Data validation
├── train.py # Train field predictor
├── train_parametric.py # Train parametric model
├── predict.py # Inference engine
├── neural_models/
│ ├── field_predictor.py # GNN architecture
│ ├── parametric_predictor.py # Parametric GNN
│ ├── physics_losses.py # Loss functions
│ ├── uncertainty.py # Uncertainty quantification
│ └── data_loader.py # PyTorch dataset
├── runs/ # Trained models
│ └── parametric_uav_arm_v2/
│ └── checkpoint_best.pt
└── tests/ # 18 comprehensive tests
```
### Common Commands
```bash
# Parse training data
python batch_parser.py ../training_data
# Train parametric model
python train_parametric.py --train_dir ../data --epochs 200
# Validate model
python validate.py --checkpoint runs/model/checkpoint_best.pt
# Run tests
python -m pytest tests/ -v
```
### Python API
```python
# Quick start
from optimization_engine.neural_surrogate import create_parametric_surrogate_for_study
surrogate = create_parametric_surrogate_for_study()
result = surrogate.predict({"param1": 1.0, "param2": 2.0})
```
---
## See Also
- [Neural Workflow Tutorial](NEURAL_WORKFLOW_TUTORIAL.md) - Step-by-step guide
- [GNN Architecture](GNN_ARCHITECTURE.md) - Technical deep dive
- [Physics Loss Guide](PHYSICS_LOSS_GUIDE.md) - Loss function selection
- [Atomizer-Field Integration Plan](ATOMIZER_FIELD_INTEGRATION_PLAN.md) - Implementation details
---
**AtomizerField**: Revolutionizing structural optimization through neural field learning.
*Built with PyTorch Geometric, designed for the future of engineering.*


---
# Neural Workflow Tutorial
**End-to-End Guide: From FEA Data to Neural-Accelerated Optimization**
This tutorial walks you through the complete workflow of setting up neural network acceleration for your optimization studies.
---
## Prerequisites
Before starting, ensure you have:
- [ ] Atomizer installed and working
- [ ] An NX Nastran model with parametric geometry
- [ ] Python environment with PyTorch and PyTorch Geometric
- [ ] GPU recommended (CUDA) but not required
### Install Neural Dependencies
```bash
# Install PyTorch (with CUDA support)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
# Install PyTorch Geometric
pip install torch-geometric
# Install other dependencies
pip install h5py pyNastran
```
---
## Overview
The workflow consists of 5 phases:
```
Phase 1: Initial FEA Study → Collect Training Data
Phase 2: Parse Data → Convert BDF/OP2 to Neural Format
Phase 3: Train Model → Train GNN on Collected Data
Phase 4: Validate → Verify Model Accuracy
Phase 5: Deploy → Run Neural-Accelerated Optimization
```
**Time Investment**:
- Phase 1: 4-8 hours (initial FEA runs)
- Phase 2: 30 minutes (parsing)
- Phase 3: 30-60 minutes (training)
- Phase 4: 10 minutes (validation)
- Phase 5: Minutes instead of hours!
---
## Phase 1: Collect Training Data
### Step 1.1: Configure Training Data Export
Edit your `workflow_config.json` to enable training data export:
```json
{
"study_name": "uav_arm_optimization",
"design_variables": [
{
"name": "beam_half_core_thickness",
"expression_name": "beam_half_core_thickness",
"min": 5.0,
"max": 15.0,
"units": "mm"
},
{
"name": "beam_face_thickness",
"expression_name": "beam_face_thickness",
"min": 1.0,
"max": 5.0,
"units": "mm"
},
{
"name": "holes_diameter",
"expression_name": "holes_diameter",
"min": 20.0,
"max": 50.0,
"units": "mm"
},
{
"name": "hole_count",
"expression_name": "hole_count",
"min": 5,
"max": 15,
"units": ""
}
],
"objectives": [
{"name": "mass", "direction": "minimize"},
{"name": "frequency", "direction": "maximize"},
{"name": "max_displacement", "direction": "minimize"},
{"name": "max_stress", "direction": "minimize"}
],
"training_data_export": {
"enabled": true,
"export_dir": "atomizer_field_training_data/uav_arm"
},
"optimization_settings": {
"n_trials": 50,
"sampler": "TPE"
}
}
```
### Step 1.2: Run Initial Optimization
```bash
cd studies/uav_arm_optimization
python run_optimization.py --trials 50
```
This will:
1. Run 50 FEA simulations
2. Export each trial's BDF and OP2 files
3. Save design parameters and objectives
**Expected output**:
```
Trial 1/50: beam_half_core_thickness=10.2, beam_face_thickness=2.8...
→ Exporting training data to atomizer_field_training_data/uav_arm/trial_0001/
Trial 2/50: beam_half_core_thickness=7.5, beam_face_thickness=3.1...
→ Exporting training data to atomizer_field_training_data/uav_arm/trial_0002/
...
```
### Step 1.3: Verify Exported Data
Check the exported data structure:
```bash
ls atomizer_field_training_data/uav_arm/
```
Expected:
```
trial_0001/
trial_0002/
...
trial_0050/
study_summary.json
README.md
```
Each trial folder contains:
```
trial_0001/
├── input/
│ └── model.bdf # Nastran input deck
├── output/
│ └── model.op2 # Binary results
└── metadata.json # Design variables and objectives
```
---
## Phase 2: Parse Data
### Step 2.1: Navigate to AtomizerField
```bash
cd atomizer-field
```
### Step 2.2: Parse All Cases
```bash
python batch_parser.py ../atomizer_field_training_data/uav_arm
```
**What this does**:
1. Reads each BDF file (mesh, materials, BCs, loads)
2. Reads each OP2 file (displacement, stress, strain fields)
3. Converts to HDF5 + JSON format
4. Validates physics consistency
**Expected output**:
```
Processing 50 cases...
[1/50] trial_0001: ✓ Parsed successfully (2.3s)
[2/50] trial_0002: ✓ Parsed successfully (2.1s)
...
[50/50] trial_0050: ✓ Parsed successfully (2.4s)
Summary:
├── Successful: 50/50
├── Failed: 0
└── Total time: 115.2s
```
### Step 2.3: Validate Parsed Data
Run validation on a few cases:
```bash
python validate_parsed_data.py ../atomizer_field_training_data/uav_arm/trial_0001
```
**Expected output**:
```
Validation Results for trial_0001:
├── File Structure: ✓ Valid
├── Mesh Quality: ✓ Valid (15,432 nodes, 8,765 elements)
├── Material Properties: ✓ Valid (E=70 GPa, nu=0.33)
├── Boundary Conditions: ✓ Valid (12 fixed nodes)
├── Load Data: ✓ Valid (1 gravity load)
├── Displacement Field: ✓ Valid (max: 0.042 mm)
├── Stress Field: ✓ Valid (max: 125.3 MPa)
└── Overall: ✓ VALID
```
---
## Phase 3: Train Model
### Step 3.1: Split Data
Create train/validation split:
```bash
# Create directories
mkdir -p ../atomizer_field_training_data/uav_arm_train
mkdir -p ../atomizer_field_training_data/uav_arm_val
# Move 80% to train, 20% to validation
# (You can write a script or do this manually)
```
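The comment above suggests a script. A minimal sketch that shuffles trial folders with a fixed seed and copies them into the two directories (directory names match the ones created above; `copy_split` and `split_trials` are illustrative helpers, not part of AtomizerField):

```python
import random
import shutil
from pathlib import Path

def split_trials(trial_names, train_frac=0.8, seed=42):
    """Deterministically split trial folder names into train/val lists."""
    names = sorted(trial_names)
    random.Random(seed).shuffle(names)
    n_train = int(len(names) * train_frac)
    return names[:n_train], names[n_train:]

def copy_split(src_root, train_root, val_root):
    """Copy trial_* folders from src_root into the train and val directories."""
    trials = [p.name for p in Path(src_root).iterdir() if p.name.startswith("trial_")]
    train, val = split_trials(trials)
    for dest, group in [(Path(train_root), train), (Path(val_root), val)]:
        dest.mkdir(parents=True, exist_ok=True)
        for name in group:
            shutil.copytree(Path(src_root) / name, dest / name, dirs_exist_ok=True)

# 50 trials -> 40 train / 10 validation
train, val = split_trials([f"trial_{i:04d}" for i in range(1, 51)])
```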
### Step 3.2: Train Parametric Model
```bash
python train_parametric.py \
--train_dir ../atomizer_field_training_data/uav_arm_train \
--val_dir ../atomizer_field_training_data/uav_arm_val \
--epochs 200 \
--hidden_channels 128 \
--num_layers 4 \
--learning_rate 0.001 \
--output_dir runs/my_uav_model
```
**What this does**:
1. Loads parsed training data
2. Builds design-conditioned GNN
3. Trains with physics-informed loss
4. Saves best checkpoint based on validation loss
**Expected output**:
```
Training Parametric GNN
├── Training samples: 40
├── Validation samples: 10
├── Model parameters: 523,412
Epoch [1/200]:
├── Train Loss: 0.3421
├── Val Loss: 0.2987
└── Best model saved!
Epoch [50/200]:
├── Train Loss: 0.0234
├── Val Loss: 0.0312
Epoch [200/200]:
├── Train Loss: 0.0089
├── Val Loss: 0.0156
└── Training complete!
Best validation loss: 0.0142 (epoch 187)
Model saved to: runs/my_uav_model/checkpoint_best.pt
```
### Step 3.3: Monitor Training (Optional)
If TensorBoard is installed:
```bash
tensorboard --logdir runs/my_uav_model/logs
```
Open http://localhost:6006 to view:
- Loss curves
- Learning rate schedule
- Validation metrics
---
## Phase 4: Validate Model
### Step 4.1: Run Validation Script
```bash
python validate.py --checkpoint runs/my_uav_model/checkpoint_best.pt
```
**Expected output**:
```
Model Validation Results
========================
Per-Objective Metrics:
├── mass:
│ ├── MAE: 0.52 g
│ ├── MAPE: 0.8%
│ └── R²: 0.998
├── frequency:
│ ├── MAE: 2.1 Hz
│ ├── MAPE: 1.2%
│ └── R²: 0.995
├── max_displacement:
│ ├── MAE: 0.001 mm
│ ├── MAPE: 2.8%
│ └── R²: 0.987
└── max_stress:
├── MAE: 3.2 MPa
├── MAPE: 3.5%
└── R²: 0.981
Performance:
├── Inference time: 4.5 ms ± 0.8 ms
├── GPU memory: 512 MB
└── Throughput: 220 predictions/sec
✓ Model validation passed!
```
### Step 4.2: Test on New Designs
```python
# test_model.py
import torch
from atomizer_field.neural_models.parametric_predictor import ParametricFieldPredictor
# Load model
checkpoint = torch.load('runs/my_uav_model/checkpoint_best.pt')
model = ParametricFieldPredictor(**checkpoint['config'])
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
# Test prediction
design = {
'beam_half_core_thickness': 7.0,
'beam_face_thickness': 2.5,
'holes_diameter': 35.0,
'hole_count': 10.0
}
# Convert to tensor
design_tensor = torch.tensor([[
design['beam_half_core_thickness'],
design['beam_face_thickness'],
design['holes_diameter'],
design['hole_count']
]])
# Predict
with torch.no_grad():
predictions = model(design_tensor)
print(f"Mass: {predictions[0, 0]:.2f} g")
print(f"Frequency: {predictions[0, 1]:.2f} Hz")
print(f"Displacement: {predictions[0, 2]:.6f} mm")
print(f"Stress: {predictions[0, 3]:.2f} MPa")
```
---
## Phase 5: Deploy Neural-Accelerated Optimization
### Step 5.1: Update Configuration
Edit `workflow_config.json` to enable neural acceleration:
```json
{
"study_name": "uav_arm_optimization_neural",
"neural_surrogate": {
"enabled": true,
"model_checkpoint": "atomizer-field/runs/my_uav_model/checkpoint_best.pt",
"confidence_threshold": 0.85,
"device": "cuda"
},
"hybrid_optimization": {
"enabled": true,
"exploration_trials": 20,
"validation_frequency": 50
},
"optimization_settings": {
"n_trials": 5000
}
}
```
### Step 5.2: Run Neural-Accelerated Optimization
```bash
python run_optimization.py --trials 5000 --use-neural
```
**Expected output**:
```
Neural-Accelerated Optimization
===============================
Loading neural model from: atomizer-field/runs/my_uav_model/checkpoint_best.pt
Model loaded successfully (4.5 ms inference time)
Phase 1: Exploration (FEA)
Trial [1/5000]: Using FEA (exploration phase)
Trial [2/5000]: Using FEA (exploration phase)
...
Trial [20/5000]: Using FEA (exploration phase)
Phase 2: Exploitation (Neural)
Trial [21/5000]: Using Neural (conf: 94.2%, time: 4.8 ms)
Trial [22/5000]: Using Neural (conf: 91.8%, time: 4.3 ms)
...
Trial [5000/5000]: Using Neural (conf: 93.1%, time: 4.6 ms)
============================================================
OPTIMIZATION COMPLETE
============================================================
Total trials: 5,000
├── FEA trials: 120 (2.4%)
├── Neural trials: 4,880 (97.6%)
├── Total time: 8.3 minutes
├── Equivalent FEA time: 14.2 hours
└── Speedup: 103x
Best Design Found:
├── beam_half_core_thickness: 6.8 mm
├── beam_face_thickness: 2.3 mm
├── holes_diameter: 32.5 mm
├── hole_count: 12
Objectives:
├── mass: 45.2 g (minimized)
├── frequency: 312.5 Hz (maximized)
├── max_displacement: 0.028 mm
└── max_stress: 89.3 MPa
============================================================
```
### Step 5.3: Validate Best Designs
Run FEA validation on top designs:
```python
# validate_best_designs.py
from optimization_engine.runner import OptimizationRunner
runner = OptimizationRunner(config_path="workflow_config.json")
# Get top 10 designs from neural optimization
top_designs = runner.get_best_trials(10)
print("Validating top 10 designs with FEA...")
for i, design in enumerate(top_designs):
# Run actual FEA
fea_result = runner.run_fea_simulation(design.params)
nn_result = design.values
# Compare
mass_error = abs(fea_result['mass'] - nn_result['mass']) / fea_result['mass'] * 100
freq_error = abs(fea_result['frequency'] - nn_result['frequency']) / fea_result['frequency'] * 100
print(f"Design {i+1}: Mass error={mass_error:.1f}%, Freq error={freq_error:.1f}%")
```
---
## Troubleshooting
### Common Issues
**Issue: Low confidence predictions**
```
WARNING: Neural confidence below threshold (65.3% < 85%)
```
**Solution**:
- Collect more diverse training data
- Train for more epochs
- Reduce confidence threshold
- Check if design is outside training distribution
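The last point, checking whether a design falls outside the training distribution, can be done with a simple bounds check against the training data (a sketch; the ranges shown are the ones from the study config in Phase 1):

```python
# Min/max values covered by the training data (from workflow_config.json)
TRAIN_RANGES = {
    "beam_half_core_thickness": (5.0, 15.0),
    "beam_face_thickness": (1.0, 5.0),
    "holes_diameter": (20.0, 50.0),
    "hole_count": (5, 15),
}

def out_of_distribution(design, ranges=TRAIN_RANGES):
    """Return the parameters that lie outside the training ranges."""
    return [name for name, (lo, hi) in ranges.items()
            if not lo <= design[name] <= hi]

ood = out_of_distribution({
    "beam_half_core_thickness": 18.0,  # above the 15 mm training max
    "beam_face_thickness": 2.5,
    "holes_diameter": 35.0,
    "hole_count": 10,
})
```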
**Issue: Training loss not decreasing**
```
Epoch [100/200]: Train Loss: 0.3421 (same as epoch 1)
```
**Solution**:
- Reduce learning rate
- Check data preprocessing
- Increase hidden channels
- Add more training data
**Issue: Large validation error**
```
Val MAE: 15.2% (expected < 5%)
```
**Solution**:
- Check for data leakage
- Add regularization (dropout)
- Use physics-informed loss
- Collect more training data
---
## Best Practices
### Data Collection
1. **Diverse sampling**: Use Latin Hypercube or Sobol sequences
2. **Sufficient quantity**: Aim for 10-20x the number of design variables
3. **Full range coverage**: Ensure designs span the entire design space
4. **Quality control**: Validate all FEA results before training
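Point 1 can be sketched directly: a minimal Latin Hypercube sampler in NumPy (one stratified sample per interval per dimension; for production you would likely use `scipy.stats.qmc.LatinHypercube` instead):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Draw n_samples points with one sample per stratum per dimension.

    bounds: list of (low, high) tuples, one per design variable.
    Returns an array of shape [n_samples, n_dims].
    """
    rng = np.random.default_rng(seed)
    n_dims = len(bounds)
    # One uniform draw inside each of n_samples equal-width strata
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):  # shuffle each column to decorrelate dimensions
        rng.shuffle(u[:, d])
    lows = np.array([b[0] for b in bounds], dtype=float)
    highs = np.array([b[1] for b in bounds], dtype=float)
    return lows + u * (highs - lows)

# 50 samples over the 4 UAV-arm design variable ranges
samples = latin_hypercube(50, [(5.0, 15.0), (1.0, 5.0), (20.0, 50.0), (5, 15)])
```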
### Training
1. **Start simple**: Begin with smaller models, increase if needed
2. **Use validation**: Always monitor validation loss
3. **Early stopping**: Stop training when validation loss plateaus
4. **Save checkpoints**: Keep intermediate models
### Deployment
1. **Conservative thresholds**: Start with high confidence (0.9)
2. **Periodic validation**: Always validate with FEA periodically
3. **Monitor drift**: Track prediction accuracy over time
4. **Retrain**: Update model when drift is detected
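Points 3 and 4 amount to comparing recent NN predictions against their periodic FEA checks. A sketch of a drift monitor built around the `drift_threshold` parameter from the config (the class itself is illustrative, not part of the shipped API):

```python
from collections import deque

class DriftMonitor:
    """Track relative error of NN predictions on periodic FEA validation trials."""

    def __init__(self, drift_threshold=0.15, window=10):
        self.threshold = drift_threshold
        self.errors = deque(maxlen=window)  # rolling window of relative errors

    def record(self, nn_value, fea_value):
        self.errors.append(abs(nn_value - fea_value) / abs(fea_value))

    def should_retrain(self):
        if not self.errors:
            return False
        return sum(self.errors) / len(self.errors) > self.threshold

monitor = DriftMonitor()
monitor.record(45.0, 45.5)  # ~1.1% error: fine
monitor.record(40.0, 60.0)  # ~33% error: the model has drifted
```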
---
## Next Steps
After completing this tutorial, explore:
1. **[Neural Features Complete](NEURAL_FEATURES_COMPLETE.md)** - Advanced features
2. **[GNN Architecture](GNN_ARCHITECTURE.md)** - Technical deep-dive
3. **[Physics Loss Guide](PHYSICS_LOSS_GUIDE.md)** - Loss function selection
---
## Summary
You've learned how to:
- [x] Configure training data export
- [x] Collect training data from FEA
- [x] Parse BDF/OP2 to neural format
- [x] Train a parametric GNN
- [x] Validate model accuracy
- [x] Deploy neural-accelerated optimization
**Result**: 1000x faster optimization with <5% prediction error!
---
*Questions? See the [troubleshooting section](#troubleshooting) or check the [main documentation](../README.md).*


---
# Physics Loss Functions Guide
**Selecting and configuring loss functions for AtomizerField training**
---
## Overview
AtomizerField uses physics-informed loss functions to train neural networks that respect engineering principles. This guide explains each loss function and when to use them.
---
## Available Loss Functions
| Loss Function | Purpose | Best For |
|--------------|---------|----------|
| **MSE Loss** | Standard L2 error | General training, balanced outputs |
| **Relative Loss** | Percentage error | Multi-scale outputs (MPa + mm) |
| **Physics-Informed Loss** | Enforce physics | Better generalization, extrapolation |
| **Max Error Loss** | Penalize outliers | Safety-critical applications |
| **Combined Loss** | Weighted combination | Production models |
---
## 1. MSE Loss (Mean Squared Error)
### Description
Standard L2 loss that treats all predictions equally.
```python
loss = mean((predicted - target)²)
```
### Implementation
```python
def mse_loss(predicted, target):
"""Simple MSE loss"""
return torch.mean((predicted - target) ** 2)
```
### When to Use
- Starting point for new models
- When all outputs have similar magnitudes
- When you don't have physics constraints
### Pros & Cons
| Pros | Cons |
|------|------|
| Simple and stable | Ignores physics |
| Fast computation | Scale-sensitive |
| Well-understood | Large errors dominate |
---
## 2. Relative Loss
### Description
Computes percentage error instead of absolute error. Critical for multi-scale outputs.
```python
loss = mean(|predicted - target| / |target|)
```
### Implementation
```python
def relative_loss(predicted, target, epsilon=1e-8):
"""Relative (percentage) loss"""
relative_error = torch.abs(predicted - target) / (torch.abs(target) + epsilon)
return torch.mean(relative_error)
```
### When to Use
- Outputs have different scales (stress in MPa, displacement in mm)
- Percentage accuracy matters more than absolute accuracy
- Training data has wide range of values
### Pros & Cons
| Pros | Cons |
|------|------|
| Scale-independent | Unstable near zero |
| Intuitive (% error) | Requires epsilon |
| Equal weight to all magnitudes | May overfit small values |
### Example
```python
# Without relative loss (plain MSE):
#   predicted stress 105 MPa vs target 100 MPa        -> error 5 MPa
#   predicted displacement 0.02 mm vs target 0.01 mm  -> error 0.01 mm
# MSE is dominated by the stress term; displacement is effectively ignored.

# With relative loss (error divided by |target|):
#   stress:       5 / 100   = 5%
#   displacement: 0.01 / 0.01 = 100%
# Both objectives now contribute proportionally.
```
---
## 3. Physics-Informed Loss
### Description
Adds physics constraints as regularization terms. The network learns to satisfy physical laws.
```python
loss = mse_loss + λ₁·equilibrium + λ₂·constitutive + λ₃·boundary
```
### Implementation
```python
def physics_informed_loss(predicted, target, data, config):
"""
Physics-informed loss with multiple constraint terms.
Components:
1. Data loss (MSE)
2. Equilibrium loss (F = ma)
3. Constitutive loss (σ = Eε)
4. Boundary condition loss (u = 0 at supports)
"""
# Data loss
data_loss = mse_loss(predicted, target)
# Equilibrium loss: sum of forces at each node = 0
equilibrium_loss = compute_equilibrium_residual(
predicted['displacement'],
data.edge_index,
data.stiffness
)
# Constitutive loss: stress-strain relationship
predicted_stress = compute_stress_from_displacement(
predicted['displacement'],
data.material,
data.strain_operator
)
constitutive_loss = mse_loss(predicted['stress'], predicted_stress)
# Boundary condition loss: fixed nodes have zero displacement
bc_mask = data.boundary_conditions > 0
bc_loss = torch.mean(predicted['displacement'][bc_mask] ** 2)
# Combine with weights
total_loss = (
data_loss +
config.lambda_equilibrium * equilibrium_loss +
config.lambda_constitutive * constitutive_loss +
config.lambda_bc * bc_loss
)
return total_loss
```
### Physics Constraints
#### Equilibrium (Force Balance)
At each node, the sum of forces must be zero:
```
∑F = 0 at every node
```
```python
def equilibrium_residual(displacement, stiffness_matrix):
"""
Check if Ku = F (stiffness × displacement = force)
Residual should be zero for valid solutions.
"""
internal_forces = stiffness_matrix @ displacement
external_forces = get_external_forces()
residual = internal_forces - external_forces
return torch.mean(residual ** 2)
```
#### Constitutive (Stress-Strain)
Stress must follow material law:
```
σ = Eε (Hooke's law)
```
```python
def constitutive_residual(displacement, stress, material):
"""
Check if stress follows constitutive law.
"""
strain = compute_strain(displacement)
predicted_stress = material.E * strain
residual = stress - predicted_stress
return torch.mean(residual ** 2)
```
#### Boundary Conditions
Fixed nodes must have zero displacement:
```python
def boundary_residual(displacement, bc_mask):
"""
Fixed nodes should have zero displacement.
"""
return torch.mean(displacement[bc_mask] ** 2)
```
### When to Use
- When you need good generalization
- When extrapolating beyond training data
- When physical correctness is important
- When training data is limited
### Pros & Cons
| Pros | Cons |
|------|------|
| Physics consistency | More computation |
| Better extrapolation | Requires physics info |
| Works with less data | Weight tuning needed |
### Weight Selection
| Constraint | Typical λ | Notes |
|------------|-----------|-------|
| Equilibrium | 0.1 - 0.5 | Most important |
| Constitutive | 0.05 - 0.2 | Material law |
| Boundary | 0.5 - 1.0 | Hard constraint |
---
## 4. Max Error Loss
### Description
Penalizes the worst predictions. Critical for safety-critical applications.
```python
loss = max(|predicted - target|)
```
### Implementation
```python
def max_error_loss(predicted, target, percentile=99):
"""
Penalize worst predictions.
Uses percentile to avoid single outlier domination.
"""
errors = torch.abs(predicted - target)
# Use percentile instead of max for stability
max_error = torch.quantile(errors, percentile / 100.0)
return max_error
```
### When to Use
- Safety-critical applications
- When outliers are unacceptable
- Quality assurance requirements
- Certification contexts
### Pros & Cons
| Pros | Cons |
|------|------|
| Controls worst case | Unstable gradients |
| Safety-focused | May slow convergence |
| Clear metric | Sensitive to outliers |
---
## 5. Combined Loss (Production)
### Description
Combines multiple loss functions for production models.
```python
loss = α·MSE + β·Relative + γ·Physics + δ·MaxError
```
### Implementation
```python
def combined_loss(predicted, target, data, config):
"""
Production loss combining multiple objectives.
"""
losses = {}
# MSE component
losses['mse'] = mse_loss(predicted, target)
# Relative component
losses['relative'] = relative_loss(predicted, target)
# Physics component
losses['physics'] = physics_informed_loss(predicted, target, data, config)
# Max error component
losses['max'] = max_error_loss(predicted, target)
# Weighted combination
total = (
config.alpha * losses['mse'] +
config.beta * losses['relative'] +
config.gamma * losses['physics'] +
config.delta * losses['max']
)
return total, losses
```
### Recommended Weights
| Application | MSE (α) | Relative (β) | Physics (γ) | Max (δ) |
|-------------|---------|--------------|-------------|---------|
| General | 0.5 | 0.3 | 0.2 | 0.0 |
| Multi-scale | 0.2 | 0.5 | 0.2 | 0.1 |
| Safety-critical | 0.2 | 0.2 | 0.3 | 0.3 |
| Extrapolation | 0.2 | 0.2 | 0.5 | 0.1 |
---
## Configuration Examples
### Basic Training
```yaml
# config.yaml
loss:
type: "mse"
```
### Multi-Scale Outputs
```yaml
# config.yaml
loss:
type: "combined"
weights:
mse: 0.2
relative: 0.5
physics: 0.2
max_error: 0.1
```
### Physics-Informed Training
```yaml
# config.yaml
loss:
type: "physics_informed"
physics_weight: 0.3
constraints:
equilibrium: 0.3
constitutive: 0.1
boundary: 0.5
```
### Safety-Critical
```yaml
# config.yaml
loss:
type: "combined"
weights:
mse: 0.2
relative: 0.2
physics: 0.3
max_error: 0.3
max_error_percentile: 99
```
---
## Training Strategies
### Curriculum Learning
Start simple, add complexity:
```python
def get_loss_weights(epoch, total_epochs):
"""Gradually increase physics loss weight"""
progress = epoch / total_epochs
if progress < 0.3:
# Phase 1: Pure MSE
return {'mse': 1.0, 'physics': 0.0}
elif progress < 0.6:
# Phase 2: Add physics
physics_weight = (progress - 0.3) / 0.3 * 0.3
return {'mse': 1.0 - physics_weight, 'physics': physics_weight}
else:
# Phase 3: Full physics
return {'mse': 0.7, 'physics': 0.3}
```
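To see the three phases, the schedule above can be evaluated at a few epochs (the function is restated so the snippet runs standalone):

```python
def get_loss_weights(epoch, total_epochs):
    """Same schedule as above: pure MSE, linear ramp-in, then full physics."""
    progress = epoch / total_epochs
    if progress < 0.3:
        return {'mse': 1.0, 'physics': 0.0}
    elif progress < 0.6:
        physics_weight = (progress - 0.3) / 0.3 * 0.3
        return {'mse': 1.0 - physics_weight, 'physics': physics_weight}
    else:
        return {'mse': 0.7, 'physics': 0.3}

for epoch in (0, 45, 90):
    print(epoch, get_loss_weights(epoch, 100))
# The physics weight ramps linearly from 0.0 to 0.3 between 30% and 60% progress
```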
### Adaptive Weighting
Adjust weights based on loss magnitudes:
```python
def adaptive_weights(losses):
"""Balance losses to similar magnitudes"""
# Compute inverse of each loss (normalized)
total = sum(losses.values())
weights = {k: total / (v + 1e-8) for k, v in losses.items()}
# Normalize to sum to 1
weight_sum = sum(weights.values())
weights = {k: v / weight_sum for k, v in weights.items()}
return weights
```
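A quick numeric check of the adaptive scheme (function restated for a standalone run): a loss three times larger than another ends up with a third of the weight.

```python
def adaptive_weights(losses):
    """Same as above: weight each loss inversely to its magnitude, then normalize."""
    total = sum(losses.values())
    weights = {k: total / (v + 1e-8) for k, v in losses.items()}
    weight_sum = sum(weights.values())
    return {k: v / weight_sum for k, v in weights.items()}

w = adaptive_weights({'mse': 1.0, 'physics': 3.0})
print(w)  # mse ≈ 0.75, physics ≈ 0.25
```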
---
## Troubleshooting
### Loss Not Decreasing
**Symptom**: Training loss stays flat.
**Solutions**:
1. Reduce learning rate
2. Check data normalization
3. Simplify loss (use MSE first)
4. Increase model capacity
### Physics Loss Dominates
**Symptom**: Physics loss >> data loss.
**Solutions**:
1. Reduce physics weight (λ)
2. Use curriculum learning
3. Check physics computation
4. Normalize constraints
### Unstable Training
**Symptom**: Loss oscillates or explodes.
**Solutions**:
1. Use gradient clipping
2. Reduce learning rate
3. Check for NaN in physics terms
4. Add epsilon to divisions
---
## Metrics for Evaluation
### Training Metrics
```python
metrics = {
'train_loss': total_loss.item(),
'train_mse': losses['mse'].item(),
'train_physics': losses['physics'].item(),
'train_max': losses['max'].item()
}
```
### Validation Metrics
```python
def compute_validation_metrics(model, val_loader):
"""Compute physics-aware validation metrics"""
all_errors = []
physics_violations = []
for batch in val_loader:
pred = model(batch)
# Prediction errors
errors = torch.abs(pred - batch.y)
all_errors.append(errors)
# Physics violations
violations = compute_physics_residual(pred, batch)
physics_violations.append(violations)
return {
'val_mae': torch.cat(all_errors).mean(),
'val_max': torch.cat(all_errors).max(),
'val_physics_violation': torch.cat(physics_violations).mean(),
'val_physics_compliance': (torch.cat(physics_violations) < 0.01).float().mean()
}
```
---
## Summary
| Situation | Recommended Loss |
|-----------|-----------------|
| Starting out | MSE |
| Multi-scale outputs | Relative + MSE |
| Need generalization | Physics-informed |
| Safety-critical | Combined with max error |
| Limited training data | Physics-informed |
| Production deployment | Combined (tuned) |
---
## See Also
- [Neural Features Complete](NEURAL_FEATURES_COMPLETE.md) - Overview
- [GNN Architecture](GNN_ARCHITECTURE.md) - Model details
- [Neural Workflow Tutorial](NEURAL_WORKFLOW_TUTORIAL.md) - Training guide

# Training Data Export for AtomizerField
## Overview
The Training Data Export feature automatically captures NX Nastran input/output files and metadata during Atomizer optimization runs. This data is used to train AtomizerField neural network surrogate models that can replace slow FEA evaluations (30 min) with fast predictions (50 ms).
## Quick Start
Add this configuration to your `workflow_config.json`:
```json
{
"study_name": "my_optimization",
"design_variables": [...],
"objectives": [...],
"training_data_export": {
"enabled": true,
"export_dir": "atomizer_field_training_data/my_study_001"
}
}
```
Run your optimization as normal:
```bash
cd studies/my_optimization
python run_optimization.py
```
The training data will be automatically exported to the specified directory.
## How It Works
### During Optimization
After each trial:
1. **FEA Solve Completes**: NX Nastran generates `.dat` (input deck) and `.op2` (binary results) files
2. **Results Extraction**: Atomizer extracts objectives, constraints, and other metrics
3. **Data Export**: The exporter copies the NX files and creates metadata
4. **Trial Directory Created**: Structured directory with input, output, and metadata
### After Optimization
When optimization completes:
1. **Finalize Called**: Creates `study_summary.json` with overall study metadata
2. **README Generated**: Instructions for using the data with AtomizerField
3. **Ready for Training**: Data is structured for AtomizerField batch parser
## Directory Structure
After running an optimization with training data export enabled:
```
atomizer_field_training_data/my_study_001/
├── trial_0001/
│ ├── input/
│ │ └── model.bdf # NX Nastran input deck (BDF format)
│ ├── output/
│ │ └── model.op2 # NX Nastran binary results (OP2 format)
│ └── metadata.json # Design parameters, objectives, constraints
├── trial_0002/
│ └── ...
├── trial_0003/
│ └── ...
├── study_summary.json # Overall study metadata
└── README.md # Usage instructions
```
### metadata.json Format
Each trial's `metadata.json` contains:
```json
{
"trial_number": 42,
"timestamp": "2025-01-15T10:30:45.123456",
"atomizer_study": "my_optimization",
"design_parameters": {
"thickness": 3.5,
"width": 50.0,
"length": 200.0
},
"results": {
"objectives": {
"max_stress": 245.3,
"mass": 1.25
},
"constraints": {
"stress_limit": -54.7
},
"max_displacement": 1.23
}
}
```
### study_summary.json Format
The `study_summary.json` file contains:
```json
{
"study_name": "my_optimization",
"total_trials": 100,
"design_variables": ["thickness", "width", "length"],
"objectives": ["max_stress", "mass"],
"constraints": ["stress_limit"],
"export_timestamp": "2025-01-15T12:00:00.000000",
"metadata": {
"atomizer_version": "1.0",
"optimization_algorithm": "NSGA-II",
"n_trials": 100
}
}
```
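Once a study has been exported, the per-trial metadata can be swept into a flat table for quick inspection. A small reader sketch (the `load_trials` helper is illustrative, not part of Atomizer; the demo builds a fake export tree in a temp directory):

```python
import json
import tempfile
from pathlib import Path

def load_trials(export_dir):
    """Collect design parameters and objectives from every trial_*/metadata.json."""
    rows = []
    for meta_path in sorted(Path(export_dir).glob("trial_*/metadata.json")):
        meta = json.loads(meta_path.read_text())
        row = dict(meta["design_parameters"])
        row.update(meta["results"]["objectives"])
        row["trial"] = meta["trial_number"]
        rows.append(row)
    return rows

# Build a tiny fake export tree to demonstrate the reader
root = Path(tempfile.mkdtemp())
for i in (1, 2):
    d = root / f"trial_{i:04d}"
    d.mkdir(parents=True)
    (d / "metadata.json").write_text(json.dumps({
        "trial_number": i,
        "design_parameters": {"thickness": 2.0 + i},
        "results": {"objectives": {"mass": 1.0 / i}},
    }))

rows = load_trials(root)
print(rows)  # one flat dict per trial
```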
## Configuration Options
### Basic Configuration
```json
"training_data_export": {
"enabled": true,
"export_dir": "path/to/export/directory"
}
```
**Parameters:**
- `enabled` (required): `true` to enable export, `false` to disable
- `export_dir` (required if enabled): Path to export directory (relative or absolute)
### Recommended Directory Structure
For organizing multiple studies:
```
atomizer_field_training_data/
├── beam_study_001/ # First beam optimization
│ └── trial_0001/ ...
├── beam_study_002/ # Second beam optimization (different parameters)
│ └── trial_0001/ ...
├── bracket_study_001/ # Bracket optimization
│ └── trial_0001/ ...
└── plate_study_001/ # Plate optimization
└── trial_0001/ ...
```
## Using Exported Data with AtomizerField
### Step 1: Parse Training Data
Convert BDF/OP2 files to PyTorch Geometric format:
```bash
cd Atomizer-Field
python batch_parser.py --data-dir "../Atomizer/atomizer_field_training_data/my_study_001"
```
This creates graph representations of the FEA data suitable for GNN training.
### Step 2: Validate Parsed Data
Ensure data was parsed correctly:
```bash
python validate_parsed_data.py
```
### Step 3: Train Neural Network
Train the GNN surrogate model:
```bash
python train.py --data-dir "training_data/parsed/" --epochs 200
```
### Step 4: Use Trained Model in Atomizer
Enable neural network surrogate in your optimization:
```bash
cd ../Atomizer
python run_optimization.py --config studies/my_study/workflow_config.json --use-neural
```
## Integration Points
The training data exporter integrates seamlessly with Atomizer's optimization flow:
### In `optimization_engine/runner.py`:
```python
from optimization_engine.training_data_exporter import create_exporter_from_config
class OptimizationRunner:
def __init__(self, config_path):
# ... existing initialization ...
# Initialize training data exporter (if enabled)
self.training_data_exporter = create_exporter_from_config(self.config)
if self.training_data_exporter:
print(f"Training data export enabled: {self.training_data_exporter.export_dir}")
def objective(self, trial):
# ... simulation and results extraction ...
# Export training data (if enabled)
if self.training_data_exporter:
simulation_files = {
'dat_file': path_to_dat,
'op2_file': path_to_op2
}
self.training_data_exporter.export_trial(
trial_number=trial.number,
design_variables=design_vars,
results=extracted_results,
simulation_files=simulation_files
)
def run(self):
# ... optimization loop ...
# Finalize training data export (if enabled)
if self.training_data_exporter:
self.training_data_exporter.finalize()
```
## File Formats
### BDF (.bdf) - Nastran Bulk Data File
- **Format**: ASCII text
- **Contains**:
- Mesh geometry (nodes, elements)
- Material properties
- Loads and boundary conditions
- Analysis parameters
### OP2 (.op2) - Nastran Output2
- **Format**: Binary
- **Contains**:
- Displacements
- Stresses (von Mises, principal, etc.)
- Strains
- Reaction forces
- Modal results (if applicable)
### JSON (.json) - Metadata
- **Format**: UTF-8 JSON
- **Contains**:
- Design parameter values
- Objective function values
- Constraint values
- Trial metadata (number, timestamp, study name)
## Example: Complete Workflow
### 1. Create Optimization Study
```python
import json
from pathlib import Path
config = {
"study_name": "beam_optimization",
"sim_file": "examples/Models/Beam/Beam.sim",
"fem_file": "examples/Models/Beam/Beam_fem1.fem",
"design_variables": [
{"name": "thickness", "expression_name": "thickness", "min": 2.0, "max": 8.0},
{"name": "width", "expression_name": "width", "min": 20.0, "max": 60.0}
],
"objectives": [
{
"name": "max_stress",
"type": "minimize",
"extractor": {"type": "result_parameter", "parameter_name": "Max Von Mises Stress"}
},
{
"name": "mass",
"type": "minimize",
"extractor": {"type": "expression", "expression_name": "mass"}
}
],
"optimization": {
"algorithm": "NSGA-II",
"n_trials": 100
},
# Enable training data export
"training_data_export": {
"enabled": True,
"export_dir": "atomizer_field_training_data/beam_study_001"
}
}
# Save config
config_path = Path("studies/beam_optimization/1_setup/workflow_config.json")
config_path.parent.mkdir(parents=True, exist_ok=True)
with open(config_path, 'w') as f:
json.dump(config, f, indent=2)
```
### 2. Run Optimization
```bash
cd studies/beam_optimization
python run_optimization.py
```
Console output will show:
```
Training data export enabled: atomizer_field_training_data/beam_study_001
...
Training data export finalized: 100 trials exported
```
### 3. Verify Export
```bash
dir atomizer_field_training_data\beam_study_001
```
You should see:
```
trial_0001/
trial_0002/
...
trial_0100/
study_summary.json
README.md
```
### 4. Train AtomizerField
```bash
cd Atomizer-Field
python batch_parser.py --data-dir "../Atomizer/atomizer_field_training_data/beam_study_001"
python train.py --data-dir "training_data/parsed/" --epochs 200
```
## Troubleshooting
### No .dat or .op2 Files Found
**Problem**: Export logs show "dat file not found" or "op2 file not found"
**Solution**:
- Ensure NX Nastran solver is writing these files
- Check NX simulation settings
- Verify file paths in `result_path`
### Export Directory Permission Error
**Problem**: `PermissionError` when creating export directory
**Solution**:
- Use absolute path or path relative to Atomizer root
- Ensure write permissions for the target directory
- Check disk space
### Missing Metadata Fields
**Problem**: `metadata.json` doesn't contain expected fields
**Solution**:
- Verify extractors are configured correctly in `workflow_config.json`
- Check that results are being extracted before export
- Review `extracted_results` dict in runner
### Large File Sizes
**Problem**: Export directory grows very large
**Solution**:
- OP2 files can be large (10-100 MB per trial)
- For 1000 trials, expect 10-100 GB of training data
- Use compression or cloud storage for large datasets
## Performance Considerations
### Disk I/O
- Each trial export involves 2 file copies (.dat and .op2)
- Minimal overhead (~100-500ms per trial)
- Negligible compared to FEA solve time (30 minutes)
### Storage Requirements
Typical file sizes per trial:
- `.dat` file: 1-10 MB (depends on mesh density)
- `.op2` file: 5-50 MB (depends on results requested)
- `metadata.json`: 1-5 KB
For 100 trials: ~600 MB - 6 GB
For 1000 trials: ~6 GB - 60 GB
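These totals follow directly from the per-trial sizes. A throwaway helper (hypothetical, just arithmetic) makes the budget explicit:

```python
def storage_estimate_mb(n_trials, dat_mb, op2_mb, meta_kb=5):
    """Disk budget for one study, using the per-trial sizes listed above."""
    return n_trials * (dat_mb + op2_mb + meta_kb / 1024)

low = storage_estimate_mb(100, dat_mb=1, op2_mb=5)        # lower bound, ~600 MB
high = storage_estimate_mb(1000, dat_mb=10, op2_mb=50)    # upper bound, ~60 GB
```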
## API Reference
### TrainingDataExporter Class
```python
from optimization_engine.training_data_exporter import TrainingDataExporter
exporter = TrainingDataExporter(
export_dir=Path("training_data/study_001"),
study_name="my_study",
design_variable_names=["thickness", "width"],
objective_names=["stress", "mass"],
constraint_names=["stress_limit"], # Optional
metadata={"version": "1.0"} # Optional
)
```
#### Methods
**export_trial(trial_number, design_variables, results, simulation_files)**
Export training data for a single trial.
- `trial_number` (int): Optuna trial number
- `design_variables` (dict): Design parameter names and values
- `results` (dict): Objectives, constraints, and other results
- `simulation_files` (dict): Paths to 'dat_file' and 'op2_file'
Returns `True` if successful, `False` otherwise.
**finalize()**
Finalize export by creating `study_summary.json`.
### Factory Function
**create_exporter_from_config(config)**
Create exporter from workflow configuration dict.
- `config` (dict): Workflow configuration
Returns `TrainingDataExporter` if enabled, `None` otherwise.
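The factory's behavior can be sketched as follows. This is a guess at the shape of the real implementation in `optimization_engine/training_data_exporter.py`, with a plain dict standing in for the `TrainingDataExporter` instance:

```python
from pathlib import Path

def create_exporter_from_config(config):
    """Return an exporter handle when export is enabled, else None (sketch only)."""
    tde = config.get("training_data_export", {})
    if not tde.get("enabled", False):
        return None  # runner then skips every export_trial()/finalize() call
    return {
        "export_dir": Path(tde["export_dir"]),
        "study_name": config.get("study_name", "unnamed"),
    }
```

The `None` return is what lets the runner guard every call with `if self.training_data_exporter:`.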
## Best Practices
### 1. Organize by Study Type
Group related studies together:
```
atomizer_field_training_data/
├── beams/
│ ├── cantilever_001/
│ ├── cantilever_002/
│ └── simply_supported_001/
└── brackets/
├── L_bracket_001/
└── T_bracket_001/
```
### 2. Use Descriptive Names
Include important parameters in study names:
```
beam_study_thickness_2-8_width_20-60_100trials
```
### 3. Version Your Studies
Track changes to design space or objectives:
```
bracket_study_001 # Initial study
bracket_study_002 # Expanded design space
bracket_study_003 # Added constraint
```
### 4. Document Metadata
Add custom metadata to track study details:
```json
"metadata": {
"description": "Initial beam study with basic design variables",
"date": "2025-01-15",
"engineer": "Your Name",
"validation_status": "pending"
}
```
### 5. Backup Training Data
Training data is valuable:
- Expensive to generate (hours/days of computation)
- Back up to cloud storage
- Consider version control for study configurations
## Future Enhancements
Planned improvements:
- [ ] Incremental export (resume after crash)
- [ ] Compression options (gzip .dat and .op2 files)
- [ ] Cloud upload integration (S3, Azure Blob)
- [ ] Export filtering (only export Pareto-optimal trials)
- [ ] Multi-fidelity support (tag high/low fidelity trials)
## See Also
- [AtomizerField Documentation](../../Atomizer-Field/docs/)
- [How to Extend Optimization](HOW_TO_EXTEND_OPTIMIZATION.md)
- [Hybrid Mode Guide](HYBRID_MODE_GUIDE.md)
## Support
For issues or questions:
1. Check the troubleshooting section above
2. Review [AtomizerField integration test plan](../Atomizer-Field/AtomizerField_Integration_Test_Plan.md)
3. Open an issue on GitHub with:
- Your `workflow_config.json`
- Export logs
- Error messages

---

*Source file: `docs/guides/hybrid_mode.md`*
# Hybrid LLM Mode Guide
**Recommended Mode for Development** | Phase 3.2 Architecture | November 18, 2025
## What is Hybrid Mode?
Hybrid Mode (Mode 2) gives you **90% of the automation** with **10% of the complexity**. It's the sweet spot between manual configuration and full LLM autonomy.
### Why Hybrid Mode?
- **No API Key Required** - Use Claude Code/Desktop instead of Claude API
- **90% Automation** - Auto-generates extractors, calculations, and hooks
- **Full Transparency** - You see and approve the workflow JSON
- **Production Ready** - Uses centralized library system
- **Easy to Upgrade** - Can enable full API mode later
## How It Works
### The Workflow
```
┌─────────────────────────────────────────────────────────────┐
│ HYBRID MODE - 90% Automation, No API Key │
└─────────────────────────────────────────────────────────────┘
1. YOU + CLAUDE CODE:
├─ Describe optimization in natural language
└─ Claude helps create workflow JSON
2. SAVE workflow JSON:
├─ llm_workflow_config.json
└─ Contains: design vars, objectives, constraints
3. LLMOptimizationRunner:
├─ Auto-generates extractors (pyNastran)
├─ Auto-generates calculations
├─ Auto-generates hooks
├─ Adds to core library (deduplication!)
└─ Runs optimization loop (Optuna)
4. RESULTS:
├─ optimization_results.json (best design)
├─ optimization_history.json (all trials)
├─ extractors_manifest.json (what was used)
└─ Study folder stays clean!
```
## Step-by-Step: Your First Hybrid Optimization
### Step 1: Describe Your Optimization to Claude
**Example conversation with Claude Code:**
```
YOU: I want to optimize a bracket design.
- Design variables: wall_thickness (1-5mm), fillet_radius (2-8mm)
- Objective: minimize mass
- Constraints: max_stress < 200 MPa, max_displacement < 0.5mm
- I have a Beam.prt file and Beam_sim1.sim file ready
CLAUDE: I'll help you create the workflow JSON...
```
### Step 2: Claude Creates Workflow JSON
Claude will generate a file like this:
```json
{
"study_name": "bracket_optimization",
"optimization_request": "Minimize mass while keeping stress below 200 MPa and displacement below 0.5mm",
"design_variables": [
{
"parameter": "wall_thickness",
"bounds": [1, 5],
"description": "Bracket wall thickness in mm"
},
{
"parameter": "fillet_radius",
"bounds": [2, 8],
"description": "Fillet radius in mm"
}
],
"objectives": [
{
"name": "mass",
"goal": "minimize",
"weight": 1.0,
"extraction": {
"action": "extract_mass",
"domain": "result_extraction",
"params": {
"result_type": "mass",
"metric": "total"
}
}
}
],
"constraints": [
{
"name": "max_stress_limit",
"type": "less_than",
"threshold": 200,
"extraction": {
"action": "extract_von_mises_stress",
"domain": "result_extraction",
"params": {
"result_type": "stress",
"metric": "max"
}
}
},
{
"name": "max_displacement_limit",
"type": "less_than",
"threshold": 0.5,
"extraction": {
"action": "extract_displacement",
"domain": "result_extraction",
"params": {
"result_type": "displacement",
"metric": "max"
}
}
}
]
}
```
### Step 3: Save and Review
Save the JSON to your study directory:
```
studies/
bracket_optimization/
1_setup/
model/
Bracket.prt # Your NX model
Bracket_sim1.sim # Your FEM setup
workflow_config.json # ← SAVE HERE
```
**IMPORTANT**: Review the JSON before running! Check:
- ✅ Design variable names match your NX expressions
- ✅ Bounds are in correct units (mm not m!)
- ✅ Extraction actions match available OP2 results
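Those checks are easy to automate. A hypothetical pre-flight helper (not part of the runner) that flags the most common mistakes before any trial is spent:

```python
def check_workflow(cfg):
    """Return a list of problems; an empty list means the JSON looks sane."""
    problems = []
    for var in cfg.get("design_variables", []):
        name = var.get("parameter")
        if not name:
            problems.append("design variable missing 'parameter' name")
            continue
        lo, hi = var["bounds"]
        if lo >= hi:
            problems.append(f"{name}: bounds reversed ({lo} >= {hi})")
    for item in cfg.get("objectives", []) + cfg.get("constraints", []):
        if "extraction" not in item:
            problems.append(f"{item.get('name', '?')}: no extraction defined")
    return problems
```

It cannot verify that parameter names match your NX expressions (only opening the `.prt` file can), but it catches reversed bounds and missing extraction blocks cheaply.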
### Step 4: Run LLMOptimizationRunner
Now the magic happens - 90% automation kicks in:
```python
from pathlib import Path
from optimization_engine.llm_optimization_runner import LLMOptimizationRunner
# Point to your files
study_dir = Path("studies/bracket_optimization")
workflow_json = study_dir / "1_setup/workflow_config.json"
prt_file = study_dir / "1_setup/model/Bracket.prt"
sim_file = study_dir / "1_setup/model/Bracket_sim1.sim"
output_dir = study_dir / "2_substudies/optimization_run_001"
# Create runner
runner = LLMOptimizationRunner(
llm_workflow_file=workflow_json,
prt_file=prt_file,
sim_file=sim_file,
output_dir=output_dir,
n_trials=20
)
# Run optimization (this is where automation happens!)
study = runner.run()
print(f"Best design found:")
print(f" wall_thickness: {study.best_params['wall_thickness']:.2f} mm")
print(f" fillet_radius: {study.best_params['fillet_radius']:.2f} mm")
print(f" mass: {study.best_value:.4f} kg")
```
### Step 5: What Gets Auto-Generated
During the run, the system automatically:
1. **Analyzes OP2 structure** (pyNastran research agent)
2. **Generates extractors** and adds to core library:
```
optimization_engine/extractors/
├── extract_mass.py ← Generated!
├── extract_von_mises_stress.py ← Generated!
└── extract_displacement.py ← Generated!
```
3. **Creates study manifest** (no code pollution!):
```
2_substudies/optimization_run_001/
└── extractors_manifest.json ← References only
```
4. **Runs optimization loop** with Optuna
5. **Saves full audit trail**:
```
2_substudies/optimization_run_001/
├── llm_workflow_config.json ← What you specified
├── extractors_manifest.json ← What was used
├── optimization_results.json ← Best design
└── optimization_history.json ← All trials
```
## Real Example: Beam Optimization
Let's walk through the existing beam optimization:
### Natural Language Request
"I want to optimize a sandwich beam. Design variables are core thickness (20-30mm), face thickness (1-3mm), hole diameter (180-280mm), and number of holes (8-14). Minimize weight while keeping displacement under 2mm."
### Claude Creates JSON
```json
{
"study_name": "simple_beam_optimization",
"optimization_request": "Minimize weight subject to max displacement < 2mm",
"design_variables": [
{"parameter": "beam_half_core_thickness", "bounds": [20, 30]},
{"parameter": "beam_face_thickness", "bounds": [1, 3]},
{"parameter": "holes_diameter", "bounds": [180, 280]},
{"parameter": "hole_count", "bounds": [8, 14]}
],
"objectives": [
{
"name": "mass",
"goal": "minimize",
"weight": 1.0,
"extraction": {
"action": "extract_mass",
"domain": "result_extraction",
"params": {"result_type": "mass", "metric": "total"}
}
}
],
"constraints": [
{
"name": "max_displacement_limit",
"type": "less_than",
"threshold": 2.0,
"extraction": {
"action": "extract_displacement",
"domain": "result_extraction",
"params": {"result_type": "displacement", "metric": "max"}
}
}
]
}
```
### Run Script
```python
from pathlib import Path
from optimization_engine.llm_optimization_runner import LLMOptimizationRunner
study_dir = Path("studies/simple_beam_optimization")
runner = LLMOptimizationRunner(
llm_workflow_file=study_dir / "1_setup/workflow_config.json",
prt_file=study_dir / "1_setup/model/Beam.prt",
sim_file=study_dir / "1_setup/model/Beam_sim1.sim",
output_dir=study_dir / "2_substudies/test_run",
n_trials=20
)
study = runner.run()
```
### Results After 20 Trials
```
Best design found:
beam_half_core_thickness: 27.3 mm
beam_face_thickness: 2.1 mm
holes_diameter: 245.2 mm
hole_count: 11
mass: 1.234 kg (45% reduction!)
max_displacement: 1.87 mm (within limit)
```
### Study Folder (Clean!)
```
2_substudies/test_run/
├── extractors_manifest.json # Just references
├── llm_workflow_config.json # What you wanted
├── optimization_results.json # Best design
└── optimization_history.json # All 20 trials
```
## Advanced: Extractor Library Reuse
The beauty of centralized library system:
### First Optimization Run
```python
# First beam optimization
runner1 = LLMOptimizationRunner(...)
runner1.run()
# Creates:
# optimization_engine/extractors/extract_mass.py
# optimization_engine/extractors/extract_displacement.py
```
### Second Optimization Run (Different Study!)
```python
# Different bracket optimization (but same extractions!)
runner2 = LLMOptimizationRunner(...)
runner2.run()
# REUSES existing extractors!
# No duplicate code generated
# Study folder stays clean
```
The system automatically detects identical extraction functionality and reuses code from the core library.
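One plausible way such deduplication works is content hashing: hash the normalized extractor source and reuse the first registration on a match. A toy sketch (the `ExtractorRegistry` class is illustrative; the real logic lives in `optimization_engine/extractor_library.py`):

```python
import hashlib

class ExtractorRegistry:
    """Toy dedup-by-content store."""

    def __init__(self):
        self._by_hash = {}

    def register(self, name, source):
        key = hashlib.sha256(source.strip().encode()).hexdigest()
        if key in self._by_hash:
            return self._by_hash[key]   # identical code: reuse existing name
        self._by_hash[key] = name       # first registration wins
        return name

reg = ExtractorRegistry()
src = "def extract_mass(op2):\n    return op2.grid_point_weight.mass"
reg.register("extract_mass", src)
# A second study submitting byte-identical code gets "extract_mass" back
```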
## Comparison: Three Modes
| Feature | Manual Mode | **Hybrid Mode** | Full LLM Mode |
|---------|-------------|-----------------|---------------|
| API Key Required | ❌ No | ❌ No | ✅ Yes |
| Automation Level | 0% (you code) | 90% (auto-gen) | 100% (NL only) |
| Extractor Generation | Manual | ✅ Auto | ✅ Auto |
| Hook Generation | Manual | ✅ Auto | ✅ Auto |
| Core Library | Manual | ✅ Auto | ✅ Auto |
| Transparency | Full | Full | High |
| Development Speed | Slow | **Fast** | Fastest |
| Production Ready | ✅ Yes | ✅ Yes | ⚠️ Alpha |
| **Recommended For** | Complex custom | **Most users** | Future |
## Troubleshooting
### Issue: "Expression not found in NX model"
**Problem**: Design variable name doesn't match NX expression name
**Solution**:
1. Open your `.prt` file in NX
2. Tools → Expression → check exact names
3. Update workflow JSON with exact names
**Example**:
```json
// WRONG
"parameter": "thickness"
// RIGHT (must match NX exactly)
"parameter": "beam_half_core_thickness"
```
### Issue: "No mass results in OP2"
**Problem**: OP2 file doesn't contain mass data
**Solution**:
1. Check what's actually in the OP2:
```python
from pyNastran.op2.op2 import OP2
model = OP2()
model.read_op2('path/to/results.op2')
print(dir(model)) # See available results
```
2. Use available result type instead (e.g., von Mises stress, displacement)
### Issue: "Extractor generation failed"
**Problem**: pyNastran research agent couldn't figure out extraction pattern
**Solution**:
1. Check `optimization_engine/knowledge_base/` for available patterns
2. Manually create extractor in `optimization_engine/extractors/`
3. Reference it in workflow JSON using existing action name
### Issue: "Parameter values too extreme"
**Problem**: Design variables using wrong range (0.2-1.0 instead of 20-30)
**Fixed**: This was the bug we fixed on Nov 17! Make sure you're using latest code.
**Verify**:
```python
# Check bounds parsing in llm_optimization_runner.py
if 'bounds' in var_config:
var_min, var_max = var_config['bounds'] # Should use this!
```
## Tips for Success
### 1. Start Small
- First run: 5-10 trials to verify everything works
- Check results, review auto-generated extractors
- Then scale up to 50-100 trials
### 2. Verify Units
- NX expressions: Check Tools → Expression
- Workflow JSON: Match units exactly
- Common mistake: mm vs m, kg vs g
### 3. Use Existing Examples
- `studies/simple_beam_optimization/` - Working example
- Copy the structure, modify workflow JSON
- Reuse proven patterns
### 4. Review Auto-Generated Code
```python
# After first run, check what was generated:
from optimization_engine.extractor_library import ExtractorLibrary
library = ExtractorLibrary()
print(library.get_library_summary())
```
### 5. Leverage Deduplication
- Same extraction across studies? Library reuses code!
- No need to regenerate extractors
- Study folders stay clean automatically
## Next Steps
### Ready to Test?
1. ✅ Read this guide
2. ✅ Review beam optimization example
3. ✅ Create your workflow JSON with Claude's help
4. ✅ Run your first optimization!
### Want Full Automation?
When you're ready for Full LLM Mode (Mode 3):
1. Set up Claude API key
2. Use natural language requests (no JSON needed!)
3. System creates workflow JSON automatically
4. Everything else identical to Hybrid Mode
### Questions?
- Check `docs/ARCHITECTURE_REFACTOR_NOV17.md` for library system details
- Review `optimization_engine/llm_optimization_runner.py` for implementation
- Run E2E test: `python tests/test_phase_3_2_e2e.py`
---
**Status**: Production Ready ✅
**Mode**: Hybrid (90% Automation)
**API Required**: No
**Testing**: E2E tests passing (18/18 checks)
**Architecture**: Centralized library with deduplication
**Ready to revolutionize your optimization workflow!** 🚀