feat: Add Protocol 13 adaptive optimization, Plotly charts, and dashboard improvements
## Protocol 13: Adaptive Multi-Objective Optimization

- Iterative FEA + neural-network surrogate workflow
- Initial FEA sampling, NN training, NN-accelerated search
- FEA validation of top NN predictions, retraining loop
- adaptive_state.json tracks iteration history and best values
- M1 mirror study (V11) with 103 FEA and 3000 NN trials

## Dashboard Visualization Enhancements

- Added Plotly.js interactive charts (parallel coordinates, Pareto, convergence)
- Lazy loading with React.lazy() for performance
- Code splitting: plotly.js-basic-dist (~1 MB vs 3.5 MB)
- Chart library toggle (Recharts default, Plotly on demand)
- ExpandableChart component for full-screen modal views
- ConsoleOutput component for real-time log viewing

## Documentation

- Protocol 13 detailed documentation
- Dashboard visualization guide
- Plotly components README
- Updated run-optimization skill with Mode 5 (adaptive)

## Bug Fixes

- Fixed TypeScript errors in dashboard components
- Fixed Card component to accept a ReactNode title
- Removed unused imports across components

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -2,6 +2,33 @@
This document describes the multi-part assembly FEM workflow used when optimizing complex assemblies with `.afm` (Assembly FEM) files.

## CRITICAL: Working Copy Requirement

**NEVER run optimization directly on the user's master model files.**

Before any optimization run, ALL model files must be copied to the study's working directory:

```
Source (NEVER MODIFY)                        Working Copy (optimization runs here)
────────────────────────────────────────────────────────────────────────────
C:/Users/.../M1-Gigabit/Latest/              studies/{study}/1_setup/model/
├── M1_Blank.prt                        →    ├── M1_Blank.prt
├── M1_Blank_fem1.fem                   →    ├── M1_Blank_fem1.fem
├── M1_Blank_fem1_i.prt                 →    ├── M1_Blank_fem1_i.prt
├── M1_Vertical_Support_Skeleton.prt    →    ├── M1_Vertical_Support_Skeleton.prt
├── ASSY_M1_assyfem1.afm                →    ├── ASSY_M1_assyfem1.afm
└── ASSY_M1_assyfem1_sim1.sim           →    └── ASSY_M1_assyfem1_sim1.sim
```

**Why**: Optimization iteratively modifies expressions, regenerates meshes, and saves files. If corruption occurs during an iteration (solver crash, bad parameter combination), the working copy can simply be deleted and re-copied; the master files remain safe.

**Files to Copy**:
- `*.prt` - All part files (geometry + idealized)
- `*.fem` - All FEM files
- `*.afm` - Assembly FEM files
- `*.sim` - Simulation files
- `*.exp` - Expression files (if any)
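The copy step above can be scripted with the standard library; a minimal sketch (the `copy_model_files` helper name and paths are illustrative, not part of the actual engine):

```python
import shutil
from pathlib import Path

# Extensions that make up a complete NX model set
MODEL_EXTENSIONS = (".prt", ".fem", ".afm", ".sim", ".exp")

def copy_model_files(source_dir: Path, working_dir: Path) -> list[Path]:
    """Copy all NX model files from the master directory to a working copy.

    The master directory is only read, never modified.
    Returns the list of files created in the working directory.
    """
    working_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in sorted(source_dir.iterdir()):
        if src.suffix.lower() in MODEL_EXTENSIONS:
            dest = working_dir / src.name
            shutil.copy2(src, dest)  # copy2 preserves timestamps
            copied.append(dest)
    return copied
```

If a run corrupts the working copy, delete `working_dir` and call the helper again.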
## Overview

Assembly FEMs have a more complex dependency chain than single-part simulations:

@@ -40,6 +67,29 @@ Open M1_Blank.prt

The `.prt` file contains the parametric CAD model with expressions that drive dimensions. These expressions are updated with new design parameter values, and the geometry is then rebuilt.

### Step 1b: Update ALL Linked Geometry Parts (CRITICAL!)

**⚠️ THIS STEP IS CRITICAL - SKIPPING IT CAUSES CORRUPT RESULTS ⚠️**

```
For each geometry part with linked expressions:
├── Open M1_Vertical_Support_Skeleton.prt
├── DoUpdate() - propagate linked expression changes
├── Geometry rebuilds to match M1_Blank
└── Save part
```

**Why this is critical:**
- M1_Vertical_Support_Skeleton has expressions linked to M1_Blank
- When M1_Blank geometry changes, the support skeleton MUST also update
- If it is not updated, FEM nodes remain at OLD positions → nodes are not coincident → the merge fails
- Result: "billion nm" RMS values (corrupt displacement data)

**Rule: YOU MUST UPDATE ALL GEOMETRY PARTS UNDER THE .sim FILE!**
- If there are 5 geometry parts, update all 5
- If there are 10 geometry parts, update all 10
- Unless explicitly told otherwise in the study config

### Step 2: Update Component FEM Files (.fem)

```
@@ -180,9 +230,100 @@ See `optimization_engine/solve_simulation.py` for the full implementation:

- `update_assembly_fem()` - Step 3 implementation
- `solve_simulation_file()` - Step 4 implementation

## HEEDS-Style Iteration Folder Management (V9+)

For complex assemblies, each optimization trial uses a fresh copy of the master model:

```
study_name/
├── 1_setup/
│   └── model/                    # Master model files (NEVER MODIFY)
│       ├── ASSY_M1.prt
│       ├── ASSY_M1_assyfem1.afm
│       ├── ASSY_M1_assyfem1_sim1.sim
│       ├── M1_Blank.prt
│       ├── M1_Blank_fem1.fem
│       └── ...
├── 2_iterations/
│   ├── iter0/                    # Trial 0 working copy
│   │   ├── [all model files]
│   │   ├── params.exp            # Expression values for this trial
│   │   └── results/              # OP2, Zernike CSV, etc.
│   ├── iter1/                    # Trial 1 working copy
│   └── ...
└── 3_results/
    └── study.db                  # Optuna database
```

### Why Fresh Copies Per Iteration?

1. **Corruption isolation**: If mesh regeneration fails mid-trial, only that iteration is affected
2. **Reproducibility**: Any trial can be re-run from its params.exp
3. **Debugging**: All intermediate files are preserved for post-mortem analysis
4. **Parallelization**: Multiple NX sessions could run different iterations (future)

### Iteration Folder Contents

| File | Purpose |
|------|---------|
| `*.prt, *.fem, *.afm, *.sim` | Fresh copy of all NX model files |
| `params.exp` | Expression file with trial parameter values |
| `*-solution_1.op2` | Nastran results (after solve) |
| `results/zernike_trial_N.csv` | Extracted Zernike metrics |

### 0-Based Iteration Numbering

Iterations are numbered starting from 0 to match Optuna trial numbers:
- `iter0` = Optuna trial 0 = dashboard trial 0
- `iter1` = Optuna trial 1 = dashboard trial 1

This keeps cross-referencing between the dashboard, database, and file system straightforward.

## Multi-Subcase Solutions

For gravity analysis at multiple orientations, use subcases:

```
Simulation Setup in NX:
├── Subcase 1: 90 deg elevation (zenith/polishing)
├── Subcase 2: 20 deg elevation (low-angle reference)
├── Subcase 3: 40 deg elevation
└── Subcase 4: 60 deg elevation
```

### Solving All Subcases

Set `solve_all_subcases` to `true` (or pass `solution_name=None`) to ensure all subcases are solved:

```json
"nx_settings": {
  "solution_name": "Solution 1",
  "solve_all_subcases": true
}
```

### Subcase ID Mapping

NX subcase IDs (1, 2, 3, 4) may not match the angle labels. Always define an explicit mapping:

```json
"zernike_settings": {
  "subcases": ["1", "2", "3", "4"],
  "subcase_labels": {
    "1": "90deg",
    "2": "20deg",
    "3": "40deg",
    "4": "60deg"
  },
  "reference_subcase": "2"
}
```
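The mapping above can be resolved in post-processing so results are keyed by angle rather than by raw subcase ID; a minimal sketch (the `label_for` / `subcase_for` helper names are illustrative):

```python
import json

# Same shape as the "zernike_settings" block in the study config
CONFIG = json.loads("""
{
  "subcases": ["1", "2", "3", "4"],
  "subcase_labels": {"1": "90deg", "2": "20deg", "3": "40deg", "4": "60deg"},
  "reference_subcase": "2"
}
""")

def label_for(subcase_id: str) -> str:
    """Resolve an NX subcase ID to its human-readable angle label."""
    return CONFIG["subcase_labels"][subcase_id]

def subcase_for(label: str) -> str:
    """Inverse lookup: angle label back to NX subcase ID."""
    inverse = {v: k for k, v in CONFIG["subcase_labels"].items()}
    return inverse[label]
```

With this, downstream code can ask for "40deg" results without hard-coding that the 40-degree load case happens to be subcase 3.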
## Tips

1. **Start with a baseline solve**: Before optimizing, manually verify that the full workflow completes in NX
2. **Check mesh quality**: Poor mesh quality after updates can cause solve failures
3. **Monitor memory**: Assembly FEMs with many components use significant memory
4. **Use Foreground mode**: For multi-subcase solutions, Foreground mode ensures all subcases complete
5. **Validate OP2 data**: Check for corrupt results (all zeros, unrealistic magnitudes) before processing
6. **Preserve user NX sessions**: NXSessionManager tracks PIDs to avoid closing the user's NX instances

228
docs/06_PROTOCOLS_DETAILED/DASHBOARD_VISUALIZATION.md
Normal file
@@ -0,0 +1,228 @@

# Atomizer Dashboard Visualization Guide

## Overview

The Atomizer Dashboard provides real-time visualization of optimization studies with interactive charts, trial history, and study management. It supports two chart libraries:

- **Recharts** (default): fast and lightweight; well suited to real-time updates
- **Plotly**: interactive zoom, pan, and export; better for post-run analysis

## Starting the Dashboard

```bash
# Quick start (both backend and frontend)
python start_dashboard.py

# Or manually:
cd atomizer-dashboard/backend && python -m uvicorn main:app --port 8000
cd atomizer-dashboard/frontend && npm run dev
```

Access at: http://localhost:3003

## Chart Components

### 1. Pareto Front Plot

Visualizes the trade-off between objectives in multi-objective optimization.

**Features:**
- 2D scatter plot for 2 objectives
- 3D view for 3+ objectives (Plotly only)
- Color differentiation: FEA (blue), NN (orange), Pareto (green)
- Axis selectors for choosing which objectives to display
- Hover tooltips with trial details

**Usage:**
- Click points to select trials
- Use the axis dropdowns to switch objectives
- Toggle 2D/3D view (Plotly mode)

### 2. Parallel Coordinates Plot

Shows relationships between all design variables and objectives simultaneously.

**Features:**
- Each vertical axis represents a variable or objective
- Lines connect the values for each trial
- Brush filtering: drag on any axis to filter
- Color coding by trial source (FEA/NN/Pareto)

**Usage:**
- Drag on axes to create filters
- Double-click to reset filters
- Hover for trial details

### 3. Convergence Plot

Tracks optimization progress over time.

**Features:**
- Scatter points for each trial's objective value
- Step line showing the best-so-far value
- Range slider for zooming (Plotly)
- FEA vs NN differentiation

**Metrics Displayed:**
- Best value achieved
- Current trial value
- Total trial count

### 4. Parameter Importance Chart

Shows which design variables most influence the objective.

**Features:**
- Horizontal bar chart of correlation coefficients
- Color coding: red (positive), green (negative)
- Sortable by importance or name
- Pearson correlation calculation

**Interpretation:**
- Positive correlation: higher parameter value → higher objective
- Negative correlation: higher parameter value → lower objective
- |r| > 0.5: strong influence
### 5. Expandable Charts

All charts support a full-screen modal view:

**Features:**
- Click the expand icon to open the modal
- Larger view for detailed analysis
- Maintains all interactivity
- Close with the X or by clicking outside

## Chart Library Toggle

Switch between Recharts and Plotly using the header buttons:

| Feature | Recharts | Plotly |
|---------|----------|--------|
| Load speed | Fast | Slower (lazy loaded) |
| Interactivity | Basic | Advanced |
| Export | Screenshot | Native PNG/SVG |
| 3D support | No | Yes |
| Real-time updates | Better | Good |

**Recommendation:**
- Use Recharts during active optimization (real-time)
- Switch to Plotly for post-optimization analysis

## Study Management

### Study Selection

- The left sidebar lists all available studies
- Click a study to select it and load its data
- A badge shows study status (running/completed)

### Metrics Cards

The top row displays key metrics:
- **Trials**: Total completed trials
- **Best Value**: Best objective achieved
- **Pruned**: Trials pruned by the sampler

### Trial History

The bottom section shows trial details:
- Trial number and objective value
- Parameter values (expandable)
- Source indicator (FEA/NN)
- Sortable by performance or chronological order

## Report Viewer

Access generated study reports:

1. Click the "View Report" button
2. Markdown is rendered with syntax highlighting
3. Supports tables, code blocks, and math

## Console Output

Real-time log viewer:

- Shows optimization progress
- Highlights error messages
- Auto-scrolls to the latest output
- Collapsible panel

## API Endpoints

The dashboard uses these REST endpoints:

```
GET /api/optimization/studies                 # List all studies
GET /api/optimization/studies/{id}/status     # Study status
GET /api/optimization/studies/{id}/history    # Trial history
GET /api/optimization/studies/{id}/metadata   # Study config
GET /api/optimization/studies/{id}/pareto     # Pareto front
GET /api/optimization/studies/{id}/report     # Markdown report
GET /api/optimization/studies/{id}/console    # Log output
```
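A small client helper keeps these paths in one place; a sketch (the `study_url` helper is illustrative, not part of the dashboard code):

```python
BASE = "http://localhost:8000"

def study_url(study_id: str, resource: str = "status") -> str:
    """Build the REST URL for a study resource (status, history, pareto, ...)."""
    if resource == "list":
        # The collection endpoint takes no study id
        return f"{BASE}/api/optimization/studies"
    return f"{BASE}/api/optimization/studies/{study_id}/{resource}"
```

Pair it with any HTTP client, e.g. `urllib.request.urlopen(study_url("my_study", "history"))`.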
## WebSocket Updates

Real-time updates are pushed over a WebSocket:

```
ws://localhost:8000/api/ws/optimization/{study_id}
```

Events:
- `trial_completed`: A new trial finished
- `trial_pruned`: A trial was pruned
- `new_best`: A new best value was found

## Performance Optimization

### For Large Studies (1000+ trials)

1. Use Recharts for real-time monitoring
2. Switch to Plotly for final analysis
3. Limit the number of trials displayed in parallel coordinates

### Bundle Optimization

The dashboard uses:
- `plotly.js-basic-dist` (smaller bundle, ~1 MB vs 3.5 MB)
- Lazy loading for Plotly components
- Code splitting (vendor, recharts, and plotly chunks)

## Troubleshooting

### Charts Not Loading

1. Check that the backend is running (port 8000)
2. Verify the API proxy in vite.config.ts
3. Check the browser console for errors

### Slow Performance

1. Switch to Recharts mode
2. Reduce the trial-history limit
3. Close unused browser tabs

### Missing Data

1. Verify that study.db exists
2. Check that the study has completed trials
3. Refresh the page after new trials

## Development

### Adding New Charts

1. Create the component in `src/components/`
2. Add a Plotly version in `src/components/plotly/`
3. Export it from `src/components/plotly/index.ts`
4. Add it to Dashboard.tsx with toggle logic

### Styling

Uses Tailwind CSS with a dark theme:
- Background: `dark-800`, `dark-900`
- Text: `dark-100`, `dark-200`
- Accent: `primary-500`, `primary-600`

@@ -0,0 +1,278 @@

# Protocol 13: Adaptive Multi-Objective Optimization

## Overview

Protocol 13 implements an adaptive multi-objective optimization strategy that combines:
- **FEA (Finite Element Analysis)** for ground-truth simulations
- **Neural network surrogates** for rapid exploration
- **Iterative refinement** with periodic retraining

This protocol is ideal for expensive simulations where each FEA run takes significant time (minutes to hours) but a large design space must be explored efficiently.

## When to Use Protocol 13

| Scenario | Recommended |
|----------|-------------|
| FEA takes > 5 minutes per run | Yes |
| Need to explore > 100 designs | Yes |
| Multi-objective optimization (2-4 objectives) | Yes |
| Single objective, fast FEA (< 1 min) | No, use Protocol 10/11 |
| Highly nonlinear response surfaces | Yes, with more FEA samples |

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                   Adaptive Optimization Loop                    │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Iteration 1:                                                   │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │ Initial FEA  │ -> │  Train NN    │ -> │  NN Search   │       │
│  │  (50-100)    │    │  Surrogate   │    │ (1000 trials)│       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│                                                 │               │
│                                                 v               │
│  Iteration 2+:                                                  │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │ Validate Top │ -> │  Retrain NN  │ -> │  NN Search   │       │
│  │ NN with FEA  │    │ with new data│    │ (1000 trials)│       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## Configuration

### optimization_config.json

```json
{
  "study_name": "my_adaptive_study",
  "protocol": 13,

  "adaptive_settings": {
    "enabled": true,
    "initial_fea_trials": 50,
    "nn_trials_per_iteration": 1000,
    "fea_validation_per_iteration": 5,
    "max_iterations": 10,
    "convergence_threshold": 0.01,
    "retrain_epochs": 100
  },

  "objectives": [
    {
      "name": "thermal_40_vs_20",
      "direction": "minimize",
      "weight": 1.0
    },
    {
      "name": "thermal_60_vs_20",
      "direction": "minimize",
      "weight": 0.5
    },
    {
      "name": "manufacturability",
      "direction": "minimize",
      "weight": 0.3
    }
  ],

  "design_variables": [
    {
      "name": "rib_thickness",
      "expression_name": "rib_thickness",
      "min": 5.0,
      "max": 15.0,
      "baseline": 10.0
    }
  ],

  "surrogate_settings": {
    "enabled": true,
    "model_type": "neural_network",
    "hidden_layers": [128, 64, 32],
    "learning_rate": 0.001,
    "batch_size": 32
  }
}
```

### Key Parameters

| Parameter | Description | Recommended |
|-----------|-------------|-------------|
| `initial_fea_trials` | FEA runs before the first NN training | 50-100 |
| `nn_trials_per_iteration` | NN-predicted trials per iteration | 500-2000 |
| `fea_validation_per_iteration` | Top NN trials validated with FEA | 3-10 |
| `max_iterations` | Maximum adaptive iterations | 5-20 |
| `convergence_threshold` | Stop if improvement < threshold | 0.01 (1%) |

## Workflow

### Phase 1: Initial FEA Sampling

```python
# Generates space-filling Latin Hypercube samples
# Runs FEA on each sample
# Stores results in the Optuna database with source='FEA'
```

### Phase 2: Neural Network Training

```python
# Extracts all FEA trials from the database
# Normalizes inputs (design variables) to [0, 1]
# Trains a multi-output neural network
# Validates on a held-out set (20%)
```

### Phase 3: NN-Accelerated Search

```python
# Uses the trained NN as the objective function
# Runs NSGA-II with 1000+ trials (fast, ~ms per trial)
# Identifies Pareto-optimal candidates
# Stores predictions with source='NN'
```

### Phase 4: FEA Validation

```python
# Selects the top N NN predictions
# Runs actual FEA on these candidates
# Updates the database with ground truth
# Checks for improvement
```

### Phase 5: Iteration

```python
# If improved: retrain the NN with the new FEA data
# If converged: stop and report the best design
# Otherwise: continue to the next iteration
```
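The five phases compose into a single loop. The self-contained sketch below shows the control flow only: the quadratic `run_fea` and the nearest-neighbor `SurrogateStub` are toy stand-ins for the real FEA driver and neural network, and all names are illustrative:

```python
import random

def run_fea(x: float) -> float:
    """Toy stand-in for an expensive FEA evaluation (optimum at x = 8.5)."""
    return (x - 8.5) ** 2

class SurrogateStub:
    """Nearest-neighbor 'surrogate' standing in for the neural network."""
    def __init__(self):
        self.data: list[tuple[float, float]] = []
    def fit(self, xs, ys):
        self.data = list(zip(xs, ys))
    def predict(self, x: float) -> float:
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

def adaptive_loop(n_initial=10, nn_trials=200, n_validate=3, max_iter=5, seed=0):
    rng = random.Random(seed)
    # Phase 1: initial FEA sampling over the design bounds [5, 15]
    xs = [rng.uniform(5.0, 15.0) for _ in range(n_initial)]
    ys = [run_fea(x) for x in xs]
    best = min(ys)
    model = SurrogateStub()
    for _ in range(max_iter):
        model.fit(xs, ys)                     # Phase 2: (re)train surrogate
        candidates = [rng.uniform(5.0, 15.0) for _ in range(nn_trials)]
        candidates.sort(key=model.predict)    # Phase 3: cheap surrogate search
        for x in candidates[:n_validate]:     # Phase 4: FEA on top candidates
            xs.append(x)
            ys.append(run_fea(x))
        new_best = min(ys)
        if best - new_best < 0.01 * best:     # Phase 5: 1% convergence check
            break
        best = new_best
    return min(ys)
```

The real engine does the same dance, with Optuna bookkeeping and `adaptive_state.json` persisted between iterations.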
## Output Files

```
studies/my_study/
├── 3_results/
│   ├── study.db                  # Optuna database (all trials)
│   ├── adaptive_state.json       # Current iteration state
│   ├── surrogate_model.pt        # Trained neural network
│   ├── training_history.json     # NN training metrics
│   └── STUDY_REPORT.md           # Generated summary report
```

### adaptive_state.json

```json
{
  "iteration": 3,
  "total_fea_count": 103,
  "total_nn_count": 3000,
  "best_weighted": 1.456,
  "best_params": {
    "rib_thickness": 8.5,
    "...": "..."
  },
  "history": [
    {"iteration": 1, "fea_count": 50, "nn_count": 1000, "improved": true},
    {"iteration": 2, "fea_count": 55, "nn_count": 2000, "improved": true},
    {"iteration": 3, "fea_count": 103, "nn_count": 3000, "improved": false}
  ]
}
```

## Dashboard Integration

Protocol 13 studies display in the Atomizer dashboard with:

- **FEA vs NN differentiation**: blue circles for FEA, orange crosses for NN
- **Pareto front highlighting**: green markers for Pareto-optimal solutions
- **Convergence plot**: optimization progress with a best-so-far line
- **Parallel coordinates**: filter and explore the design space
- **Parameter importance**: correlation-based sensitivity analysis

## Best Practices

### 1. Initial Sampling Strategy

Use Latin Hypercube Sampling (LHS) for the initial FEA trials to ensure good coverage. Note that core Optuna does not ship a dedicated LHS sampler; one option is to generate the points with `scipy.stats.qmc.LatinHypercube` and enqueue them:

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=5, seed=42)  # d = number of design variables
samples = sampler.random(n=50)              # points in [0, 1)^d; scale to bounds
```

### 2. Neural Network Architecture

For most problems, start with:
- 2-3 hidden layers
- 64-128 neurons per layer
- ReLU activation
- Adam optimizer with lr=0.001

### 3. Validation Strategy

Always validate the top NN predictions with FEA before trusting them:
- NN predictions can be wrong in unexplored regions
- FEA validation provides ground truth
- More FEA means a more accurate NN (a trade-off against time)

### 4. Convergence Criteria

Stop when:
- There is no improvement for 2-3 consecutive iterations
- The FEA budget limit is reached
- Objective improvement falls below the 1% threshold
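The stopping rules above can be combined in a single check; a minimal sketch (function and parameter names are illustrative, minimization assumed):

```python
def should_stop(history: list[float], budget_used: int, budget_max: int,
                threshold: float = 0.01, patience: int = 3) -> bool:
    """Stop when the FEA budget is spent or the best value has stalled.

    `history` holds the best objective value after each iteration.
    """
    if budget_used >= budget_max:
        return True
    if len(history) <= patience:
        return False
    # Relative improvement over the last `patience` iterations
    old, new = history[-patience - 1], history[-1]
    return (old - new) < threshold * abs(old)
```

A run with steady 0.01-per-iteration gains on a best value of 10 would stop (0.03 total improvement is below 1% of 10), while one halving its best value each iteration keeps going.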
## Example: M1 Mirror Optimization

```bash
# Start adaptive optimization
cd studies/m1_mirror_adaptive_V11
python run_optimization.py --start

# Monitor progress
python run_optimization.py --status

# Generate a report
python generate_report.py
```

Results after 3 iterations:
- 103 FEA trials
- 3000 NN trials
- Best thermal 40°C vs 20°C: 5.99 nm RMS
- Best thermal 60°C vs 20°C: 14.02 nm RMS

## Troubleshooting

### NN Predictions Don't Match FEA

- Increase the number of initial FEA samples
- Add more hidden layers
- Check for outliers in the training data
- Ensure proper normalization

### Optimization Not Converging

- Increase NN trials per iteration
- Check the objective function implementation
- Verify the design variable bounds
- Consider adding constraints

### Memory Issues

- Reduce `nn_trials_per_iteration`
- Use batch processing for large datasets
- Clear the trial cache periodically

## Related Documentation

- [Protocol 11: Multi-Objective NSGA-II](./PROTOCOL_11_MULTI_OBJECTIVE.md)
- [Protocol 12: Hybrid FEA/NN](./PROTOCOL_12_HYBRID.md)
- [Neural Surrogate Training](../07_DEVELOPMENT/NEURAL_SURROGATE.md)
- [Zernike Extractor](./ZERNIKE_EXTRACTOR.md)

403
docs/06_PROTOCOLS_DETAILED/ZERNIKE_EXTRACTOR.md
Normal file
@@ -0,0 +1,403 @@

# Zernike Coefficient Extractor

## Overview

The Zernike extractor module provides complete wavefront error (WFE) analysis for telescope mirror optimization. It extracts Zernike polynomial coefficients from FEA displacement results and computes the RMS metrics used as optimization objectives.

**Location**: `optimization_engine/extractors/extract_zernike.py`

---

## Mathematical Background

### What are Zernike Polynomials?

Zernike polynomials are a set of orthogonal functions defined on the unit disk. They are the standard basis for describing optical aberrations because:

1. **Orthogonality**: Each mode is independent (no cross-talk)
2. **Physical meaning**: Each mode corresponds to a recognizable aberration
3. **RMS property**: The total RMS² is the sum of the squared coefficients (for a unit-RMS normalized basis)

### Noll Indexing Convention

We use the Noll indexing scheme (standard in optics):

| Noll j | n | m  | Name              | Physical Meaning |
|--------|---|----|-------------------|------------------|
| 1      | 0 | 0  | Piston            | Constant offset (ignored) |
| 2      | 1 | 1  | Tilt Y            | Pointing error - correctable |
| 3      | 1 | -1 | Tilt X            | Pointing error - correctable |
| 4      | 2 | 0  | Defocus           | Focus error - correctable |
| 5      | 2 | -2 | Astigmatism 45°   | 3rd-order aberration |
| 6      | 2 | 2  | Astigmatism 0°    | 3rd-order aberration |
| 7      | 3 | -1 | Coma X            | 3rd-order aberration |
| 8      | 3 | 1  | Coma Y            | 3rd-order aberration |
| 9      | 3 | -3 | Trefoil X         | Triangular aberration |
| 10     | 3 | 3  | Trefoil Y         | Triangular aberration |
| 11     | 4 | 0  | Primary Spherical | 4th-order spherical |
| 12-50  | ... | ... | Higher orders   | Higher-order aberrations |

### Zernike Polynomial Formula

Each Zernike polynomial Z_j(r, θ) is computed as:

```
Z_j(r, θ) = R_n^m(r) × { cos(m·θ)    if m ≥ 0
                       { sin(|m|·θ)  if m < 0

where

R_n^m(r) = Σ(s=0 to (n-|m|)/2) [(-1)^s × (n-s)! / (s! × ((n+|m|)/2 - s)! × ((n-|m|)/2 - s)!)] × r^(n-2s)
```
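The radial sum translates directly to code; a minimal sketch (the `radial` helper name is illustrative, not part of the module's API):

```python
from math import factorial

def radial(n: int, m: int, r: float) -> float:
    """Zernike radial polynomial R_n^|m|(r) via the explicit factorial sum."""
    m = abs(m)
    if (n - m) % 2:  # R_n^m vanishes when n - |m| is odd
        return 0.0
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * r ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )
```

Sanity checks: `radial(2, 0, r)` reproduces the defocus radial part 2r² − 1, and `radial(1, 1, r)` is simply r.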
### Wavefront Error Conversion

FEA gives surface displacement in mm. We convert to wavefront error in nm:

```
WFE [nm] = 2 × displacement [mm] × 10⁶
           ↑                       ↑
           optical reflection      mm → nm
```

The factor of 2 accounts for the optical path difference when light reflects off the surface.

---

## Module Structure

### Files

| File | Purpose |
|------|---------|
| `extract_zernike.py` | Core extraction: Zernike fitting, RMS computation, OP2 parsing |
| `zernike_helpers.py` | High-level helpers for optimization integration |
| `extract_zernike_surface.py` | Surface-based extraction (alternative method) |

### Key Classes

#### `ZernikeExtractor`

Main class for Zernike analysis:

```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(
    op2_path="results/model-solution_1.op2",
    bdf_path="results/model.dat",   # Optional, auto-detected
    displacement_unit="mm",         # Unit in the OP2 file
    n_modes=50,                     # Number of Zernike modes
    filter_orders=4                 # Modes to filter (J1-J4)
)

# Extract a single subcase
result = extractor.extract_subcase("20")
print(f"Filtered RMS: {result['filtered_rms_nm']:.2f} nm")

# Extract relative metrics (target vs reference)
relative = extractor.extract_relative(
    target_subcase="40",
    reference_subcase="20"
)
print(f"Relative RMS (40 vs 20): {relative['relative_filtered_rms_nm']:.2f} nm")

# Extract all subcases
all_results = extractor.extract_all_subcases(reference_subcase="20")
```

---

## RMS Metrics Explained

### Global RMS

Raw RMS of the entire wavefront error surface:

```
global_rms = sqrt(mean(WFE²))
```

### Filtered RMS (J1-J4 removed)

RMS after removing the correctable aberrations (piston, tip, tilt, defocus):

```python
# Subtract the low-order contribution
WFE_filtered = WFE - Σ(j=1 to 4) c_j × Z_j(r, θ)
filtered_rms = sqrt(mean(WFE_filtered²))
```

**This is typically the primary optimization objective** because:
- Piston (J1): Does not affect imaging
- Tip/Tilt (J2-J3): Corrected by telescope pointing
- Defocus (J4): Corrected by the focus mechanism

### Optician Workload (J1-J3 removed)

RMS for manufacturing assessment; defocus is kept because correcting it requires material removal:

```python
# Subtract only piston and tilt
WFE_j1to3 = WFE - Σ(j=1 to 3) c_j × Z_j(r, θ)
rms_filter_j1to3 = sqrt(mean(WFE_j1to3²))
```

### Relative RMS Between Subcases

Measures gravity-induced deformation relative to a reference orientation:

```python
# Compute the difference surface
ΔWFE = WFE_target - WFE_reference

# Fit Zernike modes to the difference
Δc = zernike_fit(ΔWFE)

# Filter and compute RMS
relative_filtered_rms = sqrt(Σ(j=5 to 50) Δc_j²)
```

---

## Coefficient-Based vs Surface-Based RMS

Due to Zernike orthogonality, the two methods are mathematically equivalent:

### Method 1: Coefficient-Based (Fast)

```python
# From the coefficients directly
filtered_rms = sqrt(Σ(j=5 to 50) c_j²)
```

### Method 2: Surface-Based (More accurate for irregular meshes)

```python
# Reconstruct and subtract the low-order surface
WFE_low = Σ(j=1 to 4) c_j × Z_j(r, θ)
WFE_filtered = WFE - WFE_low
filtered_rms = sqrt(mean(WFE_filtered²))
```

The module uses the surface-based method for maximum accuracy with FEA meshes: orthogonality holds exactly only on the continuous unit disk, so on an irregular FEA node distribution the two results can differ slightly.

---

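Method 1 is a one-liner given a Noll-ordered coefficient vector; a minimal runnable sketch (the `coefficient_rms` helper name and the coefficient values are made up for illustration, and a unit-RMS normalized basis is assumed):

```python
from math import sqrt

def coefficient_rms(coeffs_nm: list[float], first_kept_mode: int = 5) -> float:
    """RMS from Zernike coefficients, dropping all modes below `first_kept_mode`.

    `coeffs_nm[0]` is Noll J1 (piston); J1-J4 are excluded by default.
    """
    kept = coeffs_nm[first_kept_mode - 1:]
    return sqrt(sum(c * c for c in kept))

# Example: only astigmatism (J5 = 3 nm) and coma X (J7 = 4 nm) present
coeffs = [0.0] * 11
coeffs[4] = 3.0   # J5
coeffs[6] = 4.0   # J7
```

For this example `coefficient_rms(coeffs)` is 5.0 nm (a 3-4-5 triangle), and a vector with energy only in J1-J4 yields 0.0.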
## Usage in Optimization

### Simple: Single Objective

```python
from optimization_engine.extractors import extract_zernike_filtered_rms

def objective(trial):
    # ... run simulation ...

    rms = extract_zernike_filtered_rms(
        op2_file=sim_dir / "model-solution_1.op2",
        subcase="20"
    )
    return rms
```

### Multi-Subcase: Weighted Sum

```python
from optimization_engine.extractors import ZernikeExtractor

def objective(trial):
    # ... run simulation ...

    extractor = ZernikeExtractor(op2_path)

    # Extract relative metrics
    rel_40_20 = extractor.extract_relative("3", "2")['relative_filtered_rms_nm']
    rel_60_20 = extractor.extract_relative("4", "2")['relative_filtered_rms_nm']
    mfg_90 = extractor.extract_relative("1", "2")['relative_rms_filter_j1to3']

    # Weighted objective (weights sum to 11)
    weighted = (
        5.0 * (rel_40_20 / 4.0) +    # Target: 4 nm
        5.0 * (rel_60_20 / 10.0) +   # Target: 10 nm
        1.0 * (mfg_90 / 20.0)        # Target: 20 nm
    ) / 11.0

    return weighted
```

### Using Helper Classes

```python
from optimization_engine.extractors.zernike_helpers import ZernikeObjectiveBuilder

builder = ZernikeObjectiveBuilder(
    op2_finder=lambda: sim_dir / "model-solution_1.op2"
)

builder.add_relative_objective("3", "2", weight=5.0)  # 40 vs 20
builder.add_relative_objective("4", "2", weight=5.0)  # 60 vs 20
builder.add_relative_objective("1", "2",
                               metric="relative_rms_filter_j1to3",
                               weight=1.0)  # 90 vs 20

objective = builder.build_weighted_sum()
value = objective()  # Returns combined metric
```
---

## Output Dictionary Reference

### `extract_subcase()` Returns:

| Key | Type | Description |
|-----|------|-------------|
| `subcase` | str | Subcase identifier |
| `global_rms_nm` | float | Global RMS WFE (nm) |
| `filtered_rms_nm` | float | Filtered RMS (J1-J4 removed) |
| `rms_filter_j1to3` | float | J1-J3 filtered RMS (keeps defocus) |
| `n_nodes` | int | Number of nodes analyzed |
| `defocus_nm` | float | Defocus magnitude (J4) |
| `astigmatism_rms_nm` | float | Combined astigmatism (J5+J6) |
| `coma_rms_nm` | float | Combined coma (J7+J8) |
| `trefoil_rms_nm` | float | Combined trefoil (J9+J10) |
| `spherical_nm` | float | Primary spherical (J11) |
### `extract_relative()` Returns:

| Key | Type | Description |
|-----|------|-------------|
| `target_subcase` | str | Target subcase |
| `reference_subcase` | str | Reference subcase |
| `relative_global_rms_nm` | float | Global RMS of difference |
| `relative_filtered_rms_nm` | float | Filtered RMS of difference |
| `relative_rms_filter_j1to3` | float | J1-J3 filtered RMS of difference |
| `relative_defocus_nm` | float | Defocus change |
| `relative_astigmatism_rms_nm` | float | Astigmatism change |
| ... | ... | (all aberrations with `relative_` prefix) |
---

## Subcase Mapping

NX Nastran subcases map to gravity orientations:

| Subcase ID | Elevation | Purpose |
|------------|-----------|---------|
| 1 | 90° (zenith) | Polishing/manufacturing orientation |
| 2 | 20° | Reference (low elevation) |
| 3 | 40° | Mid-range tracking |
| 4 | 60° | High-range tracking |

The **20° orientation is typically used as reference** because:
- It represents typical low-elevation observing
- Polishing is done at 90°, so we measure change from a tracking position
---

## Saving Zernike Coefficients for Surrogate Training

For neural network training, save all 200 coefficients (50 modes × 4 subcases):

```python
import pandas as pd
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_path)

# Extract each subcase once, then tabulate all 50 modes
subcase_map = [('1', '90deg'), ('2', '20deg'), ('3', '40deg'), ('4', '60deg')]
coeffs = {
    label: extractor.extract_subcase(sc, include_coefficients=True)['coefficients']
    for sc, label in subcase_map
}

rows = []
for j in range(1, 51):
    row = {'noll_index': j}
    for _, label in subcase_map:
        row[f'{label}_nm'] = coeffs[label][j - 1]
    rows.append(row)

df = pd.DataFrame(rows)
df.to_csv(f"zernike_coefficients_trial_{trial_num}.csv", index=False)
```
### CSV Format

| noll_index | 90deg_nm | 20deg_nm | 40deg_nm | 60deg_nm |
|------------|----------|----------|----------|----------|
| 1 | 0.05 | 0.03 | 0.04 | 0.04 |
| 2 | -1.23 | -0.98 | -1.05 | -1.12 |
| ... | ... | ... | ... | ... |
| 50 | 0.02 | 0.01 | 0.02 | 0.02 |

**Note**: These are ABSOLUTE coefficients in nm, not relative RMS values.
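When assembling a training set, each per-trial CSV can be flattened into one feature row. A sketch with pandas (the two-mode CSV text here is illustrative; real files carry all 50 modes):

```python
import io

import pandas as pd

# Illustrative single-trial CSV (abbreviated to 2 modes; real files have 50)
csv_text = """noll_index,90deg_nm,20deg_nm,40deg_nm,60deg_nm
1,0.05,0.03,0.04,0.04
2,-1.23,-0.98,-1.05,-1.12
"""

df = pd.read_csv(io.StringIO(csv_text))

# Flatten mode x subcase coefficients into one feature vector per trial:
# 50 modes x 4 subcases -> 200 features in the real case
features = df[['90deg_nm', '20deg_nm', '40deg_nm', '60deg_nm']].to_numpy().ravel()
print(features.shape)  # (8,) here; (200,) with all 50 modes
```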
---

## Error Handling

### Common Issues

1. **"No displacement data found in OP2"**
   - Check that the solve completed successfully
   - Verify the OP2 file isn't corrupted or incomplete

2. **"Subcase 'X' not found"**
   - List available subcases: `print(extractor.displacements.keys())`
   - Check subcase numbering in the NX simulation setup

3. **"No valid points inside unit disk"**
   - Mirror surface nodes may not be properly identified
   - Check BDF node coordinates

4. **pyNastran version warning**
   - `nx version='2506.5' is not supported` is just a warning; extraction still works
---

## Dependencies

```
# Required
pyNastran >= 1.3.4   # OP2/BDF parsing
numpy >= 1.20        # Numerical computations

# Optional (for visualization)
matplotlib           # Plotting Zernike surfaces
```
---

## References

1. **Noll, R. J. (1976)**. "Zernike polynomials and atmospheric turbulence." *Journal of the Optical Society of America*, 66(3), 207-211.

2. **Born, M. & Wolf, E. (1999)**. *Principles of Optics* (7th ed.). Cambridge University Press. Chapter 9: Aberrations.

3. **Wyant, J. C. & Creath, K. (1992)**. "Basic Wavefront Aberration Theory for Optical Metrology." *Applied Optics and Optical Engineering*, Vol. XI.
---

## Module Exports

```python
from optimization_engine.extractors import (
    # Main class
    ZernikeExtractor,

    # Convenience functions
    extract_zernike_from_op2,
    extract_zernike_filtered_rms,
    extract_zernike_relative_rms,

    # Helpers for optimization
    create_zernike_objective,
    create_relative_zernike_objective,
    ZernikeObjectiveBuilder,

    # Low-level utilities
    compute_zernike_coefficients,
    compute_rms_metrics,
    noll_indices,
    zernike_noll,
    zernike_name,
)
```
---

**New file:** `docs/06_PROTOCOLS_DETAILED/ZERNIKE_MIRROR_OPTIMIZATION.md` (+356 lines)
# Zernike Mirror Optimization Protocol

## Overview

This document captures lessons learned from the M1 mirror Zernike optimization studies (V1-V9), including the Assembly FEM (AFEM) workflow, subcase handling, and wavefront error metrics.

## Assembly FEM (AFEM) Structure

### NX File Organization

A typical telescope mirror assembly in NX consists of:

```
ASSY_M1.prt                          # Master assembly part
ASSY_M1_assyfem1.afm                 # Assembly FEM container
ASSY_M1_assyfem1_sim1.sim            # Simulation file (this is what we solve)
M1_Blank.prt                         # Mirror blank part
M1_Blank_fem1.fem                    # Mirror blank mesh
M1_Blank_fem1_i.prt                  # Idealized geometry for FEM
M1_Vertical_Support_Skeleton.prt     # Support structure part
M1_Vertical_Support_Skeleton_fem1.fem
M1_Vertical_Support_Skeleton_fem1_i.prt
```
### Key Relationships

1. **Assembly Part (.prt)** - Contains the CAD geometry and expressions (design parameters)
2. **Assembly FEM (.afm)** - Links component FEMs together, defines connections
3. **Simulation (.sim)** - Contains solutions, loads, boundary conditions, subcases
4. **Component FEMs (.fem)** - Individual meshes that get assembled

### Expression Propagation

Expressions defined in the master `.prt` propagate through the assembly:
- Modify an expression in `ASSY_M1.prt`
- The AFEM updates mesh connections automatically
- Solve via the `.sim` file
## Multi-Subcase Analysis

### Telescope Gravity Orientations

For telescope mirrors, we analyze multiple gravity orientations (subcases):

| Subcase | Elevation Angle | Purpose |
|---------|-----------------|---------|
| 1 | 90 deg (zenith) | Polishing orientation - manufacturing reference |
| 2 | 20 deg | Low elevation - reference for relative metrics |
| 3 | 40 deg | Mid-low elevation |
| 4 | 60 deg | Mid-high elevation |

### Subcase Mapping

**Important**: NX subcase numbers don't always match angle labels!

```json
"subcase_labels": {
  "1": "90deg",
  "2": "20deg",
  "3": "40deg",
  "4": "60deg"
}
```

Here subcase 2 (not 20) is the 20-degree reference. Always verify the subcase-to-angle mapping against the NX simulation setup.
## Zernike Wavefront Error Analysis

### Optical Convention

For mirror surface deformation to wavefront error:

```
WFE = 2 * surface_displacement   (reflection doubles the path difference)
```

Unit conversion:

```python
NM_PER_MM = 1e6  # 1 mm displacement = 1e6 nm WFE contribution
wfe_nm = 2.0 * displacement_mm * NM_PER_MM
```
### Zernike Polynomial Indexing

We use **Noll indexing** (standard in optics):

| J | Name | (n,m) | Correctable? |
|---|------|-------|--------------|
| 1 | Piston | (0,0) | Yes - alignment |
| 2 | Tilt X | (1,1) | Yes - alignment |
| 3 | Tilt Y | (1,-1) | Yes - alignment |
| 4 | Defocus | (2,0) | Yes - focus adjustment |
| 5 | Astigmatism 45 | (2,-2) | Partially |
| 6 | Astigmatism 0 | (2,2) | Partially |
| 7 | Coma X | (3,-1) | No |
| 8 | Coma Y | (3,1) | No |
| 9 | Trefoil X | (3,-3) | No |
| 10 | Trefoil Y | (3,3) | No |
| 11 | Spherical | (4,0) | No |
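The (n, m) pairs follow Noll's ordering rule. As a cross-check, a minimal standalone conversion (independent of the module's `noll_indices` helper) can be written as:

```python
def noll_to_nm(j):
    """Convert a Noll index j (1-based) to radial order n and azimuthal frequency m."""
    n, j1 = 0, j - 1
    while j1 > n:          # walk down rows of the Zernike pyramid
        n += 1
        j1 -= n
    # Noll sign rule: even j -> cosine terms (m >= 0), odd j -> sine terms (m <= 0)
    m = (-1) ** j * ((n % 2) + 2 * ((j1 + ((n + 1) % 2)) // 2))
    return n, m

print([noll_to_nm(j) for j in (2, 4, 7, 11)])  # [(1, 1), (2, 0), (3, -1), (4, 0)]
```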
### RMS Metrics

| Metric | Filter | Use Case |
|--------|--------|----------|
| `global_rms_nm` | None | Total surface error |
| `filtered_rms_nm` | J1-J4 removed | Uncorrectable error (optimization target) |
| `rms_filter_j1to3` | J1-J3 removed | Optician workload (keeps defocus) |

### Relative Metrics

For gravity-induced deformation, we compute relative WFE:

```
WFE_relative = WFE_target_orientation - WFE_reference_orientation
```

This removes the static (manufacturing) shape and isolates gravity effects.

Example: `rel_filtered_rms_40_vs_20` = filtered RMS at 40 deg relative to the 20 deg reference
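On coefficient vectors, the relative metrics reduce to a mode-wise difference followed by a filtered RMS. A numpy sketch with synthetic coefficients (illustrative values, not extractor output):

```python
import numpy as np

rng = np.random.default_rng(1)
c_ref = rng.normal(0.0, 5.0, 50)          # 20 deg reference coefficients (nm)
c_tgt = c_ref + rng.normal(0.0, 0.5, 50)  # 40 deg target: small gravity-induced change

dc = c_tgt - c_ref                               # per-mode change between orientations
rel_filtered_rms = np.sqrt(np.sum(dc[4:] ** 2))  # J1-J4 removed (filtered)
rel_rms_j1to3 = np.sqrt(np.sum(dc[3:] ** 2))     # J1-J3 removed (keeps defocus)

# Removing more low-order modes can only lower the RMS
assert rel_filtered_rms <= rel_rms_j1to3
```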
## Optimization Objectives

### Typical M1 Mirror Objectives

```json
"objectives": [
  {
    "name": "rel_filtered_rms_40_vs_20",
    "description": "Gravity-induced WFE at 40 deg vs 20 deg reference",
    "direction": "minimize",
    "weight": 5.0,
    "target": 4.0,
    "units": "nm"
  },
  {
    "name": "rel_filtered_rms_60_vs_20",
    "description": "Gravity-induced WFE at 60 deg vs 20 deg reference",
    "direction": "minimize",
    "weight": 5.0,
    "target": 10.0,
    "units": "nm"
  },
  {
    "name": "mfg_90_optician_workload",
    "description": "Polishing effort at zenith (J1-J3 filtered)",
    "direction": "minimize",
    "weight": 1.0,
    "target": 20.0,
    "units": "nm"
  }
]
```
### Weighted Sum Formulation

```python
weighted_objective = sum(weight_i * (value_i / target_i)) / sum(weight_i)
```

Targets normalize different metrics to comparable scales.
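As a concrete helper (the function name is illustrative, not part of the engine's API), using the weights and targets from the objectives above:

```python
def weighted_objective(values, weights, targets):
    """Weighted sum of target-normalized metrics: sum(w * v / t) / sum(w)."""
    num = sum(w * (v / t) for v, w, t in zip(values, weights, targets))
    return num / sum(weights)

# With every metric exactly at its target, the combined objective is 1.0,
# so results below 1.0 mean "better than target" overall.
print(weighted_objective([4.0, 10.0, 20.0], [5.0, 5.0, 1.0], [4.0, 10.0, 20.0]))  # 1.0
```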
## Design Variables

### Typical Mirror Support Parameters

| Parameter | Description | Typical Range |
|-----------|-------------|---------------|
| `whiffle_min` | Whiffle tree minimum dimension | 35-55 mm |
| `whiffle_outer_to_vertical` | Whiffle arm angle | 68-80 deg |
| `whiffle_triangle_closeness` | Triangle geometry | 50-65 mm |
| `inner_circular_rib_dia` | Rib diameter | 480-620 mm |
| `lateral_inner_angle` | Lateral support angle | 25-28.5 deg |
| `blank_backface_angle` | Mirror blank geometry | 3.5-5.0 deg |

### Expression File Format (params.exp)

```
[mm]whiffle_min=42.49
[Degrees]whiffle_outer_to_vertical=79.41
[mm]inner_circular_rib_dia=582.48
```
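Generating these lines from a parameter dict is straightforward; a minimal sketch (the helper name `render_params_exp` is illustrative):

```python
def render_params_exp(params):
    """Render NX expression lines in the [unit]name=value format shown above.

    `params` maps expression name -> (value, unit), e.g. (42.49, "mm").
    """
    return "\n".join(f"[{unit}]{name}={value}"
                     for name, (value, unit) in params.items())

text = render_params_exp({
    "whiffle_min": (42.49, "mm"),
    "whiffle_outer_to_vertical": (79.41, "Degrees"),
})
print(text)
# [mm]whiffle_min=42.49
# [Degrees]whiffle_outer_to_vertical=79.41
```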
## Iteration Folder Structure (V9)

```
study_name/
├── 1_setup/
│   ├── model/                    # Master NX files (NEVER modify)
│   └── optimization_config.json
├── 2_iterations/
│   ├── iter0/                    # Trial 0 (0-based to match Optuna)
│   │   ├── [all NX files]        # Fresh copy from master
│   │   ├── params.exp            # Expression updates for this trial
│   │   └── results/              # Processed outputs
│   ├── iter1/
│   └── ...
└── 3_results/
    └── study.db                  # Optuna database
```

### Why 0-Based Iteration Folders?

Optuna uses 0-based trial numbers. Using `iter{trial.number}` ensures:
- Dashboard shows Trial 0 -> corresponds to folder iter0
- No confusion when cross-referencing results
- Consistent indexing throughout the system
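A minimal path helper reflecting this convention (the name `iteration_dir` is illustrative):

```python
import tempfile
from pathlib import Path

def iteration_dir(study_dir, trial_number):
    """Folder for one trial, 0-based to match Optuna's trial.number."""
    d = Path(study_dir) / "2_iterations" / f"iter{trial_number}"
    d.mkdir(parents=True, exist_ok=True)
    return d

# Demo in a throwaway directory
with tempfile.TemporaryDirectory() as tmp:
    print(iteration_dir(tmp, 0).name)  # iter0
```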
## Lessons Learned

### 1. TPE Sampler Seed Issue

**Problem**: When resuming a study, re-initializing TPESampler with a fixed seed causes the sampler to restart its random sequence, generating duplicate parameters.

**Solution**: Only set the seed for NEW studies:
```python
if is_new_study:
    sampler = TPESampler(seed=42, ...)
else:
    sampler = TPESampler(...)  # No seed for resume
```
### 2. Code Reuse Protocol

**Problem**: Embedding 500+ lines of Zernike code in `run_optimization.py` violates the DRY principle.

**Solution**: Use the centralized extractors:
```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_file)
result = extractor.extract_relative("3", "2")
rms = result['relative_filtered_rms_nm']
```
### 3. Subcase Numbering

**Problem**: NX subcase numbers (1,2,3,4) don't match angle labels (20,40,60,90).

**Solution**: Use an explicit mapping in the config and translate:
```python
subcase_labels = {"1": "90deg", "2": "20deg", "3": "40deg", "4": "60deg"}
label_to_subcase = {v: k for k, v in subcase_labels.items()}
```
### 4. OP2 Data Validation

**Problem**: Corrupt OP2 files can have all-zero or unrealistic displacement values.

**Solution**: Validate before processing:
```python
import numpy as np

unique_values = len(np.unique(disp_z))
if unique_values < 10:
    raise RuntimeError("CORRUPT OP2: insufficient unique values")

if np.abs(disp_z).max() > 1e6:
    raise RuntimeError("CORRUPT OP2: unrealistic displacement magnitude")
```
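The same checks wrapped as a reusable helper (the function name and threshold arguments are illustrative, not the engine's API):

```python
import numpy as np

def validate_displacements(disp_z, min_unique=10, max_abs=1e6):
    """Raise if a displacement field looks like it came from a corrupt OP2."""
    disp_z = np.asarray(disp_z)
    if len(np.unique(disp_z)) < min_unique:
        raise RuntimeError("CORRUPT OP2: insufficient unique values")
    if np.abs(disp_z).max() > max_abs:
        raise RuntimeError("CORRUPT OP2: unrealistic displacement magnitude")

# A plausible field passes silently; an all-zero field is rejected
validate_displacements(np.random.default_rng(0).normal(0.0, 1e-3, 1000))
try:
    validate_displacements(np.zeros(1000))
except RuntimeError as exc:
    print(exc)  # CORRUPT OP2: insufficient unique values
```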
### 5. Reference Subcase for Relative Metrics

**Problem**: Which orientation should be used as the reference?

**Solution**: Use the lowest operational elevation (typically 20 deg) as the reference. Higher elevations then show increasing relative WFE as gravity effects grow.
## ZernikeExtractor API Reference

### Basic Usage

```python
from optimization_engine.extractors import ZernikeExtractor

# Create extractor
extractor = ZernikeExtractor(
    op2_path="path/to/results.op2",
    bdf_path=None,            # Auto-detect from same folder
    displacement_unit="mm",
    n_modes=50,
    filter_orders=4
)

# Single subcase
result = extractor.extract_subcase("2")
# Returns: global_rms_nm, filtered_rms_nm, rms_filter_j1to3, aberrations...

# Relative between subcases
rel = extractor.extract_relative(target_subcase="3", reference_subcase="2")
# Returns: relative_filtered_rms_nm, relative_rms_filter_j1to3, ...

# All subcases with relative metrics
all_results = extractor.extract_all_subcases(reference_subcase="2")
```
### Available Metrics

| Method | Returns |
|--------|---------|
| `extract_subcase()` | global_rms_nm, filtered_rms_nm, rms_filter_j1to3, defocus_nm, astigmatism_rms_nm, coma_rms_nm, trefoil_rms_nm, spherical_nm |
| `extract_relative()` | relative_global_rms_nm, relative_filtered_rms_nm, relative_rms_filter_j1to3, relative aberrations |
| `extract_all_subcases()` | Dict of all subcases with both absolute and relative metrics |
## Configuration Template

```json
{
  "study_name": "m1_mirror_optimization",

  "design_variables": [
    {
      "name": "whiffle_min",
      "expression_name": "whiffle_min",
      "min": 35.0,
      "max": 55.0,
      "baseline": 40.55,
      "units": "mm",
      "enabled": true
    }
  ],

  "objectives": [
    {
      "name": "rel_filtered_rms_40_vs_20",
      "extractor": "zernike_relative",
      "extractor_config": {
        "target_subcase": "3",
        "reference_subcase": "2",
        "metric": "relative_filtered_rms_nm"
      },
      "direction": "minimize",
      "weight": 5.0,
      "target": 4.0
    }
  ],

  "zernike_settings": {
    "n_modes": 50,
    "filter_low_orders": 4,
    "displacement_unit": "mm",
    "subcases": ["1", "2", "3", "4"],
    "subcase_labels": {"1": "90deg", "2": "20deg", "3": "40deg", "4": "60deg"},
    "reference_subcase": "2"
  },

  "optimization_settings": {
    "sampler": "TPE",
    "seed": 42,
    "n_startup_trials": 15
  }
}
```
## Version History

| Version | Key Changes |
|---------|-------------|
| V1-V6 | Initial development, various folder structures |
| V7 | HEEDS-style iteration folders, fresh model copies |
| V8 | Autonomous NX session management, but had embedded Zernike code |
| V9 | Clean ZernikeExtractor integration, fixed sampler seed, 0-based folders |