# M1 Mirror Adaptive Surrogate Optimization V11

Adaptive neural surrogate optimization with real FEA validation for the M1 telescope mirror support system.

**Created**: 2025-12-03

**Protocol**: Adaptive Surrogate (TPE + FEA Validation Loop)

**Status**: Ready to Run

---
## 1. Engineering Problem

### 1.1 Objective

Minimize gravity-induced wavefront error (WFE) in a 1.2 m telescope primary mirror across operational elevation angles (20-60 deg) and the 90 deg polishing orientation used in manufacturing.

### 1.2 Physical System

- **Component**: M1 Primary Mirror Assembly with whiffle-tree support
- **Material**: Zerodur glass ceramic (E = 91 GPa, CTE ≈ 0)
- **Loading**: Self-weight gravity at multiple elevation angles
- **Boundary Conditions**: 18-point whiffle-tree axial support, 3-point lateral support
- **Analysis Type**: Linear static (Nastran SOL 101) with 4 subcases (20, 40, 60, 90 deg)

### 1.3 Optical Workflow

```
Reference: 20 deg (interferometer baseline)

Operational Tracking:
- 40 deg - 20 deg = tracking deformation at 40 deg
- 60 deg - 20 deg = tracking deformation at 60 deg

Manufacturing:
- 90 deg - 20 deg = polishing correction needed
```

---

## 2. Mathematical Formulation

### 2.1 Objectives

| Objective | Goal | Weight | Formula | Target | Units |
|-----------|------|--------|---------|--------|-------|
| Operational 40-20 | minimize | 5.0 | $\text{RMS}_{filt}(Z_{40} - Z_{20})$ | 4.0 | nm |
| Operational 60-20 | minimize | 5.0 | $\text{RMS}_{filt}(Z_{60} - Z_{20})$ | 10.0 | nm |
| Manufacturing 90-20 | minimize | 1.0 | $\text{RMS}_{J1-J3}(Z_{90} - Z_{20})$ | 20.0 | nm |

Where:
- $Z_\theta$ = Zernike coefficients at elevation angle $\theta$
- $\text{RMS}_{filt}$ = RMS with J1-J4 (piston, tip, tilt, defocus) removed
- $\text{RMS}_{J1-J3}$ = RMS with only J1-J3 (piston, tip, tilt) removed

### 2.2 Design Variables

| Parameter | Symbol | Bounds | Units | Description |
|-----------|--------|--------|-------|-------------|
| lateral_inner_angle | $\alpha_i$ | [25.0, 28.5] | deg | Inner lateral support angle |
| lateral_outer_angle | $\alpha_o$ | [13.0, 17.0] | deg | Outer lateral support angle |
| lateral_outer_pivot | $p_o$ | [9.0, 12.0] | mm | Outer pivot position |
| lateral_inner_pivot | $p_i$ | [9.0, 12.0] | mm | Inner pivot position |
| lateral_middle_pivot | $p_m$ | [18.0, 23.0] | mm | Middle pivot position |
| lateral_closeness | $c_l$ | [9.5, 12.5] | mm | Lateral support spacing |
| whiffle_min | $w_{min}$ | [35.0, 55.0] | mm | Whiffle tree minimum |
| whiffle_outer_to_vertical | $w_{ov}$ | [68.0, 80.0] | deg | Whiffle outer to vertical |
| whiffle_triangle_closeness | $w_{tc}$ | [50.0, 65.0] | mm | Whiffle triangle spacing |
| blank_backface_angle | $\beta$ | [3.5, 5.0] | deg | Mirror blank backface angle |
| inner_circular_rib_dia | $d_{rib}$ | [480.0, 620.0] | mm | Inner rib diameter |

**Design Space**: $\mathbf{x} \in \mathbb{R}^{11}$

### 2.3 Weighted Objective

$$f(\mathbf{x}) = \frac{w_1 \cdot \frac{O_{40-20}}{t_1} + w_2 \cdot \frac{O_{60-20}}{t_2} + w_3 \cdot \frac{O_{mfg}}{t_3}}{w_1 + w_2 + w_3}$$

Where $w_i$ are the objective weights and $t_i$ are the target values used for normalization.
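The weighted objective above can be sketched as a small function. This is an illustration of the formula, not the framework's actual code; the dictionary keys are made up for the example.

```python
# Weights and targets from Section 2.1 (illustrative sketch of the
# weighted, target-normalized objective; key names are invented here).
WEIGHTS = {"o40_20": 5.0, "o60_20": 5.0, "mfg": 1.0}
TARGETS = {"o40_20": 4.0, "o60_20": 10.0, "mfg": 20.0}  # nm

def weighted_objective(objectives: dict) -> float:
    """Each objective is normalized by its target, weighted, then averaged."""
    num = sum(WEIGHTS[k] * objectives[k] / TARGETS[k] for k in WEIGHTS)
    return num / sum(WEIGHTS.values())

# A design that exactly hits every target scores 1.0
print(weighted_objective({"o40_20": 4.0, "o60_20": 10.0, "mfg": 20.0}))  # → 1.0
```

Because each term is divided by its target, a value of 1.0 means "all targets met"; values below 1.0 indicate a design that beats the targets on a weighted average.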
---

## 3. Adaptive Optimization Algorithm

### 3.1 Configuration

| Parameter | Value | Description |
|-----------|-------|-------------|
| Algorithm | TPE + FEA Validation | Bayesian optimization with real validation |
| NN Trials/Iteration | 1000 | Fast surrogate exploration |
| FEA Trials/Iteration | 5 | Real validations per iteration |
| Strategy | Hybrid | 70% best + 30% uncertain (exploration) |
| Convergence | 0.3 nm threshold | Improvement below this counts as no improvement |
| Patience | 5 iterations | Iterations without improvement before stopping |

### 3.2 Adaptive Loop

```
ADAPTIVE OPTIMIZATION LOOP

1. LOAD V10 FEA DATA (90 trials)
   └─ Cross-link: ../m1_mirror_zernike_optimization_V10/

2. TRAIN INITIAL SURROGATE
   ├─ MLP: 11 inputs → 3 objectives
   └─ Architecture: [128, 256, 256, 128, 64]

3. ITERATION LOOP:
   ├─ 3a. SURROGATE EXPLORATION (1000 NN trials)
   │      └─ TPE sampler with MC Dropout uncertainty
   ├─ 3b. CANDIDATE SELECTION
   │      └─ Hybrid: best weighted + highest uncertainty
   ├─ 3c. FEA VALIDATION (5 real simulations)
   │      ├─ NX Nastran SOL 101 with ZernikeExtractor
   │      └─ Tag trials: source='FEA'
   ├─ 3d. UPDATE BEST & CHECK IMPROVEMENT
   │      └─ Track no_improvement_count
   ├─ 3e. RETRAIN SURROGATE
   │      └─ Include new FEA data
   └─ 3f. CONVERGENCE CHECK
          └─ Stop if patience exceeded

4. SAVE FINAL RESULTS
   └─ Best FEA-validated design
```
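The control flow of the loop can be sketched in plain Python. Everything below is a toy stand-in: `run_fea` replaces a real Nastran solve, and candidate ranking uses the same toy function instead of the NN surrogate with MC Dropout, so only the iteration/convergence logic mirrors the diagram.

```python
import random

def run_fea(x):
    """Stand-in for a real Nastran solve: returns a scalar objective."""
    return sum(v * v for v in x)

def adaptive_loop(n_iters=20, nn_trials=200, fea_batch=5,
                  threshold=0.3, patience=5):
    best, stall = float("inf"), 0
    for it in range(n_iters):
        # 3a. Surrogate exploration: sample many candidate designs cheaply
        pool = [[random.uniform(-1, 1) for _ in range(11)]
                for _ in range(nn_trials)]
        # 3b. Candidate selection (toy: rank by the true function; the real
        #     study ranks by NN prediction plus MC Dropout uncertainty)
        candidates = sorted(pool, key=run_fea)[:fea_batch]
        # 3c. FEA validation of the selected candidates
        results = [run_fea(x) for x in candidates]
        # 3d. Update best and track improvement against the 0.3 nm threshold
        new_best = min(best, min(results))
        stall = 0 if best - new_best > threshold else stall + 1
        best = new_best
        # 3e. Retrain surrogate on the enlarged dataset (omitted in this toy)
        # 3f. Convergence check: stop after `patience` stale iterations
        if stall >= patience:
            break
    return best
```

The key design point is 3d/3f: convergence is decided on FEA-validated values only, so an optimistic surrogate cannot terminate the study by itself.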
---

## 4. Result Extraction Methods

### 4.1 Relative Filtered RMS Extraction

| Attribute | Value |
|-----------|-------|
| **Extractor** | `ZernikeExtractor.extract_relative()` |
| **Module** | `optimization_engine.extractors.extract_zernike` |
| **Source** | `.op2` file from NX Nastran |
| **Output** | nm (nanometers) |

**Algorithm**:
1. Parse OP2 displacement results for the target and reference subcases
2. Fit Zernike polynomials (Noll indexing, J1-J50)
3. Compute the difference: $\Delta Z_j = Z_j^{target} - Z_j^{ref}$
4. Filter low orders (J1-J4 for operational, J1-J3 for manufacturing)
5. Compute RMS: $\text{RMS} = \sqrt{\sum_{j} \Delta Z_j^2}$
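Steps 3-5 amount to a few lines once the coefficients are fitted. A minimal sketch with made-up coefficients (the real extractor works on 50 Noll modes parsed from the OP2 file):

```python
import math

def filtered_rms(z_target, z_ref, n_filter):
    """RMS of the Zernike difference with the first n_filter modes removed.

    Modes are Noll-ordered starting at J1, so n_filter=4 drops
    piston/tip/tilt/defocus and n_filter=3 drops piston/tip/tilt only.
    """
    dz = [t - r for t, r in zip(z_target, z_ref)]   # step 3: difference
    return math.sqrt(sum(d * d for d in dz[n_filter:]))  # steps 4-5

z20 = [100.0, 5.0, -3.0, 12.0, 2.0, 1.0]  # illustrative coefficients, nm
z40 = [104.0, 9.0, -1.0, 15.0, 5.0, 1.0]

print(filtered_rms(z40, z20, n_filter=4))  # operational: J1-J4 removed → 3.0
print(filtered_rms(z40, z20, n_filter=3))  # manufacturing: J1-J3 removed
```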
**Code**:
```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_path, displacement_unit='mm', n_modes=50)

# Operational objectives
rel_40 = extractor.extract_relative("3", "2")  # 40 deg vs 20 deg
obj_40_20 = rel_40['relative_filtered_rms_nm']

rel_60 = extractor.extract_relative("4", "2")  # 60 deg vs 20 deg
obj_60_20 = rel_60['relative_filtered_rms_nm']

# Manufacturing objective
rel_90 = extractor.extract_relative("1", "2")  # 90 deg vs 20 deg
obj_mfg = rel_90['relative_rms_filter_j1to3']
```

---

## 5. Neural Surrogate

### 5.1 Architecture

```
Input (11) → Dense(128) → BN → ReLU → Dropout(0.1)
           → Dense(256) → BN → ReLU → Dropout(0.1)
           → Dense(256) → BN → ReLU → Dropout(0.1)
           → Dense(128) → BN → ReLU → Dropout(0.1)
           → Dense(64)  → BN → ReLU → Dropout(0.1)
           → Dense(3)   → Output
```
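The checkpoints are saved as `surrogate_*.pt`, which suggests PyTorch. Under that assumption, the stack above could be built like this (an illustrative sketch, not the framework's actual model class):

```python
import torch
import torch.nn as nn

def make_surrogate(n_in=11, n_out=3,
                   hidden=(128, 256, 256, 128, 64), p_drop=0.1):
    """Build the Section 5.1 MLP: Dense → BatchNorm → ReLU → Dropout blocks."""
    layers, prev = [], n_in
    for h in hidden:
        layers += [nn.Linear(prev, h), nn.BatchNorm1d(h),
                   nn.ReLU(), nn.Dropout(p_drop)]
        prev = h
    layers.append(nn.Linear(prev, n_out))  # linear output head, no activation
    return nn.Sequential(*layers)

model = make_surrogate()
y = model(torch.randn(16, 11))  # batch of 16 designs → (16, 3) predictions
```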
### 5.2 MC Dropout Uncertainty

For candidate selection, MC Dropout provides uncertainty estimates:
- Enable dropout at inference time
- Run 30 forward passes
- Compute the mean and standard deviation of the predictions
- High standard deviation = high uncertainty = exploration candidate
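The procedure can be sketched in PyTorch (assumed from the `.pt` checkpoints). The subtlety is to keep BatchNorm in eval mode while re-enabling only the Dropout modules; this is an illustration, not the framework's code:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mc_dropout_predict(model, x, n_passes=30):
    """Mean and std over stochastic forward passes with dropout active."""
    model.eval()                                # freeze BatchNorm statistics
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()                           # ...but keep dropout sampling
    preds = torch.stack([model(x) for _ in range(n_passes)])
    return preds.mean(dim=0), preds.std(dim=0)  # high std → exploration pick

# Example with a tiny stand-in model
model = nn.Sequential(nn.Linear(11, 32), nn.ReLU(),
                      nn.Dropout(0.1), nn.Linear(32, 3))
mean, std = mc_dropout_predict(model, torch.randn(8, 11))
```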
### 5.3 Training Configuration

| Setting | Value |
|---------|-------|
| Optimizer | AdamW |
| Learning Rate | 0.001 |
| Weight Decay | 1e-4 |
| Scheduler | CosineAnnealing |
| Epochs | 300 |
| Batch Size | 16 |
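These settings map directly onto standard PyTorch components (again assuming PyTorch; the tiny model and the omitted minibatch body are placeholders):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(11, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 3))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

for epoch in range(300):
    # ... minibatches of 16: forward pass, MSE loss, backward, optimizer.step() ...
    scheduler.step()  # anneal the learning rate once per epoch
```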
---

## 6. Study File Structure

```
m1_mirror_adaptive_V11/
│
├── 1_setup/                        # INPUT CONFIGURATION
│   ├── model/                      # NX model files (copied from V10)
│   │   └── [NX files copied on first run]
│   └── optimization_config.json    # Study configuration
│
├── 2_iterations/                   # FEA iteration folders
│   └── iter{N}/                    # Per-trial working directory
│
├── 3_results/                      # OUTPUT
│   ├── study.db                    # Optuna database (NN trials)
│   ├── adaptive_state.json         # Iteration state
│   ├── surrogate_*.pt              # Model checkpoints
│   ├── final_results.json          # Best results
│   └── optimization.log            # Execution log
│
├── run_optimization.py             # Main entry point
├── README.md                       # This blueprint
└── STUDY_REPORT.md                 # Results report (updated as the study runs)
```

---

## 7. Dashboard Integration

### Trial Source Differentiation

| Source | Tag | Display |
|--------|-----|---------|
| V10 FEA | `V10_FEA` | Used for training only |
| V11 FEA | `FEA` | Blue circles |
| V11 NN | `NN` | Orange crosses |

### Attributes Stored

For each trial:
- `source`: 'FEA' or 'NN'
- `predicted_40_vs_20`: NN prediction (NN trials only)
- `predicted_60_vs_20`: NN prediction (NN trials only)
- `predicted_mfg`: NN prediction (NN trials only)
---

## 8. Quick Start

```bash
# Navigate to the study
cd studies/m1_mirror_adaptive_V11

# Start adaptive optimization
python run_optimization.py --start

# With custom settings
python run_optimization.py --start --fea-batch 3 --patience 7

# Monitor in the dashboard:
#   FEA trials: blue circles
#   NN trials:  orange crosses
```

### Command Line Options

| Option | Default | Description |
|--------|---------|-------------|
| `--start` | - | Required to begin optimization |
| `--fea-batch` | 5 | FEA validations per iteration |
| `--nn-trials` | 1000 | NN trials per iteration |
| `--patience` | 5 | Iterations without improvement before stopping |
| `--strategy` | hybrid | best / uncertain / hybrid |
---

## 9. Configuration Reference

**File**: `1_setup/optimization_config.json`

| Section | Key | Description |
|---------|-----|-------------|
| `source_study.database` | `../m1_mirror_zernike_optimization_V10/3_results/study.db` | V10 training data |
| `objectives[]` | 3 objectives | Relative filtered RMS metrics |
| `adaptive_settings.*` | Iteration config | NN trials, FEA batch, patience |
| `surrogate_settings.*` | Neural config | Architecture, dropout, MC samples |
| `dashboard_settings.*` | Visualization | FEA/NN markers and colors |
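A fragment showing what such a file might look like. The section names and the documented database path come from the table above; the individual key names inside each section are hypothetical, chosen only to illustrate the values stated elsewhere in this document:

```json
{
  "source_study": {
    "database": "../m1_mirror_zernike_optimization_V10/3_results/study.db"
  },
  "adaptive_settings": {
    "nn_trials_per_iteration": 1000,
    "fea_trials_per_iteration": 5,
    "strategy": "hybrid",
    "convergence_threshold_nm": 0.3,
    "patience": 5
  },
  "surrogate_settings": {
    "hidden_layers": [128, 256, 256, 128, 64],
    "dropout": 0.1,
    "mc_samples": 30
  }
}
```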
---

## 10. References

- **Deb, K. et al.** (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. *IEEE Transactions on Evolutionary Computation*.
- **Noll, R. J.** (1976). Zernike polynomials and atmospheric turbulence. *JOSA*.
- **Gal, Y. & Ghahramani, Z.** (2016). Dropout as a Bayesian Approximation. *ICML*.
- **Optuna Documentation**: Tree-structured Parzen Estimator (TPE) sampler.

---

*Generated by Atomizer Framework. The study cross-links V10 FEA data for surrogate training.*