# M1 Mirror Adaptive Surrogate Optimization V12
Adaptive neural-accelerated optimization of telescope primary mirror (M1) support structure using Zernike wavefront error decomposition with **auto-tuned hyperparameters**, **ensemble surrogates**, and **mass constraints**.

**Created**: 2024-12-04

**Protocol**: Protocol 12 (Adaptive Hybrid FEA/Neural with Hyperparameter Tuning)

**Status**: Running

**Source Data**: V11 (107 FEA samples)

---

## 1. Engineering Problem

### 1.1 Objective

Optimize the telescope primary mirror (M1) whiffle tree and lateral support structure to minimize wavefront error (WFE) across different gravity orientations while maintaining mass under 99 kg.

### 1.2 Physical System

| Property | Value |
|----------|-------|
| **Component** | M1 primary mirror assembly with whiffle tree support |
| **Material** | Borosilicate glass (mirror blank), steel (support structure) |
| **Loading** | Gravity at multiple zenith angles (90°, 20°, 40°, 60°) |
| **Boundary Conditions** | Whiffle tree kinematic mount with lateral supports |
| **Analysis Type** | Linear static multi-subcase (Nastran SOL 101) |
| **Subcases** | 4 orientations with different gravity vectors |
| **Output** | Surface deformation → Zernike polynomial decomposition |

### 1.3 Key Improvements in V12

| Feature | V11 | V12 |
|---------|-----|-----|
| Hyperparameter Tuning | Fixed architecture | Optuna auto-tuning |
| Model Architecture | Single network | Ensemble of 3 models |
| Validation | Train/test split | K-fold cross-validation |
| Mass Constraint | Post-hoc check | Integrated penalty |
| Convergence | Fixed iterations | Early stopping with patience |

---

## 2. Mathematical Formulation

### 2.1 Objectives

| Objective | Goal | Weight | Formula | Units | Target |
|-----------|------|--------|---------|-------|--------|
| `rel_filtered_rms_40_vs_20` | minimize | 5.0 | $\sigma_{40/20} = \sqrt{\sum_{j=5}^{50} (Z_j^{40} - Z_j^{20})^2}$ | nm | 4 nm |
| `rel_filtered_rms_60_vs_20` | minimize | 5.0 | $\sigma_{60/20} = \sqrt{\sum_{j=5}^{50} (Z_j^{60} - Z_j^{20})^2}$ | nm | 10 nm |
| `mfg_90_optician_workload` | minimize | 1.0 | $\sigma_{90}^{J4+} = \sqrt{\sum_{j=4}^{50} (Z_j^{90} - Z_j^{20})^2}$ | nm | 20 nm |

**Weighted Sum Objective**:

$$J(\mathbf{x}) = \sum_{i=1}^{3} w_i \cdot \frac{f_i(\mathbf{x})}{t_i} + P_{mass}(\mathbf{x})$$

Where:

- $w_i$ = weight for objective $i$
- $f_i(\mathbf{x})$ = objective value
- $t_i$ = target value (normalization)
- $P_{mass}$ = mass constraint penalty
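
As a concrete illustration, the weighted sum with the weights and targets from the table above reduces to a few lines (a sketch; `weighted_objective` is an illustrative name, not study code):

```python
def weighted_objective(f, mass_kg,
                       weights=(5.0, 5.0, 1.0),     # w_i from section 2.1
                       targets=(4.0, 10.0, 20.0)):  # t_i (nm) from section 2.1
    """J(x) = sum_i w_i * f_i / t_i + P_mass(x).

    f: the three WFE objectives (nm) in the table order above."""
    j = sum(w * fi / t for w, fi, t in zip(weights, f, targets))
    penalty = 100.0 * max(0.0, mass_kg - 99.0)  # P_mass from section 2.4
    return j + penalty
```

A design exactly on all three targets at 95 kg scores `J = 5 + 5 + 1 = 11`; every kilogram above 99 kg adds 100 to `J`, which dominates any WFE gain.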

### 2.2 Zernike Decomposition

The wavefront error $W(r,\theta)$ is decomposed into Noll-indexed Zernike polynomials:

$$W(r,\theta) = \sum_{j=1}^{50} Z_j \cdot P_j(r,\theta)$$

**WFE from Displacement** (reflection factor of 2):

$$W_{nm} = 2 \cdot \delta_z \cdot 10^6$$

Where $\delta_z$ is the Z-displacement in mm.

**Filtered RMS** (excluding alignable terms J1-J4):

$$\sigma_{filtered} = \sqrt{\sum_{j=5}^{50} Z_j^2}$$

**Manufacturing RMS** (excluding J1-J3, keeping defocus J4):

$$\sigma_{mfg} = \sqrt{\sum_{j=4}^{50} Z_j^2}$$
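
Both RMS metrics are the same operation with a different cut-off. A minimal NumPy sketch, assuming the 50 coefficients sit in an array where `z[0]` holds Noll term J1:

```python
import numpy as np

def zernike_rms(z, j_min):
    """RMS over Noll terms j_min..50; z[0] corresponds to Noll index J1.

    j_min=5 gives the filtered RMS (drops alignable J1-J4);
    j_min=4 gives the manufacturing RMS (keeps defocus J4)."""
    z = np.asarray(z, dtype=float)
    return float(np.sqrt(np.sum(z[j_min - 1:] ** 2)))
```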

### 2.3 Design Variables (11 Parameters)

| Parameter | Symbol | Bounds | Baseline | Units | Description |
|-----------|--------|--------|----------|-------|-------------|
| `lateral_inner_angle` | $\alpha_{in}$ | [25, 28.5] | 26.79 | deg | Inner lateral support angle |
| `lateral_outer_angle` | $\alpha_{out}$ | [13, 17] | 14.64 | deg | Outer lateral support angle |
| `lateral_outer_pivot` | $p_{out}$ | [9, 12] | 10.40 | mm | Outer pivot offset |
| `lateral_inner_pivot` | $p_{in}$ | [9, 12] | 10.07 | mm | Inner pivot offset |
| `lateral_middle_pivot` | $p_{mid}$ | [18, 23] | 20.73 | mm | Middle pivot offset |
| `lateral_closeness` | $c_{lat}$ | [9.5, 12.5] | 11.02 | mm | Lateral support spacing |
| `whiffle_min` | $w_{min}$ | [35, 55] | 40.55 | mm | Whiffle tree minimum |
| `whiffle_outer_to_vertical` | $\theta_w$ | [68, 80] | 75.67 | deg | Outer whiffle angle |
| `whiffle_triangle_closeness` | $c_w$ | [50, 65] | 60.00 | mm | Whiffle triangle spacing |
| `blank_backface_angle` | $\beta$ | [4.0, 5.0] | 4.23 | deg | Mirror backface angle (mass driver) |
| `inner_circular_rib_dia` | $D_{rib}$ | [480, 620] | 534.00 | mm | Inner rib diameter |

**Design Space**:

$$\mathbf{x} = [\alpha_{in}, \alpha_{out}, p_{out}, p_{in}, p_{mid}, c_{lat}, w_{min}, \theta_w, c_w, \beta, D_{rib}]^T \in \mathbb{R}^{11}$$
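
This design space maps directly onto an Optuna search space. A sketch (bounds copied from the table above; `suggest_design` is an illustrative helper, not study code):

```python
# Bounds from the table in section 2.3; keys match the design-variable
# names used throughout this study.
BOUNDS = {
    "lateral_inner_angle":        (25.0, 28.5),
    "lateral_outer_angle":        (13.0, 17.0),
    "lateral_outer_pivot":        (9.0, 12.0),
    "lateral_inner_pivot":        (9.0, 12.0),
    "lateral_middle_pivot":       (18.0, 23.0),
    "lateral_closeness":          (9.5, 12.5),
    "whiffle_min":                (35.0, 55.0),
    "whiffle_outer_to_vertical":  (68.0, 80.0),
    "whiffle_triangle_closeness": (50.0, 65.0),
    "blank_backface_angle":       (4.0, 5.0),
    "inner_circular_rib_dia":     (480.0, 620.0),
}

def suggest_design(trial):
    """Draw one design vector x in R^11 from an Optuna trial object."""
    return {name: trial.suggest_float(name, lo, hi)
            for name, (lo, hi) in BOUNDS.items()}
```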

### 2.4 Mass Constraint

| Constraint | Type | Formula | Threshold | Handling |
|------------|------|---------|-----------|----------|
| `mass_limit` | upper_bound | $m(\mathbf{x}) \leq m_{max}$ | 99 kg | Penalty in objective |

**Penalty Function**:

$$P_{mass}(\mathbf{x}) = \begin{cases} 0 & \text{if } m \leq 99 \\ 100 \cdot (m - 99) & \text{if } m > 99 \end{cases}$$

**Mass Estimation** (from `blank_backface_angle`):

$$\hat{m}(\beta) = 105 - 15 \cdot (\beta - 4.0) \text{ kg}$$
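
A quick sketch of this estimate (illustrative function name). Note that under this linear model the 99 kg limit is only met for $\beta \geq 4.4°$ within the [4.0, 5.0] bounds, since $105 - 15(\beta - 4.0) \leq 99 \iff \beta \geq 4.4$:

```python
def estimated_mass_kg(beta_deg):
    """Linear mass surrogate from blank_backface_angle (section 2.4)."""
    return 105.0 - 15.0 * (beta_deg - 4.0)
```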

---

## 3. Optimization Algorithm

### 3.1 Adaptive Hybrid Strategy

| Parameter | Value | Description |
|-----------|-------|-------------|
| Algorithm | Adaptive Hybrid | FEA + Neural Surrogate |
| Surrogate | Tuned Ensemble (3 models) | Auto-tuned architecture |
| Sampler | TPE | Tree-structured Parzen Estimator |
| Max Iterations | 100 | Adaptive loop iterations |
| FEA Batch Size | 5 | Real FEA evaluations per iteration |
| NN Trials | 1000 | Surrogate evaluations per iteration |
| Patience | 5 | Early stopping threshold |
| Convergence | 0.3 nm | Objective improvement threshold |

### 3.2 Hyperparameter Tuning

| Setting | Value |
|---------|-------|
| Tuning Trials | 30 |
| Cross-Validation | 5-fold |
| Search Space | Hidden dims, dropout, learning rate |
| Ensemble Size | 3 models |
| MC Dropout Samples | 30 |

**Tuned Architecture**:

```
Input(11) → Linear(128) → ReLU → Dropout →
Linear(256) → ReLU → Dropout →
Linear(256) → ReLU → Dropout →
Linear(128) → ReLU → Dropout →
Linear(64) → ReLU → Linear(4)
```
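
In PyTorch terms, the stack above can be built as follows (a sketch assuming the standard `torch.nn` API; `make_surrogate` is an illustrative name, not the study's training code):

```python
import torch.nn as nn

def make_surrogate(in_dim=11, out_dim=4,
                   hidden=(128, 256, 256, 128, 64), dropout=0.1):
    """Build the tuned MLP diagrammed above.

    Dropout follows each hidden layer except the final 64-unit one,
    matching the diagram."""
    layers, d = [], in_dim
    for i, h in enumerate(hidden):
        layers += [nn.Linear(d, h), nn.ReLU()]
        if i < len(hidden) - 1:  # no dropout after the 64-unit layer
            layers.append(nn.Dropout(dropout))
        d = h
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)
```

Keeping dropout layers in the module (rather than functional dropout) is what makes the MC-dropout uncertainty pass in section 6.3 possible: the model is simply kept in training mode during prediction.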

### 3.3 Adaptive Loop Flow

```
┌──────────────────────────────────────────────────────────────────────
│  ADAPTIVE ITERATION k
├──────────────────────────────────────────────────────────────────────
│
│  1. SURROGATE EXPLORATION (1000 trials)
│     ├── Sample 11 design variables via TPE
│     ├── Predict objectives with ensemble (mean + uncertainty)
│     └── Select top candidates (exploitation) + diverse (exploration)
│
│  2. FEA VALIDATION (5 trials)
│     ├── Run NX Nastran SOL 101 (4 subcases)
│     ├── Extract Zernike coefficients from OP2
│     ├── Compute relative filtered RMS
│     └── Store in Optuna database
│
│  3. SURROGATE RETRAINING
│     ├── Load all FEA data from database
│     ├── Retrain ensemble with new data
│     └── Update uncertainty estimates
│
│  4. CONVERGENCE CHECK
│     ├── Δbest < 0.3 nm for patience=5 iterations?
│     └── If converged → STOP, else → next iteration
│
└──────────────────────────────────────────────────────────────────────
```
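
The four steps reduce to a short loop. The sketch below uses hypothetical callables (`fea_eval`, `surrogate`, `sample_candidates`) and omits the exploit/explore split detailed in section 4.1:

```python
def adaptive_loop(fea_eval, surrogate, sample_candidates,
                  max_iter=100, fea_batch=5, nn_trials=1000,
                  tol_nm=0.3, patience=5):
    """Skeleton of the adaptive iteration diagrammed above."""
    best, stall, history = float("inf"), 0, []
    for k in range(max_iter):
        # 1. Surrogate exploration: rank nn_trials candidates by predicted J
        ranked = sorted(sample_candidates(nn_trials), key=surrogate.predict)
        # 2. FEA validation of the top batch
        results = [(x, fea_eval(x)) for x in ranked[:fea_batch]]
        # 3. Retrain the surrogate on all FEA data so far
        history += results
        surrogate.fit(history)
        # 4. Convergence: stop after `patience` iterations with < tol_nm gain
        new_best = min(best, min(j for _, j in results))
        stall = stall + 1 if best - new_best < tol_nm else 0
        best = new_best
        if stall >= patience:
            break
    return best, history
```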

---

## 4. Simulation Pipeline

### 4.1 FEA Trial Execution Flow

```
┌──────────────────────────────────────────────────────────────────────
│  FEA TRIAL n EXECUTION
├──────────────────────────────────────────────────────────────────────
│
│  1. CANDIDATE SELECTION
│     Hybrid strategy: 70% exploitation (best NN predictions)
│                      30% exploration (uncertain regions)
│
│  2. NX PARAMETER UPDATE
│     Module: optimization_engine/nx_solver.py
│     Target Part: M1_Blank.prt (and related components)
│     Action: Update 11 expressions with new design values
│
│  3. NX SIMULATION (Nastran SOL 101 - 4 Subcases)
│     Module: optimization_engine/solve_simulation.py
│     Input: ASSY_M1_assyfem1_sim1.sim
│     Subcases:
│       1 = 90° zenith (polishing/manufacturing)
│       2 = 20° zenith (reference)
│       3 = 40° zenith (operational target 1)
│       4 = 60° zenith (operational target 2)
│     Output: .dat, .op2, .f06
│
│  4. ZERNIKE EXTRACTION
│     Module: optimization_engine/extractors/extract_zernike.py
│     a. Read node coordinates from BDF/DAT
│     b. Read Z-displacements from OP2 for each subcase
│     c. Compute RELATIVE displacement (target - reference)
│     d. Convert to WFE: W = 2 × Δδz × 10⁶ nm
│     e. Fit 50 Zernike coefficients via least-squares
│     f. Compute filtered RMS (exclude J1-J4)
│
│  5. MASS EXTRACTION
│     Module: optimization_engine/extractors/extract_mass_from_expression
│     Expression: p173 (CAD mass property)
│
│  6. OBJECTIVE COMPUTATION
│     rel_filtered_rms_40_vs_20 ← Zernike RMS (subcase 3 - 2)
│     rel_filtered_rms_60_vs_20 ← Zernike RMS (subcase 4 - 2)
│     mfg_90_optician_workload  ← Zernike RMS J4+ (subcase 1 - 2)
│
│  7. WEIGHTED OBJECTIVE + MASS PENALTY
│     J = Σ (weight × objective / target) + mass_penalty
│
│  8. STORE IN DATABASE
│     Optuna SQLite: 3_results/study.db
│     User attrs: trial_source='fea', mass_kg, all Zernike coefficients
│
└──────────────────────────────────────────────────────────────────────
```
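
Step 1's hybrid 70/30 selection can be sketched as follows (illustrative code, assuming the ensemble supplies a predicted mean and standard deviation per candidate):

```python
import numpy as np

def select_candidates(designs, pred_mean, pred_std, n=5, exploit_frac=0.7):
    """Pick the FEA batch: best-predicted designs first (exploitation),
    then the most-uncertain remaining designs (exploration).

    designs: (N, ...) array; pred_mean / pred_std: (N,) surrogate outputs."""
    n_exploit = int(np.ceil(exploit_frac * n))
    by_mean = np.argsort(pred_mean)   # lowest predicted J first
    by_std = np.argsort(-pred_std)    # most uncertain first
    chosen = list(by_mean[:n_exploit])
    for i in by_std:                  # top up with unseen uncertain points
        if len(chosen) >= n:
            break
        if i not in chosen:
            chosen.append(int(i))
    return [designs[i] for i in chosen]
```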

### 4.2 Subcase Configuration

| Subcase | Zenith Angle | Gravity Direction | Role |
|---------|--------------|-------------------|------|
| 1 | 90° | Horizontal | Manufacturing/polishing reference |
| 2 | 20° | Near-vertical | Operational reference (baseline) |
| 3 | 40° | Mid-elevation | Operational target 1 |
| 4 | 60° | Low-elevation | Operational target 2 |

---

## 5. Result Extraction Methods

### 5.1 Zernike WFE Extraction

| Attribute | Value |
|-----------|-------|
| **Extractor** | `ZernikeExtractor` |
| **Module** | `optimization_engine.extractors.extract_zernike` |
| **Method** | `extract_relative()` |
| **Geometry Source** | `.dat` (BDF format, auto-detected) |
| **Displacement Source** | `.op2` (OP2 binary) |
| **Output** | 50 Zernike coefficients + RMS metrics per subcase pair |

**Algorithm**:

1. Load node coordinates $(X_i, Y_i)$ from BDF
2. Load Z-displacements $\delta_{z,i}$ from OP2 for each subcase
3. Compute relative displacement (node-by-node):
   $$\Delta\delta_{z,i} = \delta_{z,i}^{target} - \delta_{z,i}^{reference}$$
4. Convert to WFE:
   $$W_i = 2 \cdot \Delta\delta_{z,i} \cdot 10^6 \text{ nm}$$
5. Fit Zernike coefficients via least-squares:
   $$\min_{\mathbf{Z}} \| \mathbf{W} - \mathbf{P} \mathbf{Z} \|^2$$
6. Compute filtered RMS:
   $$\sigma_{filtered} = \sqrt{\sum_{j=5}^{50} Z_j^2}$$
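
Steps 3-5 in NumPy terms (a minimal sketch with illustrative names; the production path goes through `ZernikeExtractor`):

```python
import numpy as np

def relative_wfe_nm(dz_target_mm, dz_ref_mm):
    """Steps 3-4: relative Z-displacement (mm) to WFE (nm), reflection factor 2."""
    return 2.0 * (np.asarray(dz_target_mm) - np.asarray(dz_ref_mm)) * 1e6

def fit_zernike(wfe_nm, basis):
    """Step 5: least-squares solution of min_Z ||W - P Z||^2.

    wfe_nm: (n_nodes,) wavefront error per node.
    basis:  (n_nodes, 50) matrix P of Zernike polynomials evaluated
            at each node's (r, theta)."""
    coeffs, *_ = np.linalg.lstsq(basis, wfe_nm, rcond=None)
    return coeffs
```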

**Code**:

```python
from optimization_engine.extractors import ZernikeExtractor

extractor = ZernikeExtractor(op2_file, bdf_file)
result = extractor.extract_relative(
    target_subcase="3",      # 40 deg
    reference_subcase="2",   # 20 deg
    displacement_unit="mm"
)
filtered_rms = result['relative_filtered_rms_nm']  # nm
```

### 5.2 Mass Extraction

| Attribute | Value |
|-----------|-------|
| **Extractor** | `extract_mass_from_expression` |
| **Module** | `optimization_engine.extractors` |
| **Expression** | `p173` (CAD mass property) |
| **Output** | kg |

**Code**:

```python
from optimization_engine.extractors import extract_mass_from_expression

mass_kg = extract_mass_from_expression(model_file, expression_name="p173")
```

---

## 6. Neural Acceleration (Tuned Ensemble Surrogate)

### 6.1 Configuration

| Setting | Value | Description |
|---------|-------|-------------|
| `enabled` | `true` | Neural surrogate active |
| `model_type` | `TunedEnsembleSurrogate` | Ensemble of tuned networks |
| `ensemble_size` | 3 | Number of models in ensemble |
| `hidden_dims` | `[128, 256, 256, 128, 64]` | Auto-tuned architecture |
| `dropout` | 0.1 | Regularization |
| `learning_rate` | 0.001 | Adam optimizer |
| `batch_size` | 16 | Mini-batch size |
| `mc_dropout_samples` | 30 | Monte Carlo uncertainty |

### 6.2 Hyperparameter Tuning

| Parameter | Search Space |
|-----------|--------------|
| `n_layers` | [3, 4, 5, 6] |
| `hidden_dim` | [64, 128, 256, 512] |
| `dropout` | [0.0, 0.1, 0.2, 0.3] |
| `learning_rate` | [1e-4, 1e-3, 1e-2] |
| `batch_size` | [8, 16, 32, 64] |

**Tuning Objective**:

$$\mathcal{L}_{tune} = \frac{1}{K} \sum_{k=1}^{K} MSE_{val}^{(k)}$$

Using 5-fold cross-validation.
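
A sketch of this tuning loss in plain NumPy (`train_and_eval` is a hypothetical closure over the hyperparameters being tuned; it trains on the training folds and returns predictions on the validation fold):

```python
import numpy as np

def kfold_mse(train_and_eval, X, y, k=5, seed=0):
    """L_tune = mean validation MSE over K folds, per the formula above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    mses = []
    for fold in folds:
        mask = np.ones(len(X), dtype=bool)
        mask[fold] = False  # hold this fold out for validation
        pred = train_and_eval(X[mask], y[mask], X[fold], y[fold])
        mses.append(np.mean((pred - y[fold]) ** 2))
    return float(np.mean(mses))
```

An Optuna tuning objective would sample the hyperparameters from the table above, build `train_and_eval` from them, and return this value for minimization.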

### 6.3 Surrogate Model

**Input**: $\mathbf{x} = [11 \text{ design variables}]^T \in \mathbb{R}^{11}$

**Output**: $\hat{\mathbf{y}} = [4 \text{ objectives/constraints}]^T \in \mathbb{R}^{4}$

- `rel_filtered_rms_40_vs_20` (nm)
- `rel_filtered_rms_60_vs_20` (nm)
- `mfg_90_optician_workload` (nm)
- `mass_kg` (kg)

**Ensemble Prediction**:

$$\hat{y} = \frac{1}{M} \sum_{m=1}^{M} f_m(\mathbf{x})$$

**Uncertainty Quantification** (MC Dropout):

$$\sigma_y^2 = \frac{1}{T} \sum_{t=1}^{T} f_{dropout}^{(t)}(\mathbf{x})^2 - \hat{y}^2$$
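
Both formulas are moment estimates over a stack of stochastic forward passes (ensemble members and/or MC-dropout samples). A minimal sketch with an illustrative name:

```python
import numpy as np

def ensemble_stats(samples):
    """Mean and variance over stacked predictions.

    samples: (T, n_outputs) array of forward passes.
    Variance uses sigma^2 = E[f^2] - (E[f])^2, as in the formula above."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    var = (samples ** 2).mean(axis=0) - mean ** 2
    return mean, var
```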

### 6.4 Training Data Location

```
m1_mirror_adaptive_V12/
├── 2_iterations/
│   ├── iter_001/              # Iteration 1 working files
│   ├── iter_002/
│   └── ...
├── 3_results/
│   ├── study.db               # Optuna database (all trials)
│   ├── optimization.log       # Detailed log
│   ├── surrogate_best.pt      # Best tuned model weights
│   └── tuning_results.json    # Hyperparameter tuning history
```

### 6.5 Expected Performance

| Metric | Value |
|--------|-------|
| Source Data | V11: 107 FEA samples |
| FEA time per trial | 10-15 min |
| Neural time per trial | ~5 ms |
| Speedup | ~120,000x |
| Expected R² | > 0.90 (after tuning) |
| Uncertainty Coverage | ~95% (ensemble + MC dropout) |

---

## 7. Study File Structure

```
m1_mirror_adaptive_V12/
│
├── 1_setup/                               # INPUT CONFIGURATION
│   ├── model/ → symlink to V11            # NX Model Files
│   │   ├── ASSY_M1.prt                    # Top-level assembly
│   │   ├── M1_Blank.prt                   # Mirror blank (expressions)
│   │   ├── ASSY_M1_assyfem1.afm           # Assembly FEM
│   │   ├── ASSY_M1_assyfem1_sim1.sim      # Simulation file
│   │   └── *-solution_1.op2               # Results (generated)
│   │
│   └── optimization_config.json           # Study configuration
│
├── 2_iterations/                          # WORKING DIRECTORY
│   ├── iter_001/                          # Iteration 1 model copy
│   ├── iter_002/
│   └── ...
│
├── 3_results/                             # OUTPUT (auto-generated)
│   ├── study.db                           # Optuna SQLite database
│   ├── optimization.log                   # Structured log
│   ├── surrogate_best.pt                  # Trained ensemble weights
│   ├── tuning_results.json                # Hyperparameter tuning
│   └── convergence.json                   # Iteration history
│
├── run_optimization.py                    # Main entry point
├── final_validation.py                    # FEA validation of best NN trials
├── README.md                              # This blueprint
└── STUDY_REPORT.md                        # Results report (updated during run)
```

---

## 8. Results Location

After optimization, results are stored in `3_results/`:

| File | Description | Format |
|------|-------------|--------|
| `study.db` | Optuna database with all trials (FEA + NN) | SQLite |
| `optimization.log` | Detailed execution log | Text |
| `surrogate_best.pt` | Tuned ensemble model weights | PyTorch |
| `tuning_results.json` | Hyperparameter search history | JSON |
| `convergence.json` | Best value per iteration | JSON |

### 8.1 Trial Identification

Trials are tagged with source:

- **FEA trials**: `trial.user_attrs['trial_source'] = 'fea'`
- **NN trials**: `trial.user_attrs['trial_source'] = 'nn'`
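
Reading these tags back out of a loaded study is a simple partition over `study.trials` (a sketch assuming Optuna's standard `user_attrs` interface; `split_by_source` is an illustrative helper):

```python
def split_by_source(trials):
    """Partition trials into FEA-validated vs surrogate-only predictions
    using the trial_source user attribute set by the optimizer."""
    fea = [t for t in trials if t.user_attrs.get("trial_source") == "fea"]
    nn = [t for t in trials if t.user_attrs.get("trial_source") == "nn"]
    return fea, nn
```

Against the live database this would be applied to the trials of the study loaded from `sqlite:///3_results/study.db`.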

**Dashboard Visualization**:

- FEA trials: Blue circles
- NN trials: Orange crosses

### 8.2 Results Report

See [STUDY_REPORT.md](STUDY_REPORT.md) for:

- Optimization progress and convergence
- Best designs found (FEA-validated)
- Surrogate model accuracy (R², MAE)
- Pareto trade-off analysis
- Engineering recommendations

---

## 9. Quick Start

### Launch Optimization

```bash
cd studies/m1_mirror_adaptive_V12

# Start with default settings (uses V11 FEA data)
python run_optimization.py --start

# Custom tuning parameters
python run_optimization.py --start --tune-trials 30 --ensemble-size 3 --fea-batch 5 --patience 5

# Tune hyperparameters only (no FEA)
python run_optimization.py --tune-only
```

### Command Line Options

| Option | Default | Description |
|--------|---------|-------------|
| `--start` | - | Start adaptive optimization |
| `--tune-only` | - | Only tune hyperparameters, no optimization |
| `--tune-trials` | 30 | Number of hyperparameter tuning trials |
| `--ensemble-size` | 3 | Number of models in ensemble |
| `--fea-batch` | 5 | FEA evaluations per iteration |
| `--patience` | 5 | Early stopping patience |

### Monitor Progress

```bash
# View log
tail -f 3_results/optimization.log

# Check database
sqlite3 3_results/study.db "SELECT COUNT(*) FROM trials WHERE state='COMPLETE'"

# Launch Optuna dashboard
optuna-dashboard sqlite:///3_results/study.db --port 8081
```

### Dashboard Access

| Dashboard | URL | Purpose |
|-----------|-----|---------|
| **Atomizer Dashboard** | http://localhost:3000 | Real-time monitoring |
| **Optuna Dashboard** | http://localhost:8081 | Trial history |

---

## 10. Configuration Reference

**File**: `1_setup/optimization_config.json`

| Section | Key | Description |
|---------|-----|-------------|
| `design_variables[]` | 11 parameters | All lateral/whiffle/blank params |
| `objectives[]` | 3 WFE metrics | Relative filtered RMS |
| `constraints[]` | mass_limit | Upper bound 99 kg |
| `zernike_settings.n_modes` | 50 | Zernike polynomial count |
| `zernike_settings.filter_low_orders` | 4 | Exclude J1-J4 |
| `zernike_settings.subcases` | [1,2,3,4] | Zenith angles |
| `adaptive_settings.max_iterations` | 100 | Loop limit |
| `adaptive_settings.surrogate_trials_per_iter` | 1000 | NN trials |
| `adaptive_settings.fea_batch_size` | 5 | FEA per iteration |
| `adaptive_settings.patience` | 5 | Early stopping |
| `surrogate_settings.ensemble_size` | 3 | Model ensemble |
| `surrogate_settings.mc_dropout_samples` | 30 | Uncertainty samples |

---

## 11. References

- **Deb, K. et al.** (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. *IEEE Transactions on Evolutionary Computation*.
- **Noll, R. J.** (1976). Zernike polynomials and atmospheric turbulence. *JOSA*.
- **Wilson, R. N.** (2004). *Reflecting Telescope Optics I*. Springer.
- **Snoek, J. et al.** (2012). Practical Bayesian optimization of machine learning algorithms. *NeurIPS*.
- **Gal, Y. & Ghahramani, Z.** (2016). Dropout as a Bayesian approximation. *ICML*.
- **pyNastran Documentation**: BDF/OP2 parsing for FEA post-processing.
- **Optuna Documentation**: Hyperparameter optimization framework.

---

*Atomizer V12: Where adaptive AI meets precision optics.*