This commit introduces the GNN-based surrogate for Zernike mirror optimization and the M1 mirror study progression from V12 (GNN validation) to V13 (pure NSGA-II).

## GNN Surrogate Module (optimization_engine/gnn/)

New module for Graph Neural Network surrogate prediction of mirror deformations:

- `polar_graph.py`: PolarMirrorGraph - fixed 3000-node polar grid structure
- `zernike_gnn.py`: ZernikeGNN with design-conditioned message passing
- `differentiable_zernike.py`: GPU-accelerated Zernike fitting and objectives
- `train_zernike_gnn.py`: ZernikeGNNTrainer with multi-task loss
- `gnn_optimizer.py`: ZernikeGNNOptimizer for turbo mode (~900k trials/hour)
- `extract_displacement_field.py`: OP2 to HDF5 field extraction
- `backfill_field_data.py`: Extract fields from existing FEA trials

Key innovation: design-conditioned convolutions that modulate message passing based on structural design parameters, enabling accurate field prediction.

## M1 Mirror Studies

### V12: GNN Field Prediction + FEA Validation

- Zernike GNN trained on V10/V11 FEA data (238 samples)
- Turbo mode: 5000 GNN predictions → top candidates → FEA validation
- Calibration workflow for GNN-to-FEA error correction
- Scripts: run_gnn_turbo.py, validate_gnn_best.py, compute_full_calibration.py

### V13: Pure NSGA-II FEA (Ground Truth)

- Seeds 217 FEA trials from V11+V12
- Pure multi-objective NSGA-II without any surrogate
- Establishes ground-truth Pareto front for GNN accuracy evaluation
- Narrowed blank_backface_angle range to [4.0, 5.0]

## Documentation Updates

- SYS_14: Added Zernike GNN section with architecture diagrams
- CLAUDE.md: Added GNN module reference and quick start
- V13 README: Study documentation with seeding strategy

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---
# M1 Mirror Pure NSGA-II FEA Optimization V13

Pure multi-objective FEA optimization with the NSGA-II sampler for the M1 telescope mirror support system.

**Created:** 2025-12-09
**Protocol:** Pure NSGA-II Multi-Objective (No Neural Surrogate)
**Status:** Running
## 1. Purpose

V13 runs pure FEA optimization, without any neural surrogate, to establish a ground-truth Pareto front. It serves as:
- Baseline for evaluating GNN/MLP surrogate accuracy
- Ground truth Pareto front for comparison
- Validation data for future surrogate training
### Key Differences from V11/V12
| Aspect | V11 (Adaptive MLP) | V12 (GNN + Validation) | V13 (Pure FEA) |
|---|---|---|---|
| Surrogate | MLP (4-layer) | Zernike GNN | None |
| Sampler | TPE | NSGA-II | NSGA-II |
| Trials/hour | ~100 NN + 5 FEA | ~5000 GNN + 5 FEA | 6-7 FEA |
| Purpose | Fast exploration | Field prediction | Ground truth |
## 2. Seeding Strategy

V13 seeds from all prior FEA data in V11 and V12:

```
V11 (107 FEA trials) + V12 (131 FEA trials) = 238 total
               │
    ┌──────────┴──────────┐
    │  Parameter Filter   │
    │  (blank_backface    │
    │   4.0-5.0 range)    │
    └──────────┬──────────┘
               │
    217 trials seeded into V13
```
### Why 21 Trials Were Skipped

The V13 config narrows `blank_backface_angle` to [4.0, 5.0] (intentionally tighter than before). Trials from V10/V11 with `blank_backface_angle` < 4.0 (the earlier range was 3.5-5.0) fall outside the new bounds and were rejected by Optuna.
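The bounds filter above can be sketched in a few lines. This is illustrative only: the helper names (`within_bounds`, `V13_BOUNDS`) and the trial-dict shape are assumptions, not the study's actual seeding code.

```python
# Sketch of the V13 seeding filter (hypothetical names, not the study's code).
V13_BOUNDS = {"blank_backface_angle": (4.0, 5.0)}  # narrowed from [3.5, 5.0]

def within_bounds(params: dict, bounds: dict) -> bool:
    """Return True if every bounded parameter lies inside its V13 range."""
    return all(lo <= params[name] <= hi for name, (lo, hi) in bounds.items())

# Prior FEA trials from V11/V12 (two toy examples).
prior_trials = [
    {"blank_backface_angle": 4.3},   # inside the narrowed range -> seeded
    {"blank_backface_angle": 3.7},   # below 4.0 -> rejected
]

seedable = [t for t in prior_trials if within_bounds(t, V13_BOUNDS)]
# Each surviving parameter set would then be replayed into the V13 study,
# e.g. via Optuna's study.add_trial() / study.enqueue_trial().
```

With 238 prior trials and this filter, 21 parameter sets with `blank_backface_angle` below 4.0 drop out, leaving the 217 seeded trials.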
## 3. Mathematical Formulation

### 3.1 Objectives (Same as V11/V12)
| Objective | Goal | Formula | Target | Units |
|---|---|---|---|---|
| rel_filtered_rms_40_vs_20 | minimize | RMS_filt(Z_40 - Z_20) | 4.0 | nm |
| rel_filtered_rms_60_vs_20 | minimize | RMS_filt(Z_60 - Z_20) | 10.0 | nm |
| mfg_90_optician_workload | minimize | RMS_J1-J3(Z_90 - Z_20) | 20.0 | nm |
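Each objective is the RMS of the difference between two Zernike fits of the mirror surface at different elevation angles. A minimal sketch of that core computation follows; the coefficient values are hypothetical, and the study's mode filtering (the "filt" and "J1-J3" selections) is deliberately omitted.

```python
import math

def zernike_rms_diff(z_a: list[float], z_b: list[float]) -> float:
    """RMS of the coefficient-wise difference between two Zernike fits.

    Sketch only: the study's RMS_filt additionally selects which terms
    enter the sum (e.g. the J1-J3 restriction for mfg_90); that mode
    selection is omitted here.
    """
    diffs = [a - b for a, b in zip(z_a, z_b)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

z_20 = [10.0, 5.0, 2.0]  # hypothetical coefficients at 20 deg elevation, nm
z_40 = [13.0, 1.0, 2.0]  # hypothetical coefficients at 40 deg elevation, nm
rms = zernike_rms_diff(z_40, z_20)  # sqrt((9 + 16 + 0) / 3) ≈ 2.887 nm
```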
### 3.2 Design Variables (11)
| Parameter | Bounds | Units |
|---|---|---|
| lateral_inner_angle | [25.0, 28.5] | deg |
| lateral_outer_angle | [13.0, 17.0] | deg |
| lateral_outer_pivot | [9.0, 12.0] | mm |
| lateral_inner_pivot | [9.0, 12.0] | mm |
| lateral_middle_pivot | [18.0, 23.0] | mm |
| lateral_closeness | [9.5, 12.5] | mm |
| whiffle_min | [35.0, 55.0] | mm |
| whiffle_outer_to_vertical | [68.0, 80.0] | deg |
| whiffle_triangle_closeness | [50.0, 65.0] | mm |
| blank_backface_angle | [4.0, 5.0] | deg |
| inner_circular_rib_dia | [480.0, 620.0] | mm |
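The table above maps directly to a continuous search space. A sketch, under the assumption that each variable is a uniform float within its bounds (in an Optuna objective, each entry would become a `trial.suggest_float(name, lo, hi)` call); the sampling helper here is illustrative, not the study's code.

```python
import random

# The 11-variable search space from the table above, as (low, high) bounds.
DESIGN_BOUNDS = {
    "lateral_inner_angle":        (25.0, 28.5),   # deg
    "lateral_outer_angle":        (13.0, 17.0),   # deg
    "lateral_outer_pivot":        (9.0, 12.0),    # mm
    "lateral_inner_pivot":        (9.0, 12.0),    # mm
    "lateral_middle_pivot":       (18.0, 23.0),   # mm
    "lateral_closeness":          (9.5, 12.5),    # mm
    "whiffle_min":                (35.0, 55.0),   # mm
    "whiffle_outer_to_vertical":  (68.0, 80.0),   # deg
    "whiffle_triangle_closeness": (50.0, 65.0),   # mm
    "blank_backface_angle":       (4.0, 5.0),     # deg
    "inner_circular_rib_dia":     (480.0, 620.0), # mm
}

def sample_design(rng: random.Random) -> dict:
    """Draw one uniform-random design inside the V13 bounds."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in DESIGN_BOUNDS.items()}

design = sample_design(random.Random(42))
```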
## 4. NSGA-II Configuration

```python
sampler = NSGAIISampler(
    population_size=50,
    crossover=SBXCrossover(eta=15),
    mutation=PolynomialMutation(eta=20),
    seed=42,
)
```
NSGA-II performs true multi-objective optimization:
- Non-dominated sorting for Pareto ranking
- Crowding distance for diversity preservation
- No scalarization - preserves full Pareto front
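The first bullet, non-dominated sorting, reduces to a Pareto-dominance test. A minimal, self-contained illustration for three minimized objectives follows; this is not Optuna's internal implementation, and the trial values are invented.

```python
def dominates(a: tuple, b: tuple) -> bool:
    """True if a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: list) -> list:
    """Rank-0 set of non-dominated sorting: points dominated by nobody."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy (rms_40, rms_60, mfg_90) triples in nm:
trials = [(4.1, 10.2, 21.0), (3.9, 11.0, 20.5), (4.5, 12.0, 25.0)]
front = pareto_front(trials)  # the third point is dominated by the first
```

Crowding distance then spreads selection pressure across this rank-0 set, which is what lets NSGA-II keep the whole front instead of collapsing to one scalarized optimum.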
## 5. Study Structure

```
m1_mirror_adaptive_V13/
├── 1_setup/
│   ├── model/                      # NX model files (from V11)
│   └── optimization_config.json    # Study config
├── 2_iterations/
│   └── iter{N}/                    # FEA working directories
│       ├── *.prt, *.fem, *.sim     # NX files
│       ├── params.exp              # Parameter expressions
│       └── *solution_1.op2         # Results
├── 3_results/
│   └── study.db                    # Optuna database
├── run_optimization.py             # Main entry point
└── README.md                       # This file
```
## 6. Usage

```bash
# Start fresh optimization
python run_optimization.py --start --trials 55

# Resume after interruption (Windows update, etc.)
python run_optimization.py --start --trials 35 --resume

# Check status
python run_optimization.py --status
```
### Expected Runtime

- ~8-10 min per FEA trial
- 55 trials ≈ 7-9 hours (an overnight run)
## 7. Trial Sources in Database
| Source Tag | Count | Description |
|---|---|---|
| V11_FEA | 5 | V11-only FEA trials |
| V11_V10_FEA | 81 | V11 trials inherited from V10 |
| V12_FEA | 41 | V12-only FEA trials |
| V12_V10_FEA | 90 | V12 trials inherited from V10 |
| FEA | 10+ | New V13 FEA trials |
Query trial sources:

```sql
SELECT value_json, COUNT(*)
FROM trial_user_attributes
WHERE key = 'source'
GROUP BY value_json;
```
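The same count can be run from Python with the standard-library `sqlite3` module. Against the real study the connection would point at `3_results/study.db`; the sketch below instead builds a toy `trial_user_attributes` table in memory so it is self-contained (Optuna stores user-attribute values JSON-encoded, hence the quoted strings).

```python
import sqlite3

# In-memory stand-in for the Optuna database; for the real study use
# sqlite3.connect("3_results/study.db") instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trial_user_attributes (key TEXT, value_json TEXT)")
conn.executemany(
    "INSERT INTO trial_user_attributes VALUES ('source', ?)",
    [('"V11_FEA"',), ('"V12_FEA"',), ('"V12_FEA"',), ('"FEA"',)],
)

counts = dict(conn.execute(
    "SELECT value_json, COUNT(*) FROM trial_user_attributes "
    "WHERE key = 'source' GROUP BY value_json"
))
# counts maps each JSON-encoded source tag to its trial count
```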
## 8. Post-Processing

### Extract Pareto Front

```python
import optuna

study = optuna.load_study(
    study_name="m1_mirror_V13_nsga2",
    storage="sqlite:///3_results/study.db",
)

# Pareto-optimal trials of the multi-objective study
pareto = study.best_trials

# Print the Pareto front
for t in pareto:
    print(f"Trial {t.number}: {t.values}")
```
### Compare to GNN Predictions

```python
# 1. Load the V13 FEA Pareto front
# 2. Load GNN predictions from V12
# 3. Compute error: |GNN - FEA| / FEA
```
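Step 3 of that outline can be made concrete. The helper below computes the per-objective relative error against the FEA ground truth; the matched prediction values are hypothetical, and how the GNN and FEA designs are paired up is left out.

```python
def relative_errors(gnn: list, fea: list) -> list:
    """Per-objective relative error |GNN - FEA| / FEA (FEA is ground truth)."""
    return [abs(g - f) / f for g, f in zip(gnn, fea)]

# Hypothetical matched values for one design (rms_40, rms_60, mfg_90), in nm:
gnn_pred = [4.2, 10.5, 19.0]
fea_true = [4.0, 10.0, 20.0]
errs = relative_errors(gnn_pred, fea_true)  # ≈ 5% error on each objective
```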
## 9. Results (To Be Updated)
| Metric | Value |
|---|---|
| Seeded trials | 217 |
| New FEA trials | TBD |
| Pareto front size | TBD |
| Best rel_rms_40 | TBD |
| Best rel_rms_60 | TBD |
| Best mfg_90 | TBD |
## 10. Cross-References

- V10: `../m1_mirror_zernike_optimization_V10/` - Original LHS sampling
- V11: `../m1_mirror_adaptive_V11/` - MLP adaptive surrogate
- V12: `../m1_mirror_adaptive_V12/` - GNN field prediction
Generated by Atomizer Framework. Pure NSGA-II for ground-truth Pareto optimization.