# M1 Mirror Zernike Optimization Dashboard
## Study Overview

**Objective:** Optimize the telescope primary mirror (M1) support structure to minimize wavefront error across different gravity orientations.

**Method:** Hybrid FEA + neural-network acceleration using Zernike polynomial decomposition.
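As a toy illustration of the decomposition idea (synthetic data; piston/tip/tilt stand in for true Zernike modes): the surface RMS of a wavefront error map is computed before and after removing a low-order fit, which is the distinction behind the "filtered RMS" objectives used in this study.

```python
import numpy as np

# Sketch: least-squares fit of low-order modes to a wavefront error map W,
# then RMS of the residual. Illustrative only -- the basis here is a toy
# piston/tip/tilt stand-in, not a full Zernike implementation.
rng = np.random.default_rng(0)
n = 500
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
mask = x**2 + y**2 <= 1.0            # unit-disk aperture
x, y = x[mask], y[mask]

# Toy low-order modes: piston, tip, tilt (analogous to Zernike j = 1..3)
A = np.column_stack([np.ones_like(x), x, y])
W = 5.0 * x + 2.0 * y + 0.3 * rng.standard_normal(x.size)  # synthetic WFE [nm]

coeffs, *_ = np.linalg.lstsq(A, W, rcond=None)
residual = W - A @ coeffs

rms_global = np.sqrt(np.mean(W**2))           # surface-based global RMS
rms_filtered = np.sqrt(np.mean(residual**2))  # RMS after low-order removal
```

Removing the low-order terms (which a telescope's alignment or pointing can absorb) leaves the residual figure error that actually matters for polishing and support design.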
## Quick Start
### 1. Prepare Model Files

Copy your NX model files to:

```
studies/m1_mirror_zernike_optimization/1_setup/model/
```

Required files:

- `ASSY_M1.prt` (or your assembly name)
- `ASSY_M1_assyfem1.afm`
- `ASSY_M1_assyfem1_sim1.sim`
- Associated `.fem` and `_i.prt` files
### 2. Run FEA Trials (Build Training Data)

```bash
cd studies/m1_mirror_zernike_optimization
python run_optimization.py --run --trials 40
```

This will:

- Run ~40 FEA trials (10-15 min each, ~8-10 hours total)
- Extract 50 Zernike coefficients for each subcase (20/40/60/90 deg)
- Store all data in the Optuna database
### 3. Train Neural Surrogate

```bash
python run_optimization.py --train-surrogate
```

Trains an MLP to predict 200 outputs (50 coefficients × 4 subcases) from the design variables.
### 4. Run Neural-Accelerated Optimization

```bash
python run_optimization.py --run --trials 1000 --enable-nn
```

With the surrogate replacing FEA evaluations, 1000 trials complete in seconds rather than months.
### 5. View Results

Launch the Optuna dashboard:

```bash
optuna-dashboard sqlite:///2_results/study.db --port 8081
```
## Design Variables
| Variable | Range | Baseline | Units | Status |
|---|---|---|---|---|
| whiffle_min | 35-55 | 40.55 | mm | Enabled |
| whiffle_outer_to_vertical | 68-80 | 75.67 | deg | Enabled |
| inner_circular_rib_dia | 480-620 | 534.00 | mm | Enabled |
| whiffle_triangle_closeness | 50-65 | 60.00 | mm | Disabled |
| blank_backface_angle | 3.5-5.0 | 4.23 | deg | Disabled |
| lateral_inner_angle | 25-28.5 | 26.79 | deg | Disabled |
| lateral_outer_angle | 13-17 | 14.64 | deg | Disabled |
| lateral_outer_pivot | 9-12 | 10.40 | mm | Disabled |
| lateral_inner_pivot | 9-12 | 10.07 | mm | Disabled |
| lateral_middle_pivot | 18-23 | 20.73 | mm | Disabled |
| lateral_closeness | 9.5-12.5 | 11.02 | mm | Disabled |
Edit `1_setup/optimization_config.json` to enable or disable variables.
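The schema of `optimization_config.json` is not shown here; the fragment below is a hypothetical sketch, assuming each variable entry carries its bounds, baseline, units, and an `enabled` flag. Check the actual file for the real key names.

```json
{
  "design_variables": {
    "whiffle_min": {
      "min": 35, "max": 55, "baseline": 40.55,
      "units": "mm", "enabled": true
    },
    "whiffle_triangle_closeness": {
      "min": 50, "max": 65, "baseline": 60.0,
      "units": "mm", "enabled": false
    }
  }
}
```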
## Objectives
| Objective | Weight | Target | Description |
|---|---|---|---|
| rel_filtered_rms_40_vs_20 | 5 | 4 nm | WFE at 40° relative to 20° reference |
| rel_filtered_rms_60_vs_20 | 5 | 10 nm | WFE at 60° relative to 20° reference |
| mfg_90_optician_workload | 1 | 20 nm | Polishing workload at 90° orientation |
**Strategy:** Weighted-sum minimization, with each objective normalized by its target.
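A minimal sketch of that scalarization, assuming the form weight × (value / target) summed over objectives; the actual normalization in `run_optimization.py` may differ.

```python
# Weights and targets from the objectives table above (targets in nm).
weights = {"rel_filtered_rms_40_vs_20": 5,
           "rel_filtered_rms_60_vs_20": 5,
           "mfg_90_optician_workload": 1}
targets = {"rel_filtered_rms_40_vs_20": 4.0,
           "rel_filtered_rms_60_vs_20": 10.0,
           "mfg_90_optician_workload": 20.0}

def scalar_objective(values: dict) -> float:
    """Lower is better; a design exactly at all targets scores sum(weights)."""
    return sum(w * values[k] / targets[k] for k, w in weights.items())

# A trial hitting every target exactly scores 5 + 5 + 1:
score = scalar_objective({"rel_filtered_rms_40_vs_20": 4.0,
                          "rel_filtered_rms_60_vs_20": 10.0,
                          "mfg_90_optician_workload": 20.0})
print(score)  # → 11.0
```

Normalizing by target puts nanometre-scale objectives with different magnitudes on a comparable scale before the weights express their relative priority.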
## Neural Surrogate Architecture

- **Input:** design variables (3-11, depending on which are enabled)
- **Output:** 200 values (50 Zernike coefficients × 4 subcases)
- **Architecture:** MLP with 4 layers, 128 hidden units
- **Training:** ~40 FEA samples, 200 epochs
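The surrogate's forward pass can be sketched in plain NumPy with the shapes listed above (4 linear layers, 128 hidden units, ReLU). The activation choice and weight initialization are assumptions; the real model and its training loop live in `run_optimization.py`.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 128, 128, 128, 200]   # 3 enabled design variables -> 200 coefficients
params = [(rng.standard_normal((a, b)) * np.sqrt(2 / a), np.zeros(b))
          for a, b in zip(sizes[:-1], sizes[1:])]

def surrogate(x: np.ndarray) -> np.ndarray:
    """Forward pass: linear layers with ReLU on all but the output layer."""
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)
    return h

preds = surrogate(rng.standard_normal((5, 3)))
print(preds.shape)  # → (5, 200)
```

A single forward pass replaces a 10-15 minute FEA solve, which is what makes the 1000-trial neural-accelerated run feasible.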
## File Structure

```
m1_mirror_zernike_optimization/
├── 1_setup/
│   ├── optimization_config.json   # Configuration
│   └── model/                     # NX model files (add yours here)
├── 2_results/
│   ├── study.db                   # Optuna database
│   └── zernike_surrogate/         # Trained neural model
│       └── checkpoint_best.pt
├── run_optimization.py            # Main script
└── DASHBOARD.md                   # This file
```
## Commands Reference

```bash
# Run FEA optimization
python run_optimization.py --run --trials 40

# Train neural surrogate
python run_optimization.py --train-surrogate

# Run with neural acceleration
python run_optimization.py --run --trials 1000 --enable-nn

# Check status
python run_optimization.py --status

# Launch Optuna dashboard
optuna-dashboard sqlite:///2_results/study.db --port 8081
```
## Tips

- **Start small:** run 5-10 FEA trials first to verify the workflow end to end
- **Check Zernike extraction:** verify the OP2 contains the correct subcases (20/40/60/90 deg)
- **Enable variables gradually:** start with 3, add more after the initial exploration
- **Neural validation:** after finding good designs with the surrogate, verify the top candidates with full FEA