M1 Mirror Pure NSGA-II FEA Optimization V13

Pure multi-objective FEA optimization with NSGA-II sampler for the M1 telescope mirror support system.

Created: 2025-12-09
Protocol: Pure NSGA-II Multi-Objective (No Neural Surrogate)
Status: Running


1. Purpose

V13 runs pure FEA optimization without any neural surrogate to establish a ground-truth Pareto front. The study serves as:

  1. Baseline for evaluating GNN/MLP surrogate accuracy
  2. Ground truth Pareto front for comparison
  3. Validation data for future surrogate training

Key Difference from V11/V12

Aspect        V11 (Adaptive MLP)   V12 (GNN + Validation)   V13 (Pure FEA)
Surrogate     MLP (4-layer)        Zernike GNN              None
Sampler       TPE                  NSGA-II                  NSGA-II
Trials/hour   ~100 NN + 5 FEA      ~5000 GNN + 5 FEA        6-7 FEA
Purpose       Fast exploration     Field prediction         Ground truth

2. Seeding Strategy

V13 seeds from all prior FEA data in V11 and V12:

V11 (107 FEA trials) + V12 (131 FEA trials) = 238 total
                                               │
                                    ┌──────────┴──────────┐
                                    │  Parameter Filter   │
                                    │  (blank_backface    │
                                    │   4.0-5.0 range)    │
                                    └──────────┬──────────┘
                                               │
                                    217 trials seeded into V13

Why 21 Trials Were Skipped

V13's config narrows blank_backface_angle to [4.0, 5.0] (intentionally tighter than the V10/V11 range of [3.5, 5.0]). Trials from V10/V11 with blank_backface_angle < 4.0 fall outside the new distribution and were rejected by Optuna during seeding.


3. Mathematical Formulation

3.1 Objectives (Same as V11/V12)

Objective                   Goal       Formula                  Target  Units
rel_filtered_rms_40_vs_20   minimize   RMS_filt(Z_40 - Z_20)    4.0     nm
rel_filtered_rms_60_vs_20   minimize   RMS_filt(Z_60 - Z_20)    10.0    nm
mfg_90_optician_workload    minimize   RMS_J1-J3(Z_90 - Z_20)   20.0    nm
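Read concretely, each objective is the RMS of the difference between two Zernike coefficient vectors, optionally restricted to a filtered subset of modes (e.g. J1-J3 for the manufacturing objective). A minimal sketch, assuming coefficients in nm; the actual mode filter and Zernike fitting live in the FEA post-processing:

```python
import numpy as np

def rel_filtered_rms(z_a, z_b, keep=None):
    """RMS of (z_a - z_b) in nm; `keep` selects the filtered mode indices."""
    d = np.asarray(z_a, dtype=float) - np.asarray(z_b, dtype=float)
    if keep is not None:
        d = d[np.asarray(keep)]
    return float(np.sqrt(np.mean(d ** 2)))
```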

3.2 Design Variables (11)

Parameter                   Bounds          Units
lateral_inner_angle         [25.0, 28.5]    deg
lateral_outer_angle         [13.0, 17.0]    deg
lateral_outer_pivot         [9.0, 12.0]     mm
lateral_inner_pivot         [9.0, 12.0]     mm
lateral_middle_pivot        [18.0, 23.0]    mm
lateral_closeness           [9.5, 12.5]     mm
whiffle_min                 [35.0, 55.0]    mm
whiffle_outer_to_vertical   [68.0, 80.0]    deg
whiffle_triangle_closeness  [50.0, 65.0]    mm
blank_backface_angle        [4.0, 5.0]      deg
inner_circular_rib_dia      [480.0, 620.0]  mm

4. NSGA-II Configuration

# Sampler configuration. Note: Optuna's NSGAIISampler controls mutation
# through mutation_prob (there is no mutation-operator argument);
# crossover is supplied as an operator object.
from optuna.samplers import NSGAIISampler
from optuna.samplers.nsgaii import SBXCrossover

sampler = NSGAIISampler(
    population_size=50,
    crossover=SBXCrossover(eta=15),  # simulated binary crossover
    mutation_prob=None,              # default: 1/n_params per parameter
    seed=42,
)

NSGA-II performs true multi-objective optimization:

  • Non-dominated sorting for Pareto ranking
  • Crowding distance for diversity preservation
  • No scalarization - preserves full Pareto front
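The non-dominated sorting criterion above reduces to a pairwise dominance test over the three minimized objectives; a minimal illustrative sketch (not NSGA-II's actual fast-sort implementation):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Points not dominated by any other point (the rank-0 front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```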

5. Study Structure

m1_mirror_adaptive_V13/
├── 1_setup/
│   ├── model/                    # NX model files (from V11)
│   └── optimization_config.json  # Study config
├── 2_iterations/
│   └── iter{N}/                  # FEA working directories
│       ├── *.prt, *.fem, *.sim   # NX files
│       ├── params.exp            # Parameter expressions
│       └── *solution_1.op2       # Results
├── 3_results/
│   └── study.db                  # Optuna database
├── run_optimization.py           # Main entry point
└── README.md                     # This file

6. Usage

# Start fresh optimization
python run_optimization.py --start --trials 55

# Resume after interruption (Windows update, etc.)
python run_optimization.py --start --trials 35 --resume

# Check status
python run_optimization.py --status

Expected Runtime

  • ~8-10 min per FEA trial
  • 55 trials ≈ 7-9 hours overnight

7. Trial Sources in Database

Source Tag   Count  Description
V11_FEA      5      V11-only FEA trials
V11_V10_FEA  81     V11 trials inherited from V10
V12_FEA      41     V12-only FEA trials
V12_V10_FEA  90     V12 trials inherited from V10
FEA          10+    New V13 FEA trials

Query trial sources:

SELECT value_json, COUNT(*)
FROM trial_user_attributes
WHERE key = 'source'
GROUP BY value_json;

8. Post-Processing

Extract Pareto Front

import optuna

study = optuna.load_study(
    study_name="m1_mirror_V13_nsga2",
    storage="sqlite:///3_results/study.db"
)

# Get Pareto-optimal trials
pareto = study.best_trials

# Print Pareto front
for t in pareto:
    print(f"Trial {t.number}: {t.values}")

Compare to GNN Predictions

# Load V13 FEA Pareto front
# Load GNN predictions from V12
# Compute error: |GNN - FEA| / FEA
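A minimal sketch of that comparison, assuming the GNN-predicted and FEA-computed objectives have already been collected as (n_trials, 3) arrays; the array names are hypothetical:

```python
import numpy as np

def surrogate_relative_error(gnn_pred, fea_true):
    """Elementwise relative error |GNN - FEA| / |FEA|, per objective."""
    gnn = np.asarray(gnn_pred, dtype=float)
    fea = np.asarray(fea_true, dtype=float)
    return np.abs(gnn - fea) / np.abs(fea)
```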

9. Results (To Be Updated)

Metric             Value
Seeded trials      217
New FEA trials     TBD
Pareto front size  TBD
Best rel_rms_40    TBD
Best rel_rms_60    TBD
Best mfg_90        TBD

10. Cross-References

  • V10: ../m1_mirror_zernike_optimization_V10/ - Original LHS sampling
  • V11: ../m1_mirror_adaptive_V11/ - MLP adaptive surrogate
  • V12: ../m1_mirror_adaptive_V12/ - GNN field prediction

Generated by Atomizer Framework. Pure NSGA-II for ground-truth Pareto optimization.