Added JSON Schema:
- optimization_engine/schemas/atomizer_spec_v2.json

Migrated 28 studies to AtomizerSpec v2.0 format:
- Drone_Gimbal studies (1)
- M1_Mirror studies (21)
- M2_Mirror studies (2)
- SheetMetal_Bracket studies (4)

Each atomizer_spec.json is the unified configuration containing:
- Design variables with bounds and expressions
- Extractors (standard and custom)
- Objectives and constraints
- Optimization algorithm settings
- Canvas layout information
M1 Mirror Surrogate Turbo Optimization
GNN Surrogate Training + Iterative Turbo Optimization
Train a Graph Neural Network on ~1000 FEA trials from V6-V12, then use it to accelerate optimization with iterative FEA validation.
Strategy Overview
```
V6-V9: ~794 trials (Z-only extraction)
        │
        ├─── Re-extract with ZernikeOPD ────┐
        │                                   │
V11-V12: ~200+ trials (ZernikeOPD)          │
        │                                   │
        └─────────────── Aggregate ─────────┘
                         │
                         ▼
          Train GNN (~1000 samples)
                         │
                         ▼
          ┌──────────────────────────────┐
          │          TURBO LOOP          │
          ├──────────────────────────────┤
          │ 1. GNN: 5000 trials (fast)   │
          │ 2. Select 5 diverse designs  │
          │ 3. FEA: Validate 5 (~25 min) │
          │ 4. Update if accuracy drops  │
          │ 5. Check convergence         │
          └──────────────────────────────┘
                         │
                         ▼
           Best design (WS < 280?)
```
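The turbo loop above can be sketched end to end in plain Python. Everything in this sketch is a toy stand-in: `run_fea` and `surrogate` are hypothetical quadratic dummies, not the project's GNN or FEA pipeline, and the design space is an arbitrary unit box. It only illustrates the shape of steps 1-5.

```python
import random

# Hypothetical stand-ins: the real loop calls the trained GNN and the
# NX/FEA pipeline; here both are cheap quadratics for illustration.
def run_fea(x):                      # "ground truth" objective
    return sum((xi - 0.3) ** 2 for xi in x)

def surrogate(x):                    # GNN surrogate: truth plus small noise
    return run_fea(x) + random.gauss(0.0, 0.01)

def select_diverse(cands, k):
    """Greedy max-min selection: take the best candidate, then repeatedly
    add the candidate farthest (in design space) from everything chosen."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    chosen = [cands[0]]
    while len(chosen) < k:
        chosen.append(max(cands, key=lambda c: min(dist(c, s) for s in chosen)))
    return chosen

random.seed(0)
best_ws, stall, patience = float("inf"), 0, 2
for it in range(20):                                  # max iterations
    pool = [[random.random() for _ in range(10)] for _ in range(5000)]
    pool.sort(key=surrogate)                          # 1. cheap GNN ranking
    batch = select_diverse(pool[:50], 5)              # 2. 5 diverse designs
    results = [run_fea(x) for x in batch]             # 3. FEA validation
    # 4. (retraining on the new samples would go here if accuracy drops)
    iter_best = min(results)
    if iter_best < best_ws - 1e-6:
        best_ws, stall = iter_best, 0
    else:
        stall += 1
    if stall >= patience:                             # 5. convergence check
        break

print(f"best WS after {it + 1} iterations: {best_ws:.4f}")
```

The diverse-selection step matters because validating five near-identical surrogate optima wastes FEA budget; max-min spacing is one simple way to spread the batch out.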
Why This Approach?
- V6-V9 data is valuable but was collected with the wrong extraction method (Z-only Zernike)
- Re-extraction unifies all data under ZernikeOPD with extract_relative()
- ~1000 samples is roughly 3x the training data of a typical single-study GNN
- Turbo mode accelerates exploration: 5000 GNN predictions take ~2 min vs ~8 hours of FEA
- Iterative FEA validation ensures GNN predictions match reality
Source Studies
| Study | Trials | Original Method | Action |
|---|---|---|---|
| V6 | 196 | Z-only | Re-extract |
| V7 | 199 | Z-only | Re-extract |
| V8 | 199 | Z-only | Re-extract |
| V9 | 200 | Z-only | Re-extract |
| V11 | 199 | ZernikeOPD | Use directly |
| V12 | TBD | ZernikeOPD | Use directly |
V10 is excluded: an abandoned study with potential quality issues.
Configuration
Design Variables (10 total)
| Variable | Min | Max | Baseline | Units |
|---|---|---|---|---|
| **Whiffle (4)** | | | | |
| whiffle_min | 30.0 | 72.0 | 61.92 | mm |
| whiffle_outer_to_vertical | 70.0 | 85.0 | 81.75 | mm |
| whiffle_triangle_closeness | 65.0 | 120.0 | 65.54 | mm |
| blank_backface_angle | 4.2 | 4.5 | 4.42 | deg |
| **Lateral (6)** | | | | |
| lateral_inner_angle | 25.0 | 30.0 | 29.90 | deg |
| lateral_outer_angle | 11.0 | 17.0 | 11.76 | deg |
| lateral_outer_pivot | 7.0 | 12.0 | 11.13 | deg |
| lateral_inner_pivot | 5.0 | 12.0 | 6.52 | deg |
| lateral_middle_pivot | 15.0 | 27.0 | 22.39 | deg |
| lateral_closeness | 7.0 | 12.0 | 8.69 | deg |
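The bounds above can be captured as a plain dict and sampled uniformly. A minimal sketch — the names and bounds come from the table; the sampling helper itself is illustrative, not the project's actual config loader:

```python
import random

# Design-variable bounds (min, max) from the table above.
BOUNDS = {
    "whiffle_min":                (30.0, 72.0),
    "whiffle_outer_to_vertical":  (70.0, 85.0),
    "whiffle_triangle_closeness": (65.0, 120.0),
    "blank_backface_angle":       (4.2, 4.5),
    "lateral_inner_angle":        (25.0, 30.0),
    "lateral_outer_angle":        (11.0, 17.0),
    "lateral_outer_pivot":        (7.0, 12.0),
    "lateral_inner_pivot":        (5.0, 12.0),
    "lateral_middle_pivot":       (15.0, 27.0),
    "lateral_closeness":          (7.0, 12.0),
}

def sample_design(rng=random):
    """Draw one design uniformly at random inside the box bounds."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}

design = sample_design()
```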
Turbo Settings
| Setting | Value |
|---|---|
| GNN trials per iteration | 5000 |
| FEA validations per iteration | 5 |
| Max iterations | 20 |
| Convergence patience | 2 iterations |
| GNN accuracy threshold | R² > 0.95 |
Weighted Sum Formula
WS = 5*(40-20 RMS) + 5*(60-20 RMS) + 3*(MFG 90) + 1*(Mass)

where 40-20 RMS, 60-20 RMS, and MFG 90 are in nm and Mass is in kg ("40-20" and "60-20" label subcases, not subtraction).
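As a sanity check, applying these weights to the V11 trial 190 baseline metrics listed further down reproduces its reported weighted sum to rounding:

```python
def weighted_sum(rms_40_20, rms_60_20, mfg_90, mass):
    """WS = 5*(40-20 RMS) + 5*(60-20 RMS) + 3*(MFG 90) + 1*(mass)."""
    return 5 * rms_40_20 + 5 * rms_60_20 + 3 * mfg_90 + 1 * mass

# V11 trial 190 baseline: 6.50 nm, 14.08 nm, 28.89 nm, 94.66 kg
ws = weighted_sum(6.50, 14.08, 28.89, 94.66)
print(ws)  # ≈ 284.23, matching the reported 284.19 to rounding
```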
Usage
Phase 1: Re-extract V6-V9
```bash
cd studies/M1_Mirror/m1_mirror_surrogate_turbo
python scripts/01_reextract_studies.py

# Or single study:
python scripts/01_reextract_studies.py --study V7
```
Time estimate: 2-3 hours for ~794 trials
Phase 2: Aggregate Training Data
```bash
python scripts/02_aggregate_gnn_data.py

# Custom train/val split:
python scripts/02_aggregate_gnn_data.py --train-ratio 0.85
```
Time estimate: 30 minutes
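The aggregation step ends in a shuffled train/val split; a minimal stand-alone sketch of that split (the 0.85 ratio mirrors the `--train-ratio` example above; the trial IDs and the `split_dataset` helper are made up for illustration):

```python
import random

def split_dataset(samples, train_ratio=0.85, seed=42):
    """Shuffle sample IDs deterministically, then split train/val."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

# ~1000 aggregated trials from re-extracted V6-V9 plus V11-V12
trial_ids = [f"trial_{i:04d}" for i in range(1000)]
train, val = split_dataset(trial_ids)
```

Fixing the shuffle seed keeps the split reproducible across re-runs, which matters when comparing GNN checkpoints trained at different times.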
Phase 3: Train GNN
```bash
python scripts/03_train_gnn.py --epochs 200

# Quick test:
python scripts/03_train_gnn.py --epochs 50 --quick
```
Time estimate: 2-4 hours (GPU), longer on CPU
Phase 4: Run Turbo Optimization
```bash
python run_turbo_optimization.py

# GNN-only exploration (no FEA):
python run_turbo_optimization.py --no-fea

# Custom settings:
python run_turbo_optimization.py --gnn-trials 10000 --fea-per-iter 3
```
Time estimate: ~45 min per iteration (with FEA)
Directory Structure
```
m1_mirror_surrogate_turbo/
├── 1_setup/
│   ├── optimization_config.json    # Full config with all 10 variables
│   ├── source_studies.json         # Metadata about source studies
│   └── model/                      # NX model files (V11 best)
├── 2_iterations/
│   └── iter{N}/                    # FEA validation working directories
├── 3_results/
│   ├── reextracted/                # Re-extracted V6-V9 data
│   │   ├── V6/
│   │   ├── V7/
│   │   ├── V8/
│   │   └── V9/
│   ├── gnn_data/                   # Aggregated training data
│   │   ├── train/
│   │   ├── val/
│   │   └── dataset_meta.json
│   ├── checkpoints/                # GNN model saves
│   │   ├── best_model.pt
│   │   └── training_history.json
│   ├── turbo_logs/                 # Per-iteration results
│   │   └── iter_0000.json
│   └── study.db                    # Track validation trials
├── scripts/
│   ├── 01_reextract_studies.py     # Batch re-extraction
│   ├── 02_aggregate_gnn_data.py    # Data aggregation
│   └── 03_train_gnn.py             # Training wrapper
├── run_turbo_optimization.py       # Main turbo loop
├── README.md
└── STUDY_REPORT.md
```
Expected Outcomes
- GNN with R² > 0.95 for all subcase predictions
- Find design with WS < 280 (improvement over V11's 284.19)
- GNN predictions within 10% of FEA validation
- Validated best design ready for manufacturing
Current Baseline (V11 Trial 190)
| Metric | Value |
|---|---|
| Weighted Sum | 284.19 |
| 40-20 RMS | 6.50 nm |
| 60-20 RMS | 14.08 nm |
| MFG 90 | 28.89 nm |
| Mass | 94.66 kg |
Success Criteria
- GNN validation R² > 0.95 for all subcases
- Find design with WS < 280
- GNN-predicted objectives within 10% of FEA
- Complete at least 3 turbo iterations
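The R² and 10%-agreement criteria can be checked with two small helpers. This is a sketch; the project's actual metric code may differ, and the sample FEA/GNN values here are invented for illustration:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def within_tolerance(fea, gnn, rel_tol=0.10):
    """True if every GNN prediction is within rel_tol of its FEA value."""
    return all(abs(g - f) <= rel_tol * abs(f) for f, g in zip(fea, gnn))

# Invented example values, not real study results:
fea = [284.2, 300.5, 310.1, 295.0]
gnn = [285.0, 298.9, 312.4, 296.1]
ok = r_squared(fea, gnn) > 0.95 and within_tolerance(fea, gnn)
```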
Created: 2025-12-23
Atomizer Framework