# M1 Mirror Turbo V1 - Self-Improving Surrogate Optimization

## Overview

This study uses a self-improving surrogate optimization approach:
- Pre-trained MLP surrogate explores 5000 designs per iteration (~1 min)
- Top 5 diverse candidates validated with actual FEA (~25 min)
- Surrogate retrains on new FEA data (improves accuracy)
- Repeat until convergence or FEA budget exhausted
Key Innovation: The surrogate learns from its own mistakes. Each FEA validation teaches the model about regions where it was wrong.
## FEA Budget
| Parameter | Value |
|---|---|
| Max FEA validations | 100 |
| FEA per iteration | 5 |
| Surrogate trials per iter | 5000 |
| Estimated iterations | ~20 |
| Time per iteration | ~30 min (5 FEA + surrogate + retrain) |
| Total estimated time | ~10 hours |
## Study Configuration

### Design Variables (10 total)
| Variable | Min | Max | Baseline | Precision |
|---|---|---|---|---|
| whiffle_min | 30.0 | 72.0 | 61.92 mm | 0.1 mm |
| whiffle_outer_to_vertical | 70.0 | 85.0 | 81.75 mm | 0.1 mm |
| whiffle_triangle_closeness | 65.0 | 120.0 | 65.54 mm | 0.1 mm |
| blank_backface_angle | 4.2 | 4.5 | 4.42 deg | 0.01 deg |
| lateral_inner_angle | 25.0 | 30.0 | 29.90 deg | 0.1 deg |
| lateral_outer_angle | 11.0 | 17.0 | 11.76 deg | 0.1 deg |
| lateral_outer_pivot | 7.0 | 12.0 | 11.13 deg | 0.1 deg |
| lateral_inner_pivot | 5.0 | 12.0 | 6.52 deg | 0.1 deg |
| lateral_middle_pivot | 15.0 | 27.0 | 22.39 deg | 0.1 deg |
| lateral_closeness | 7.0 | 12.0 | 8.69 deg | 0.1 deg |
Note: Parameters are quantized to machining precision (see the Precision column) - there is no point optimizing beyond what can be manufactured.
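As a sketch, snapping a candidate value to the machining grid amounts to rounding to the nearest precision increment (the function name here is illustrative, not taken from the study code):

```python
def quantize(value: float, precision: float) -> float:
    """Round a parameter value to the nearest machining increment."""
    steps = round(value / precision)
    return round(steps * precision, 10)  # trim floating-point noise

# e.g. snap an angle to the 0.01 deg grid, a length to the 0.1 mm grid
print(quantize(4.4237, 0.01))  # -> 4.42
print(quantize(61.93, 0.1))    # -> 61.9
```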
### Objectives (4 total)
| Objective | Weight | Target | Description |
|---|---|---|---|
| rel_filtered_rms_40_vs_20 | 5.0 | 4.0 nm | WFE at 40 deg relative to 20 deg |
| rel_filtered_rms_60_vs_20 | 5.0 | 10.0 nm | WFE at 60 deg relative to 20 deg |
| mfg_90_optician_workload | 3.0 | 20.0 nm | Manufacturing deformation at 90 deg |
| mass_kg | 1.0 | - | Total blank mass |
Weighted Sum: `WS = 5*rel_filtered_rms_40_vs_20 + 5*rel_filtered_rms_60_vs_20 + 3*mfg_90_optician_workload + 1*mass_kg`
### Constraint
- blank_mass <= 105 kg (hard constraint with 1e6 penalty)
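A minimal sketch of the scalarization, assuming the mass constraint is applied as an additive penalty on top of the weighted sum (the exact penalty mechanics in `run_optimization.py` may differ):

```python
PENALTY = 1e6
MASS_LIMIT = 105.0  # kg, hard constraint on blank mass

def weighted_sum(rel_40_vs_20: float, rel_60_vs_20: float,
                 mfg_workload: float, mass_kg: float) -> float:
    """Scalarize the four objectives with the study's weights (5/5/3/1)."""
    ws = (5.0 * rel_40_vs_20 + 5.0 * rel_60_vs_20
          + 3.0 * mfg_workload + 1.0 * mass_kg)
    if mass_kg > MASS_LIMIT:  # infeasible design: add the 1e6 penalty
        ws += PENALTY
    return ws
```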
### Baseline
- Source: V11 trial 190
- Weighted Sum: 284.19
- Mass: 94.66 kg
## Directory Structure

```
m1_mirror_turbo_V1/
├── 1_setup/
│   ├── optimization_config.json    # Full configuration
│   └── model/                      # NX model files (from V11)
│       ├── ASSY_M1.prt
│       ├── ASSY_M1_assyfem1.afm
│       ├── ASSY_M1_assyfem1_sim1.sim
│       ├── M1_Blank.prt
│       ├── M1_Blank_fem1.fem
│       ├── M1_Blank_fem1_i.prt     # CRITICAL: Idealized part
│       └── ...
├── 2_iterations/                   # FEA validation runs only
│   ├── iter1/
│   ├── iter2/
│   └── ...
├── 3_results/
│   ├── checkpoints/
│   │   └── best_model.pt           # Current surrogate model
│   ├── gnn_data/
│   │   └── fea_samples.json        # All FEA samples for training
│   ├── turbo_logs/                 # Per-iteration logs
│   │   ├── iter_0001.json
│   │   └── ...
│   ├── best_design_archive/        # Archived best designs
│   ├── study.db                    # Optuna format (for dashboard)
│   └── study_custom.db             # Custom SQLite (detailed turbo data)
├── scripts/
│   ├── init_training_data.py       # Initialize from previous data
│   ├── check_checkpoint.py         # Debug checkpoint contents
│   └── backfill_optuna.py          # Convert trials to Optuna format
├── run_optimization.py             # Main optimization script
├── README.md                       # This file
└── STUDY_REPORT.md                 # Results summary (updated during run)
```
## How to Run

### Prerequisites
- NX must be installed and licensed
- Conda environment `atomizer` activated
- Initial surrogate model in place (already copied from surrogate_turbo)
### Quick Start

```bash
cd studies/M1_Mirror/m1_mirror_turbo_V1

# Run with defaults (100 FEA max, 5 per iteration)
python run_optimization.py --start

# Limit FEA budget
python run_optimization.py --start --max-fea 50

# Resume interrupted run
python run_optimization.py --start --resume

# Single FEA test (baseline)
python run_optimization.py --test
```
### Command Line Options

| Option | Default | Description |
|---|---|---|
| `--start` | - | Start optimization (required) |
| `--max-fea` | 100 | Maximum total FEA validations |
| `--surrogate-trials` | 5000 | Surrogate trials per iteration |
| `--fea-per-iter` | 5 | FEA validations per iteration |
| `--patience` | 3 | Stop after N iterations without improvement |
| `--resume` | False | Resume from previous run |
| `--test` | - | Run single FEA test on baseline |
## Algorithm Details

### Self-Improving Loop

```
INITIALIZE:
  - Load pre-trained surrogate (R²=0.88, trained on ~1100 FEA)
  - Load previous FEA params for diversity checking

REPEAT until converged or 100 FEA:

  1. SURROGATE EXPLORE (~1 min)
     ├─ Run 5000 Optuna TPE trials with surrogate
     ├─ Quantize all predictions to machining precision
     └─ Find Pareto-optimal candidates

  2. SELECT DIVERSE CANDIDATES
     ├─ Sort by weighted sum
     ├─ Select top 5 that are:
     │  ├─ At least 15% different from each other
     │  └─ At least 7.5% different from ALL previous FEA
     └─ This ensures exploration, not just exploitation

  3. FEA VALIDATE (~25 min for 5 candidates)
     ├─ For each candidate:
     │  ├─ Create iteration folder
     │  ├─ Update NX expressions
     │  ├─ Run Nastran solver
     │  ├─ Extract ZernikeOPD objectives
     │  └─ Log prediction error
     └─ Add results to training data

  4. RETRAIN SURROGATE (~2 min)
     ├─ Combine all FEA samples
     ├─ Retrain MLP for 100 epochs
     ├─ Save new checkpoint
     └─ Reload improved model

  5. CHECK CONVERGENCE
     ├─ Track best feasible WS
     ├─ If improved: reset patience counter
     └─ If no improvement for 3 iterations: STOP
```
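The loop above can be sketched as follows. `surrogate_explore` and `run_fea` are stand-ins (random numbers here) for the real MLP/Optuna exploration and the NX/Nastran validation, and the diversity filter and retraining steps are omitted:

```python
import random

random.seed(0)  # deterministic demo

MAX_FEA, FEA_PER_ITER, PATIENCE = 100, 5, 3

def surrogate_explore(n_trials: int = 5000):
    """Stand-in for step 1: candidates sorted by predicted weighted sum."""
    cands = [(random.uniform(280, 320), [random.random() for _ in range(10)])
             for _ in range(n_trials)]
    return sorted(cands, key=lambda c: c[0])

def run_fea(params):
    """Stand-in for step 3: one NX/Nastran validation (~5 min in reality)."""
    return random.uniform(280, 320)

fea_used, best_ws, stall = 0, float("inf"), 0
while fea_used < MAX_FEA and stall < PATIENCE:
    # Steps 1-2: explore with the surrogate, take the top candidates
    # (diversity filtering omitted in this sketch).
    candidates = surrogate_explore()[:FEA_PER_ITER]
    improved = False
    for predicted_ws, params in candidates:
        actual_ws = run_fea(params)  # step 3: FEA validate
        fea_used += 1
        if actual_ws < best_ws:
            best_ws, improved = actual_ws, True
    # Step 4 (retrain the MLP on the new FEA samples) is omitted here.
    stall = 0 if improved else stall + 1  # step 5: patience check

print(f"stopped after {fea_used} FEA, best WS {best_ws:.2f}")
```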
### Diversity Enforcement

The `MIN_CANDIDATE_DISTANCE = 0.15` setting ensures:
- Candidates differ by at least 15% of each parameter's range
- In 10D space, this means significant exploration
- Previous FEA has stricter check (7.5%) to avoid redundant tests
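A sketch of the distance check, assuming distance is measured per parameter as a fraction of its range and combined with a max over parameters (the study code may use a different metric, e.g. Euclidean):

```python
def norm_distance(a, b, lows, highs):
    """Largest per-parameter difference, as a fraction of that parameter's range."""
    return max(abs(x - y) / (hi - lo)
               for x, y, lo, hi in zip(a, b, lows, highs))

def is_diverse(cand, accepted, previous_fea, lows, highs,
               min_cand=0.15, min_prev=0.075):
    """Accept a candidate only if it is far enough from the current batch
    (15% of range) and from every previously validated FEA point (7.5%)."""
    far_from_batch = all(norm_distance(cand, a, lows, highs) >= min_cand
                         for a in accepted)
    far_from_fea = all(norm_distance(cand, p, lows, highs) >= min_prev
                       for p in previous_fea)
    return far_from_batch and far_from_fea
```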
### Prediction Error Tracking
Each FEA logs:
- Predicted WS from surrogate
- Actual WS from FEA
- Absolute and percentage error
High errors indicate regions where surrogate needs more data.
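These two numbers can be computed as below; taking the percentage relative to the actual FEA value is an assumption, consistent with the example in the study's console log (predicted 282.15, actual 284.49, error 2.34 / 0.8%):

```python
def prediction_error(predicted_ws: float, actual_ws: float):
    """Absolute and percentage surrogate error for one FEA validation."""
    abs_err = abs(actual_ws - predicted_ws)
    pct_err = 100.0 * abs_err / abs(actual_ws)  # base = actual (assumed)
    return abs_err, pct_err

abs_err, pct_err = prediction_error(282.15, 284.49)
print(f"Prediction error: {abs_err:.2f} ({pct_err:.1f}%)")
```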
## Initial Surrogate
The initial model was trained on ~1100 FEA samples from V6-V12:
| Study | Samples | Variables |
|---|---|---|
| V6 | 196 | 4 whiffle |
| V7 | 206 | 4 whiffle |
| V8 | 199 | 6 lateral |
| V9 | 200 | 4 whiffle |
| V11 | 199 | 10 (all) |
| V12 | 140 | 10 (all) |
Initial R² = 0.88 (validated on held-out data)
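For reference, R² on held-out data is 1 − SS_res/SS_tot; a minimal stdlib sketch of the metric itself (not the study's training code):

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

A perfect surrogate gives 1.0; predicting the mean everywhere gives 0.0.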
## Expected Outcomes

### Optimistic
- Find design with WS < 280 (improvement over V11's 284.19)
- Surrogate R² improves to >0.93 with additional data
- Converge in ~50 FEA (half the budget)
### Conservative
- Use full 100 FEA budget
- Marginal improvement (~1-2% better than V11)
- Surrogate accuracy plateaus
### Key Insight
Even if we don't beat V11, we gain:
- A much better trained surrogate for future studies
- Understanding of where the design space is sensitive
- Confidence that V11 result is near-optimal
## Monitoring Progress

### During Run
Watch the console output for:
```
# TURBO ITERATION 5 (FEA 25/100)
[SURROGATE] Running 5000 trials...
  Best surrogate WS: 282.15
  Selected 5 diverse candidates for FEA:
    1. Predicted WS=282.15
    2. Predicted WS=283.42
    ...
[FEA] Running 5 validations...
  Prediction error: 2.34 (0.8%) - predicted 282.15, actual 284.49
  *** NEW BEST FEA! WS=283.21 (was 284.19) ***
[RETRAIN] Retraining surrogate with 364 samples...
  Reloaded model with R2=0.8934
```
### After Run

Check `3_results/`:
- `optimization_summary.json` - Final results
- `turbo_logs/` - Per-iteration details
- `study.db` - All trials in SQLite
Query best results:

```sql
SELECT iter_num, weighted_sum, prediction_error
FROM trials
WHERE is_feasible = 1
ORDER BY weighted_sum
LIMIT 10;
```
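The same query can be run from Python with the standard-library `sqlite3` module. The demo below builds an in-memory table mirroring the assumed `trials` schema; point `connect()` at `3_results/study_custom.db` for the real data:

```python
import sqlite3

# In-memory demo table; use "3_results/study_custom.db" for the real study.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trials (iter_num INTEGER, weighted_sum REAL, "
            "prediction_error REAL, is_feasible INTEGER)")
con.executemany("INSERT INTO trials VALUES (?, ?, ?, ?)",
                [(1, 284.50, 2.3, 1), (2, 282.05, 1.1, 1), (3, 281.00, 9.9, 0)])

rows = con.execute(
    "SELECT iter_num, weighted_sum, prediction_error FROM trials "
    "WHERE is_feasible = 1 ORDER BY weighted_sum LIMIT 10").fetchall()
for iter_num, ws, err in rows:  # best feasible designs first
    print(iter_num, ws, err)
```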
## Troubleshooting

### "No trained model found"

The initial model wasn't copied. Run:

```bat
copy ..\m1_mirror_surrogate_turbo\3_results\checkpoints\best_model.pt 3_results\checkpoints\
```
### NX Session Issues

If NX fails to start:
- Check that the license server is accessible
- Ensure no other NX sessions are conflicting
- Try `--test` first to verify a single FEA works
### Low Diversity Warning

If you see "Only found 2/5 diverse candidates":
- The design space may be well-explored already
- Consider reducing `MIN_CANDIDATE_DISTANCE` in the code
- Or increase `--surrogate-trials` to find more diverse options
## Dashboard Integration
The turbo study logs FEA-validated trials to an Optuna-format database (study.db) for dashboard visibility. This allows:
- Viewing only FEA-validated trials (not NN-only)
- Standard Optuna dashboard features (convergence, parameter importance)
- Prediction error tracking via user attributes
### Dual Database Structure

| Database | Format | Purpose |
|---|---|---|
| `study.db` | Optuna | Dashboard visibility (FEA trials only) |
| `study_custom.db` | Custom SQLite | Detailed turbo data (iteration logs, prediction errors) |
### Backfilling Existing Data

If you ran the optimization before dashboard integration was added, use:

```bash
python scripts/backfill_optuna.py
```

This converts trials from `study_custom.db` to Optuna format in `study.db`.
## Related Studies
| Study | Purpose | Best WS |
|---|---|---|
| V6-V9 | Initial exploration | ~290-310 |
| V11 | ZernikeOPD + whiffle | 284.19 |
| V12 | All 10 variables | ~285 |
| Turbo V1 | Self-improving surrogate | 282.05 ✓ |
## Results Summary
- Best WS: 282.05 (trial #28)
- Improvement vs V11: 0.75%
- FEA Budget Used: 45/100
- Turbo Iterations: 9
See STUDY_REPORT.md for detailed results and analysis.
## Author
Atomizer - 2025-12-24