# Protocol 13: Adaptive Multi-Objective Optimization
## Overview
Protocol 13 implements an adaptive multi-objective optimization strategy that combines:
- FEA (Finite Element Analysis) for ground truth simulations
- Neural Network Surrogates for rapid exploration
- Iterative refinement with periodic retraining
This protocol is ideal for expensive simulations where each FEA run takes significant time (minutes to hours), but you need to explore a large design space efficiently.
## When to Use Protocol 13
| Scenario | Recommended |
|---|---|
| FEA takes > 5 minutes per run | Yes |
| Need to explore > 100 designs | Yes |
| Multi-objective optimization (2-4 objectives) | Yes |
| Single objective, fast FEA (< 1 min) | No, use Protocol 10/11 |
| Highly nonlinear response surfaces | Yes, with more FEA samples |
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                   Adaptive Optimization Loop                    │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Iteration 1:                                                   │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │ Initial FEA  │ -> │   Train NN   │ -> │  NN Search   │       │
│  │   (50-100)   │    │  Surrogate   │    │ (1000 trials)│       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│                                                 │               │
│                                                 v               │
│  Iteration 2+:                                                  │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │ Validate Top │ -> │  Retrain NN  │ -> │  NN Search   │       │
│  │ NN with FEA  │    │ with new data│    │ (1000 trials)│       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
## Configuration
### optimization_config.json

```json
{
  "study_name": "my_adaptive_study",
  "protocol": 13,
  "adaptive_settings": {
    "enabled": true,
    "initial_fea_trials": 50,
    "nn_trials_per_iteration": 1000,
    "fea_validation_per_iteration": 5,
    "max_iterations": 10,
    "convergence_threshold": 0.01,
    "retrain_epochs": 100
  },
  "objectives": [
    {
      "name": "thermal_40_vs_20",
      "direction": "minimize",
      "weight": 1.0
    },
    {
      "name": "thermal_60_vs_20",
      "direction": "minimize",
      "weight": 0.5
    },
    {
      "name": "manufacturability",
      "direction": "minimize",
      "weight": 0.3
    }
  ],
  "design_variables": [
    {
      "name": "rib_thickness",
      "expression_name": "rib_thickness",
      "min": 5.0,
      "max": 15.0,
      "baseline": 10.0
    }
  ],
  "surrogate_settings": {
    "enabled": true,
    "model_type": "neural_network",
    "hidden_layers": [128, 64, 32],
    "learning_rate": 0.001,
    "batch_size": 32
  }
}
```
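The per-objective `weight` fields feed a scalar ranking (cf. `best_weighted` in `adaptive_state.json`). A minimal sketch, assuming a simple weighted sum over minimized objective values (the exact combination rule used by the runner is an assumption):

```python
# Sketch: weighted-sum scalarization of the configured objectives.
# Objective names and weights mirror optimization_config.json.
OBJECTIVES = [
    {"name": "thermal_40_vs_20", "direction": "minimize", "weight": 1.0},
    {"name": "thermal_60_vs_20", "direction": "minimize", "weight": 0.5},
    {"name": "manufacturability", "direction": "minimize", "weight": 0.3},
]

def weighted_score(values):
    """Combine objective values into one scalar; lower is better."""
    total = 0.0
    for obj in OBJECTIVES:
        v = values[obj["name"]]
        if obj["direction"] == "maximize":
            v = -v  # flip sign so lower is always better
        total += obj["weight"] * v
    return total

score = weighted_score({"thermal_40_vs_20": 1.0,
                        "thermal_60_vs_20": 0.5,
                        "manufacturability": 0.7})  # ≈ 1.46
```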
### Key Parameters
| Parameter | Description | Recommended |
|---|---|---|
| `initial_fea_trials` | FEA runs before first NN training | 50-100 |
| `nn_trials_per_iteration` | NN-predicted trials per iteration | 500-2000 |
| `fea_validation_per_iteration` | Top NN trials validated with FEA | 3-10 |
| `max_iterations` | Maximum adaptive iterations | 5-20 |
| `convergence_threshold` | Stop if improvement < threshold | 0.01 (1%) |
## Workflow
### Phase 1: Initial FEA Sampling

```python
# Generates space-filling Latin Hypercube samples
# Runs FEA on each sample
# Stores results in Optuna database with source='FEA'
```
### Phase 2: Neural Network Training

```python
# Extracts all FEA trials from database
# Normalizes inputs (design variables) to [0, 1]
# Trains multi-output neural network
# Validates on held-out set (20%)
```
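The normalization step can be sketched as a min-max scaling against the bounds from the config (the bounds below are the `rib_thickness` limits from `optimization_config.json`):

```python
# Sketch: min-max normalization of design variables to [0, 1]
# before NN training, using the configured variable bounds.
BOUNDS = {"rib_thickness": (5.0, 15.0)}

def normalize(params):
    out = {}
    for name, value in params.items():
        lo, hi = BOUNDS[name]
        out[name] = (value - lo) / (hi - lo)
    return out

print(normalize({"rib_thickness": 10.0}))  # {'rib_thickness': 0.5}
```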
### Phase 3: NN-Accelerated Search

```python
# Uses trained NN as objective function
# Runs NSGA-II with 1000+ trials (fast, ~ms per trial)
# Identifies Pareto-optimal candidates
# Stores predictions with source='NN'
```
### Phase 4: FEA Validation

```python
# Selects top N NN predictions
# Runs actual FEA on these candidates
# Updates database with ground truth
# Checks for improvement
```
### Phase 5: Iteration

```python
# If improved: retrain NN with new FEA data
# If converged: stop and report best
# Otherwise: continue to next iteration
```
## Output Files
```
studies/my_study/
├── 3_results/
│   ├── study.db                # Optuna database (all trials)
│   ├── adaptive_state.json     # Current iteration state
│   ├── surrogate_model.pt      # Trained neural network
│   ├── training_history.json   # NN training metrics
│   └── STUDY_REPORT.md         # Generated summary report
```
### adaptive_state.json
```json
{
  "iteration": 3,
  "total_fea_count": 103,
  "total_nn_count": 3000,
  "best_weighted": 1.456,
  "best_params": {
    "rib_thickness": 8.5,
    "...": "..."
  },
  "history": [
    {"iteration": 1, "fea_count": 50, "nn_count": 1000, "improved": true},
    {"iteration": 2, "fea_count": 55, "nn_count": 2000, "improved": true},
    {"iteration": 3, "fea_count": 103, "nn_count": 3000, "improved": false}
  ]
}
```
## Dashboard Integration
Protocol 13 studies display in the Atomizer dashboard with:
- FEA vs NN Differentiation: Blue circles for FEA, orange crosses for NN
- Pareto Front Highlighting: Green markers for Pareto-optimal solutions
- Convergence Plot: Shows optimization progress with best-so-far line
- Parallel Coordinates: Filter and explore the design space
- Parameter Importance: Correlation-based sensitivity analysis
## Best Practices
### 1. Initial Sampling Strategy

Use Latin Hypercube Sampling (LHS) for the initial FEA trials to ensure good design-space coverage. Optuna does not ship a Latin Hypercube sampler, so generate the samples with `scipy.stats.qmc` and scale them to the variable bounds:

```python
from scipy.stats import qmc

lhs = qmc.LatinHypercube(d=1, seed=42)                # d = number of design variables
samples = qmc.scale(lhs.random(n=50), [5.0], [15.0])  # rib_thickness bounds
```
### 2. Neural Network Architecture
For most problems, start with:
- 2-3 hidden layers
- 64-128 neurons per layer
- ReLU activation
- Adam optimizer with lr=0.001
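A minimal PyTorch sketch of these defaults (hidden layers `[128, 64, 32]` as in the config above); the input and output sizes are illustrative assumptions:

```python
import torch
from torch import nn

# Sketch of the recommended surrogate: 3 hidden layers, ReLU
# activations, Adam with lr=0.001. n_inputs / n_objectives are
# assumptions for illustration.
n_inputs, n_objectives = 4, 3
model = nn.Sequential(
    nn.Linear(n_inputs, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, n_objectives),  # one output per objective
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```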
### 3. Validation Strategy
Always validate top NN predictions with FEA before trusting them:
- NN predictions can be wrong in unexplored regions
- FEA validation provides ground truth
- More FEA = more accurate NN (trade-off with time)
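Selecting candidates for validation can be sketched as a simple top-N pick on a precomputed weighted score (lower is better); the trial dicts here are hypothetical stand-ins for Optuna trial records:

```python
# Sketch: choose the N most promising surrogate predictions
# to re-run with real FEA.
nn_trials = [
    {"params": {"rib_thickness": 8.5}, "weighted": 1.46},
    {"params": {"rib_thickness": 9.2}, "weighted": 1.31},
    {"params": {"rib_thickness": 12.0}, "weighted": 2.10},
    {"params": {"rib_thickness": 7.8}, "weighted": 1.52},
]

def select_for_validation(trials, n=2):
    """Return the n best-scored candidates (lower weighted = better)."""
    return sorted(trials, key=lambda t: t["weighted"])[:n]

best = select_for_validation(nn_trials, n=2)
```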
### 4. Convergence Criteria
Stop when:
- No improvement for 2-3 consecutive iterations
- Reached FEA budget limit
- Objective improvement < 1% threshold
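The stopping rule can be sketched against the `history` list from `adaptive_state.json`; the exact rule used by the real runner is an assumption:

```python
# Sketch: stop when the FEA budget is spent or when `patience`
# consecutive iterations show no improvement.
def should_stop(history, patience=2, max_fea=200):
    if not history:
        return False
    if history[-1]["fea_count"] >= max_fea:
        return True  # FEA budget exhausted
    recent = history[-patience:]
    # True only if we have a full window and nothing in it improved
    return len(recent) == patience and not any(h["improved"] for h in recent)

history = [
    {"iteration": 1, "fea_count": 50, "improved": True},
    {"iteration": 2, "fea_count": 55, "improved": False},
    {"iteration": 3, "fea_count": 103, "improved": False},
]
print(should_stop(history))  # True: two non-improving iterations in a row
```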
## Example: M1 Mirror Optimization
```bash
# Start adaptive optimization
cd studies/m1_mirror_adaptive_V11
python run_optimization.py --start

# Monitor progress
python run_optimization.py --status

# Generate report
python generate_report.py
```
Results after 3 iterations:
- 103 FEA trials
- 3000 NN trials
- Best thermal 40°C vs 20°C: 5.99 nm RMS
- Best thermal 60°C vs 20°C: 14.02 nm RMS
## Troubleshooting
### NN Predictions Don't Match FEA
- Increase initial FEA samples
- Add more hidden layers
- Check for outliers in training data
- Ensure proper normalization
### Optimization Not Converging
- Increase NN trials per iteration
- Check objective function implementation
- Verify design variable bounds
- Consider adding constraints
### Memory Issues
- Reduce `nn_trials_per_iteration`
- Use batch processing for large datasets
- Clear trial cache periodically