# SYS_11: Multi-Objective Support
<!--
PROTOCOL: Multi-Objective Optimization Support
LAYER: System
VERSION: 1.0
STATUS: Active (MANDATORY)
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->
## Overview
**ALL** optimization engines in Atomizer **MUST** support both single-objective and multi-objective optimization without requiring code changes. This protocol ensures system robustness and prevents runtime failures when handling Pareto optimization.
**Key Requirement**: Code must work with both `study.best_trial` (single) and `study.best_trials` (multi) APIs.
---
## When to Use
| Trigger | Action |
|---------|--------|
| 2+ objectives defined in config | Use NSGA-II sampler |
| "pareto", "multi-objective" mentioned | Load this protocol |
| "tradeoff", "competing goals" | Suggest multi-objective approach |
| "minimize X AND maximize Y" | Configure as multi-objective |
---
## Quick Reference
**Single vs. Multi-Objective API**:

| Operation | Single-Objective | Multi-Objective |
|-----------|-----------------|-----------------|
| Best trial | `study.best_trial` | `study.best_trials[0]` |
| Best params | `study.best_params` | `trial.params` |
| Best value | `study.best_value` | `trial.values` (tuple) |
| Direction | `direction='minimize'` | `directions=['minimize', 'maximize']` |
| Sampler | TPE, CMA-ES, GP | NSGA-II (mandatory) |
---
## The Problem This Solves
Previously, optimization components supported only single-objective optimization. When used with multi-objective studies:
1. Trials run successfully
2. Trials saved to database
3. **CRASH** when compiling results
- `study.best_trial` raises RuntimeError
- No tracking files generated
- Silent failures
**Root Cause**: Optuna has different APIs:
```python
# Single-Objective (works)
study.best_trial # Returns Trial object
study.best_params # Returns dict
study.best_value # Returns float
# Multi-Objective (RAISES RuntimeError)
study.best_trial # ❌ RuntimeError
study.best_params # ❌ RuntimeError
study.best_value # ❌ RuntimeError
study.best_trials # ✓ Returns LIST of Pareto-optimal trials
```
---
## Solution Pattern
### 1. Always Check Study Type
```python
is_multi_objective = len(study.directions) > 1
```
### 2. Use Conditional Access
```python
if is_multi_objective:
    best_trials = study.best_trials
    if best_trials:
        # Select representative trial (e.g., first Pareto solution)
        representative_trial = best_trials[0]
        best_params = representative_trial.params
        best_value = representative_trial.values  # Tuple
        best_trial_num = representative_trial.number
    else:
        best_params = {}
        best_value = None
        best_trial_num = None
else:
    # Single-objective: safe to use standard API
    best_params = study.best_params
    best_value = study.best_value
    best_trial_num = study.best_trial.number
```
### 3. Return Rich Metadata
Always include in results:
```python
{
    'best_params': best_params,
    'best_value': best_value,  # float or tuple
    'best_trial': best_trial_num,
    'is_multi_objective': is_multi_objective,
    'pareto_front_size': len(study.best_trials) if is_multi_objective else 1,
}
```
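Steps 1–3 can be folded into one helper. `extract_best` is a hypothetical name, not part of the protocol; it touches only the documented `Study` attributes, so the same code serves both study types:

```python
def extract_best(study):
    """Return result metadata that is safe for any Optuna study.

    (`extract_best` is an illustrative helper, not part of the protocol.)
    """
    if len(study.directions) > 1:
        trials = study.best_trials
        rep = trials[0] if trials else None  # first Pareto solution as representative
        return {
            "best_params": rep.params if rep else {},
            "best_value": tuple(rep.values) if rep else None,
            "best_trial": rep.number if rep else None,
            "is_multi_objective": True,
            "pareto_front_size": len(trials),
        }
    # Single-objective: the standard API is safe
    return {
        "best_params": study.best_params,
        "best_value": study.best_value,
        "best_trial": study.best_trial.number,
        "is_multi_objective": False,
        "pareto_front_size": 1,
    }
```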
---
## Implementation Checklist
When creating or modifying any optimization component:
- [ ] **Study Creation**: Support `directions` parameter
```python
if len(objectives) > 1:
    directions = [obj['type'] for obj in objectives]  # e.g. ['minimize', 'maximize']
    study = optuna.create_study(directions=directions, ...)
else:
    study = optuna.create_study(direction='minimize', ...)
```
- [ ] **Result Compilation**: Check `len(study.directions) > 1`
- [ ] **Best Trial Access**: Use conditional logic
- [ ] **Logging**: Print Pareto front size for multi-objective
- [ ] **Reports**: Handle tuple objectives in visualization
- [ ] **Testing**: Test with BOTH single and multi-objective cases
---
## Configuration
**Multi-Objective Config Example**:
```json
{
  "objectives": [
    {
      "name": "stiffness",
      "type": "maximize",
      "description": "Structural stiffness (N/mm)",
      "unit": "N/mm"
    },
    {
      "name": "mass",
      "type": "minimize",
      "description": "Total mass (kg)",
      "unit": "kg"
    }
  ],
  "optimization_settings": {
    "sampler": "NSGAIISampler",
    "n_trials": 50
  }
}
```
**Objective Function Return Format**:
```python
# Single-objective: return a float
def objective_single(trial):
    # ... compute ...
    return objective_value  # float

# Multi-objective: return a tuple
def objective_multi(trial):
    # ... compute ...
    return (stiffness, mass)  # tuple of floats
```
---
## Semantic Directions
Use semantic direction values - no negative tricks:
```python
# ✅ CORRECT: Semantic directions
objectives = [
    {"name": "stiffness", "type": "maximize"},
    {"name": "mass", "type": "minimize"}
]
# Return (stiffness, mass) - both positive values

# ❌ WRONG: Negative trick
def objective(trial):
    return (-stiffness, mass)  # Don't negate to fake maximize
```
Optuna handles directions correctly when you specify `directions=['maximize', 'minimize']`.
---
## Testing Protocol
Before marking any optimization component complete:
### Test 1: Single-Objective
```python
# Config with 1 objective
directions = None # or ['minimize']
# Run optimization
# Verify: completes without errors
```
### Test 2: Multi-Objective
```python
# Config with 2+ objectives
directions = ['minimize', 'minimize']
# Run optimization
# Verify: completes without errors
# Verify: ALL tracking files generated
```
### Test 3: Verify Outputs
- `2_results/study.db` exists
- `2_results/intelligent_optimizer/` has tracking files
- `2_results/optimization_summary.json` exists
- No RuntimeError in logs
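The file checks in Test 3 can be automated with a short helper (`verify_outputs` is a hypothetical name; the paths mirror the list above and should be adjusted to your study layout — the log check for RuntimeError remains manual):

```python
from pathlib import Path

def verify_outputs(study_dir):
    """Return the required Test 3 outputs that are missing."""
    study_dir = Path(study_dir)
    missing = [
        name for name in ("study.db", "optimization_summary.json")
        if not (study_dir / name).is_file()
    ]
    tracker = study_dir / "intelligent_optimizer"
    if not (tracker.is_dir() and any(tracker.iterdir())):
        missing.append("intelligent_optimizer/ tracking files")
    return missing

print(verify_outputs("2_results"))  # empty list when all outputs are present
```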
---
## NSGA-II Configuration
For multi-objective optimization, use NSGA-II:
```python
import optuna
from optuna.samplers import NSGAIISampler
sampler = NSGAIISampler(
    population_size=50,  # Pareto front population
    mutation_prob=None,  # Auto-computed
    crossover_prob=0.9,  # Recombination rate
    swapping_prob=0.5,   # Gene swapping probability
    seed=42,             # Reproducibility
)

study = optuna.create_study(
    directions=['maximize', 'minimize'],
    sampler=sampler,
    study_name="multi_objective_study",
    storage="sqlite:///study.db"
)
```
---
## Pareto Front Handling
### Accessing Pareto Solutions
```python
if is_multi_objective:
    pareto_trials = study.best_trials
    print(f"Found {len(pareto_trials)} Pareto-optimal solutions")
    for trial in pareto_trials:
        print(f"Trial {trial.number}: {trial.values}")
        print(f"  Params: {trial.params}")
```
### Selecting Representative Solution
```python
# Option 1: First Pareto solution
representative = study.best_trials[0]

# Option 2: Weighted-sum selection (assumes all objectives are minimized,
# or that values are normalized to comparable scales)
def weighted_selection(trials, weights):
    best_score = float('inf')
    best_trial = None
    for trial in trials:
        score = sum(w * v for w, v in zip(weights, trial.values))
        if score < best_score:
            best_score = score
            best_trial = trial
    return best_trial

# Option 3: Knee point (maximum distance from the line joining the extreme solutions)
# Requires more computation
```
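For a 2-D front, Option 3 reduces to a few lines. The sketch below implements a common heuristic (assuming both objectives are minimized and comparably scaled): pick the point farthest from the line through the two extreme solutions.

```python
import math

def knee_point(front):
    """Index of the knee of a 2-D Pareto front (list of (f1, f2) tuples)."""
    (x1, y1), (x2, y2) = min(front), max(front)  # extreme solutions by f1
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0

    def dist(p):
        # Perpendicular distance from p to the extreme-to-extreme line
        return abs(dy * (p[0] - x1) - dx * (p[1] - y1)) / norm

    return max(range(len(front)), key=lambda i: dist(front[i]))

front = [(0.0, 10.0), (1.0, 3.0), (4.0, 2.0), (10.0, 0.0)]
print(front[knee_point(front)])  # → (1.0, 3.0)
```

Objectives with very different units should be normalized first, otherwise the larger-scaled objective dominates the distance.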
---
## Troubleshooting
| Symptom | Cause | Solution |
|---------|-------|----------|
| RuntimeError on `best_trial` | Multi-objective study using single API | Use conditional check pattern |
| Empty Pareto front | No feasible solutions | Check constraints, relax if needed |
| Only 1 Pareto solution | Objectives not conflicting | Verify objectives are truly competing |
| NSGA-II with single objective | Wrong config | Use TPE/CMA-ES for single-objective |
---
## Cross-References
- **Depends On**: None (mandatory for all)
- **Used By**: All optimization components
- **Integrates With**:
- [SYS_10_IMSO](./SYS_10_IMSO.md) (selects NSGA-II for multi-objective)
- [SYS_13_DASHBOARD_TRACKING](./SYS_13_DASHBOARD_TRACKING.md) (Pareto visualization)
- **See Also**: [OP_04_ANALYZE_RESULTS](../operations/OP_04_ANALYZE_RESULTS.md) for Pareto analysis
---
## Implementation Files
Files that implement this protocol:
- `optimization_engine/intelligent_optimizer.py` - `_compile_results()` method
- `optimization_engine/study_continuation.py` - Result handling
- `optimization_engine/hybrid_study_creator.py` - Study creation
Files requiring this protocol:
- [ ] `optimization_engine/study_continuation.py`
- [ ] `optimization_engine/hybrid_study_creator.py`
- [ ] `optimization_engine/intelligent_setup.py`
- [ ] `optimization_engine/llm_optimization_runner.py`
---
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-11-20 | Initial release, mandatory for all engines |