# Protocol 11: Multi-Objective Optimization Support

**Status:** MANDATORY
**Applies To:** ALL optimization studies
**Last Updated:** 2025-11-21
## Overview
ALL optimization engines in Atomizer MUST support both single-objective and multi-objective optimization without requiring code changes. This is a critical requirement that prevents runtime failures.
## The Problem
Previously, `IntelligentOptimizer` (Protocol 10) only supported single-objective optimization. When used with multi-objective studies, it would:

- Successfully run all trials
- Save trials to the Optuna database (`study.db`)
- **Crash** when trying to compile results, causing:
  - No intelligent optimizer tracking files (`confidence_history.json`, `strategy_transitions.json`)
  - No `optimization_summary.json`
  - No final reports
  - Silent failures that are hard to debug
## The Root Cause
Optuna has different APIs for single- vs. multi-objective studies:

### Single-Objective

```python
study.best_trial   # Returns a single Trial object
study.best_params  # Returns a dict of parameters
study.best_value   # Returns a float
```

### Multi-Objective

```python
study.best_trials  # Returns a LIST of Pareto-optimal trials
study.best_params  # ❌ Raises RuntimeError
study.best_value   # ❌ Raises RuntimeError
study.best_trial   # ❌ Raises RuntimeError
```
## The Solution
### 1. Always Check Study Type

```python
is_multi_objective = len(study.directions) > 1
```
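As a quick, dependency-free illustration of this check (the `SimpleNamespace` objects below are hypothetical stand-ins; a real Optuna study exposes the same `directions` attribute):

```python
from types import SimpleNamespace

def is_multi_objective(study):
    # A study is multi-objective when it optimizes more than one direction
    return len(study.directions) > 1

# Hypothetical stand-ins for Optuna study objects
single = SimpleNamespace(directions=["minimize"])
multi = SimpleNamespace(directions=["minimize", "minimize"])

print(is_multi_objective(single))  # False
print(is_multi_objective(multi))   # True
```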
2. Use Conditional Access Patterns
if is_multi_objective:
best_trials = study.best_trials
if best_trials:
# Select representative trial (e.g., first Pareto solution)
representative_trial = best_trials[0]
best_params = representative_trial.params
best_value = representative_trial.values # Tuple
best_trial_num = representative_trial.number
else:
best_params = {}
best_value = None
best_trial_num = None
else:
# Single-objective: safe to use standard API
best_params = study.best_params
best_value = study.best_value
best_trial_num = study.best_trial.number
### 3. Return Rich Metadata

Always include in results:

```python
{
    'best_params': best_params,
    'best_value': best_value,  # float or tuple
    'best_trial': best_trial_num,
    'is_multi_objective': is_multi_objective,
    'pareto_front_size': len(study.best_trials) if is_multi_objective else 1,
    # ... other fields
}
```
## Implementation Checklist

When creating or modifying any optimization component:

- [ ] **Study Creation:** Support the `directions` parameter:

  ```python
  if directions:
      study = optuna.create_study(directions=directions, ...)
  else:
      study = optuna.create_study(direction='minimize', ...)
  ```

- [ ] **Result Compilation:** Check `len(study.directions) > 1`
- [ ] **Best Trial Access:** Use conditional logic (single vs. multi)
- [ ] **Logging:** Print the Pareto front size for multi-objective studies
- [ ] **Reports:** Handle tuple objectives in visualization
- [ ] **Testing:** Test with BOTH single- and multi-objective cases
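The study-creation branch can be centralized in a small helper that builds the keyword arguments for `optuna.create_study` — a sketch with a hypothetical name, assuming `directions` is either `None` or a list of direction strings:

```python
def study_kwargs(directions=None):
    """Build create_study keyword arguments for single- or
    multi-objective studies (hypothetical helper)."""
    if directions and len(directions) > 1:
        return {"directions": list(directions)}
    # Single objective: fall back to a scalar direction
    return {"direction": directions[0] if directions else "minimize"}

# Usage sketch: study = optuna.create_study(**study_kwargs(directions), ...)
```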
## Files Fixed

- ✅ `optimization_engine/intelligent_optimizer.py`
  - `_compile_results()` method
  - `_run_fallback_optimization()` method
## Files That Need Review

Check these files for similar issues:

- `optimization_engine/study_continuation.py` (lines 96, 259-260)
- `optimization_engine/hybrid_study_creator.py` (line 468)
- `optimization_engine/intelligent_setup.py` (line 606)
- `optimization_engine/llm_optimization_runner.py` (line 384)
## Testing Protocol

Before marking any optimization study as complete:

1. **Single-Objective Test**

   ```python
   directions = None  # or ['minimize']
   # Should complete without errors
   ```

2. **Multi-Objective Test**

   ```python
   directions = ['minimize', 'minimize']
   # Should complete without errors
   # Should generate ALL tracking files
   ```

3. **Verify Outputs**

   - `2_results/study.db` exists
   - `2_results/intelligent_optimizer/` has tracking files
   - `2_results/optimization_summary.json` exists
   - No `RuntimeError` in logs
## Design Principle

**"Write Once, Run Anywhere"**

Any optimization component should:

- Accept both single- and multi-objective problems
- Automatically detect the study type
- Handle result compilation appropriately
- Never raise `RuntimeError` due to API misuse
## Example: Bracket Study

The `bracket_stiffness_optimization` study is multi-objective:

- Objective 1: Maximize stiffness (minimize `-stiffness`)
- Objective 2: Minimize mass
- Constraint: mass ≤ 0.2 kg

This study exposed the bug because:

```python
directions = ["minimize", "minimize"]  # Multi-objective
```
After the fix, it should:
- Run all 50 trials successfully
- Generate Pareto front with multiple solutions
- Save all intelligent optimizer tracking files
- Create complete reports with tuple objectives
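The sign convention behind "maximize stiffness (minimize -stiffness)" can be made concrete — an illustrative sketch; the function names are hypothetical, not from the codebase:

```python
def bracket_objectives(stiffness, mass):
    """Map the bracket study's goals onto two 'minimize' directions:
    maximizing stiffness becomes minimizing its negative."""
    return (-stiffness, mass)

def satisfies_mass_constraint(mass, limit=0.2):
    # Constraint from the study definition: mass <= 0.2 kg
    return mass <= limit
```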
## Future Work

- Add explicit validation in `IntelligentOptimizer.__init__()` to warn about common mistakes
- Create a helper function `get_best_solution(study)` that handles both cases
- Add unit tests for multi-objective support in all optimizers
**Remember:** Multi-objective support is NOT optional. It's a core requirement for production-ready optimization engines.