feat: Add Adaptive Method Selector for intelligent optimization strategy

The AMS analyzes optimization problems and recommends the best method:
- ProblemProfiler: Static analysis of config (dimensions, objectives, constraints)
- EarlyMetricsCollector: Dynamic analysis from FEA trials (smoothness, correlations)
- AdaptiveMethodSelector: Rule-based scoring for method recommendations
- RuntimeAdvisor: Mid-run monitoring for method pivots

Key features:
- Analyzes problem characteristics (n_variables, n_objectives, constraints)
- Computes response smoothness and variable sensitivity from trial data
- Recommends TURBO, HYBRID_LOOP, PURE_FEA, or GNN_FIELD
- Provides confidence scores and suggested parameters
- CLI: python -m optimization_engine.method_selector <config> [db]

Documentation:
- Add SYS_15_METHOD_SELECTOR.md protocol
- Update CLAUDE.md with new system protocol reference

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: Antoine
Date: 2025-12-07 05:51:49 -05:00
Parent: 602560c46a
Commit: 3e9488d9f0
3 changed files with 1210 additions and 1 deletions

CLAUDE.md

@@ -48,6 +48,7 @@ The Protocol Operating System (POS) provides layered documentation:
| 12 | Extractor Library | Any extraction, "displacement", "stress" |
| 13 | Dashboard | "dashboard", "real-time", monitoring |
| 14 | Neural Acceleration | >50 trials, "neural", "surrogate" |
| 15 | Method Selector | "which method", "recommend", "turbo vs" |
**Full specs**: `docs/protocols/system/SYS_{N}_{NAME}.md`
@@ -72,7 +73,7 @@ Atomizer/
├── .claude/skills/ # LLM skills (Bootstrap + Core + Modules)
├── docs/protocols/ # Protocol Operating System
│ ├── operations/ # OP_01 - OP_06
-│   ├── system/            # SYS_10 - SYS_14
+│   ├── system/            # SYS_10 - SYS_15
│ └── extensions/ # EXT_01 - EXT_04
├── optimization_engine/ # Core Python modules
│ └── extractors/ # Physics extraction library

docs/protocols/system/SYS_15_METHOD_SELECTOR.md

@@ -0,0 +1,326 @@
# SYS_15: Adaptive Method Selector
<!--
PROTOCOL: Adaptive Method Selector
LAYER: System
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-06
PRIVILEGE: user
LOAD_WITH: [SYS_10_IMSO, SYS_11_MULTI_OBJECTIVE, SYS_14_NEURAL_ACCELERATION]
-->
## Overview
The **Adaptive Method Selector (AMS)** analyzes optimization problems and recommends the best method (turbo, hybrid_loop, pure_fea, etc.) based on:
1. **Static Analysis**: Problem characteristics from config (dimensionality, objectives, constraints)
2. **Dynamic Analysis**: Early FEA trial metrics (smoothness, correlations, feasibility)
3. **Runtime Monitoring**: Continuous optimization performance assessment
**Key Value**: Eliminates guesswork in choosing optimization strategies by providing data-driven recommendations.
---
## When to Use
| Trigger | Action |
|---------|--------|
| Starting a new optimization | Run method selector first |
| "which method", "recommend" mentioned | Suggest method selector |
| Unsure between turbo/hybrid/fea | Use method selector |
| > 20 FEA trials completed | Re-run for updated recommendation |
---
## Quick Reference
### CLI Usage
```bash
python -m optimization_engine.method_selector <config_path> [db_path]
```
**Examples**:
```bash
# Config-only analysis (before any FEA trials)
python -m optimization_engine.method_selector 1_setup/optimization_config.json
# Full analysis with FEA data
python -m optimization_engine.method_selector 1_setup/optimization_config.json 2_results/study.db
```
### Python API
```python
from pathlib import Path
from optimization_engine.method_selector import recommend_method

recommendation = recommend_method(
    Path("1_setup/optimization_config.json"),
    Path("2_results/study.db"),
)
print(recommendation.method)        # 'turbo', 'hybrid_loop', 'pure_fea', 'gnn_field'
print(recommendation.confidence)    # 0.0 - 1.0
print(recommendation.parameters)    # {'nn_trials': 5000, 'batch_size': 100, ...}
print(recommendation.reasoning)     # Explanation string
print(recommendation.alternatives)  # Other methods with scores
```
---
## Available Methods
| Method | Description | Best For |
|--------|-------------|----------|
| **TURBO** | Aggressive NN exploration with single-best FEA validation | Low-dimensional, smooth responses |
| **HYBRID_LOOP** | Iterative train→predict→validate→retrain cycle | Moderate complexity, uncertain landscape |
| **PURE_FEA** | Traditional FEA-only optimization | High-dimensional, complex physics |
| **GNN_FIELD** | Graph neural network for field prediction | Need full field visualization |
---
## Selection Criteria
### Static Factors (from config)
| Factor | Favors TURBO | Favors HYBRID_LOOP | Favors PURE_FEA |
|--------|--------------|---------------------|-----------------|
| **n_variables** | ≤5 | 5-10 | >10 |
| **n_objectives** | 1-3 | 2-4 | Any |
| **n_constraints** | ≤3 | 3-5 | >5 |
| **FEA budget** | >50 trials | 30-50 trials | <30 trials |
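The FEA-budget row is derived from the timing fields in the config; a minimal sketch of that computation, mirroring `ProblemProfiler`'s budget estimate:

```python
def max_fea_trials(budget_hours: float, seconds_per_trial: float) -> int:
    """Trial budget used by the static analysis: wall-clock budget
    divided by the per-trial FEA time estimate."""
    if seconds_per_trial <= 0:
        return 0
    return int(budget_hours * 3600 / seconds_per_trial)

# An 8-hour budget at ~300 s per FEA run allows 96 trials,
# which lands in the "favors TURBO" column (> 50 trials).
print(max_fea_trials(8.0, 300.0))  # → 96
```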
### Dynamic Factors (from FEA trials)
| Factor | Measurement | Impact |
|--------|-------------|--------|
| **Response smoothness** | Lipschitz constant estimate | Smooth → NN works well |
| **Variable sensitivity** | Correlation with objectives | High correlation → easier to learn |
| **Feasibility rate** | % of valid designs | Low feasibility → need more exploration |
| **Objective correlations** | Pairwise correlations | Strong correlations → simpler landscape |
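Of these, response smoothness is the least obvious to compute: rather than a true Lipschitz estimate, the implementation maps the average coefficient of variation across objectives to a 0-1 score. A self-contained sketch of that heuristic:

```python
import numpy as np

def estimate_smoothness(objective_samples: dict) -> float:
    """Smoothness proxy used by EarlyMetricsCollector: 1 minus the mean
    coefficient of variation (std/|mean|) across objectives, clamped to
    [0, 1]. Higher values suggest a landscape a surrogate can learn."""
    cvs = []
    for values in objective_samples.values():
        mean = np.mean(values)
        if mean != 0:
            cvs.append(np.std(values) / abs(mean))
    if not cvs:
        return 0.5  # unknown landscape: neutral score
    return float(max(0.0, min(1.0, 1.0 - np.mean(cvs))))
```

Objectives that vary little relative to their mean score near 1.0; wildly spread objectives push the score toward 0.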
---
## Architecture
```
┌───────────────────────────────────────────────────────────────────┐
│ AdaptiveMethodSelector │
├───────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────┐ ┌──────────────────────┐ │
│ │ ProblemProfiler │ │ EarlyMetricsCollector │ │
│ │ (static analysis)│ │ (dynamic analysis) │ │
│ └────────┬─────────┘ └──────────┬────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌────────────────────────────────────────────────┐ │
│ │ _score_methods() │ │
│ │ (rule-based scoring with weighted factors) │ │
│ └──────────────────────┬─────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌────────────────────────────────────────────────┐ │
│ │ MethodRecommendation │ │
│ │ method, confidence, parameters, reasoning │ │
│ └────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────┐ │
│ │ RuntimeAdvisor │ ← Monitors during optimization │
│ │ (pivot advisor) │ │
│ └──────────────────┘ │
│ │
└───────────────────────────────────────────────────────────────────┘
```
---
## Components
### 1. ProblemProfiler
Extracts static problem characteristics from `optimization_config.json`:
```python
@dataclass
class ProblemProfile:
n_variables: int
variable_names: List[str]
variable_bounds: Dict[str, Tuple[float, float]]
n_objectives: int
objective_names: List[str]
n_constraints: int
fea_time_estimate: float
max_fea_trials: int
is_multi_objective: bool
has_constraints: bool
```
### 2. EarlyMetricsCollector
Computes metrics from first N FEA trials in `study.db`:
```python
@dataclass
class EarlyMetrics:
n_trials_analyzed: int
objective_means: Dict[str, float]
objective_stds: Dict[str, float]
coefficient_of_variation: Dict[str, float]
objective_correlations: Dict[str, float]
variable_objective_correlations: Dict[str, Dict[str, float]]
feasibility_rate: float
response_smoothness: float # 0-1, higher = better for NN
variable_sensitivity: Dict[str, float]
```
### 3. AdaptiveMethodSelector
Main entry point combining static and dynamic analysis. Note that `recommend()` takes a loaded config dict plus an optional `study.db` path:
```python
selector = AdaptiveMethodSelector()
with open("1_setup/optimization_config.json") as f:
    config = json.load(f)
recommendation = selector.recommend(config, Path("2_results/study.db"))
```
### 4. RuntimeAdvisor
Monitors optimization progress and suggests pivots:
```python
advisor = RuntimeAdvisor(check_interval=10)
advisor.update(runtime_metrics)  # RuntimeMetrics collected during the run
pivot = advisor.check_pivot(current_method="turbo")
if pivot:
    print(f"Consider switching to {pivot['to']}")
    print(f"Reason: {pivot['reason']}")
```
---
## Example Output
```
======================================================================
OPTIMIZATION METHOD ADVISOR
======================================================================
Problem Profile:
Variables: 2 (support_angle, tip_thickness)
Objectives: 3 (mass, stress, stiffness)
Constraints: 1
Max FEA budget: ~72 trials
----------------------------------------------------------------------
RECOMMENDED: TURBO
Confidence: 100%
Reason: low-dimensional design space; sufficient FEA budget; smooth landscape (79%)
Suggested parameters:
--nn-trials: 5000
--batch-size: 100
--retrain-every: 10
--epochs: 150
Alternatives:
- hybrid_loop (75%): uncertain landscape - hybrid adapts; adequate budget for iterations
- pure_fea (50%): default recommendation
======================================================================
```
---
## Parameter Recommendations
The selector suggests optimal parameters based on problem characteristics:
| Parameter | Low-D (≤3 vars) | Medium-D (4-6 vars) | High-D (>6 vars) |
|-----------|-----------------|---------------------|------------------|
| `--nn-trials` | 5000 | 10000 | 20000 |
| `--batch-size` | 100 | 100 | 200 |
| `--retrain-every` | 10 | 15 | 20 |
| `--epochs` | 150 | 200 | 300 |
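The table can be read as a simple lookup; an illustrative sketch for the TURBO method (breakpoints follow the table above, not necessarily the exact branching inside `_get_parameters`):

```python
def turbo_params(n_variables: int) -> dict:
    """Suggested TURBO parameters by dimensionality, per the table above."""
    if n_variables <= 3:   # Low-D
        return {"nn_trials": 5000, "batch_size": 100, "retrain_every": 10, "epochs": 150}
    if n_variables <= 6:   # Medium-D
        return {"nn_trials": 10000, "batch_size": 100, "retrain_every": 15, "epochs": 200}
    return {"nn_trials": 20000, "batch_size": 200, "retrain_every": 20, "epochs": 300}
```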
---
## Scoring Algorithm
Each method receives a score based on weighted factors:
```python
# TURBO scoring (0-1 scale; confidence = min(1.0, score))
turbo_score = 0.70                                   # base score
if n_variables <= 5:
    turbo_score += 0.15                              # low-dimensional bonus
elif n_variables > 10:
    turbo_score -= 0.20                              # high-dimensional penalty
turbo_score += 0.10 if fea_budget >= 50 else -0.15   # budget
if smoothness > 0.7:
    turbo_score += 0.15                              # smooth landscape bonus
elif smoothness < 0.4:
    turbo_score -= 0.20                              # rough landscape penalty
turbo_score = min(1.0, max(0.0, turbo_score))        # clamp to 0-1
# Similar weighted adjustments for HYBRID_LOOP, PURE_FEA, GNN_FIELD
```
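After scoring, raw scores are clamped into [0, 1] to become confidence values and the methods are ranked: the top entry is the recommendation, the next two are reported as alternatives. A minimal sketch of that ranking step:

```python
def rank_methods(scores: dict) -> list:
    """Clamp raw scores to [0, 1] and sort descending; index 0 is the
    recommended method, the remainder are reported as alternatives."""
    clamped = {m: min(1.0, max(0.0, s)) for m, s in scores.items()}
    return sorted(clamped.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_methods({"turbo": 1.05, "hybrid_loop": 0.75, "pure_fea": 0.5})
# → [('turbo', 1.0), ('hybrid_loop', 0.75), ('pure_fea', 0.5)]
```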
---
## Integration with run_optimization.py
The method selector can be integrated into the optimization workflow:
```python
# At the start of an optimization run
import subprocess
from pathlib import Path
from optimization_engine.method_selector import recommend_method

recommendation = recommend_method(Path(config_path), Path(db_path))
print(f"Recommended method: {recommendation.method}")
print(f"Parameters: {recommendation.parameters}")

# Launch only after user confirmation
if input("Proceed with recommendation? [y/N] ").strip().lower() == "y":
    if recommendation.method == 'turbo':
        subprocess.run(
            ["python", "run_nn_optimization.py", "--turbo",
             "--nn-trials", str(recommendation.parameters['nn_trials']),
             "--batch-size", str(recommendation.parameters['batch_size'])],
            check=True,
        )
```
---
## Troubleshooting
| Symptom | Cause | Solution |
|---------|-------|----------|
| "Insufficient trials" | < 20 FEA trials | Run more FEA trials first |
| Low confidence score | Conflicting signals | Try hybrid_loop as safe default |
| PURE_FEA recommended | High dimensionality | Consider dimension reduction |
| GNN_FIELD recommended | Need field visualization | Set up atomizer-field |
---
## Cross-References
- **Depends On**:
- [SYS_10_IMSO](./SYS_10_IMSO.md) for optimization framework
- [SYS_14_NEURAL_ACCELERATION](./SYS_14_NEURAL_ACCELERATION.md) for neural methods
- **Used By**: [OP_02_RUN_OPTIMIZATION](../operations/OP_02_RUN_OPTIMIZATION.md)
- **See Also**: [modules/method-selection.md](../../.claude/skills/modules/method-selection.md)
---
## Implementation Files
```
optimization_engine/
└── method_selector.py # Complete AMS implementation
├── ProblemProfiler # Static config analysis
├── EarlyMetricsCollector # Dynamic FEA metrics
├── AdaptiveMethodSelector # Main recommendation engine
├── RuntimeAdvisor # Mid-run pivot advisor
└── recommend_method() # Convenience function
```
---
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-06 | Initial implementation with 4 methods |

optimization_engine/method_selector.py

@@ -0,0 +1,882 @@
"""
Adaptive Method Selector for Atomizer Optimization
This module provides intelligent method selection based on:
1. Problem characteristics (static analysis from config)
2. Early exploration metrics (dynamic analysis from first N trials)
3. Runtime performance metrics (continuous monitoring)
Classes:
- ProblemProfiler: Analyzes optimization config to extract problem characteristics
- EarlyMetricsCollector: Computes metrics from initial FEA trials
- AdaptiveMethodSelector: Recommends optimization method and parameters
- RuntimeAdvisor: Monitors optimization and suggests pivots
Usage:
from optimization_engine.method_selector import recommend_method
recommendation = recommend_method(Path("optimization_config.json"))
print(recommendation.method)  # 'turbo', 'hybrid_loop', 'pure_fea', etc.
"""
import json
import numpy as np
from pathlib import Path
from typing import Dict, List, Optional, Any, Tuple
from dataclasses import dataclass, field, asdict
from enum import Enum
import sqlite3
from datetime import datetime
class OptimizationMethod(Enum):
"""Available optimization methods."""
PURE_FEA = "pure_fea"
HYBRID_LOOP = "hybrid_loop"
TURBO = "turbo"
GNN_FIELD = "gnn_field"
@dataclass
class ProblemProfile:
"""Static problem characteristics extracted from config."""
# Design space
n_variables: int = 0
variable_names: List[str] = field(default_factory=list)
variable_bounds: Dict[str, Tuple[float, float]] = field(default_factory=dict)
variable_types: Dict[str, str] = field(default_factory=dict) # 'continuous', 'discrete', 'categorical'
design_space_volume: float = 0.0 # Product of all ranges
# Objectives
n_objectives: int = 0
objective_names: List[str] = field(default_factory=list)
objective_goals: Dict[str, str] = field(default_factory=dict) # 'minimize', 'maximize'
# Constraints
n_constraints: int = 0
constraint_types: List[str] = field(default_factory=list) # 'less_than', 'greater_than', 'equal'
# Budget estimates
fea_time_estimate: float = 300.0 # seconds per FEA run
total_budget_hours: float = 8.0
max_fea_trials: int = 0 # Computed from budget
# Complexity indicators
is_multi_objective: bool = False
has_constraints: bool = False
expected_nonlinearity: str = "unknown" # 'low', 'medium', 'high', 'unknown'
# Neural acceleration hints
nn_enabled_in_config: bool = False
min_training_points: int = 50
def to_dict(self) -> dict:
return asdict(self)
@dataclass
class EarlyMetrics:
"""Metrics computed from initial FEA exploration."""
n_trials_analyzed: int = 0
# Objective statistics
objective_means: Dict[str, float] = field(default_factory=dict)
objective_stds: Dict[str, float] = field(default_factory=dict)
objective_ranges: Dict[str, Tuple[float, float]] = field(default_factory=dict)
coefficient_of_variation: Dict[str, float] = field(default_factory=dict) # std/mean
# Correlation analysis
objective_correlations: Dict[str, float] = field(default_factory=dict) # pairwise
variable_objective_correlations: Dict[str, Dict[str, float]] = field(default_factory=dict)
# Feasibility
feasibility_rate: float = 1.0
n_feasible: int = 0
n_infeasible: int = 0
# Pareto analysis (multi-objective)
pareto_front_size: int = 0
pareto_growth_rate: float = 0.0 # New Pareto points per trial
# Response smoothness (NN suitability)
response_smoothness: float = 0.5 # 0-1, higher = smoother
lipschitz_estimate: Dict[str, float] = field(default_factory=dict)
# Variable sensitivity
variable_sensitivity: Dict[str, float] = field(default_factory=dict) # Variance-based
most_sensitive_variable: str = ""
# Clustering
design_clustering: str = "unknown" # 'clustered', 'scattered', 'unknown'
# NN fit quality (if trained)
nn_accuracy: Optional[float] = None # R² or similar
nn_mean_error: Optional[Dict[str, float]] = None
def to_dict(self) -> dict:
return asdict(self)
@dataclass
class RuntimeMetrics:
"""Metrics collected during optimization runtime."""
timestamp: str = ""
trials_completed: int = 0
# Performance
fea_time_mean: float = 0.0
fea_time_std: float = 0.0
fea_failure_rate: float = 0.0
# Progress
pareto_size: int = 0
pareto_growth_rate: float = 0.0
best_objectives: Dict[str, float] = field(default_factory=dict)
improvement_rate: float = 0.0 # Best objective improvement per trial
# NN performance (if using hybrid/turbo)
nn_accuracy: Optional[float] = None
nn_accuracy_trend: str = "stable" # 'improving', 'stable', 'declining'
nn_predictions_count: int = 0
# Exploration vs exploitation
exploration_ratio: float = 0.5 # How much of design space explored
def to_dict(self) -> dict:
return asdict(self)
@dataclass
class MethodRecommendation:
"""Output from the method selector."""
method: str
confidence: float # 0-1
parameters: Dict[str, Any] = field(default_factory=dict)
reasoning: str = ""
alternatives: List[Dict[str, Any]] = field(default_factory=list)
warnings: List[str] = field(default_factory=list)
def to_dict(self) -> dict:
return asdict(self)
class ProblemProfiler:
"""Analyzes optimization config to extract problem characteristics."""
def __init__(self):
self.profile = ProblemProfile()
def analyze(self, config: dict) -> ProblemProfile:
"""
Analyze optimization config and return problem profile.
Args:
config: Loaded optimization_config.json dict
Returns:
ProblemProfile with extracted characteristics
"""
profile = ProblemProfile()
# Extract design variables
design_vars = config.get('design_variables', [])
profile.n_variables = len(design_vars)
profile.variable_names = [v['parameter'] for v in design_vars]
volume = 1.0
for var in design_vars:
name = var['parameter']
bounds = var.get('bounds', [0, 1])
profile.variable_bounds[name] = (bounds[0], bounds[1])
profile.variable_types[name] = var.get('type', 'continuous')
volume *= (bounds[1] - bounds[0])
profile.design_space_volume = volume
# Extract objectives
objectives = config.get('objectives', [])
profile.n_objectives = len(objectives)
profile.objective_names = [o['name'] for o in objectives]
profile.objective_goals = {o['name']: o.get('goal', 'minimize') for o in objectives}
profile.is_multi_objective = profile.n_objectives > 1
# Extract constraints
constraints = config.get('constraints', [])
profile.n_constraints = len(constraints)
profile.constraint_types = [c.get('type', 'less_than') for c in constraints]
profile.has_constraints = profile.n_constraints > 0
# Budget estimates
opt_settings = config.get('optimization_settings', {})
profile.fea_time_estimate = opt_settings.get('timeout_per_trial', 300)
profile.total_budget_hours = opt_settings.get('budget_hours', 8)
if profile.fea_time_estimate > 0:
profile.max_fea_trials = int(
(profile.total_budget_hours * 3600) / profile.fea_time_estimate
)
# Neural acceleration config
nn_config = config.get('neural_acceleration', {})
profile.nn_enabled_in_config = nn_config.get('enabled', False)
profile.min_training_points = nn_config.get('min_training_points', 50)
# Infer nonlinearity from physics type
sim_config = config.get('simulation', {})
analysis_types = sim_config.get('analysis_types', [])
if 'modal' in analysis_types or 'frequency' in str(analysis_types).lower():
profile.expected_nonlinearity = 'medium'
elif 'nonlinear' in str(analysis_types).lower():
profile.expected_nonlinearity = 'high'
else:
profile.expected_nonlinearity = 'low' # Static linear
self.profile = profile
return profile
def analyze_from_file(self, config_path: Path) -> ProblemProfile:
"""Load config from file and analyze."""
with open(config_path) as f:
config = json.load(f)
return self.analyze(config)
class EarlyMetricsCollector:
"""Computes metrics from initial FEA exploration trials."""
def __init__(self, min_trials: int = 20):
self.min_trials = min_trials
self.metrics = EarlyMetrics()
def collect(self, db_path: Path, objective_names: List[str],
variable_names: List[str], constraints: List[dict] = None) -> EarlyMetrics:
"""
Collect metrics from study database.
Args:
db_path: Path to study.db
objective_names: List of objective column names
variable_names: List of design variable names
constraints: List of constraint definitions from config
Returns:
EarlyMetrics with computed statistics
"""
metrics = EarlyMetrics()
if not db_path.exists():
return metrics
# Load data from Optuna database
conn = sqlite3.connect(str(db_path))
cursor = conn.cursor()
try:
# Get completed trials from Optuna database
# Note: Optuna stores params in trial_params and objectives in trial_values
cursor.execute("""
SELECT trial_id FROM trials
WHERE state = 'COMPLETE'
""")
completed_trials = cursor.fetchall()
metrics.n_trials_analyzed = len(completed_trials)
if metrics.n_trials_analyzed < self.min_trials:
conn.close()
return metrics
# Extract trial data from trial_params and trial_values tables
trial_data = []
for (trial_id,) in completed_trials:
values = {}
# Get parameters
cursor.execute("""
SELECT param_name, param_value FROM trial_params
WHERE trial_id = ?
""", (trial_id,))
for name, value in cursor.fetchall():
try:
values[name] = float(value) if value is not None else None
except (TypeError, ValueError):
pass  # skip non-numeric (e.g. categorical) parameter values
# Get objectives from trial_values
cursor.execute("""
SELECT objective, value FROM trial_values
WHERE trial_id = ?
""", (trial_id,))
for idx, value in cursor.fetchall():
if idx < len(objective_names):
values[objective_names[idx]] = float(value) if value is not None else None
if values:
trial_data.append(values)
if not trial_data:
conn.close()
return metrics
# Compute objective statistics
for obj_name in objective_names:
obj_values = [t.get(obj_name) for t in trial_data if t.get(obj_name) is not None]
if obj_values:
metrics.objective_means[obj_name] = np.mean(obj_values)
metrics.objective_stds[obj_name] = np.std(obj_values)
metrics.objective_ranges[obj_name] = (min(obj_values), max(obj_values))
if metrics.objective_means[obj_name] != 0:
metrics.coefficient_of_variation[obj_name] = (
metrics.objective_stds[obj_name] /
abs(metrics.objective_means[obj_name])
)
# Compute correlations between objectives
if len(objective_names) >= 2:
for i, obj1 in enumerate(objective_names):
for obj2 in objective_names[i+1:]:
vals1 = [t.get(obj1) for t in trial_data]
vals2 = [t.get(obj2) for t in trial_data]
# Filter out None values
paired = [(v1, v2) for v1, v2 in zip(vals1, vals2)
if v1 is not None and v2 is not None]
if len(paired) > 5:
v1, v2 = zip(*paired)
corr = np.corrcoef(v1, v2)[0, 1]
metrics.objective_correlations[f"{obj1}_vs_{obj2}"] = corr
# Compute variable-objective correlations (sensitivity)
for var_name in variable_names:
metrics.variable_objective_correlations[var_name] = {}
var_values = [t.get(var_name) for t in trial_data]
for obj_name in objective_names:
obj_values = [t.get(obj_name) for t in trial_data]
paired = [(v, o) for v, o in zip(var_values, obj_values)
if v is not None and o is not None]
if len(paired) > 5:
v, o = zip(*paired)
corr = abs(np.corrcoef(v, o)[0, 1])
metrics.variable_objective_correlations[var_name][obj_name] = corr
# Compute overall variable sensitivity (average absolute correlation)
for var_name in variable_names:
correlations = list(metrics.variable_objective_correlations.get(var_name, {}).values())
if correlations:
metrics.variable_sensitivity[var_name] = np.mean(correlations)
if metrics.variable_sensitivity:
metrics.most_sensitive_variable = max(
metrics.variable_sensitivity,
key=metrics.variable_sensitivity.get
)
# Estimate response smoothness
# Higher CV suggests rougher landscape
avg_cv = np.mean(list(metrics.coefficient_of_variation.values())) if metrics.coefficient_of_variation else 0.5
metrics.response_smoothness = max(0, min(1, 1 - avg_cv))
# Feasibility analysis
if constraints:
n_feasible = 0
for trial in trial_data:
feasible = True
for constraint in constraints:
c_name = constraint.get('name')
c_type = constraint.get('type', 'less_than')
threshold = constraint.get('threshold')
value = trial.get(c_name)
if value is not None and threshold is not None:
if c_type == 'less_than' and value > threshold:
feasible = False
elif c_type == 'greater_than' and value < threshold:
feasible = False
if feasible:
n_feasible += 1
metrics.n_feasible = n_feasible
metrics.n_infeasible = len(trial_data) - n_feasible
metrics.feasibility_rate = n_feasible / len(trial_data) if trial_data else 1.0
conn.close()
except Exception as e:
print(f"Warning: Error collecting metrics: {e}")
conn.close()
self.metrics = metrics
return metrics
def estimate_nn_suitability(self) -> float:
"""
Estimate how suitable the problem is for neural network acceleration.
Returns:
Score from 0-1, higher = more suitable
"""
score = 0.5 # Base score
# Smooth response is good for NN
score += 0.2 * self.metrics.response_smoothness
# High feasibility is good
score += 0.1 * self.metrics.feasibility_rate
# Enough training data
if self.metrics.n_trials_analyzed >= 50:
score += 0.1
if self.metrics.n_trials_analyzed >= 100:
score += 0.1
return min(1.0, max(0.0, score))
class AdaptiveMethodSelector:
"""
Recommends optimization method based on problem characteristics and metrics.
The selector uses a scoring system to rank methods:
- Each method starts with a base score
- Scores are adjusted based on problem characteristics
- Early metrics further refine the recommendation
"""
def __init__(self, min_trials: int = 20):
self.profiler = ProblemProfiler()
self.metrics_collector = EarlyMetricsCollector(min_trials=min_trials)
# Method base scores (can be tuned based on historical performance)
self.base_scores = {
OptimizationMethod.PURE_FEA: 0.5,
OptimizationMethod.HYBRID_LOOP: 0.6,
OptimizationMethod.TURBO: 0.7,
OptimizationMethod.GNN_FIELD: 0.4,
}
def recommend(self, config: dict, db_path: Path = None,
early_metrics: EarlyMetrics = None) -> MethodRecommendation:
"""
Generate method recommendation.
Args:
config: Optimization config dict
db_path: Optional path to existing study.db for early metrics
early_metrics: Pre-computed early metrics (optional)
Returns:
MethodRecommendation with method, confidence, and parameters
"""
# Profile the problem
profile = self.profiler.analyze(config)
# Collect early metrics if database exists
if db_path and db_path.exists() and early_metrics is None:
early_metrics = self.metrics_collector.collect(
db_path,
profile.objective_names,
profile.variable_names,
config.get('constraints', [])
)
# Score each method
scores = self._score_methods(profile, early_metrics)
# Sort by score
ranked = sorted(scores.items(), key=lambda x: x[1]['score'], reverse=True)
# Build recommendation
best_method, best_info = ranked[0]
recommendation = MethodRecommendation(
method=best_method.value,
confidence=min(1.0, best_info['score']),
parameters=self._get_parameters(best_method, profile, early_metrics),
reasoning=best_info['reason'],
alternatives=[
{
'method': m.value,
'confidence': min(1.0, info['score']),
'reason': info['reason']
}
for m, info in ranked[1:3]
],
warnings=self._get_warnings(profile, early_metrics)
)
return recommendation
def _score_methods(self, profile: ProblemProfile,
metrics: EarlyMetrics = None) -> Dict[OptimizationMethod, Dict]:
"""Score each method based on problem characteristics."""
scores = {}
for method in OptimizationMethod:
score = self.base_scores[method]
reasons = []
# === TURBO MODE ===
if method == OptimizationMethod.TURBO:
# Good for: low-dimensional, smooth, sufficient budget
if profile.n_variables <= 5:
score += 0.15
reasons.append("low-dimensional design space")
elif profile.n_variables > 10:
score -= 0.2
reasons.append("high-dimensional (may struggle)")
if profile.max_fea_trials >= 50:
score += 0.1
reasons.append("sufficient FEA budget")
else:
score -= 0.15
reasons.append("limited FEA budget")
if metrics and metrics.response_smoothness > 0.7:
score += 0.15
reasons.append(f"smooth landscape ({metrics.response_smoothness:.0%})")
elif metrics and metrics.response_smoothness < 0.4:
score -= 0.2
reasons.append(f"rough landscape ({metrics.response_smoothness:.0%})")
if metrics and metrics.nn_accuracy and metrics.nn_accuracy > 0.9:
score += 0.1
reasons.append(f"excellent NN fit ({metrics.nn_accuracy:.0%})")
# === HYBRID LOOP ===
elif method == OptimizationMethod.HYBRID_LOOP:
# Good for: moderate complexity, unknown landscape, need safety
if 3 <= profile.n_variables <= 10:
score += 0.1
reasons.append("moderate dimensionality")
if metrics and 0.4 < metrics.response_smoothness < 0.8:
score += 0.1
reasons.append("uncertain landscape - hybrid adapts")
if profile.has_constraints and metrics and metrics.feasibility_rate < 0.9:
score += 0.1
reasons.append("constrained problem - safer approach")
if profile.max_fea_trials >= 30:
score += 0.05
reasons.append("adequate budget for iterations")
# === PURE FEA ===
elif method == OptimizationMethod.PURE_FEA:
# Good for: small budget, highly nonlinear, rough landscape
if profile.max_fea_trials < 30:
score += 0.2
reasons.append("limited budget - no NN overhead")
if metrics and metrics.response_smoothness < 0.3:
score += 0.2
reasons.append("rough landscape - NN unreliable")
if profile.expected_nonlinearity == 'high':
score += 0.15
reasons.append("highly nonlinear physics")
if metrics and metrics.feasibility_rate < 0.5:
score += 0.1
reasons.append("many infeasible designs - need accurate FEA")
# === GNN FIELD ===
elif method == OptimizationMethod.GNN_FIELD:
# Good for: high-dimensional, need field visualization
if profile.n_variables > 10:
score += 0.2
reasons.append("high-dimensional - GNN handles well")
# GNN is more advanced, only recommend if specifically needed
if profile.n_variables <= 5:
score -= 0.1
reasons.append("simple problem - MLP sufficient")
# Compile reason string
reason = "; ".join(reasons) if reasons else "default recommendation"
scores[method] = {'score': score, 'reason': reason}
return scores
def _get_parameters(self, method: OptimizationMethod,
profile: ProblemProfile,
metrics: EarlyMetrics = None) -> Dict[str, Any]:
"""Generate recommended parameters for the selected method."""
params = {}
if method == OptimizationMethod.TURBO:
# Scale NN trials based on dimensionality
base_nn_trials = 5000
if profile.n_variables <= 2:
nn_trials = base_nn_trials
elif profile.n_variables <= 5:
nn_trials = base_nn_trials * 2
else:
nn_trials = base_nn_trials * 3
params = {
'nn_trials': nn_trials,
'batch_size': 100,
'retrain_every': 10,
'epochs': 150 if metrics and metrics.n_trials_analyzed > 100 else 200
}
elif method == OptimizationMethod.HYBRID_LOOP:
params = {
'iterations': 5,
'nn_trials_per_iter': 500,
'validate_per_iter': 5,
'epochs': 300
}
elif method == OptimizationMethod.PURE_FEA:
# Choose sampler based on objectives
if profile.is_multi_objective:
sampler = 'NSGAIISampler'
else:
sampler = 'TPESampler'
params = {
'sampler': sampler,
'n_trials': min(100, profile.max_fea_trials),
'timeout_per_trial': profile.fea_time_estimate
}
elif method == OptimizationMethod.GNN_FIELD:
params = {
'model_type': 'parametric_gnn',
'initial_fea_trials': 100,
'nn_trials': 10000,
'epochs': 200
}
return params
def _get_warnings(self, profile: ProblemProfile,
metrics: EarlyMetrics = None) -> List[str]:
"""Generate warnings about potential issues."""
warnings = []
if profile.n_variables > 10:
warnings.append(
f"High-dimensional problem ({profile.n_variables} variables) - "
"consider dimensionality reduction or Latin Hypercube sampling"
)
if profile.max_fea_trials < 20:
warnings.append(
f"Very limited FEA budget ({profile.max_fea_trials} trials) - "
"neural acceleration may not have enough training data"
)
if metrics and metrics.feasibility_rate < 0.5:
warnings.append(
f"Low feasibility rate ({metrics.feasibility_rate:.0%}) - "
"consider relaxing constraints or narrowing design space"
)
if metrics and metrics.response_smoothness < 0.3:
warnings.append(
f"Rough objective landscape detected - "
"neural surrogate may have high prediction errors"
)
return warnings
class RuntimeAdvisor:
"""
Monitors optimization runtime and suggests method pivots.
Call check_pivot() periodically during optimization to get
suggestions for method changes.
"""
def __init__(self, check_interval: int = 10):
"""
Args:
check_interval: Check for pivots every N trials
"""
self.check_interval = check_interval
self.history: List[RuntimeMetrics] = []
self.pivot_suggestions: List[Dict] = []
def update(self, metrics: RuntimeMetrics):
"""Add new runtime metrics to history."""
metrics.timestamp = datetime.now().isoformat()
self.history.append(metrics)
def check_pivot(self, current_method: str) -> Optional[Dict]:
"""
Check if a method pivot should be suggested.
Args:
current_method: Currently running method
Returns:
Pivot suggestion dict or None
"""
if len(self.history) < 2:
return None
latest = self.history[-1]
previous = self.history[-2]
suggestion = None
# Check 1: NN accuracy declining
if latest.nn_accuracy_trend == 'declining':
if current_method == 'turbo':
suggestion = {
'suggest_pivot': True,
'from': current_method,
'to': 'hybrid_loop',
'reason': 'NN accuracy declining - switch to hybrid for more frequent retraining',
'urgency': 'medium'
}
# Check 2: Pareto front stagnating
if latest.pareto_growth_rate < 0.01 and previous.pareto_growth_rate < 0.01:
suggestion = {
'suggest_pivot': True,
'from': current_method,
'to': 'increase_exploration',
'reason': 'Pareto front stagnating - consider increasing exploration',
'urgency': 'low'
}
# Check 3: High FEA failure rate
if latest.fea_failure_rate > 0.2:
if current_method in ['turbo', 'hybrid_loop']:
suggestion = {
'suggest_pivot': True,
'from': current_method,
'to': 'pure_fea',
'reason': f'High FEA failure rate ({latest.fea_failure_rate:.0%}) - NN exploring invalid regions',
'urgency': 'high'
}
# Check 4: Diminishing returns
if latest.improvement_rate < 0.001 and latest.trials_completed > 100:
suggestion = {
'suggest_pivot': True,
'from': current_method,
'to': 'stop_early',
'reason': 'Diminishing returns - consider stopping optimization',
'urgency': 'low'
}
if suggestion:
self.pivot_suggestions.append(suggestion)
return suggestion
def get_summary(self) -> Dict:
"""Get summary of runtime performance."""
if not self.history:
return {}
latest = self.history[-1]
return {
'trials_completed': latest.trials_completed,
'pareto_size': latest.pareto_size,
'fea_time_mean': latest.fea_time_mean,
'fea_failure_rate': latest.fea_failure_rate,
'nn_accuracy': latest.nn_accuracy,
'pivot_suggestions_count': len(self.pivot_suggestions)
}
def print_recommendation(rec: MethodRecommendation, profile: ProblemProfile = None):
"""Pretty-print a method recommendation."""
print("\n" + "=" * 70)
print(" OPTIMIZATION METHOD ADVISOR")
print("=" * 70)
if profile:
print("\nProblem Profile:")
print(f" Variables: {profile.n_variables} ({', '.join(profile.variable_names)})")
print(f" Objectives: {profile.n_objectives} ({', '.join(profile.objective_names)})")
print(f" Constraints: {profile.n_constraints}")
print(f" Max FEA budget: ~{profile.max_fea_trials} trials")
print("\n" + "-" * 70)
print(f"\n RECOMMENDED: {rec.method.upper()}")
print(f" Confidence: {rec.confidence:.0%}")
print(f" Reason: {rec.reasoning}")
print("\n Suggested parameters:")
for key, value in rec.parameters.items():
print(f" --{key.replace('_', '-')}: {value}")
if rec.alternatives:
print("\n Alternatives:")
for alt in rec.alternatives:
print(f" - {alt['method']} ({alt['confidence']:.0%}): {alt['reason']}")
if rec.warnings:
print("\n Warnings:")
for warning in rec.warnings:
print(f" ! {warning}")
print("\n" + "=" * 70)
# Convenience function for quick use
def recommend_method(config_path: Path, db_path: Path = None) -> MethodRecommendation:
"""
Quick method recommendation from config file.
Args:
config_path: Path to optimization_config.json
db_path: Optional path to existing study.db
Returns:
MethodRecommendation
"""
with open(config_path) as f:
config = json.load(f)
selector = AdaptiveMethodSelector()
return selector.recommend(config, db_path)
if __name__ == "__main__":
# Test with a sample config
import sys
if len(sys.argv) > 1:
config_path = Path(sys.argv[1])
db_path = Path(sys.argv[2]) if len(sys.argv) > 2 else None
rec = recommend_method(config_path, db_path)
# Also get profile for display
with open(config_path) as f:
config = json.load(f)
profiler = ProblemProfiler()
profile = profiler.analyze(config)
print_recommendation(rec, profile)
else:
print("Usage: python method_selector.py <config_path> [db_path]")