# SYS_15: Adaptive Method Selector

<!--
PROTOCOL: Adaptive Method Selector
LAYER: System
VERSION: 2.1
STATUS: Active
LAST_UPDATED: 2025-12-07
PRIVILEGE: user
LOAD_WITH: [SYS_10_IMSO, SYS_11_MULTI_OBJECTIVE, SYS_14_NEURAL_ACCELERATION]
-->
## Overview

The **Adaptive Method Selector (AMS)** analyzes optimization problems and recommends the best method (turbo, hybrid_loop, pure_fea, etc.) based on:

1. **Static Analysis**: Problem characteristics from config (dimensionality, objectives, constraints)
2. **Dynamic Analysis**: Early FEA trial metrics (smoothness, correlations, feasibility)
3. **NN Quality Assessment**: Relative accuracy thresholds comparing NN error to problem variability
4. **Runtime Monitoring**: Continuous optimization performance assessment

**Key Value**: Eliminates guesswork in choosing optimization strategies by providing data-driven recommendations with relative accuracy thresholds.

---
## When to Use

| Trigger | Action |
|---------|--------|
| Starting a new optimization | Run method selector first |
| "which method", "recommend" mentioned | Suggest method selector |
| Unsure between turbo/hybrid/fea | Use method selector |
| > 20 FEA trials completed | Re-run for updated recommendation |

---
## Quick Reference

### CLI Usage

```bash
python -m optimization_engine.method_selector <config_path> [db_path]
```

**Examples**:

```bash
# Config-only analysis (before any FEA trials)
python -m optimization_engine.method_selector 1_setup/optimization_config.json

# Full analysis with FEA data
python -m optimization_engine.method_selector 1_setup/optimization_config.json 2_results/study.db
```
### Python API

```python
from optimization_engine.method_selector import AdaptiveMethodSelector

selector = AdaptiveMethodSelector()
recommendation = selector.recommend("1_setup/optimization_config.json", "2_results/study.db")

print(recommendation.method)        # 'turbo', 'hybrid_loop', 'pure_fea', 'gnn_field'
print(recommendation.confidence)    # 0.0 - 1.0
print(recommendation.parameters)    # {'nn_trials': 5000, 'batch_size': 100, ...}
print(recommendation.reasoning)     # Explanation string
print(recommendation.alternatives)  # Other methods with scores
```

---
## Available Methods

| Method | Description | Best For |
|--------|-------------|----------|
| **TURBO** | Aggressive NN exploration with single-best FEA validation | Low-dimensional, smooth responses |
| **HYBRID_LOOP** | Iterative train→predict→validate→retrain cycle | Moderate complexity, uncertain landscape |
| **PURE_FEA** | Traditional FEA-only optimization | High-dimensional, complex physics |
| **GNN_FIELD** | Graph neural network for field prediction | Need full field visualization |

---
## Selection Criteria

### Static Factors (from config)

| Factor | Favors TURBO | Favors HYBRID_LOOP | Favors PURE_FEA |
|--------|--------------|--------------------|-----------------|
| **n_variables** | ≤5 | 5-10 | >10 |
| **n_objectives** | 1-3 | 2-4 | Any |
| **n_constraints** | ≤3 | 3-5 | >5 |
| **FEA budget** | >50 trials | 30-50 trials | <30 trials |
### Dynamic Factors (from FEA trials)

| Factor | Measurement | Impact |
|--------|-------------|--------|
| **Response smoothness** | Lipschitz constant estimate | Smooth → NN works well |
| **Variable sensitivity** | Correlation with objectives | High correlation → easier to learn |
| **Feasibility rate** | % of valid designs | Low feasibility → need more exploration |
| **Objective correlations** | Pairwise correlations | Strong correlations → simpler landscape |
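The smoothness entry can be sketched as a normalized Lipschitz estimate over the early trials. This is an illustrative stand-in, not the module's actual implementation (`estimate_smoothness` is a hypothetical name):

```python
import itertools
import math

def estimate_smoothness(samples):
    """Normalized Lipschitz-style smoothness score in [0, 1].

    samples: list of (x_vector, objective_value) pairs from early FEA trials.
    Returns mean slope / max slope over all trial pairs: 1.0 for a linear
    response, approaching 0 when isolated steep jumps dominate the landscape.
    """
    slopes = []
    for (xa, fa), (xb, fb) in itertools.combinations(samples, 2):
        dx = math.dist(xa, xb)
        if dx > 0:
            slopes.append(abs(fa - fb) / dx)
    if not slopes or max(slopes) == 0:
        return 1.0  # constant response is trivially smooth
    return sum(slopes) / len(slopes) / max(slopes)
```

A perfectly linear response, e.g. `[((0.0,), 0.0), ((1.0,), 2.0), ((2.0,), 4.0)]`, scores 1.0, while a response with one sharp jump scores well below 0.5.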

---
## NN Quality Assessment

The method selector uses **relative accuracy thresholds** to assess NN suitability. Instead of absolute error limits, it compares NN error to the problem's natural variability (coefficient of variation).

### Core Concept

```
NN Suitability = f(nn_error / coefficient_of_variation)

If nn_error >> CV → NN is unreliable (not learning, just noise)
If nn_error ≈ CV  → NN captures the trend (hybrid recommended)
If nn_error << CV → NN is excellent (turbo viable)
```

### Physics-Based Classification

Objectives are classified by their expected predictability:

| Objective Type | Examples | Max Expected Error | CV Ratio Limit |
|----------------|----------|--------------------|----------------|
| **Linear** | mass, volume | 2% | 0.5 |
| **Smooth** | frequency, avg stress | 5% | 1.0 |
| **Nonlinear** | max stress, stiffness | 10% | 2.0 |
| **Chaotic** | contact, buckling | 20% | 3.0 |
### CV Ratio Interpretation

The **CV Ratio** = NN Error / (Coefficient of Variation × 100):

| CV Ratio | Quality | Interpretation |
|----------|---------|----------------|
| < 0.5 | ✓ Great | NN captures physics much better than noise |
| 0.5 - 1.0 | ✓ Good | NN adds significant value for exploration |
| 1.0 - 2.0 | ~ OK | NN is marginal, use with validation |
| > 2.0 | ✗ Poor | NN not learning effectively, use FEA |
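The ratio computation and the table's bands can be sketched in a few lines (`classify_cv_ratio` is a hypothetical helper, not part of the module's API):

```python
def classify_cv_ratio(nn_error_pct, cv):
    """Return (cv_ratio, quality label) per the interpretation table above.

    nn_error_pct: NN validation error in percent (e.g. 3.7 for 3.7%).
    cv: coefficient of variation as a fraction (e.g. 0.16 for a 16% CV).
    """
    ratio = nn_error_pct / (cv * 100)
    if ratio < 0.5:
        return ratio, "great"
    if ratio <= 1.0:
        return ratio, "good"
    if ratio <= 2.0:
        return ratio, "ok"
    return ratio, "poor"
```

The `mass` row of the Example Output below (3.7% error, 16.0% CV) gives a ratio of about 0.23, i.e. "great".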

### Method Recommendations Based on Quality

| Turbo Suitability | Hybrid Suitability | Recommendation |
|-------------------|--------------------|----------------|
| > 80% | any | **TURBO** - trust NN fully |
| 50-80% | > 50% | **TURBO** with monitoring |
| < 50% | > 50% | **HYBRID_LOOP** - verify periodically |
| < 30% | < 50% | **PURE_FEA** or retrain first |
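As a simplified decision sketch (the table leaves some boundary cases open, e.g. marginal turbo with weak hybrid; this sketch falls back to pure FEA there, which may differ from the real selector):

```python
def recommend_from_suitability(turbo, hybrid):
    """Map turbo/hybrid suitability scores (0-1) to a method per the table."""
    if turbo > 0.8:
        return "turbo"                  # trust NN fully
    if turbo >= 0.5 and hybrid > 0.5:
        return "turbo_with_monitoring"  # trust NN, but validate as you go
    if hybrid > 0.5:
        return "hybrid_loop"            # verify periodically
    return "pure_fea"                   # NN not reliable enough
```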

### Data Sources

NN quality metrics are collected from:

1. `validation_report.json` - FEA validation results
2. `turbo_report.json` - Turbo mode validation history
3. `study.db` - Trial `nn_error_percent` user attributes

---
## Architecture

```
┌─────────────────────────────────────────────────────────────────────────┐
│                         AdaptiveMethodSelector                          │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ┌─────────────────┐  ┌─────────────────────┐  ┌───────────────────┐    │
│  │ ProblemProfiler │  │EarlyMetricsCollector│  │ NNQualityAssessor │    │
│  │(static analysis)│  │ (dynamic analysis)  │  │   (NN accuracy)   │    │
│  └───────┬─────────┘  └──────────┬──────────┘  └─────────┬─────────┘    │
│          │                       │                       │              │
│          ▼                       ▼                       ▼              │
│  ┌─────────────────────────────────────────────────────────────────┐    │
│  │                        _score_methods()                         │    │
│  │     (rule-based scoring with static + dynamic + NN factors)     │    │
│  └───────────────────────────────┬─────────────────────────────────┘    │
│                                  │                                      │
│                                  ▼                                      │
│  ┌─────────────────────────────────────────────────────────────────┐    │
│  │                      MethodRecommendation                       │    │
│  │       method, confidence, parameters, reasoning, warnings       │    │
│  └─────────────────────────────────────────────────────────────────┘    │
│                                                                         │
│  ┌──────────────────┐                                                   │
│  │  RuntimeAdvisor  │  ← Monitors during optimization                   │
│  │  (pivot advisor) │                                                   │
│  └──────────────────┘                                                   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

---
## Components

### 1. ProblemProfiler

Extracts static problem characteristics from `optimization_config.json`:

```python
@dataclass
class ProblemProfile:
    n_variables: int
    variable_names: List[str]
    variable_bounds: Dict[str, Tuple[float, float]]
    n_objectives: int
    objective_names: List[str]
    n_constraints: int
    fea_time_estimate: float
    max_fea_trials: int
    is_multi_objective: bool
    has_constraints: bool
```
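For orientation, profiling reduces to counting and bounding fields from the config dict. A minimal sketch, assuming the new-style `name`/`min`/`max` variable format shown under Config Format Compatibility (the real profiler also handles the older format and fills many more fields):

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class MiniProfile:
    """Illustrative subset of ProblemProfile, for sketching only."""
    n_variables: int
    variable_bounds: Dict[str, Tuple[float, float]]
    n_objectives: int
    is_multi_objective: bool

def profile_from_config(config: dict) -> MiniProfile:
    # Collect each design variable's bounds, keyed by name.
    bounds = {v["name"]: (v["min"], v["max"])
              for v in config.get("design_variables", [])}
    n_obj = len(config.get("objectives", []))
    return MiniProfile(len(bounds), bounds, n_obj, n_obj > 1)
```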

### 2. EarlyMetricsCollector

Computes metrics from the first N FEA trials in `study.db`:

```python
@dataclass
class EarlyMetrics:
    n_trials_analyzed: int
    objective_means: Dict[str, float]
    objective_stds: Dict[str, float]
    coefficient_of_variation: Dict[str, float]
    objective_correlations: Dict[str, float]
    variable_objective_correlations: Dict[str, Dict[str, float]]
    feasibility_rate: float
    response_smoothness: float  # 0-1, higher = better for NN
    variable_sensitivity: Dict[str, float]
```
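The `coefficient_of_variation` entry, for instance, is just an objective's spread relative to its mean. A one-function sketch (the collector's actual implementation may differ in its handling of edge cases):

```python
import statistics

def coefficient_of_variation(values):
    """CV = population std dev / |mean| of an objective's trial values."""
    mean = statistics.fmean(values)
    if mean == 0:
        raise ValueError("CV is undefined for a zero-mean objective")
    return statistics.pstdev(values) / abs(mean)
```

For trial values `[8, 10, 12]` this gives roughly 0.163, i.e. a 16.3% CV.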

### 3. NNQualityAssessor

Assesses NN surrogate quality relative to problem complexity:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class NNQualityMetrics:
    has_nn_data: bool = False
    n_validations: int = 0
    nn_errors: Dict[str, float] = field(default_factory=dict)        # Absolute % error per objective
    cv_ratios: Dict[str, float] = field(default_factory=dict)        # nn_error / (CV * 100) per objective
    expected_errors: Dict[str, float] = field(default_factory=dict)  # Physics-based threshold
    overall_quality: float = 0.0     # 0-1, based on absolute thresholds
    turbo_suitability: float = 0.0   # 0-1, based on CV ratios
    hybrid_suitability: float = 0.0  # 0-1, more lenient threshold
    objective_types: Dict[str, str] = field(default_factory=dict)    # 'linear', 'smooth', 'nonlinear', 'chaotic'
```
### 4. AdaptiveMethodSelector

Main entry point that combines static + dynamic + NN quality analysis:

```python
selector = AdaptiveMethodSelector(min_trials=20)
recommendation = selector.recommend(config_path, db_path, results_dir=results_dir)

# Access last NN quality for display
print(f"Turbo suitability: {selector.last_nn_quality.turbo_suitability:.0%}")
```
### 5. RuntimeAdvisor

Monitors optimization progress and suggests pivots:

```python
advisor = RuntimeAdvisor()
pivot_advice = advisor.assess(db_path, config_path, current_method="turbo")

if pivot_advice.should_pivot:
    print(f"Consider switching to {pivot_advice.recommended_method}")
    print(f"Reason: {pivot_advice.reason}")
```

---
## Example Output

```
======================================================================
OPTIMIZATION METHOD ADVISOR
======================================================================

Problem Profile:
  Variables: 2 (support_angle, tip_thickness)
  Objectives: 3 (mass, stress, stiffness)
  Constraints: 1
  Max FEA budget: ~72 trials

NN Quality Assessment:
  Validations analyzed: 10

  | Objective  | NN Error | CV    | Ratio | Type      | Quality |
  |------------|----------|-------|-------|-----------|---------|
  | mass       | 3.7%     | 16.0% | 0.23  | linear    | ✓ Great |
  | stress     | 2.0%     | 7.7%  | 0.26  | nonlinear | ✓ Great |
  | stiffness  | 7.8%     | 38.9% | 0.20  | nonlinear | ✓ Great |

  Overall Quality: 22%
  Turbo Suitability: 77%
  Hybrid Suitability: 88%

----------------------------------------------------------------------

RECOMMENDED: TURBO
Confidence: 100%
Reason: low-dimensional design space; sufficient FEA budget; smooth landscape (79%); good NN quality (77%)

Suggested parameters:
  --nn-trials: 5000
  --batch-size: 100
  --retrain-every: 10
  --epochs: 150

Alternatives:
  - hybrid_loop (90%): uncertain landscape - hybrid adapts; NN adds value with periodic retraining
  - pure_fea (50%): default recommendation

Warnings:
  ! mass: NN error (3.7%) above expected (2%) - consider retraining or using hybrid mode

======================================================================
```

---
## Parameter Recommendations

The selector suggests optimal parameters based on problem characteristics:

| Parameter | Low-D (≤3 vars) | Medium-D (4-6 vars) | High-D (>6 vars) |
|-----------|-----------------|---------------------|------------------|
| `--nn-trials` | 5000 | 10000 | 20000 |
| `--batch-size` | 100 | 100 | 200 |
| `--retrain-every` | 10 | 15 | 20 |
| `--epochs` | 150 | 200 | 300 |
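The table maps directly to a lookup keyed on dimensionality. A sketch (`suggest_parameters` is a hypothetical helper; the selector bundles these values into `recommendation.parameters`):

```python
def suggest_parameters(n_variables):
    """Suggested NN run parameters per the dimensionality table above."""
    if n_variables <= 3:   # Low-D
        return {"nn_trials": 5000, "batch_size": 100, "retrain_every": 10, "epochs": 150}
    if n_variables <= 6:   # Medium-D
        return {"nn_trials": 10000, "batch_size": 100, "retrain_every": 15, "epochs": 200}
    # High-D
    return {"nn_trials": 20000, "batch_size": 200, "retrain_every": 20, "epochs": 300}
```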

---
## Scoring Algorithm

Each method receives a score based on weighted factors:

```python
# TURBO scoring (inputs come from the profile and early metrics)
def score_turbo(n_variables, smoothness, fea_budget, feasibility):
    score = 50                                # base score
    score += 30 if n_variables <= 5 else -20  # dimensionality
    score += 25 if smoothness > 0.7 else -10  # response smoothness
    score += 20 if fea_budget > 50 else -15   # budget
    score += 15 if feasibility > 0.8 else -5  # feasibility
    return max(0, min(100, score))            # clamp 0-100

# Similar for HYBRID_LOOP, PURE_FEA, GNN_FIELD
```

---
## Integration with run_optimization.py

The method selector can be integrated into the optimization workflow:

```python
import os

from optimization_engine.method_selector import recommend_method

# At start of optimization
recommendation = recommend_method(config_path, db_path)
print(f"Recommended method: {recommendation.method}")
print(f"Parameters: {recommendation.parameters}")

# Ask user confirmation before launching the recommended run
user_confirms = input("Run recommended method? [y/N] ").strip().lower() == "y"
if user_confirms:
    if recommendation.method == 'turbo':
        os.system(f"python run_nn_optimization.py --turbo "
                  f"--nn-trials {recommendation.parameters['nn_trials']} "
                  f"--batch-size {recommendation.parameters['batch_size']}")
```

---
## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "Insufficient trials" | < 20 FEA trials | Run more FEA trials first |
| Low confidence score | Conflicting signals | Try hybrid_loop as safe default |
| PURE_FEA recommended | High dimensionality | Consider dimension reduction |
| GNN_FIELD recommended | Need field visualization | Set up atomizer-field |
### Config Format Compatibility

The method selector supports multiple config JSON formats:

| Old Format | New Format | Meaning |
|------------|------------|---------|
| `parameter` | `name` | Variable name |
| `bounds: [min, max]` | `min`, `max` | Variable bounds |
| `goal` | `direction` | Objective direction |

**Example equivalent configs:**

```json
// Old format (UAV study style)
{"design_variables": [{"parameter": "angle", "bounds": [30, 60]}]}

// New format (beam study style)
{"design_variables": [{"name": "angle", "min": 30, "max": 60}]}
```
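Normalizing the two variable formats comes down to a few key lookups. A sketch under the assumption that every entry uses one of the two formats above (`normalize_variable` is a hypothetical name, not the module's API):

```python
def normalize_variable(var: dict) -> dict:
    """Fold both supported variable formats into {'name', 'min', 'max'}."""
    name = var.get("name", var.get("parameter"))  # new-style key wins
    if "bounds" in var:
        lo, hi = var["bounds"]                    # old format: bounds list
    else:
        lo, hi = var["min"], var["max"]           # new format: explicit keys
    return {"name": name, "min": lo, "max": hi}
```

Both example configs above normalize to the same dict, so downstream code can ignore the distinction.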

---
## Cross-References

- **Depends On**:
  - [SYS_10_IMSO](./SYS_10_IMSO.md) for optimization framework
  - [SYS_14_NEURAL_ACCELERATION](./SYS_14_NEURAL_ACCELERATION.md) for neural methods
- **Used By**: [OP_02_RUN_OPTIMIZATION](../operations/OP_02_RUN_OPTIMIZATION.md)
- **See Also**: [modules/method-selection.md](../../.claude/skills/modules/method-selection.md)

---
## Implementation Files

```
optimization_engine/
└── method_selector.py            # Complete AMS implementation
    ├── ProblemProfiler           # Static config analysis
    ├── EarlyMetricsCollector     # Dynamic FEA metrics
    ├── NNQualityMetrics          # NN accuracy dataclass
    ├── NNQualityAssessor         # Relative accuracy assessment
    ├── AdaptiveMethodSelector    # Main recommendation engine
    ├── RuntimeAdvisor            # Mid-run pivot advisor
    ├── print_recommendation()    # CLI output with NN quality table
    └── recommend_method()        # Convenience function
```

---
## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.1 | 2025-12-07 | Added config format flexibility (parameter/name, bounds/min-max, goal/direction) |
| 2.0 | 2025-12-07 | Added NNQualityAssessor with relative accuracy thresholds |
| 1.0 | 2025-12-06 | Initial implementation with 4 methods |