# SYS_15: Adaptive Method Selector

<!--
PROTOCOL: Adaptive Method Selector
LAYER: System
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-06
PRIVILEGE: user
LOAD_WITH: [SYS_10_IMSO, SYS_11_MULTI_OBJECTIVE, SYS_14_NEURAL_ACCELERATION]
-->
## Overview

The **Adaptive Method Selector (AMS)** analyzes optimization problems and recommends the best method (turbo, hybrid_loop, pure_fea, etc.) based on:

1. **Static Analysis**: Problem characteristics from config (dimensionality, objectives, constraints)
2. **Dynamic Analysis**: Early FEA trial metrics (smoothness, correlations, feasibility)
3. **Runtime Monitoring**: Continuous optimization performance assessment

**Key Value**: Eliminates guesswork in choosing optimization strategies by providing data-driven recommendations.

---
## When to Use

| Trigger | Action |
|---------|--------|
| Starting a new optimization | Run method selector first |
| "which method", "recommend" mentioned | Suggest method selector |
| Unsure between turbo/hybrid/fea | Use method selector |
| > 20 FEA trials completed | Re-run for updated recommendation |

---
## Quick Reference

### CLI Usage

```bash
python -m optimization_engine.method_selector <config_path> [db_path]
```

**Examples**:

```bash
# Config-only analysis (before any FEA trials)
python -m optimization_engine.method_selector 1_setup/optimization_config.json

# Full analysis with FEA data
python -m optimization_engine.method_selector 1_setup/optimization_config.json 2_results/study.db
```

### Python API

```python
from optimization_engine.method_selector import AdaptiveMethodSelector

selector = AdaptiveMethodSelector()
recommendation = selector.recommend("1_setup/optimization_config.json", "2_results/study.db")

print(recommendation.method)        # 'turbo', 'hybrid_loop', 'pure_fea', 'gnn_field'
print(recommendation.confidence)    # 0.0 - 1.0
print(recommendation.parameters)    # {'nn_trials': 5000, 'batch_size': 100, ...}
print(recommendation.reasoning)     # Explanation string
print(recommendation.alternatives)  # Other methods with scores
```

---
## Available Methods

| Method | Description | Best For |
|--------|-------------|----------|
| **TURBO** | Aggressive NN exploration with single-best FEA validation | Low-dimensional, smooth responses |
| **HYBRID_LOOP** | Iterative train→predict→validate→retrain cycle | Moderate complexity, uncertain landscape |
| **PURE_FEA** | Traditional FEA-only optimization | High-dimensional, complex physics |
| **GNN_FIELD** | Graph neural network for field prediction | Need full field visualization |

---
## Selection Criteria

### Static Factors (from config)

| Factor | Favors TURBO | Favors HYBRID_LOOP | Favors PURE_FEA |
|--------|--------------|--------------------|-----------------|
| **n_variables** | ≤5 | 5-10 | >10 |
| **n_objectives** | 1-3 | 2-4 | Any |
| **n_constraints** | ≤3 | 3-5 | >5 |
| **FEA budget** | >50 trials | 30-50 trials | <30 trials |

### Dynamic Factors (from FEA trials)

| Factor | Measurement | Impact |
|--------|-------------|--------|
| **Response smoothness** | Lipschitz constant estimate | Smooth → NN works well |
| **Variable sensitivity** | Correlation with objectives | High correlation → easier to learn |
| **Feasibility rate** | % of valid designs | Low feasibility → need more exploration |
| **Objective correlations** | Pairwise correlations | Strong correlations → simpler landscape |

---
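The "Lipschitz constant estimate" behind the smoothness factor can be made concrete with a small sketch. This is illustrative only — `estimate_smoothness` and its normalization scheme are assumptions, not the shipped `EarlyMetricsCollector` code:

```python
import numpy as np

def estimate_smoothness(X: np.ndarray, y: np.ndarray) -> float:
    """Map the largest observed finite-difference slope to a 0-1 score.

    X: (n_trials, n_variables) design points; y: (n_trials,) one objective.
    Higher return value = smoother response = better suited to NN surrogates.
    """
    # Normalize inputs and outputs so the slope estimate is scale-free
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)
    y = (y - y.min()) / (np.ptp(y) + 1e-12)
    slopes = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            dist = np.linalg.norm(X[i] - X[j])
            if dist > 1e-9:
                slopes.append(abs(y[i] - y[j]) / dist)
    lipschitz = max(slopes)        # worst-case observed slope
    return 1.0 / (1.0 + lipschitz)  # squash to (0, 1]
```

Values near 1 indicate a flat, NN-friendly response; values near 0 indicate sharp gradients where FEA-heavy sampling may be safer.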
## Architecture

```
┌──────────────────────────────────────────────────────────────────┐
│                      AdaptiveMethodSelector                      │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────────┐      ┌───────────────────────┐             │
│  │ ProblemProfiler  │      │ EarlyMetricsCollector │             │
│  │ (static analysis)│      │  (dynamic analysis)   │             │
│  └────────┬─────────┘      └──────────┬────────────┘             │
│           │                           │                          │
│           ▼                           ▼                          │
│  ┌────────────────────────────────────────────────┐              │
│  │                _score_methods()                │              │
│  │   (rule-based scoring with weighted factors)   │              │
│  └──────────────────────┬─────────────────────────┘              │
│                         │                                        │
│                         ▼                                        │
│  ┌────────────────────────────────────────────────┐              │
│  │              MethodRecommendation              │              │
│  │   method, confidence, parameters, reasoning    │              │
│  └────────────────────────────────────────────────┘              │
│                                                                  │
│  ┌──────────────────┐                                            │
│  │  RuntimeAdvisor  │  ← Monitors during optimization            │
│  │ (pivot advisor)  │                                            │
│  └──────────────────┘                                            │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
```

---
## Components

### 1. ProblemProfiler

Extracts static problem characteristics from `optimization_config.json`:

```python
@dataclass
class ProblemProfile:
    n_variables: int
    variable_names: List[str]
    variable_bounds: Dict[str, Tuple[float, float]]
    n_objectives: int
    objective_names: List[str]
    n_constraints: int
    fea_time_estimate: float
    max_fea_trials: int
    is_multi_objective: bool
    has_constraints: bool
```
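For orientation, profiling amounts to counting and typing the config contents. A minimal sketch, assuming a hypothetical config layout (`"variables"` mapping names to `{"bounds": [lo, hi]}`, plus `"objectives"` and `"constraints"` lists) — the real schema and `ProblemProfiler` may differ:

```python
import json

def profile_from_config(path: str) -> dict:
    """Build a profile-like dict from an assumed config schema."""
    with open(path) as f:
        cfg = json.load(f)
    variables = cfg.get("variables", {})
    objectives = cfg.get("objectives", [])
    constraints = cfg.get("constraints", [])
    return {
        "n_variables": len(variables),
        "variable_names": list(variables),
        "variable_bounds": {name: tuple(spec["bounds"])
                            for name, spec in variables.items()},
        "n_objectives": len(objectives),
        "objective_names": list(objectives),
        "n_constraints": len(constraints),
        "is_multi_objective": len(objectives) > 1,
        "has_constraints": len(constraints) > 0,
    }
```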
### 2. EarlyMetricsCollector

Computes metrics from the first N FEA trials in `study.db`:

```python
@dataclass
class EarlyMetrics:
    n_trials_analyzed: int
    objective_means: Dict[str, float]
    objective_stds: Dict[str, float]
    coefficient_of_variation: Dict[str, float]
    objective_correlations: Dict[str, float]
    variable_objective_correlations: Dict[str, Dict[str, float]]
    feasibility_rate: float
    response_smoothness: float  # 0-1, higher = better for NN
    variable_sensitivity: Dict[str, float]
```
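Two of these fields are simple enough to sketch from raw trial arrays. This is a hedged illustration — the shipped `EarlyMetricsCollector` may compute them differently:

```python
import numpy as np

def basic_metrics(objectives: dict[str, np.ndarray],
                  feasible: np.ndarray) -> dict:
    """objectives: per-objective value arrays; feasible: bool array per trial."""
    cov = {  # coefficient_of_variation: spread relative to mean magnitude
        name: float(np.std(vals) / (abs(float(np.mean(vals))) + 1e-12))
        for name, vals in objectives.items()
    }
    return {
        "coefficient_of_variation": cov,
        "feasibility_rate": float(np.mean(feasible)),  # fraction of valid designs
    }
```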
### 3. AdaptiveMethodSelector

Main entry point that combines static + dynamic analysis:

```python
selector = AdaptiveMethodSelector(min_trials=20)
recommendation = selector.recommend(config_path, db_path)
```
### 4. RuntimeAdvisor

Monitors optimization progress and suggests pivots:

```python
advisor = RuntimeAdvisor()
pivot_advice = advisor.assess(db_path, config_path, current_method="turbo")

if pivot_advice.should_pivot:
    print(f"Consider switching to {pivot_advice.recommended_method}")
    print(f"Reason: {pivot_advice.reason}")
```

---
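One plausible pivot rule is stagnation detection: suggest a switch when the running best (for minimization) stops improving. This heuristic is hypothetical — the actual criteria inside `RuntimeAdvisor` are not documented here:

```python
def should_pivot(best_history: list[float], window: int = 10,
                 tol: float = 1e-6) -> bool:
    """best_history[i] is the best objective value seen up to trial i."""
    if len(best_history) <= window:
        return False  # not enough data to judge stagnation
    # Improvement over the last `window` trials
    recent_gain = best_history[-window - 1] - best_history[-1]
    return recent_gain < tol
```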
## Example Output

```
======================================================================
                    OPTIMIZATION METHOD ADVISOR
======================================================================

Problem Profile:
  Variables: 2 (support_angle, tip_thickness)
  Objectives: 3 (mass, stress, stiffness)
  Constraints: 1
  Max FEA budget: ~72 trials

----------------------------------------------------------------------

RECOMMENDED: TURBO
Confidence: 100%
Reason: low-dimensional design space; sufficient FEA budget; smooth landscape (79%)

Suggested parameters:
  --nn-trials: 5000
  --batch-size: 100
  --retrain-every: 10
  --epochs: 150

Alternatives:
  - hybrid_loop (75%): uncertain landscape - hybrid adapts; adequate budget for iterations
  - pure_fea (50%): default recommendation

======================================================================
```

---
## Parameter Recommendations

The selector suggests optimal parameters based on problem characteristics:

| Parameter | Low-D (≤3 vars) | Medium-D (4-6 vars) | High-D (>6 vars) |
|-----------|-----------------|---------------------|------------------|
| `--nn-trials` | 5000 | 10000 | 20000 |
| `--batch-size` | 100 | 100 | 200 |
| `--retrain-every` | 10 | 15 | 20 |
| `--epochs` | 150 | 200 | 300 |

---
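The table above can be transcribed directly into code. The function name and return shape here are illustrative, not the module's actual API:

```python
def suggest_parameters(n_variables: int) -> dict:
    """Map problem dimensionality to the parameter tiers in the table above."""
    if n_variables <= 3:   # low-dimensional
        return {"nn_trials": 5000, "batch_size": 100, "retrain_every": 10, "epochs": 150}
    if n_variables <= 6:   # medium-dimensional
        return {"nn_trials": 10000, "batch_size": 100, "retrain_every": 15, "epochs": 200}
    # high-dimensional (>6 variables)
    return {"nn_trials": 20000, "batch_size": 200, "retrain_every": 20, "epochs": 300}
```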
## Scoring Algorithm

Each method receives a score based on weighted factors:

```python
# TURBO scoring
turbo_score = 50                                 # base score
turbo_score += 30 if n_variables <= 5 else -20   # dimensionality
turbo_score += 25 if smoothness > 0.7 else -10   # response smoothness
turbo_score += 20 if fea_budget > 50 else -15    # budget
turbo_score += 15 if feasibility > 0.8 else -5   # feasibility
turbo_score = max(0, min(100, turbo_score))      # clamp 0-100

# Similar for HYBRID_LOOP, PURE_FEA, GNN_FIELD
```

---
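To see how the scores turn into a recommendation, here is a runnable version of the TURBO rule plus an assumed selection step. The weights mirror the snippet above; `recommend` and its confidence rule are assumptions about how `_score_methods()` output could be consumed:

```python
def score_turbo(n_variables: int, smoothness: float,
                fea_budget: int, feasibility: float) -> int:
    score = 50                                   # base score
    score += 30 if n_variables <= 5 else -20     # dimensionality
    score += 25 if smoothness > 0.7 else -10     # response smoothness
    score += 20 if fea_budget > 50 else -15      # budget
    score += 15 if feasibility > 0.8 else -5     # feasibility
    return max(0, min(100, score))               # clamp 0-100

def recommend(scores: dict[str, int]) -> tuple[str, float]:
    # Highest score wins; confidence is the winning score scaled to 0-1
    method = max(scores, key=scores.get)
    return method, scores[method] / 100.0
```

With inputs matching the example output above (2 variables, smoothness 0.79, a 72-trial budget, and assuming high feasibility), every factor contributes positively, the raw score exceeds 100, and the clamp yields the 100% TURBO confidence shown.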
## Integration with run_optimization.py

The method selector can be integrated into the optimization workflow:

```python
# At start of optimization
import os

from optimization_engine.method_selector import recommend_method

recommendation = recommend_method(config_path, db_path)
print(f"Recommended method: {recommendation.method}")
print(f"Parameters: {recommendation.parameters}")

# Ask for user confirmation (user_confirms is application-specific)
if user_confirms:
    if recommendation.method == 'turbo':
        os.system(f"python run_nn_optimization.py --turbo "
                  f"--nn-trials {recommendation.parameters['nn_trials']} "
                  f"--batch-size {recommendation.parameters['batch_size']}")
```

---
## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "Insufficient trials" | < 20 FEA trials | Run more FEA trials first |
| Low confidence score | Conflicting signals | Try hybrid_loop as safe default |
| PURE_FEA recommended | High dimensionality | Consider dimension reduction |
| GNN_FIELD recommended | Need field visualization | Set up atomizer-field |

---
## Cross-References

- **Depends On**:
  - [SYS_10_IMSO](./SYS_10_IMSO.md) for optimization framework
  - [SYS_14_NEURAL_ACCELERATION](./SYS_14_NEURAL_ACCELERATION.md) for neural methods
- **Used By**: [OP_02_RUN_OPTIMIZATION](../operations/OP_02_RUN_OPTIMIZATION.md)
- **See Also**: [modules/method-selection.md](../../.claude/skills/modules/method-selection.md)

---
## Implementation Files

```
optimization_engine/
└── method_selector.py            # Complete AMS implementation
    ├── ProblemProfiler           # Static config analysis
    ├── EarlyMetricsCollector     # Dynamic FEA metrics
    ├── AdaptiveMethodSelector    # Main recommendation engine
    ├── RuntimeAdvisor            # Mid-run pivot advisor
    └── recommend_method()        # Convenience function
```

---
## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-06 | Initial implementation with 4 methods |