ATOMIZER PROTOCOL
The Single Source of Truth for Atomizer Operations
Version: 1.0 Date: 2025-11-19 Status: ACTIVE - MUST BE FOLLOWED AT ALL TIMES
CORE PRINCIPLE
Atomizer is INTELLIGENT and AUTONOMOUS. It discovers, analyzes, and configures everything automatically.
Human users should provide:
- A CAD model (.prt)
- A simulation setup (.sim)
- An optimization goal (in natural language)
Atomizer handles the rest.
PROTOCOL: Study Creation Workflow
Phase 1: Intelligent Benchmarking (MANDATORY)
Purpose: Discover EVERYTHING about the simulation before running optimization.
Steps:

1. **Extract ALL Expressions**
   - Use `NXParameterUpdater.get_all_expressions()`
   - Catalog all design variables
   - Identify bounds and units

2. **Solve ALL Solutions in the .sim File**
   - Use `IntelligentSetup._solve_all_solutions()`, which runs an NXOpen journal that:
     - Updates the FEM from the master .prt (CRITICAL for modal analysis)
     - Calls `SolveAllSolutions()`
     - Saves the simulation to write output files
   - Returns: number solved, failed, skipped, and solution names

3. **Analyze ALL Result Files**
   - Scan for all .op2 files in the model directory
   - Use pyNastran to read each .op2
   - Catalog available results:
     - Eigenvalues (modal frequencies)
     - Displacements
     - Stresses (CQUAD4, CTRIA3, etc.)
     - Forces
   - Map each result type to its source solution

4. **Match Objectives to Available Results**
   - Parse the user's optimization goal
   - Match requested objectives to discovered results
   - Example mappings:
     - "frequency" → eigenvalues → Solution_Normal_Modes
     - "displacement" → displacements → Solution_1
     - "stress" → cquad4_stress → Solution_1
   - Return the recommended solution for optimization
Output: Benchmark data structure

```
{
  "success": true,
  "expressions": {"param1": {"value": 10.0, "units": "mm"}, ...},
  "solutions": {"num_solved": 2, "solution_names": ["Solution[Solution 1]", "Solution[Solution_Normal_Modes]"]},
  "available_results": {
    "eigenvalues": {"file": "model_sim1-solution_normal_modes.op2", "count": 10, "solution": "Solution_Normal_Modes"},
    "displacements": {"file": "model_sim1-solution_1.op2", "count": 613, "solution": "Solution_1"}
  },
  "objective_mapping": {
    "objectives": {
      "frequency_error": {
        "solution": "Solution_Normal_Modes",
        "result_type": "eigenvalues",
        "extractor": "extract_first_frequency",
        "op2_file": Path("..."),
        "match_confidence": "HIGH"
      }
    },
    "primary_solution": "Solution_Normal_Modes"
  }
}
```
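The objective-matching step above can be sketched as a small keyword lookup. This is an illustrative helper (`match_objective` and `RESULT_KEYWORDS` are not the actual IntelligentSetup API), shown only to make the mapping concrete:

```python
# Hypothetical sketch of objective-to-result matching.
# RESULT_KEYWORDS and match_objective are illustrative names.
RESULT_KEYWORDS = {
    "frequency": "eigenvalues",
    "displacement": "displacements",
    "stress": "cquad4_stress",
}

def match_objective(goal_text, available_results):
    """Map a natural-language goal to a discovered result type and solution."""
    for keyword, result_type in RESULT_KEYWORDS.items():
        if keyword in goal_text.lower() and result_type in available_results:
            entry = available_results[result_type]
            return {
                "result_type": result_type,
                "solution": entry["solution"],
                "op2_file": entry["file"],
                "match_confidence": "HIGH",
            }
    # Nothing discovered matches the requested objective
    return {"match_confidence": "NONE"}
```

The real matcher can be richer (synonyms, units, confidence levels), but the contract is the same: never guess, only map to results that benchmarking actually discovered.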
Phase 2: Study Structure Creation
1. Create the directory structure:

   ```
   studies/{study_name}/
   ├── 1_setup/
   │   ├── model/                  # Model files (.prt, .sim, .fem, .fem_i)
   │   └── workflow_config.json
   ├── 2_results/                  # Optimization database and history
   └── 3_reports/                  # Human-readable markdown reports with graphs
   ```

2. Copy model files to `1_setup/model/`

3. Save the workflow configuration to `1_setup/workflow_config.json`
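The structure-creation steps above can be sketched with only the standard library. `create_study_structure` is an illustrative name, not an existing Atomizer function:

```python
# Minimal sketch of Phase 2 study scaffolding (standard library only).
import json
import shutil
from pathlib import Path

def create_study_structure(studies_root, study_name, model_files, workflow_config):
    study_dir = Path(studies_root) / study_name
    # Protocol folder layout: 1_setup/model, 2_results, 3_reports
    (study_dir / "1_setup" / "model").mkdir(parents=True, exist_ok=True)
    (study_dir / "2_results").mkdir(exist_ok=True)
    (study_dir / "3_reports").mkdir(exist_ok=True)
    # Copy model files (.prt, .sim, .fem, ...) into 1_setup/model/
    for f in model_files:
        shutil.copy2(f, study_dir / "1_setup" / "model" / Path(f).name)
    # Save the workflow configuration (UTF-8, per the encoding protocol)
    config_path = study_dir / "1_setup" / "workflow_config.json"
    config_path.write_text(json.dumps(workflow_config, indent=2), encoding="utf-8")
    return study_dir
```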
Phase 3: Optimization Runner Generation
Purpose: Generate a runner that is PERFECTLY configured based on benchmarking.
Critical Rules:
1. **Solution Selection**
   - Use `benchmark_results['objective_mapping']['primary_solution']`, the solution that contains the required results
   - Generate code: `solver.run_simulation(sim_file, solution_name="{primary_solution}")`

2. **Extractor Generation**
   - Use extractors matched to available results:
     - For eigenvalues: `extract_first_frequency`
     - For displacements: `extract_max_displacement`
     - For stresses: `extract_max_stress`

3. **Objective Function**
   - For target-matching objectives (e.g., "tune to 115 Hz"):

     ```python
     if 'target_frequency' in obj_config.get('extraction', {}).get('params', {}):
         target = obj_config['extraction']['params']['target_frequency']
         objective_value = abs(results[result_name] - target)
     ```

   - For minimization: `objective_value = results[result_name]`
   - For maximization: `objective_value = -results[result_name]`

4. **Incremental History Tracking**
   - Write JSON after each trial
   - File: `2_results/optimization_history_incremental.json`
   - Format:

     ```
     [
       {
         "trial_number": 0,
         "design_variables": {"param1": 10.5, ...},
         "results": {"first_frequency": 115.2},
         "objective": 0.2
       },
       ...
     ]
     ```

5. **Report Generation**
   - Use `generate_markdown_report()`, NOT plain text
   - Extract target values from the workflow
   - Save to `3_reports/OPTIMIZATION_REPORT.md`
   - Include convergence plots, design space plots, and sensitivity plots
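The incremental history tracking described above can be sketched as a small helper that rewrites the JSON file after every trial, so progress survives a crash and dashboards can stream it. `append_trial` is an illustrative name, not an existing Atomizer API:

```python
# Sketch of the incremental history writer (standard library only).
import json
from pathlib import Path

def append_trial(history_file, trial_number, design_variables, results, objective):
    """Append one trial record and rewrite the incremental history JSON."""
    history_file = Path(history_file)
    history = (json.loads(history_file.read_text(encoding="utf-8"))
               if history_file.exists() else [])
    history.append({
        "trial_number": trial_number,
        "design_variables": design_variables,
        "results": results,
        "objective": objective,
    })
    # Rewrite the whole file so it is always valid JSON, even mid-run
    history_file.write_text(json.dumps(history, indent=2), encoding="utf-8")
    return history
```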
Phase 4: Configuration Report
Generate 1_setup/CONFIGURATION_REPORT.md documenting:
- All discovered expressions
- All solutions found and solved
- Objective matching results
- Recommended configuration
- Any warnings or issues
CRITICAL PROTOCOLS
Protocol 1: NEVER Guess Solution Names
WRONG:
result = solver.run_simulation(sim_file, solution_name="normal_modes") # WRONG - guessing!
RIGHT:
# Use benchmark data to get actual solution name
recommended_solution = benchmark_results['objective_mapping']['primary_solution']
result = solver.run_simulation(sim_file, solution_name=recommended_solution)
WHY: Solution names in NX are user-defined and unpredictable. They might be "Solution 1", "SOLUTION_NORMAL_MODES", "Modal Analysis", etc. ALWAYS discover them via benchmarking.
Protocol 2: ALWAYS Update FEM Before Solving
Why: Modal analysis (eigenvalues) requires the FEM to match the current geometry. If geometry changed but FEM wasn't updated, eigenvalues will be wrong.
How: The _solve_all_solutions() journal includes:
femPart.UpdateFemodel() # Updates FEM from master CAD
This is done automatically during benchmarking.
Protocol 3: Target-Matching Objectives
User says: "Tune frequency to exactly 115 Hz"
Workflow:
```json
{
  "objectives": [{
    "name": "frequency_error",
    "goal": "minimize",
    "extraction": {
      "action": "extract_first_natural_frequency",
      "params": {"mode_number": 1, "target_frequency": 115.0}
    }
  }]
}
```
Generated objective function:
if 'target_frequency' in obj_config.get('extraction', {}).get('params', {}):
target = obj_config['extraction']['params']['target_frequency']
objective_value = abs(results[result_name] - target)
print(f" Frequency: {results[result_name]:.4f} Hz, Target: {target} Hz, Error: {objective_value:.4f} Hz")
WHY: Optuna must minimize ERROR FROM TARGET, not minimize the raw frequency value.
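The branching logic above can be wrapped in one pure helper, which also makes it unit-testable. `compute_objective` is an illustrative name; the config shape matches `workflow_config.json`:

```python
# Sketch: turn an extracted result into the value Optuna should minimize.
def compute_objective(obj_config, results, result_name):
    params = obj_config.get('extraction', {}).get('params', {})
    value = results[result_name]
    if 'target_frequency' in params:
        # Target matching: minimize the error from the target, not the raw value
        return abs(value - params['target_frequency'])
    if obj_config.get('goal') == 'maximize':
        # Optuna minimizes by default, so negate for maximization
        return -value
    return value
```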
Protocol 4: Optimization Configuration (CRITICAL!)
MANDATORY: Every study MUST have 1_setup/optimization_config.json.
Purpose: Full control over Optuna sampler, pruner, and optimization strategy.
Configuration File: {study_dir}/1_setup/optimization_config.json
{
"study_name": "my_optimization",
"direction": "minimize",
"sampler": {
"type": "TPESampler",
"params": {
"n_startup_trials": 5,
"n_ei_candidates": 24,
"multivariate": true,
"warn_independent_sampling": true
}
},
"pruner": {
"type": "MedianPruner",
"params": {
"n_startup_trials": 5,
"n_warmup_steps": 0
}
},
"trials": {
"n_trials": 50,
"timeout": null,
"catch": []
},
"optimization_notes": "TPE sampler with multivariate enabled for correlated parameters. n_startup_trials=5 means first 5 trials are random exploration, then TPE exploitation begins."
}
Sampler Configuration:
1. **TPESampler** (RECOMMENDED - Tree-structured Parzen Estimator)
   - `n_startup_trials`: Number of random trials before TPE kicks in (default: 10)
   - `n_ei_candidates`: Number of candidate samples (default: 24)
   - `multivariate`: Use multivariate TPE for correlated parameters (default: false; SHOULD BE true)
   - `warn_independent_sampling`: Warn when parameters are sampled independently

2. **RandomSampler** (for baseline comparison only)

   ```json
   { "type": "RandomSampler", "params": {"seed": 42} }
   ```

3. **CmaEsSampler** (for continuous optimization)

   ```json
   { "type": "CmaEsSampler", "params": {"n_startup_trials": 5, "restart_strategy": "ipop"} }
   ```
Runner Implementation:
# Load optimization configuration
opt_config_file = Path(__file__).parent / "1_setup/optimization_config.json"
with open(opt_config_file) as f:
opt_config = json.load(f)
# Create sampler from config
sampler_type = opt_config['sampler']['type']
sampler_params = opt_config['sampler']['params']
if sampler_type == "TPESampler":
sampler = optuna.samplers.TPESampler(**sampler_params)
elif sampler_type == "CmaEsSampler":
sampler = optuna.samplers.CmaEsSampler(**sampler_params)
else:
sampler = None # Use default
# Create study with configured sampler
study = optuna.create_study(
study_name=opt_config['study_name'],
storage=storage,
load_if_exists=True,
direction=opt_config['direction'],
sampler=sampler # CRITICAL - do not omit!
)
WHY THIS IS CRITICAL:
Without an explicit sampler configuration, Optuna falls back to its default sampler (univariate TPE with default parameters), which ignores correlations between parameters and converges poorly on coupled problems. TPE with `multivariate=True` is essential for problems with correlated parameters (like diameter and thickness affecting frequency together).
Common Mistakes:
- ❌ Omitting the `sampler` parameter in `create_study()` → falls back to default univariate TPE
- ❌ Setting `multivariate=False` for correlated parameters → poor convergence
- ❌ Setting `n_startup_trials` too high → wastes trials on random search
- ❌ No configuration file → no reproducibility, no tuning
Protocol 5: Folder Structure
ALWAYS use this structure:
- `1_setup/` - Model files, workflow_config.json, optimization_config.json
- `2_results/` - Optimization database and incremental history
- `3_reports/` - Markdown reports with graphs
NEVER use:
- `2_substudies/` ❌
- `results/` without a parent folder ❌
- Reports in `2_results/` ❌
- Missing `optimization_config.json` ❌
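A study layout can be checked against this protocol with a small validator. This is a hypothetical helper (`validate_study_layout` is not part of the current codebase), sketched with the standard library only:

```python
# Sketch: validate a study folder against Protocol 5 before running.
from pathlib import Path

REQUIRED = [
    "1_setup/workflow_config.json",
    "1_setup/optimization_config.json",
    "2_results",
    "3_reports",
]
FORBIDDEN = ["2_substudies", "results"]

def validate_study_layout(study_dir):
    """Return a list of layout problems; an empty list means compliant."""
    study_dir = Path(study_dir)
    problems = [f"missing: {p}" for p in REQUIRED if not (study_dir / p).exists()]
    problems += [f"forbidden: {p}" for p in FORBIDDEN if (study_dir / p).exists()]
    return problems
```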
Protocol 6: Intelligent Early Stopping
Purpose: Stop optimization automatically when target is achieved with confidence, instead of wasting trials.
Implementation: Use Optuna callbacks to monitor convergence.
```python
class TargetAchievedCallback:
    """Stop when the target is achieved with confidence."""

    def __init__(self, target_value, tolerance, min_trials=10):
        self.target_value = target_value
        self.tolerance = tolerance
        self.min_trials = min_trials
        self.consecutive_successes = 0
        self.required_consecutive = 3  # Need 3 consecutive successes to confirm

    def __call__(self, study, trial):
        # Need a minimum number of trials for the surrogate to learn
        if len(study.trials) < self.min_trials:
            return
        # Check if the current trial achieved the target
        # (trial.value may be None for failed trials)
        if trial.value is not None and trial.value <= self.tolerance:
            self.consecutive_successes += 1
            print(f"  [STOPPING] Target achieved! "
                  f"({self.consecutive_successes}/{self.required_consecutive} confirmations)")
            if self.consecutive_successes >= self.required_consecutive:
                print("  [STOPPING] Confidence achieved - stopping optimization")
                study.stop()
        else:
            self.consecutive_successes = 0


# Usage in runner
callback = TargetAchievedCallback(target_value=115.0, tolerance=0.1, min_trials=10)
study.optimize(objective, n_trials=max_trials, callbacks=[callback])
```
WHY THIS IS CRITICAL:
- Bayesian optimization with TPE learns from trials - need minimum 10 trials before stopping
- Require 3 consecutive successes to ensure it's not a lucky guess
- This is intelligent stopping based on surrogate model confidence
- Avoids wasting computational resources once target is reliably achieved
Parameters to Tune:
- `min_trials`: Minimum trials before considering stopping (default: 10 for TPE)
- `required_consecutive`: Number of consecutive successes needed (default: 3)
- `tolerance`: How close to the target counts as "achieved"
Protocol 7: Client-Friendly Reports
Purpose: Reports must show ACTUAL METRICS first (what clients care about), not just technical objective values.
Report Structure (in order of importance):
1. **Achieved Performance** (FIRST - what the client cares about)

   ```markdown
   ### Achieved Performance
   - **First Frequency**: 115.4780 Hz
   - Target: 115.0000 Hz
   - Error: 0.4780 Hz (0.42%)
   ```

2. **Design Parameters** (SECOND - how to build it)

   ```markdown
   ### Design Parameters
   - **Inner Diameter**: 94.07 mm
   - **Plate Thickness**: 6.14 mm
   ```

3. **Technical Details** (LAST - for engineers, in a collapsible section)

   ```markdown
   <details>
   <summary>Technical Details (Objective Function)</summary>

   - **Objective Value (Error)**: 0.478014 Hz

   </details>
   ```
Graph Placement: Graphs MUST be saved to 3_reports/ folder (same as markdown file)
def generate_markdown_report(history_file: Path, target_value: Optional[float] = None,
tolerance: float = 0.1, reports_dir: Optional[Path] = None) -> str:
# Graphs should be saved to 3_reports/ folder (same as markdown file)
study_dir = history_file.parent.parent
if reports_dir is None:
reports_dir = study_dir / "3_reports"
reports_dir.mkdir(parents=True, exist_ok=True)
# Generate plots in reports folder
convergence_plot = create_convergence_plot(history, target_value, reports_dir)
design_space_plot = create_design_space_plot(history, reports_dir)
sensitivity_plot = create_parameter_sensitivity_plot(history, reports_dir)
Trial Tables: Show ACTUAL RESULTS (frequency), not just objective
| Rank | Trial | First Frequency | Inner Diameter | Plate Thickness |
|------|-------|-----------------|----------------|-----------------|
| 1 | #45 | 115.48 | 94.07 | 6.14 |
| 2 | #54 | 114.24 | 102.22 | 5.85 |
| 3 | #31 | 113.99 | 85.78 | 6.38 |
WHY THIS IS CRITICAL:
- Clients don't understand "objective = 0.478" - they need "Frequency = 115.48 Hz"
- Showing target comparison immediately tells them if goal was achieved
- Percentage error gives intuitive sense of accuracy
- Design parameters tell them how to manufacture the part
NEVER:
- Show only objective values without explaining what they mean
- Hide actual performance metrics in technical details
- Put graphs in wrong folder (must be in 3_reports/ with markdown)
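The achieved/target/percent-error lines shown above can be produced by a small formatter. `format_achieved_performance` is a hypothetical helper, shown only to pin down the arithmetic:

```python
# Sketch: format the client-facing "Achieved Performance" block.
def format_achieved_performance(name, achieved, target):
    error = abs(achieved - target)
    # Percentage error relative to the target gives an intuitive accuracy measure
    pct = 100.0 * error / target if target else 0.0
    return (
        f"### Achieved Performance\n"
        f"- **{name}**: {achieved:.4f} Hz\n"
        f"- Target: {target:.4f} Hz\n"
        f"- Error: {error:.4f} Hz ({pct:.2f}%)"
    )
```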
Protocol 8: Adaptive Surrogate-Based Optimization
Purpose: Intelligently transition from exploration to exploitation based on surrogate model confidence, not arbitrary trial counts.
State-of-the-Art Confidence Metrics:
1. **Convergence Score** (40% weight)
   - Measures consistency of recent improvements
   - High score = surrogate reliably finding better regions

2. **Exploration Coverage** (30% weight)
   - How well the parameter space has been explored
   - Calculated as spread relative to parameter bounds

3. **Prediction Stability** (30% weight)
   - Are recent trials clustered in promising regions?
   - High stability = surrogate has learned the landscape
Overall Confidence Formula:
confidence = 0.4 * convergence + 0.3 * coverage + 0.3 * stability
Phase Transition: When confidence reaches 65%, automatically transition from exploration to exploitation.
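As a worked example of the formula and threshold above (the metric values are illustrative, roughly matching the second confidence report shown below):

```python
# Worked example of the confidence formula; weights from the protocol.
def overall_confidence(convergence, coverage, stability):
    return 0.4 * convergence + 0.3 * coverage + 0.3 * stability

conf = overall_confidence(convergence=0.712, coverage=0.675, stability=0.664)
phase = "EXPLOITATION" if conf >= 0.65 else "EXPLORATION"
# conf = 0.4*0.712 + 0.3*0.675 + 0.3*0.664 ≈ 0.6865, which crosses
# the 65% threshold, so the phase transitions to exploitation
```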
Implementation:
from optimization_engine.adaptive_surrogate import AdaptiveExploitationCallback
# In optimization_config.json
{
"adaptive_strategy": {
"enabled": true,
"min_confidence_for_exploitation": 0.65,
"min_trials_for_confidence": 15,
"target_confidence_metrics": {
"convergence_weight": 0.4,
"coverage_weight": 0.3,
"stability_weight": 0.3
}
}
}
# In run_optimization.py
callback = AdaptiveExploitationCallback(
target_value=target_value,
tolerance=tolerance,
min_confidence_for_exploitation=0.65,
min_trials=15,
verbose=True
)
study.optimize(objective, n_trials=100, callbacks=[callback])
What You'll See During Optimization:
[CONFIDENCE REPORT - Trial #15]
Phase: EXPLORATION
Overall Confidence: 42.3%
- Convergence: 38.5%
- Coverage: 52.1%
- Stability: 35.8%
MEDIUM CONFIDENCE (42.3%) - Continue exploration with some exploitation
[CONFIDENCE REPORT - Trial #25]
Phase: EXPLORATION
Overall Confidence: 68.7%
- Convergence: 71.2%
- Coverage: 67.5%
- Stability: 66.4%
HIGH CONFIDENCE (68.7%) - Transitioning to exploitation phase
============================================================
PHASE TRANSITION: EXPLORATION → EXPLOITATION
Surrogate confidence: 68.7%
Now focusing on refining best regions
============================================================
WHY THIS IS CRITICAL:
- Not arbitrary: Transitions based on actual surrogate learning, not fixed trial counts
- Adaptive: Different problems may need different exploration durations
- Efficient: Stops wasting trials on exploration once landscape is understood
- Rigorous: Uses statistical metrics (coefficient of variation, fANOVA) not guesswork
- Configurable: Confidence threshold and weights can be tuned per study
Parameters:
- `min_confidence_for_exploitation`: Default 0.65 (65%)
- `min_trials_for_confidence`: Minimum 15 trials before assessing confidence
- Confidence weights: convergence (0.4), coverage (0.3), stability (0.3)
Protocol 9: Professional Optuna Visualizations
Purpose: Reports MUST use Optuna's built-in visualization library for professional-quality analysis plots.
Required Optuna Plots:
1. **Parallel Coordinate Plot** (`plot_parallel_coordinate`)
   - Shows parameter interactions and relationships
   - Each line = one trial, colored by objective value
   - Identifies promising parameter combinations

2. **Optimization History** (`plot_optimization_history`)
   - Professional convergence visualization
   - Better than basic matplotlib plots

3. **Parameter Importance** (`plot_param_importances`)
   - Quantifies which parameters matter most
   - Uses fANOVA or other importance metrics

4. **Slice Plot** (`plot_slice`)
   - Individual parameter effects
   - Shows objective vs. each parameter

5. **Contour Plot** (`plot_contour`) - 2D only
   - Parameter interaction heatmap
   - Reveals coupled effects
Implementation:
from optuna.visualization import (
plot_optimization_history,
plot_parallel_coordinate,
plot_param_importances,
plot_slice,
plot_contour
)
def create_optuna_plots(study: optuna.Study, output_dir: Path) -> Dict[str, str]:
"""Create professional Optuna visualization plots."""
plots = {}
# Parallel Coordinate
fig = plot_parallel_coordinate(study)
fig.write_image(str(output_dir / 'optuna_parallel_coordinate.png'), width=1200, height=600)
# Optimization History
fig = plot_optimization_history(study)
fig.write_image(str(output_dir / 'optuna_optimization_history.png'), width=1000, height=600)
# Parameter Importance
fig = plot_param_importances(study)
fig.write_image(str(output_dir / 'optuna_param_importances.png'), width=800, height=500)
# Slice Plot
fig = plot_slice(study)
fig.write_image(str(output_dir / 'optuna_slice.png'), width=1000, height=600)
# Contour (2D only)
if len(study.best_params) == 2:
fig = plot_contour(study)
fig.write_image(str(output_dir / 'optuna_contour.png'), width=800, height=800)
return plots
# Pass study object to report generator
report = generate_markdown_report(
history_file,
target_value=target_value,
tolerance=tolerance,
reports_dir=reports_dir,
study=study # CRITICAL - pass study for Optuna plots
)
Report Section:
## Advanced Optimization Analysis (Optuna)
The following plots leverage Optuna's professional visualization library to provide deeper insights into the optimization process.
### Parallel Coordinate Plot

This interactive plot shows how different parameter combinations lead to different objective values...
### Parameter Importance Analysis

This analysis quantifies which design variables have the most impact...
WHY THIS IS CRITICAL:
- Professional Quality: Optuna plots are research-grade, publication-ready
- Deeper Insights: Parallel coordinates show interactions basic plots miss
- Parameter Importance: Tells you which variables actually matter
- Industry Standard: Optuna is state-of-the-art, used by top research labs
- Client Expectations: deliverable reports are expected to include professional-grade visualizations
NEVER:
- Use only matplotlib when Optuna visualizations are available
- Forget to pass the `study` object to the report generator
- Skip parameter importance analysis
Protocol 10: UTF-8 Encoding
ALWAYS use encoding='utf-8' when writing files:
with open(file_path, 'w', encoding='utf-8') as f:
f.write(content)
WHY: Checkmarks (✓), mathematical symbols, and international characters require UTF-8.
FILE STRUCTURE REFERENCE
Workflow Configuration (1_setup/workflow_config.json)
{
"study_name": "my_optimization",
"optimization_request": "Tune the first natural frequency mode to exactly 115 Hz",
"design_variables": [
{"parameter": "thickness", "bounds": [2, 10]},
{"parameter": "diameter", "bounds": [50, 150]}
],
"objectives": [{
"name": "frequency_error",
"goal": "minimize",
"extraction": {
"action": "extract_first_natural_frequency",
"params": {"mode_number": 1, "target_frequency": 115.0}
}
}],
"constraints": [{
"name": "frequency_tolerance",
"type": "less_than",
"threshold": 0.1
}]
}
Generated Runner Template Structure
"""
Auto-generated optimization runner
Created: {timestamp}
Intelligently configured from benchmarking
"""
import sys, json, optuna
from pathlib import Path
from optimization_engine.nx_updater import NXParameterUpdater
from optimization_engine.nx_solver import NXSolver
def extract_results(op2_file, workflow):
"""Extract results matched to workflow objectives"""
# Auto-generated based on benchmark discoveries
...
def main():
# Load workflow
config_file = Path(__file__).parent / "1_setup/workflow_config.json"
workflow = json.load(open(config_file))
# Setup paths
output_dir = Path(__file__).parent / "2_results"
reports_dir = Path(__file__).parent / "3_reports"
# Initialize
updater = NXParameterUpdater(prt_file)
solver = NXSolver()
# Create Optuna study
study = optuna.create_study(...)
def objective(trial):
# Sample design variables
params = {var['parameter']: trial.suggest_float(var['parameter'], *var['bounds'])
for var in workflow['design_variables']}
# Update model
updater.update_expressions(params)
# Run simulation (with discovered solution name!)
result = solver.run_simulation(sim_file, solution_name="{DISCOVERED_SOLUTION}")
# Extract results
results = extract_results(result['op2_file'], workflow)
# Calculate objective (with target matching if applicable)
obj_config = workflow['objectives'][0]
if 'target_frequency' in obj_config.get('extraction', {}).get('params', {}):
target = obj_config['extraction']['params']['target_frequency']
objective_value = abs(results['first_frequency'] - target)
else:
objective_value = results[list(results.keys())[0]]
# Save incremental history
with open(history_file, 'w', encoding='utf-8') as f:
json.dump(history, f, indent=2)
return objective_value
# Run optimization
study.optimize(objective, n_trials=n_trials)
# Generate markdown report with graphs
from optimization_engine.generate_report_markdown import generate_markdown_report
report = generate_markdown_report(history_file, target_value={TARGET}, tolerance=0.1)
with open(reports_dir / 'OPTIMIZATION_REPORT.md', 'w', encoding='utf-8') as f:
f.write(report)
if __name__ == "__main__":
main()
TESTING PROTOCOL
Before releasing any study:
- ✅ Benchmarking completed successfully
- ✅ All solutions discovered and solved
- ✅ Objectives matched to results with HIGH confidence
- ✅ Runner uses discovered solution name (not hardcoded)
- ✅ Objective function handles target-matching correctly
- ✅ Incremental history updates after each trial
- ✅ Reports generate in `3_reports/` as markdown with graphs
- ✅ Reports mention target values explicitly
TROUBLESHOOTING
Issue: "No object found with this name" when solving
Cause: Using guessed solution name instead of discovered name.
Fix: Use benchmark data to get actual solution name.
Issue: Optimization not learning / converging
Cause: Objective function returning wrong value (e.g., raw frequency instead of error from target).
Fix: Check if objective has target value. If yes, minimize abs(result - target).
Issue: No eigenvalues found in OP2
Cause: Either (1) solving wrong solution, or (2) FEM not updated before solving.
Fix:
- Check benchmark discovered correct modal solution
- Verify `_solve_all_solutions()` calls `UpdateFemodel()`
Issue: Reports are plain text without graphs
Cause: Using old generate_report instead of generate_report_markdown.
Fix: Import and use generate_markdown_report() from optimization_engine.generate_report_markdown.
EXTENSIBILITY ARCHITECTURE
Core Principle: Modular Hooks System
Atomizer is designed to be extensible without modifying core code. All customization happens through hooks, extractors, and plugins.
Hook System
Purpose: Allow users and future features to inject custom behavior at key points in the workflow.
Hook Points:
1. **pre_trial_hook** - Before each optimization trial starts
   - Use case: Custom parameter validation, parameter transformations
   - Receives: trial_number, design_variables
   - Returns: modified design_variables, or None to skip the trial

2. **post_trial_hook** - After each trial completes
   - Use case: Custom logging, notifications, early stopping
   - Receives: trial_number, design_variables, results, objective
   - Returns: None, or a dict with custom metrics

3. **pre_solve_hook** - Before the NX solver is called
   - Use case: Custom model modifications, additional setup
   - Receives: prt_file, sim_file, design_variables
   - Returns: None

4. **post_solve_hook** - After the NX solver completes
   - Use case: Custom result extraction, post-processing
   - Receives: op2_file, sim_file, solve_success
   - Returns: dict with additional results

5. **post_study_hook** - After the entire optimization study completes
   - Use case: Custom reports, notifications, deployment
   - Receives: study_dir, best_params, best_objective, history
   - Returns: None
Hook Implementation:
Hooks are Python functions defined in {study_dir}/1_setup/hooks.py:
# studies/my_study/1_setup/hooks.py
def pre_trial_hook(trial_number, design_variables):
"""Validate parameters before trial."""
print(f"[HOOK] Starting trial {trial_number}")
# Example: Apply custom constraints
if design_variables['thickness'] < design_variables['diameter'] / 10:
print("[HOOK] Skipping trial - thickness too small")
return None # Skip trial
return design_variables # Proceed with trial
def post_trial_hook(trial_number, design_variables, results, objective):
"""Log custom metrics after trial."""
frequency = results.get('first_frequency', 0)
# Example: Save to custom database
custom_metrics = {
'frequency_ratio': frequency / 115.0,
'parameter_sum': sum(design_variables.values())
}
print(f"[HOOK] Custom metrics: {custom_metrics}")
return custom_metrics
def post_study_hook(study_dir, best_params, best_objective, history):
"""Generate custom reports after optimization."""
print(f"[HOOK] Optimization complete!")
print(f"[HOOK] Best objective: {best_objective}")
# Example: Send email notification
# send_email(f"Optimization complete: {best_objective}")
# Example: Deploy best design
# deploy_to_production(best_params)
Hook Loading (in generated runner):
# Load hooks if they exist
hooks_file = Path(__file__).parent / "1_setup/hooks.py"
hooks = {}
if hooks_file.exists():
import importlib.util
spec = importlib.util.spec_from_file_location("hooks", hooks_file)
hooks_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(hooks_module)
hooks['pre_trial'] = getattr(hooks_module, 'pre_trial_hook', None)
hooks['post_trial'] = getattr(hooks_module, 'post_trial_hook', None)
hooks['post_study'] = getattr(hooks_module, 'post_study_hook', None)
# In objective function:
if hooks.get('pre_trial'):
modified_params = hooks['pre_trial'](trial.number, params)
if modified_params is None:
raise optuna.TrialPruned() # Skip this trial
params = modified_params
Extractor System
Purpose: Modular result extraction from OP2 files.
Extractor Library: optimization_engine/extractors/
optimization_engine/
└── extractors/
├── __init__.py
├── frequency_extractor.py
├── displacement_extractor.py
├── stress_extractor.py
└── custom_extractor.py # User-defined
Extractor Interface:
# optimization_engine/extractors/base_extractor.py
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Dict, Any
class BaseExtractor(ABC):
"""Base class for all extractors."""
@abstractmethod
def extract(self, op2_file: Path, params: Dict[str, Any]) -> Dict[str, float]:
"""
Extract results from OP2 file.
Args:
op2_file: Path to OP2 result file
params: Extraction parameters (e.g., mode_number, node_id)
Returns:
Dict of result name → value
"""
pass
@property
@abstractmethod
def result_types(self) -> list:
"""List of result types this extractor can handle."""
pass
Example Custom Extractor:
# optimization_engine/extractors/custom_extractor.py
from .base_extractor import BaseExtractor
from pyNastran.op2.op2 import OP2
import numpy as np
class CustomResonanceExtractor(BaseExtractor):
"""Extract resonance quality factor from frequency response."""
@property
def result_types(self):
return ['quality_factor', 'resonance_bandwidth']
def extract(self, op2_file, params):
model = OP2()
model.read_op2(str(op2_file))
# Custom extraction logic
frequencies = model.eigenvalues[1].eigenvalues
# Calculate Q-factor
f0 = frequencies[0]
f1 = frequencies[1]
bandwidth = abs(f1 - f0)
q_factor = f0 / bandwidth
return {
'quality_factor': q_factor,
'resonance_bandwidth': bandwidth
}
Extractor Registration (in workflow config):
{
"objectives": [{
"name": "q_factor",
"goal": "maximize",
"extraction": {
"action": "custom_resonance_extractor",
"params": {}
}
}]
}
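One way the `action` string in the workflow config could be resolved to an extractor class is a registry pattern. This is an illustrative sketch (`EXTRACTOR_REGISTRY`, `register_extractor`, and `resolve_extractor` are hypothetical names, not the current module API):

```python
# Sketch: registry mapping workflow "action" strings to extractor classes.
EXTRACTOR_REGISTRY = {}

def register_extractor(action_name):
    """Class decorator that registers an extractor under an action name."""
    def decorator(cls):
        EXTRACTOR_REGISTRY[action_name] = cls
        return cls
    return decorator

def resolve_extractor(extraction_config):
    """Look up the extractor class for a workflow extraction config."""
    cls = EXTRACTOR_REGISTRY.get(extraction_config["action"])
    if cls is None:
        raise KeyError(f"No extractor registered for {extraction_config['action']!r}")
    return cls()
```

With this pattern, a user-defined extractor such as the `CustomResonanceExtractor` shown earlier only needs a decorator to become addressable from `workflow_config.json`.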
Plugin System
Purpose: Add entirely new functionality without modifying core.
Plugin Structure:
plugins/
├── ai_design_suggester/
│ ├── __init__.py
│ ├── plugin.py
│ └── README.md
├── sensitivity_analyzer/
│ ├── __init__.py
│ ├── plugin.py
│ └── README.md
└── cad_generator/
├── __init__.py
├── plugin.py
└── README.md
Plugin Interface:
# plugins/base_plugin.py
from abc import ABC, abstractmethod
from typing import Dict

class BasePlugin(ABC):
"""Base class for Atomizer plugins."""
@property
@abstractmethod
def name(self) -> str:
"""Plugin name."""
pass
@abstractmethod
def initialize(self, config: Dict):
"""Initialize plugin with configuration."""
pass
@abstractmethod
def execute(self, context: Dict) -> Dict:
"""
Execute plugin logic.
Args:
context: Current optimization context (study_dir, history, etc.)
Returns:
Dict with plugin results
"""
pass
Example Plugin:
# plugins/ai_design_suggester/plugin.py
from plugins.base_plugin import BasePlugin
class AIDesignSuggester(BasePlugin):
"""Use AI to suggest promising design regions."""
@property
def name(self):
return "ai_design_suggester"
def initialize(self, config):
self.model_type = config.get('model', 'random_forest')
def execute(self, context):
"""Analyze history and suggest next designs."""
history = context['history']
# Train surrogate model
X = [list(h['design_variables'].values()) for h in history]
y = [h['objective'] for h in history]
# Suggest next promising region
suggested_params = self._suggest_next_design(X, y)
return {
'suggested_design': suggested_params,
'expected_improvement': 0.15
}
API-Ready Architecture
Purpose: Enable future web API / REST interface without refactoring.
API Endpoints (future):
# api/server.py (future implementation)
from fastapi import FastAPI
from optimization_engine.hybrid_study_creator import HybridStudyCreator
app = FastAPI()
@app.post("/api/v1/studies/create")
async def create_study(request: StudyRequest):
"""Create optimization study from API request."""
creator = HybridStudyCreator()
study_dir = creator.create_from_workflow(
workflow_json=request.workflow,
model_files=request.model_files,
study_name=request.study_name
)
return {"study_id": study_dir.name, "status": "created"}
@app.post("/api/v1/studies/{study_id}/run")
async def run_optimization(study_id: str, n_trials: int):
"""Run optimization via API."""
# Import and run the generated runner
# Stream progress back to client
pass
@app.get("/api/v1/studies/{study_id}/status")
async def get_status(study_id: str):
"""Get optimization status and current best."""
# Read incremental history JSON
# Return current progress
pass
@app.get("/api/v1/studies/{study_id}/report")
async def get_report(study_id: str):
"""Get markdown report."""
# Read OPTIMIZATION_REPORT.md
# Return as markdown or HTML
pass
Design Principles for API Compatibility:
- Stateless Operations: All operations work with paths and JSON, no global state
- Incremental Results: History JSON updated after each trial (enables streaming)
- Standard Formats: All inputs/outputs are JSON, Markdown, or standard file formats
- Self-Contained Studies: Each study folder is independent and portable
Configuration System
Purpose: Centralized configuration with environment overrides.
Configuration Hierarchy:
1. Default config (in code)
2. Global config: `~/.atomizer/config.json`
3. Project config: `./atomizer.config.json`
4. Study config: `{study_dir}/1_setup/config.json`
5. Environment variables: `ATOMIZER_*`
Example Configuration:
```json
{
  "nx": {
    "executable": "C:/Program Files/Siemens/NX2412/ugraf.exe",
    "timeout_seconds": 600,
    "journal_runner": "C:/Program Files/Siemens/NX2412/run_journal.exe"
  },
  "optimization": {
    "default_n_trials": 50,
    "sampler": "TPE",
    "pruner": "MedianPruner"
  },
  "reports": {
    "auto_generate": true,
    "include_sensitivity": true,
    "plot_dpi": 150
  },
  "hooks": {
    "enabled": true,
    "timeout_seconds": 30
  },
  "plugins": {
    "enabled": ["ai_design_suggester", "sensitivity_analyzer"],
    "disabled": []
  }
}
```
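The hierarchy above can be resolved with a recursive merge, later sources overriding earlier ones. This is a sketch, not the shipped loader: the `ATOMIZER_SECTION_KEY` naming scheme for environment overrides is an assumption for illustration.

```python
import json
import os
from pathlib import Path

# Built-in defaults (lowest precedence); subset shown for illustration
DEFAULTS = {
    "nx": {"timeout_seconds": 600},
    "optimization": {"default_n_trials": 50, "sampler": "TPE"},
}


def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base (override wins)."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


def load_config(paths, environ=None) -> dict:
    """Apply the hierarchy: defaults -> global -> project -> study -> env vars.

    Env vars like ATOMIZER_OPTIMIZATION_DEFAULT_N_TRIALS=100 override the
    matching nested key, coerced to the default's type (scheme assumed).
    """
    config = DEFAULTS
    for path in paths:  # pass paths in precedence order: global, project, study
        p = Path(path)
        if p.exists():
            config = deep_merge(config, json.loads(p.read_text(encoding="utf-8")))
    env = environ if environ is not None else os.environ
    for name, raw in env.items():
        if not name.startswith("ATOMIZER_"):
            continue
        section, _, key = name[len("ATOMIZER_"):].lower().partition("_")
        if section in config and key in config[section]:
            coerced = type(config[section][key])(raw)
            config = deep_merge(config, {section: {key: coerced}})
    return config
```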
Future Feature Foundations
Ready for Implementation:
1. Multi-Objective Optimization
   - Hook: `extract_multiple_objectives()`
   - Extractor: Return dict with multiple values
   - Report: Pareto front visualization

2. Constraint Handling
   - Hook: `evaluate_constraints()`
   - Optuna: Add constraint callbacks
   - Report: Constraint violation history

3. Surrogate Models
   - Plugin: `surrogate_model_trainer`
   - Hook: `post_trial_hook` trains model
   - Use for cheap pre-screening

4. Sensitivity Analysis
   - Plugin: `sensitivity_analyzer`
   - Hook: `post_study_hook` runs analysis
   - Report: Sensitivity charts

5. Design Space Exploration
   - Plugin: `design_space_explorer`
   - Different sampling strategies
   - Adaptive design of experiments

6. Parallel Evaluation
   - Multiple NX instances
   - Distributed solving via Ray/Dask
   - Asynchronous trial management

7. CAD Generation
   - Plugin: `cad_generator`
   - Hook: `pre_solve_hook` generates geometry
   - Parametric model creation

8. Live Dashboards
   - Web UI showing real-time progress
   - Reads incremental history JSON
   - Interactive design space plots
PROTOCOL 8: Adaptive Surrogate-Based Optimization (STUDY-AWARE)
Purpose
Intelligent Bayesian optimization with confidence-based exploration→exploitation transitions.
CRITICAL: All adaptive components MUST be study-aware, tracking state across multiple optimization sessions using the Optuna study database, not just the current session.
Implementation
Module: optimization_engine/adaptive_surrogate.py
Key Classes:
- `SurrogateConfidenceMetrics`: Study-aware confidence calculation
- `AdaptiveExploitationCallback`: Optuna callback with phase transition tracking
Confidence Metrics (Study-Aware)
Uses `study.trials` directly, NOT session-based history:

```python
from optuna.trial import TrialState

all_trials = [t for t in study.trials if t.state == TrialState.COMPLETE]
```
Metrics:

1. Convergence Score (40% weight)
   - Recent improvement rate and consistency
   - Based on last 10 completed trials

2. Exploration Coverage (30% weight)
   - Parameter space coverage relative to bounds
   - Calculated as spread/range for each parameter

3. Prediction Stability (30% weight)
   - How stable recent best values are
   - Indicates surrogate model reliability

Overall Confidence = weighted combination
Ready for Exploitation = confidence ≥ 65% AND coverage ≥ 50%
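The weighted combination and the transition rule can be sketched in a few lines (function names here are illustrative, not the `SurrogateConfidenceMetrics` API):

```python
def overall_confidence(convergence: float, coverage: float, stability: float,
                       weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of the three study-aware metrics (each in [0, 1])."""
    w_conv, w_cov, w_stab = weights
    return w_conv * convergence + w_cov * coverage + w_stab * stability


def ready_for_exploitation(convergence: float, coverage: float, stability: float,
                           min_confidence=0.65, min_coverage=0.5) -> bool:
    """Transition rule: confidence >= 65% AND coverage >= 50%."""
    return (overall_confidence(convergence, coverage, stability) >= min_confidence
            and coverage >= min_coverage)
```

Note that coverage gates the transition twice: through its 30% weight and through the explicit 50% floor, so a study cannot exploit a region it has barely sampled.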
Phase Transitions
Exploration Phase:
- Initial trials (usually < 15)
- Focus: broad parameter space coverage
- TPE sampler with n_startup_trials random sampling
Exploitation Phase:
- After confidence threshold reached
- Focus: refining best regions
- TPE sampler with n_ei_candidates=24 for intensive Expected Improvement
Transition Triggering:
- Automatic when confidence ≥ 65%
- Logged to terminal with banner
- Saved to `phase_transitions.json`
Tracking Files (Study-Aware)
Location: studies/{study_name}/2_results/
Files:

1. `phase_transitions.json`: Phase transition events

   ```json
   [{
     "trial_number": 45,
     "from_phase": "exploration",
     "to_phase": "exploitation",
     "confidence_metrics": {
       "overall_confidence": 0.72,
       "convergence_score": 0.68,
       "exploration_coverage": 0.75,
       "prediction_stability": 0.81
     }
   }]
   ```

2. `confidence_history.json`: Confidence snapshots every 5 trials

   ```json
   [{
     "trial_number": 15,
     "phase": "exploration",
     "confidence_metrics": {...},
     "total_trials": 15
   }]
   ```
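A minimal sketch of how a transition event could be appended while keeping the file a valid JSON array (a read-modify-write on the whole array, so the report generator can load it directly; the helper name and the timestamp field are assumptions):

```python
import json
from datetime import datetime
from pathlib import Path


def record_phase_transition(results_dir: Path, trial_number: int,
                            from_phase: str, to_phase: str,
                            metrics: dict) -> None:
    """Append one event to 2_results/phase_transitions.json (JSON array)."""
    path = results_dir / "phase_transitions.json"
    events = json.loads(path.read_text(encoding="utf-8")) if path.exists() else []
    events.append({
        "trial_number": trial_number,
        "from_phase": from_phase,
        "to_phase": to_phase,
        "confidence_metrics": metrics,
        "timestamp": datetime.now().isoformat(timespec="seconds"),
    })
    path.write_text(json.dumps(events, indent=2), encoding="utf-8")
```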
Configuration
File: 1_setup/optimization_config.json
```json
{
  "adaptive_strategy": {
    "enabled": true,
    "min_confidence_for_exploitation": 0.65,
    "min_trials_for_confidence": 15,
    "target_confidence_metrics": {
      "convergence_weight": 0.4,
      "coverage_weight": 0.3,
      "stability_weight": 0.3
    }
  },
  "sampler": {
    "type": "TPESampler",
    "params": {
      "n_startup_trials": 10,
      "n_ei_candidates": 24,
      "multivariate": true
    }
  }
}
```
Report Integration
Markdown Report Sections:
1. Adaptive Optimization Strategy section
   - Phase transition details
   - Confidence at transition
   - Metrics breakdown

2. Confidence Progression Plot
   - Overall confidence over trials
   - Component metrics (convergence, coverage, stability)
   - Red vertical line marking exploitation transition
   - Horizontal threshold line at 65%

Generator: optimization_engine/generate_report_markdown.py
- Reads `phase_transitions.json`
- Reads `confidence_history.json`
- Creates `confidence_progression.png`
Critical Bug Fix (Nov 19, 2025)
Problem: Original implementation tracked trials in session-based `self.history`, resetting on each run.
Solution: Use `study.trials` directly for ALL calculations:
- Confidence metrics
- Phase transition decisions
- Trial counting
This ensures optimization behavior is consistent across interrupted/resumed sessions.
PROTOCOL 9: Professional Optimization Visualizations (Optuna Integration)
Purpose
Leverage Optuna's professional visualization library for publication-quality optimization analysis.
Implementation
Module: optimization_engine/generate_report_markdown.py
Function: create_optuna_plots(study, output_dir)
Generated Plots
1. Parallel Coordinate Plot (`optuna_parallel_coordinate.png`)
   - Multi-dimensional parameter visualization
   - Color-coded by objective value
   - Shows parameter interactions and promising regions

2. Optimization History (`optuna_optimization_history.png`)
   - Trial-by-trial objective progression
   - Best value trajectory
   - Professional alternative to custom convergence plot

3. Parameter Importance (`optuna_param_importances.png`)
   - fANOVA-based importance analysis
   - Quantifies which parameters most affect the objective
   - Critical for understanding sensitivity

4. Slice Plot (`optuna_slice.png`)
   - Individual parameter effects
   - Other parameters held constant
   - Shows univariate relationships

5. Contour Plot (`optuna_contour.png`)
   - 2D parameter interaction heatmaps
   - All pairwise combinations
   - Reveals interaction effects
Report Integration
All plots automatically added to markdown report under: "Advanced Optimization Analysis (Optuna)" section
Each plot includes explanatory text about:
- What the plot shows
- How to interpret it
- What insights it provides
Requirements
- `study` object must be passed to `generate_markdown_report()`
- Minimum 10 trials for meaningful visualizations
- Handles errors gracefully if individual plots fail
PROTOCOL 10: Intelligent Multi-Strategy Optimization (IMSO)
Purpose
Make Atomizer self-tuning - automatically discover problem characteristics and select the best optimization algorithm, adapting strategy mid-run if needed.
User Experience: User provides optimization goal. Atomizer intelligently tests strategies, analyzes landscape, and converges efficiently WITHOUT manual algorithm tuning.
Design Philosophy
Different FEA problems need different optimization strategies:
- Smooth unimodal (e.g., beam bending) → CMA-ES for fast local convergence
- Smooth multimodal (e.g., resonance tuning) → GP-BO → CMA-ES hybrid
- Rugged multimodal (e.g., complex assemblies) → TPE for robust exploration
- High-dimensional (e.g., topology optimization) → TPE scales best
Traditional approach: User manually configures sampler (error-prone, suboptimal)
Protocol 10 approach: Atomizer discovers problem type, recommends strategy, switches mid-run if stagnating
Three-Phase Architecture
Stage 1: Landscape Characterization (Trials 1-15)
- Run initial exploration with random sampling
- Analyze problem characteristics:
- Smoothness (nearby points → similar objectives?)
- Multimodality (multiple local optima?)
- Parameter correlation (coupled effects?)
- Noise level (simulation stability?)
- Generate landscape report for transparency
Stage 2: Intelligent Strategy Selection (Trial 15+)
- Use decision tree to match landscape → strategy
- Recommend sampler with confidence score
- Switch to recommended strategy
- Log decision reasoning
Stage 3: Adaptive Optimization with Monitoring (Ongoing)
- Monitor strategy performance
- Detect stagnation (no improvement over N trials)
- Re-analyze landscape if needed
- Switch strategies dynamically
- Track transitions for learning
Core Components
1. Landscape Analyzer (optimization_engine/landscape_analyzer.py)
Computes problem characteristics from trial history:
```python
from optimization_engine.landscape_analyzer import LandscapeAnalyzer

analyzer = LandscapeAnalyzer(min_trials_for_analysis=10)
landscape = analyzer.analyze(study)

# Returns:
{
    'smoothness': 0.75,                  # 0-1, higher = smoother
    'multimodal': False,                 # Multiple local optima?
    'n_modes': 1,                        # Estimated local optima count
    'parameter_correlation': {...},      # Correlation with objective
    'noise_level': 0.12,                 # Evaluation noise estimate
    'landscape_type': 'smooth_unimodal'  # Classification
}
```
Landscape Types:
- `smooth_unimodal`: Single smooth bowl → CMA-ES
- `smooth_multimodal`: Multiple smooth regions → GP-BO or TPE
- `rugged_unimodal`: Single rough region → TPE
- `rugged_multimodal`: Multiple rough regions → TPE
- `noisy`: High noise → Robust methods
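The classification step reduces the raw metrics to one of these labels. A sketch of that mapping (the 0.6 smoothness and 0.3 noise thresholds are illustrative assumptions, not the analyzer's actual constants):

```python
def classify_landscape(smoothness: float, n_modes: int, noise_level: float,
                       smooth_threshold: float = 0.6,
                       noise_threshold: float = 0.3) -> str:
    """Map analyzer metrics to a landscape_type label.

    Noise is checked first: a noisy landscape masks smoothness and
    modality estimates, so it gets its own label regardless of them.
    """
    if noise_level > noise_threshold:
        return "noisy"
    smooth = smoothness >= smooth_threshold
    multimodal = n_modes > 1
    if smooth:
        return "smooth_multimodal" if multimodal else "smooth_unimodal"
    return "rugged_multimodal" if multimodal else "rugged_unimodal"
```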
2. Strategy Selector (optimization_engine/strategy_selector.py)
Expert decision tree for strategy recommendation:
```python
from optimization_engine.strategy_selector import IntelligentStrategySelector

selector = IntelligentStrategySelector(verbose=True)
strategy, details = selector.recommend_strategy(
    landscape=landscape,
    trials_completed=15,
    trials_budget=100
)

# Returns:
('cmaes', {
    'confidence': 0.92,
    'reasoning': 'Smooth unimodal with strong correlation - CMA-ES converges quickly',
    'sampler_config': {
        'type': 'CmaEsSampler',
        'params': {'restart_strategy': 'ipop'}
    }
})
```
Decision Tree Logic:
- High noise → TPE (robust)
- Smooth + correlated → CMA-ES (fast local convergence)
- Smooth + low-D → GP-BO (sample efficient)
- Multimodal → TPE (handles multiple modes)
- High-D → TPE (scales best)
- Default → TPE (safe general-purpose)
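That decision tree can be sketched as a cascade of guards in priority order. The thresholds (0.3 noise, 10 dimensions, 0.5 correlation) and the flat tuple return are illustrative assumptions; the shipped selector also returns a confidence score and sampler config.

```python
def recommend_strategy(landscape: dict, n_params: int) -> tuple:
    """Priority cascade: noise first, then modality, then dimensionality."""
    correlations = landscape.get("parameter_correlation", {})
    strongest = max((abs(c) for c in correlations.values()), default=0.0)

    if landscape.get("noise_level", 0.0) > 0.3:
        return "tpe", "High noise - TPE is robust"
    if landscape.get("multimodal", False):
        return "tpe", "Multimodal - TPE handles multiple modes"
    if n_params > 10:
        return "tpe", "High-dimensional - TPE scales best"
    if landscape.get("landscape_type") == "smooth_unimodal":
        if strongest >= 0.5:
            return "cmaes", "Smooth unimodal with strong correlation - fast local convergence"
        return "gp_bo", "Smooth low-dimensional - sample efficient"
    return "tpe", "Default safe general-purpose choice"
```

The guard order encodes the logic above: robustness concerns (noise, modality, dimensionality) veto the faster local methods before any smooth-landscape specialization is considered.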
3. Strategy Portfolio Manager (optimization_engine/strategy_portfolio.py)
Manages dynamic strategy switching:
```python
from optimization_engine.strategy_portfolio import StrategyTransitionManager

manager = StrategyTransitionManager(
    stagnation_window=10,
    min_improvement_threshold=0.001,
    verbose=True,
    tracking_dir=study_dir / "intelligent_optimizer"
)

# Checks for switching conditions
should_switch, reason = manager.should_switch_strategy(study, landscape)

# Executes transition with logging
manager.execute_strategy_switch(
    study, from_strategy='tpe', to_strategy='cmaes',
    reason='Stagnation detected', trial_number=45
)
```
Switching Triggers:
- Stagnation: <0.1% improvement over 10 trials
- Thrashing: High variance without improvement
- Strategy exhaustion: Algorithm reached theoretical limit
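The stagnation trigger reduces to a relative-improvement check on the running best. A minimal sketch (the function name is illustrative; the manager operates on `study.trials` rather than a plain list):

```python
def is_stagnating(best_values, window=10, min_relative_improvement=0.001):
    """Stagnation: best value improved by <0.1% over the last `window` trials.

    best_values[i] is the running best after trial i (minimization).
    """
    if len(best_values) <= window:
        return False  # not enough trials to judge
    old_best = best_values[-window - 1]
    new_best = best_values[-1]
    if old_best == 0:
        return new_best >= old_best  # avoid division by zero
    return (old_best - new_best) / abs(old_best) < min_relative_improvement
```

Using the *running best* rather than per-trial objectives makes the check robust to exploratory trials that are deliberately far from the optimum.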
4. Intelligent Optimizer Orchestrator (optimization_engine/intelligent_optimizer.py)
Main entry point coordinating all components:
```python
from pathlib import Path
from optimization_engine.intelligent_optimizer import IntelligentOptimizer

optimizer = IntelligentOptimizer(
    study_name="my_study",
    study_dir=Path("studies/my_study/2_results"),
    config=opt_config,
    verbose=True
)

results = optimizer.optimize(
    objective_function=objective,
    design_variables={'thickness': (2, 10), 'diameter': (50, 150)},
    n_trials=100,
    target_value=115.0,
    tolerance=0.1
)
# Returns comprehensive results with strategy performance breakdown
```
Configuration
File: 1_setup/optimization_config.json
```json
{
  "intelligent_optimization": {
    "enabled": true,
    "characterization_trials": 15,
    "stagnation_window": 10,
    "min_improvement_threshold": 0.001,
    "min_analysis_trials": 10,
    "reanalysis_interval": 15
  },
  "sampler": {
    "type": "intelligent",
    "fallback": "TPESampler"
  }
}
```
Parameters:
- `enabled`: Enable Protocol 10 (default: true)
- `characterization_trials`: Initial exploration trials (default: 15)
- `stagnation_window`: Trials to check for stagnation (default: 10)
- `min_improvement_threshold`: Minimum relative improvement (default: 0.001 = 0.1%)
- `min_analysis_trials`: Minimum trials for landscape analysis (default: 10)
- `reanalysis_interval`: Re-analyze landscape every N trials (default: 15)
Tracking and Transparency
Location: studies/{study_name}/2_results/intelligent_optimizer/
Files Generated:
1. `strategy_transitions.json` - All strategy switches

   ```json
   [{
     "trial_number": 45,
     "from_strategy": "tpe",
     "to_strategy": "cmaes",
     "reason": "Stagnation detected - <0.1% improvement in 10 trials",
     "best_value_at_switch": 0.485,
     "timestamp": "2025-11-19T14:23:15"
   }]
   ```

2. `strategy_performance.json` - Performance breakdown by strategy

   ```json
   {
     "strategies": {
       "tpe": {
         "trials_used": 45,
         "best_value_achieved": 0.485,
         "improvement_rate": 0.032
       },
       "cmaes": {
         "trials_used": 55,
         "best_value_achieved": 0.185,
         "improvement_rate": 0.055
       }
     }
   }
   ```

3. `intelligence_report.json` - Complete decision log
   - Landscape analysis at each interval
   - Strategy recommendations with reasoning
   - Confidence metrics over time
   - All transition events
Console Output
During Optimization:
```text
======================================================================
STAGE 1: LANDSCAPE CHARACTERIZATION
======================================================================
Trial #10: Objective = 5.234
Trial #15: Objective = 3.456

======================================================================
LANDSCAPE ANALYSIS REPORT
======================================================================
Total Trials Analyzed: 15
Dimensionality: 2 parameters

LANDSCAPE CHARACTERISTICS:
  Type: SMOOTH_UNIMODAL
  Smoothness: 0.78 (smooth)
  Multimodal: NO (1 modes)
  Noise Level: 0.08 (low)

PARAMETER CORRELATIONS:
  inner_diameter: +0.652 (strong positive)
  plate_thickness: -0.543 (strong negative)
======================================================================

======================================================================
STAGE 2: STRATEGY SELECTION
======================================================================

======================================================================
STRATEGY RECOMMENDATION
======================================================================
Recommended: CMAES
Confidence: 92.0%
Reasoning: Smooth unimodal with strong correlation - CMA-ES converges quickly
======================================================================

======================================================================
STAGE 3: ADAPTIVE OPTIMIZATION
======================================================================
Trial #25: Objective = 1.234
Trial #45: Objective = 0.485

======================================================================
STRATEGY TRANSITION
======================================================================
Trial #45
TPE → CMAES
Reason: Stagnation detected - <0.1% improvement in 10 trials
Best value at transition: 0.485
======================================================================

Trial #55: Objective = 0.350
Trial #75: Objective = 0.185

======================================================================
OPTIMIZATION COMPLETE
======================================================================
Protocol: Protocol 10: Intelligent Multi-Strategy Optimization
Total Trials: 100
Best Value: 0.185 (Trial #98)
Final Strategy: CMAES
Strategy Transitions: 1
  Trial #45: tpe → cmaes

Best Parameters:
  inner_diameter: 124.486
  plate_thickness: 5.072
======================================================================
```
Report Integration
Markdown Report Section (auto-generated):
```markdown
## Intelligent Multi-Strategy Optimization (Protocol 10)

This optimization used **Protocol 10**, Atomizer's intelligent self-tuning framework.

### Problem Characteristics

Based on initial exploration (15 trials), the problem was classified as:

- **Landscape Type**: Smooth Unimodal
- **Smoothness**: 0.78 (smooth, well-behaved)
- **Multimodality**: Single optimum (no competing solutions)
- **Parameter Correlation**: Strong (0.65 overall)

### Strategy Selection

**Recommended Strategy**: CMA-ES (Covariance Matrix Adaptation)

- **Confidence**: 92%
- **Reasoning**: Smooth unimodal landscape with strong parameter correlation - CMA-ES will converge quickly

### Strategy Performance

| Strategy | Trials | Best Value | Improvement/Trial |
|----------|--------|------------|-------------------|
| TPE      | 45     | 0.485      | 0.032             |
| CMA-ES   | 55     | 0.185      | 0.055             |

**Transition Event (Trial #45)**: Switched from TPE to CMA-ES due to stagnation detection.

This automatic adaptation achieved **2.6x faster** convergence rate with CMA-ES compared to TPE.
```
Integration with Existing Protocols
Protocol 10 + Protocol 8 (Adaptive Surrogate):
- Landscape analyzer provides data for confidence calculation
- Confidence metrics inform strategy switching decisions
- Phase transitions tracked alongside strategy transitions
Protocol 10 + Protocol 9 (Optuna Visualizations):
- Optuna plots show different strategy regions
- Parameter importance analysis validates landscape classification
- Slice plots confirm smoothness/multimodality assessment
Algorithm Portfolio
Available Strategies:
1. TPE (Tree-structured Parzen Estimator) - Default safe choice
   - Strengths: Robust, handles multimodality, scales to ~50D
   - Weaknesses: Slower convergence on smooth problems
   - Best for: General purpose, multimodal, rugged landscapes

2. CMA-ES (Covariance Matrix Adaptation) - Fast local optimizer
   - Strengths: Very fast convergence, handles parameter correlation
   - Weaknesses: Needs good initialization, poor for multimodal
   - Best for: Smooth unimodal, correlated parameters, final refinement

3. GP-BO (Gaussian Process Bayesian Optimization) - Sample efficient
   - Strengths: Excellent for expensive evaluations, good uncertainty estimates
   - Weaknesses: Scales poorly beyond ~10D, expensive surrogate training
   - Best for: Smooth landscapes, low-dimensional, expensive simulations

4. Random - Baseline exploration
   - Strengths: No assumptions, good for initial characterization
   - Weaknesses: No learning, very slow convergence
   - Best for: Initial exploration only

5. Hybrid GP→CMA-ES (Future) - Best of both worlds
   - GP finds the promising basin, CMA-ES refines locally
   - Requires transition logic (planned)
Critical Implementation Details
Study-Aware Design:
- All components use `study.trials`, not session history
- Supports interrupted/resumed optimization
- State persisted to JSON files
Callback Integration:
```python
# In generated runner
from optimization_engine.intelligent_optimizer import create_intelligent_optimizer

optimizer = create_intelligent_optimizer(
    study_name=study_name,
    study_dir=results_dir,
    verbose=True
)

results = optimizer.optimize(
    objective_function=objective,
    design_variables=design_vars,
    n_trials=100
)
```
Fallback Behavior:
If `intelligent_optimization.enabled` is `false`, Atomizer falls back to standard TPE optimization.
When to Use Protocol 10
ALWAYS use for:
- New problem types (unknown landscape)
- Production optimization runs (want best performance)
- Studies with >50 trial budget (enough for characterization)
Consider disabling for:
- Quick tests (<20 trials)
- Known problem types with proven strategy
- Debugging/development work
Future Enhancements
Planned Features:
- Transfer Learning: Build database of landscape → strategy mappings
- Multi-Armed Bandit: Thompson sampling for strategy portfolio
- Hybrid Strategies: Automatic GP→CMA-ES transitions
- Parallel Strategy Testing: Run multiple strategies concurrently
- Meta-Learning: Learn switching thresholds from historical data
VERSION HISTORY
- v1.3 (2025-11-19): Added Protocol 10 - Intelligent Multi-Strategy Optimization
  - Automatic landscape characterization
  - Intelligent strategy selection with decision tree
  - Dynamic strategy switching with stagnation detection
  - Comprehensive tracking and transparency
  - Four new modules: landscape_analyzer, strategy_selector, strategy_portfolio, intelligent_optimizer
  - Full integration with Protocols 8 and 9

- v1.2 (2025-11-19): Overhauled adaptive optimization
  - Fixed critical study-aware bug in confidence tracking
  - Added phase transition persistence and reporting
  - Added confidence progression visualization
  - Integrated Optuna professional visualization library
  - Updated all protocols with study-aware architecture

- v1.1 (2025-11-19): Added extensibility architecture
  - Hook system specification
  - Extractor library architecture
  - Plugin system design
  - API-ready architecture
  - Configuration hierarchy
  - Future feature foundations

- v1.0 (2025-11-19): Initial protocol established
  - Intelligent benchmarking workflow
  - Solution discovery and matching
  - Target-matching objective functions
  - Markdown reports with graphs
  - UTF-8 encoding standards
  - Folder structure standards
END OF PROTOCOL
This protocol must be followed for ALL Atomizer operations. Deviations require explicit documentation and user approval.