
677 lines
17 KiB
Markdown
Raw Blame History

This file contains invisible Unicode characters
This file contains invisible Unicode characters that are indistinguishable to humans but may be processed differently by a computer. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.
This file contains Unicode characters that might be confused with other characters. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.
# Atomizer Optimization Workflow
> **Complete guide to running professional optimization campaigns with Atomizer**
>
> **Version**: 2.0 (Mandatory Benchmarking)
> **Last Updated**: 2025-11-17
---
## Overview
Atomizer enforces a professional, rigorous workflow for all optimization studies:
1. **Problem Definition** - User describes the engineering problem
2. **Benchmarking** (MANDATORY) - Discover, validate, propose
3. **Configuration** - User refines based on benchmark proposals
4. **Integration Testing** - Validate pipeline with 2-3 trials
5. **Full Optimization** - Run complete campaign
6. **Reporting** - Generate comprehensive results documentation

**Key Innovation**: Mandatory benchmarking ensures every study starts with a solid foundation.
---
## Phase 1: Problem Definition & Study Creation
### User Provides Problem Description
**Example**:
```
Optimize cantilevered beam with hole to minimize weight while
maintaining structural integrity.
Goals:
- Minimize: Total mass
- Constraints: Max stress < 150 MPa, Max deflection < 5 mm
Design Variables:
- Beam thickness: 5-15 mm
- Hole diameter: 20-60 mm
- Hole position from fixed end: 100-300 mm
Loading: 1000 N downward at free end
Material: Steel (yield 300 MPa)
```
### Atomizer Creates Study Structure
```python
from optimization_engine.study_creator import StudyCreator
from pathlib import Path

# Create study
creator = StudyCreator()
study_dir = creator.create_study(
    study_name="cantilever_beam_optimization",
    description="Minimize weight with stress and deflection constraints"
)
```
**Result**:
```
studies/cantilever_beam_optimization/
├── model/                   # Place NX files here
├── substudies/
│   └── benchmarking/        # Mandatory first substudy
├── config/                  # Configuration templates
├── plugins/                 # Study-specific hooks
├── results/                 # Optimization results
├── study_metadata.json      # Study tracking
└── README.md                # Study-specific guide
```
**Next**: User adds NX model files to `model/` directory
---
## Phase 2: Benchmarking (MANDATORY)
### Purpose
Benchmarking is a **mandatory first step** that:
1. **Discovers** what's in your model (expressions, OP2 contents)
2. **Validates** the pipeline works (simulation, extraction)
3. **Proposes** initial configuration (design variables, extractors, objectives)
4. **Gates** optimization (must pass before substudies can be created)
### Run Benchmarking
```python
# After placing NX files in model/ directory
results = creator.run_benchmarking(
    study_dir=study_dir,
    prt_file=study_dir / "model" / "beam.prt",
    sim_file=study_dir / "model" / "beam_sim1.sim"
)
```
### What Benchmarking Does
**Step 1: Model Introspection**
- Reads all expressions from `.prt` file
- Lists parameter names, values, units
- Identifies potential design variables

**Example Output**:
```
Found 5 expressions:
- thickness: 10.0 mm
- hole_diameter: 40.0 mm
- hole_position: 200.0 mm
- beam_length: 500.0 mm (likely constant)
- applied_load: 1000.0 N (likely constant)
```

**Step 2: Baseline Simulation**
- Runs NX simulation with current parameters
- Validates NX integration works
- Generates baseline OP2 file

**Step 3: OP2 Introspection**
- Analyzes OP2 file contents
- Detects element types (CTETRA, CHEXA, etc.)
- Lists available result types (displacement, stress, etc.)

**Example Output**:
```
OP2 Analysis:
- Element types: CTETRA
- Result types: displacement, stress
- Subcases: [1]
- Nodes: 15234
- Elements: 8912
```
**Step 4: Baseline Results Extraction**
- Extracts key metrics from baseline OP2
- Provides performance reference

**Example Output**:
```
Baseline Performance:
- max_displacement: 0.004523 mm
- max_von_mises: 145.2 MPa
- mass: 2.45 kg (if available)
```
**Step 5: Configuration Proposals**
- Suggests design variables (excluding constants)
- Proposes extractors based on OP2 contents
- Recommends objectives

**Example Output**:
```
Proposed Design Variables:
- thickness: ±20% of 10.0 mm [8.0, 12.0] mm
- hole_diameter: ±20% of 40.0 mm [32.0, 48.0] mm
- hole_position: ±20% of 200.0 mm [160.0, 240.0] mm
Proposed Extractors:
- extract_displacement: Extract displacement from OP2
- extract_solid_stress: Extract stress from CTETRA elements
Proposed Objectives:
- max_displacement (minimize for stiffness)
- max_von_mises (minimize for safety)
- mass (minimize for weight)
```
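The proposed bounds above follow a simple symmetric ±20% rule around each baseline value. A two-line sketch of that rule (the `propose_bounds` helper is illustrative, not part of the Atomizer API):

```python
def propose_bounds(baseline: float, fraction: float = 0.20) -> tuple:
    """Symmetric bounds at +/- fraction around a baseline value."""
    delta = baseline * fraction
    return (baseline - delta, baseline + delta)

print(propose_bounds(10.0))   # thickness      -> (8.0, 12.0)
print(propose_bounds(200.0))  # hole_position  -> (160.0, 240.0)
```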
### Benchmark Report
Results saved to `substudies/benchmarking/BENCHMARK_REPORT.md`:
```markdown
# Benchmarking Report
**Study**: cantilever_beam_optimization
**Date**: 2025-11-17T10:30:00
**Validation**: ✅ PASSED
## Model Introspection
... (complete report)
## Configuration Proposals
... (proposals for user to review)
```
### Validation Status
Benchmarking either **PASSES** or **FAILS**:
- ✅ **PASS**: Simulation works, OP2 extractable, ready for substudies
- ❌ **FAIL**: Issues found, must fix before proceeding

**If failed**: Review errors in report, fix model/simulation issues, re-run benchmarking.
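A minimal sketch of acting on this gate in code. The `validation_passed` and `errors` keys are assumptions about the shape of the benchmarking results dict, not a documented schema:

```python
def check_benchmark_gate(results: dict) -> bool:
    """Return True when benchmarking passed; list the errors otherwise."""
    if results.get("validation_passed"):
        print("Benchmarking PASSED - substudies may be created")
        return True
    print("Benchmarking FAILED - fix the model and re-run:")
    for err in results.get("errors", []):
        print(f"  - {err}")
    return False

# Example with a failing result
check_benchmark_gate({"validation_passed": False,
                      "errors": ["OP2 file contains no stress results"]})
```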
---
## Phase 3: Optimization Configuration
### User Reviews Benchmark Proposals
1. Open `substudies/benchmarking/BENCHMARK_REPORT.md`
2. Review discovered expressions
3. Review proposed design variables
4. Review proposed objectives
5. Decide on final configuration
### User Provides Official Guidance
Based on benchmark proposals, user specifies:
```python
# User's final configuration
config = {
    "design_variables": [
        {"parameter": "thickness", "min": 5.0, "max": 15.0, "units": "mm"},
        {"parameter": "hole_diameter", "min": 20.0, "max": 60.0, "units": "mm"},
        {"parameter": "hole_position", "min": 100.0, "max": 300.0, "units": "mm"}
    ],
    "objectives": {
        "primary": "mass",  # Minimize weight
        "direction": "minimize"
    },
    "constraints": [
        {"metric": "max_von_mises", "limit": 150.0, "type": "less_than"},
        {"metric": "max_displacement", "limit": 5.0, "type": "less_than"}
    ],
    "n_trials": 50
}
```
### Create LLM Workflow
Use API to parse user's natural language request into structured workflow:
```python
import json

from optimization_engine.llm_workflow_analyzer import LLMWorkflowAnalyzer

analyzer = LLMWorkflowAnalyzer(api_key="your-key")
workflow = analyzer.analyze_request("""
Minimize mass while ensuring:
- Max stress < 150 MPa
- Max deflection < 5 mm
Design variables: thickness (5-15mm), hole_diameter (20-60mm), hole_position (100-300mm)
Material: Steel with yield strength 300 MPa
""")

# Save workflow
with open(study_dir / "config" / "llm_workflow.json", "w") as f:
    json.dump(workflow, f, indent=2)
```
---
## Phase 4: Substudy Creation
### Create Substudies
Atomizer uses auto-numbering: `substudy_1`, `substudy_2`, etc.
```python
# Create first substudy (substudy_1)
substudy_1 = creator.create_substudy(
    study_dir=study_dir,
    config=config  # Optional, uses benchmark proposals if not provided
)

# Or with custom name
substudy_coarse = creator.create_substudy(
    study_dir=study_dir,
    substudy_name="coarse_exploration",
    config=coarse_config
)
```
**Auto-numbering**:
- First substudy: `substudy_1`
- Second substudy: `substudy_2`
- Third substudy: `substudy_3`
- etc.

**Custom naming** (optional):
- User can override with meaningful names
- `coarse_exploration`, `fine_tuning`, `robustness_check`, etc.
### Pre-Check Validation
**IMPORTANT**: Before ANY substudy runs, it validates against benchmarking:
1. Checks benchmarking completed
2. Verifies design variables exist in model
3. Validates extractors match OP2 contents
4. Ensures configuration is compatible

If validation fails, the study creator raises an error with a clear message.
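The four checks can be sketched roughly as below. This is an illustrative stand-in for the real StudyCreator pre-check, and the `expressions` / `result_types` keys on the benchmark record are assumptions:

```python
def precheck_substudy(config: dict, benchmark: dict) -> None:
    """Raise ValueError when the config is incompatible with benchmarking."""
    # Every design variable must match an expression discovered in the model
    known = set(benchmark["expressions"])
    for dv in config.get("design_variables", []):
        if dv["parameter"] not in known:
            raise ValueError(f"Design variable not in model: {dv['parameter']}")
    # Every extractor must match a result type found in the baseline OP2
    available = set(benchmark["result_types"])
    for ext in config.get("extractors", []):
        if ext["result_type"] not in available:
            raise ValueError(f"Extractor needs missing result: {ext['result_type']}")

benchmark = {"expressions": ["thickness", "hole_diameter", "hole_position"],
             "result_types": ["displacement", "stress"]}
precheck_substudy({"design_variables": [{"parameter": "thickness"}]}, benchmark)
print("pre-check passed")
```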
---
## Phase 5: Integration Testing
### Purpose
Run 2-3 trials to validate complete pipeline before full optimization.
### Integration Test Substudy
Create special integration test substudy:
```python
integration_config = {
    "n_trials": 3,  # Just 3 trials for validation
    # ... same design variables and extractors
}

integration_dir = creator.create_substudy(
    study_dir=study_dir,
    substudy_name="integration_test",
    config=integration_config
)
```
### Run Integration Test
```python
from optimization_engine.llm_optimization_runner import LLMOptimizationRunner

# ... setup model_updater and simulation_runner ...
runner = LLMOptimizationRunner(
    llm_workflow=workflow,
    model_updater=model_updater,
    simulation_runner=simulation_runner,
    study_name="integration_test",
    output_dir=integration_dir
)
results = runner.run_optimization(n_trials=3)
```
### Validate Results
Check:
- ✅ All 3 trials completed successfully
- ✅ NX simulations ran
- ✅ OP2 extraction worked
- ✅ Calculations executed
- ✅ Objective function computed
- ✅ Constraints evaluated

**Example Output**:
```
Integration Test Results (3 trials):
Trial 1: thickness=7.5mm, hole_dia=40mm, mass=2.1kg, max_stress=152MPa ❌ (constraint violated)
Trial 2: thickness=10mm, hole_dia=35mm, mass=2.6kg, max_stress=138MPa ✅
Trial 3: thickness=8mm, hole_dia=45mm, mass=2.2kg, max_stress=148MPa ✅
✅ Pipeline validated - ready for full optimization
```
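The checklist can also be run programmatically over the trial records. The record layout below (`state`, `metrics`) mirrors the example output but is an assumption, not the runner's documented result schema:

```python
STRESS_LIMIT = 150.0  # MPa, from the constraints

# Trial values taken from the example output above
trials = [
    {"state": "COMPLETE", "metrics": {"mass": 2.1, "max_von_mises": 152.0}},
    {"state": "COMPLETE", "metrics": {"mass": 2.6, "max_von_mises": 138.0}},
    {"state": "COMPLETE", "metrics": {"mass": 2.2, "max_von_mises": 148.0}},
]

completed = sum(t["state"] == "COMPLETE" for t in trials)
violations = sum(t["metrics"]["max_von_mises"] >= STRESS_LIMIT for t in trials)
print(f"{completed}/{len(trials)} trials completed, {violations} constraint violation(s)")
# -> 3/3 trials completed, 1 constraint violation(s)
```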
---
## Phase 6: Full Optimization
### Create Optimization Substudy
```python
# Create main optimization substudy
main_config = {
    "n_trials": 50,  # Full campaign
    # ... design variables, objectives, constraints
}

main_dir = creator.create_substudy(
    study_dir=study_dir,
    config=main_config  # substudy_1 (auto-numbered)
)
```
### Run Optimization
```python
runner = LLMOptimizationRunner(
    llm_workflow=workflow,
    model_updater=model_updater,
    simulation_runner=simulation_runner,
    study_name="substudy_1",
    output_dir=main_dir
)
results = runner.run_optimization(n_trials=50)
```
### Monitor Progress
Live tracking with incremental history:
```python
# Check progress in real-time
history_file = main_dir / "optimization_history_incremental.json"
# Updates after each trial
```
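A minimal polling sketch. It writes a stand-in history file so the snippet is self-contained, and assumes the file holds a JSON list of per-trial records with a `value` field (the schema is illustrative):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    history_file = Path(tmp) / "optimization_history_incremental.json"
    # Stand-in for what the runner writes after each trial:
    history_file.write_text(json.dumps([
        {"trial": 1, "value": 2.45},
        {"trial": 2, "value": 2.31},
    ]))

    history = json.loads(history_file.read_text())
    best = min(t["value"] for t in history)
    print(f"{len(history)} trials so far, best objective = {best}")
```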
---
## Phase 7: Comprehensive Reporting
### Auto-Generated Report
Atomizer generates complete documentation:
#### Executive Summary
- Best design found
- Objective value achieved
- All constraints satisfied?
- Performance vs baseline
- Improvement percentage
#### Optimization History
- Convergence plots (objective vs trial)
- Parameter evolution plots
- Constraint violation tracking
- Pareto frontier (if multi-objective)
#### Best Design Analysis
- Design variable values
- All result metrics
- Stress contour plots (if available)
- Comparison table (baseline vs optimized)
#### Technical Documentation
- Complete workflow specification
- All extractors used (with code)
- All hooks used (with code + references)
- Calculations performed (with formulas)
- Software versions, timestamps
#### Reproducibility Package
- Exact configuration files
- Random seeds used
- Environment specifications
- Instructions to reproduce
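The improvement percentage in the executive summary is simply the relative change against the benchmarking baseline; the masses below are illustrative values:

```python
baseline_mass = 2.45   # kg, from the benchmarking baseline run
optimized_mass = 1.98  # kg, best feasible design (example value)

improvement = (baseline_mass - optimized_mass) / baseline_mass * 100
print(f"Mass reduced by {improvement:.1f}% vs baseline")
# -> Mass reduced by 19.2% vs baseline
```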
### Report Location
```
studies/cantilever_beam_optimization/
└── substudies/
    └── substudy_1/
        ├── OPTIMIZATION_REPORT.md    # Main report
        ├── plots/                    # All visualizations
        │   ├── convergence.png
        │   ├── parameter_evolution.png
        │   └── pareto_frontier.png
        ├── config/                   # Configurations used
        ├── code/                     # All generated code
        └── results/                  # Raw data
```
---
## Advanced Features
### Multiple Substudies (Continuation)
Build on previous results:
```python
# Substudy 1: Coarse exploration
coarse_config = {
    "n_trials": 20,
    "design_variables": [
        {"parameter": "thickness", "min": 5.0, "max": 15.0}
    ]
}
coarse_dir = creator.create_substudy(study_dir, config=coarse_config)  # substudy_1

# Substudy 2: Fine-tuning (narrow ranges based on substudy_1)
fine_config = {
    "n_trials": 50,
    "continuation": {
        "enabled": True,
        "from_substudy": "substudy_1",
        "inherit_best_params": True
    },
    "design_variables": [
        {"parameter": "thickness", "min": 8.0, "max": 12.0}  # Narrowed
    ]
}
fine_dir = creator.create_substudy(study_dir, config=fine_config)  # substudy_2
```
### Custom Hooks
Add study-specific post-processing:
```python
# Create custom hook in plugins/post_calculation/
custom_hook = study_dir / "plugins" / "post_calculation" / "custom_constraint.py"
# ... write hook code ...
# Hook automatically loaded by LLMOptimizationRunner
```
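As a hypothetical example, a hook that derives a safety-factor metric from the extracted stress. The hook signature (a plain function that receives and returns the metrics dict) is an assumption about the plugin interface, not its documented contract:

```python
# plugins/post_calculation/custom_constraint.py (illustrative content)

def post_calculation_hook(metrics: dict) -> dict:
    """Add a safety-factor metric derived from the extracted stress."""
    yield_strength = 300.0  # MPa, steel from the problem definition
    metrics["safety_factor"] = yield_strength / metrics["max_von_mises"]
    return metrics

result = post_calculation_hook({"max_von_mises": 145.2})
print(f"safety_factor = {result['safety_factor']:.2f}")  # safety_factor = 2.07
```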
---
## Complete Example
### Step-by-Step Beam Optimization
```python
from pathlib import Path
from optimization_engine.study_creator import StudyCreator
from optimization_engine.llm_workflow_analyzer import LLMWorkflowAnalyzer
from optimization_engine.llm_optimization_runner import LLMOptimizationRunner
from optimization_engine.nx_updater import NXParameterUpdater
from optimization_engine.nx_solver import NXSolver

# 1. Create study
creator = StudyCreator()
study_dir = creator.create_study(
    "cantilever_beam_opt",
    "Minimize weight with stress/deflection constraints"
)

# 2. User adds NX files to model/ directory
# (manual step)

# 3. Run benchmarking
prt_file = study_dir / "model" / "beam.prt"
sim_file = study_dir / "model" / "beam_sim1.sim"
benchmark_results = creator.run_benchmarking(study_dir, prt_file, sim_file)

# 4. Review benchmark report
# Open substudies/benchmarking/BENCHMARK_REPORT.md

# 5. User provides configuration based on proposals
user_request = """
Minimize mass while ensuring:
- Max stress < 150 MPa
- Max deflection < 5 mm
Design variables:
- thickness: 5-15 mm
- hole_diameter: 20-60 mm
- hole_position: 100-300 mm
Material: Steel (yield 300 MPa)
Use TPE algorithm with 50 trials.
"""

# 6. Generate LLM workflow
analyzer = LLMWorkflowAnalyzer(api_key="your-key")
workflow = analyzer.analyze_request(user_request)

# 7. Create integration test substudy
integration_dir = creator.create_substudy(
    study_dir,
    substudy_name="integration_test"
)

# 8. Setup model updater and solver
updater = NXParameterUpdater(prt_file_path=prt_file)

def model_updater(design_vars):
    updater.update_expressions(design_vars)
    updater.save()

solver = NXSolver(nastran_version='2412', use_journal=True)

def simulation_runner(design_vars):
    result = solver.run_simulation(sim_file, expression_updates=design_vars)
    return result['op2_file']

# 9. Run integration test (3 trials)
integration_runner = LLMOptimizationRunner(
    workflow, model_updater, simulation_runner,
    "integration_test", integration_dir
)
integration_results = integration_runner.run_optimization(n_trials=3)

# 10. Validate integration results
# (check that all 3 trials passed)

# 11. Create main optimization substudy
main_dir = creator.create_substudy(study_dir)  # auto: substudy_1

# 12. Run full optimization (50 trials)
main_runner = LLMOptimizationRunner(
    workflow, model_updater, simulation_runner,
    "substudy_1", main_dir
)
final_results = main_runner.run_optimization(n_trials=50)

# 13. Generate comprehensive report
# (auto-generated in substudy_1/OPTIMIZATION_REPORT.md)

# 14. Review results and archive
print(f"Best design: {final_results['best_params']}")
print(f"Best objective: {final_results['best_value']}")
```
---
## Benefits of This Workflow
### For Users
1. **No Surprises**: Benchmarking validates everything before optimization
2. **Clear Guidance**: Proposals help configure optimization correctly
3. **Professional Results**: Comprehensive reports with all documentation
4. **Reproducible**: Complete specification of what was done
5. **Traceable**: All code, formulas, references documented
### For Engineering Rigor
1. **Mandatory Validation**: Can't run optimization on broken setup
2. **Complete Documentation**: Every step is recorded
3. **Audit Trail**: Know exactly what was optimized and how
4. **Scientific Standards**: References, formulas, validation included
### For Collaboration
1. **Standard Structure**: Everyone uses same workflow
2. **Self-Documenting**: Reports explain the study
3. **Easy Handoff**: Complete package for review/approval
4. **Knowledge Capture**: Lessons learned documented
---
## Troubleshooting
### Benchmarking Fails
**Problem**: Benchmarking validation fails

**Solutions**:
1. Check NX model files are correct
2. Ensure simulation setup is valid
3. Review error messages in benchmark report
4. Fix model issues and re-run benchmarking
### Substudy Creation Blocked
**Problem**: Cannot create substudy

**Cause**: Benchmarking not completed

**Solution**: Run benchmarking first!
### Pipeline Validation Fails
**Problem**: Integration test trials fail

**Solutions**:
1. Check design variable ranges (too narrow/wide?)
2. Verify extractors match OP2 contents
3. Ensure constraints are feasible
4. Review error logs from failed trials
---
## References
- [BenchmarkingSubstudy](../optimization_engine/benchmarking_substudy.py) - Discovery and validation
- [StudyCreator](../optimization_engine/study_creator.py) - Study management
- [LLMOptimizationRunner](../optimization_engine/llm_optimization_runner.py) - Optimization execution
- [Phase 3.3 Wizard](../optimization_engine/optimization_setup_wizard.py) - Introspection tools
---
**Document Maintained By**: Antoine Letarte

**Last Updated**: 2025-11-17

**Version**: 2.0 (Mandatory Benchmarking Workflow)