# Atomizer Optimization Workflow

Complete guide to running professional optimization campaigns with Atomizer.

**Version**: 2.0 (Mandatory Benchmarking) | **Last Updated**: 2025-11-17
## Overview

Atomizer enforces a professional, rigorous workflow for all optimization studies:

1. **Problem Definition** - User describes the engineering problem
2. **Benchmarking (MANDATORY)** - Discover, validate, propose
3. **Configuration** - User refines based on benchmark proposals
4. **Integration Testing** - Validate the pipeline with 2-3 trials
5. **Full Optimization** - Run the complete campaign
6. **Reporting** - Generate comprehensive results documentation

**Key Innovation**: Mandatory benchmarking ensures every study starts with a solid foundation.
## Phase 1: Problem Definition & Study Creation

### User Provides Problem Description

Example:

```
Optimize a cantilevered beam with a hole to minimize weight while
maintaining structural integrity.

Goals:
- Minimize: Total mass
- Constraints: Max stress < 150 MPa, Max deflection < 5 mm

Design Variables:
- Beam thickness: 5-15 mm
- Hole diameter: 20-60 mm
- Hole position from fixed end: 100-300 mm

Loading: 1000 N downward at free end
Material: Steel (yield 300 MPa)
```
### Atomizer Creates Study Structure

```python
from optimization_engine.study_creator import StudyCreator
from pathlib import Path

# Create study
creator = StudyCreator()
study_dir = creator.create_study(
    study_name="cantilever_beam_optimization",
    description="Minimize weight with stress and deflection constraints"
)
```

Result:

```
studies/cantilever_beam_optimization/
    model/                  # Place NX files here
    substudies/
        benchmarking/       # Mandatory first substudy
    config/                 # Configuration templates
    plugins/                # Study-specific hooks
    results/                # Optimization results
    study_metadata.json     # Study tracking
    README.md               # Study-specific guide
```

**Next**: User adds NX model files to the `model/` directory.
## Phase 2: Benchmarking (MANDATORY)

### Purpose

Benchmarking is a mandatory first step that:

- **Discovers** what's in your model (expressions, OP2 contents)
- **Validates** that the pipeline works (simulation, extraction)
- **Proposes** an initial configuration (design variables, extractors, objectives)
- **Gates** optimization (must pass before substudies can be created)

### Run Benchmarking

```python
# After placing NX files in the model/ directory
results = creator.run_benchmarking(
    study_dir=study_dir,
    prt_file=study_dir / "model" / "beam.prt",
    sim_file=study_dir / "model" / "beam_sim1.sim"
)
```
### What Benchmarking Does

**Step 1: Model Introspection**

- Reads all expressions from the `.prt` file
- Lists parameter names, values, units
- Identifies potential design variables

Example output:

```
Found 5 expressions:
- thickness: 10.0 mm
- hole_diameter: 40.0 mm
- hole_position: 200.0 mm
- beam_length: 500.0 mm (likely constant)
- applied_load: 1000.0 N (likely constant)
```
**Step 2: Baseline Simulation**

- Runs an NX simulation with the current parameters
- Validates that the NX integration works
- Generates the baseline OP2 file

**Step 3: OP2 Introspection**

- Analyzes the OP2 file contents
- Detects element types (CTETRA, CHEXA, etc.)
- Lists available result types (displacement, stress, etc.)

Example output:

```
OP2 Analysis:
- Element types: CTETRA
- Result types: displacement, stress
- Subcases: [1]
- Nodes: 15234
- Elements: 8912
```

**Step 4: Baseline Results Extraction**

- Extracts key metrics from the baseline OP2
- Provides a performance reference

Example output:

```
Baseline Performance:
- max_displacement: 0.004523 mm
- max_von_mises: 145.2 MPa
- mass: 2.45 kg (if available)
```
**Step 5: Configuration Proposals**

- Suggests design variables (excluding constants)
- Proposes extractors based on OP2 contents
- Recommends objectives

Example output:

```
Proposed Design Variables:
- thickness: ±20% of 10.0 mm → [8.0, 12.0] mm
- hole_diameter: ±20% of 40.0 mm → [32.0, 48.0] mm
- hole_position: ±20% of 200.0 mm → [160.0, 240.0] mm

Proposed Extractors:
- extract_displacement: Extract displacement from OP2
- extract_solid_stress: Extract stress from CTETRA elements

Proposed Objectives:
- max_displacement (minimize for stiffness)
- max_von_mises (minimize for safety)
- mass (minimize for weight)
```
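The ±20% rule above is a simple heuristic around the baseline value. A minimal sketch of how such bounds are computed (the helper name is hypothetical, not part of the Atomizer API):

```python
def propose_bounds(baseline: float, fraction: float = 0.20) -> tuple[float, float]:
    """Propose a symmetric search range around a baseline parameter value."""
    return (baseline * (1.0 - fraction), baseline * (1.0 + fraction))

# Reproduces the proposals above, e.g. thickness: 10.0 mm -> roughly [8.0, 12.0] mm
thickness_bounds = propose_bounds(10.0)
hole_diameter_bounds = propose_bounds(40.0)
```

In practice you would widen or shift these ranges when the baseline sits near a physical limit (e.g. a minimum manufacturable thickness), which is exactly what the user refinement in Phase 3 is for.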
### Benchmark Report

Results are saved to `substudies/benchmarking/BENCHMARK_REPORT.md`:

```markdown
# Benchmarking Report

**Study**: cantilever_beam_optimization
**Date**: 2025-11-17T10:30:00
**Validation**: PASSED

## Model Introspection
... (complete report)

## Configuration Proposals
... (proposals for user to review)
```
### Validation Status

Benchmarking either passes or fails:

- **PASS**: Simulation works, OP2 is extractable, ready for substudies
- **FAIL**: Issues found, must be fixed before proceeding

If it failed: review the errors in the report, fix the model/simulation issues, and re-run benchmarking.
## Phase 3: Optimization Configuration

### User Reviews Benchmark Proposals

1. Open `substudies/benchmarking/BENCHMARK_REPORT.md`
2. Review the discovered expressions
3. Review the proposed design variables
4. Review the proposed objectives
5. Decide on the final configuration

### User Provides Official Guidance

Based on the benchmark proposals, the user specifies:

```python
# User's final configuration
config = {
    "design_variables": [
        {"parameter": "thickness", "min": 5.0, "max": 15.0, "units": "mm"},
        {"parameter": "hole_diameter", "min": 20.0, "max": 60.0, "units": "mm"},
        {"parameter": "hole_position", "min": 100.0, "max": 300.0, "units": "mm"}
    ],
    "objectives": {
        "primary": "mass",  # Minimize weight
        "direction": "minimize"
    },
    "constraints": [
        {"metric": "max_von_mises", "limit": 150.0, "type": "less_than"},
        {"metric": "max_displacement", "limit": 5.0, "type": "less_than"}
    ],
    "n_trials": 50
}
```
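One common way to fold constraints like these into a single scalar is an exterior penalty. The sketch below is illustrative only — the helper name and penalty weight are assumptions, not the actual Atomizer implementation, and it handles only `"less_than"` constraints with a minimized objective:

```python
def penalized_objective(metrics: dict, config: dict, penalty_weight: float = 1e3) -> float:
    """Combine the primary (minimized) objective with penalties for violated constraints."""
    value = metrics[config["objectives"]["primary"]]
    for c in config["constraints"]:
        violation = metrics[c["metric"]] - c["limit"]  # positive means violated
        if violation > 0:
            value += penalty_weight * violation
    return value

config = {
    "objectives": {"primary": "mass", "direction": "minimize"},
    "constraints": [
        {"metric": "max_von_mises", "limit": 150.0, "type": "less_than"},
        {"metric": "max_displacement", "limit": 5.0, "type": "less_than"},
    ],
}

# Feasible design: the penalized objective is just the mass
feasible = penalized_objective(
    {"mass": 2.6, "max_von_mises": 138.0, "max_displacement": 0.004}, config
)
# Infeasible design: 2 MPa over the stress limit adds a large penalty
infeasible = penalized_objective(
    {"mass": 2.1, "max_von_mises": 152.0, "max_displacement": 0.004}, config
)
```

This makes an infeasible design with lower mass rank worse than a feasible, heavier one, which is the behavior you want from the constrained search.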
### Create LLM Workflow

Use the API to parse the user's natural-language request into a structured workflow:

```python
import json

from optimization_engine.llm_workflow_analyzer import LLMWorkflowAnalyzer

analyzer = LLMWorkflowAnalyzer(api_key="your-key")
workflow = analyzer.analyze_request("""
Minimize mass while ensuring:
- Max stress < 150 MPa
- Max deflection < 5 mm

Design variables: thickness (5-15 mm), hole_diameter (20-60 mm), hole_position (100-300 mm)
Material: Steel with yield strength 300 MPa
""")

# Save the workflow
with open(study_dir / "config" / "llm_workflow.json", "w") as f:
    json.dump(workflow, f, indent=2)
```
## Phase 4: Substudy Creation

### Create Substudies

Atomizer uses auto-numbering: `substudy_1`, `substudy_2`, etc.

```python
# Create the first substudy (substudy_1)
substudy_1 = creator.create_substudy(
    study_dir=study_dir,
    config=config  # Optional; uses benchmark proposals if not provided
)

# Or with a custom name
substudy_coarse = creator.create_substudy(
    study_dir=study_dir,
    substudy_name="coarse_exploration",
    config=coarse_config
)
```

Auto-numbering:

- First substudy: `substudy_1`
- Second substudy: `substudy_2`
- Third substudy: `substudy_3`
- etc.

Custom naming (optional): the user can override with meaningful names such as `coarse_exploration`, `fine_tuning`, `robustness_check`, etc.
### Pre-Check Validation

**IMPORTANT**: Before ANY substudy runs, it is validated against the benchmarking results:

- Checks that benchmarking completed
- Verifies that the design variables exist in the model
- Validates that the extractors match the OP2 contents
- Ensures the configuration is compatible

If validation fails → the study creator raises an error with a clear message.
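The design-variable check boils down to comparing the configured names against what benchmarking discovered. A minimal sketch (the function is hypothetical; the real pre-check lives inside the study creator):

```python
def check_design_variables(config: dict, discovered_expressions: dict) -> list[str]:
    """Return error messages for configured variables missing from the model."""
    errors = []
    for dv in config["design_variables"]:
        name = dv["parameter"]
        if name not in discovered_expressions:
            errors.append(f"Design variable '{name}' not found in model expressions")
    return errors

# Expressions discovered during benchmarking (name -> baseline value)
discovered = {"thickness": 10.0, "hole_diameter": 40.0, "hole_position": 200.0}

ok = check_design_variables(
    {"design_variables": [{"parameter": "thickness", "min": 5.0, "max": 15.0}]}, discovered
)
bad = check_design_variables(
    {"design_variables": [{"parameter": "web_height", "min": 5.0, "max": 15.0}]}, discovered
)
```

A typo like `web_height` is caught here, before a single simulation is wasted.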
## Phase 5: Integration Testing

### Purpose

Run 2-3 trials to validate the complete pipeline before full optimization.

### Integration Test Substudy

Create a special integration-test substudy:

```python
integration_config = {
    "n_trials": 3,  # Just 3 trials for validation
    # ... same design variables and extractors
}

integration_dir = creator.create_substudy(
    study_dir=study_dir,
    substudy_name="integration_test",
    config=integration_config
)
```
### Run Integration Test

```python
from optimization_engine.llm_optimization_runner import LLMOptimizationRunner

# ... setup model_updater and simulation_runner ...

runner = LLMOptimizationRunner(
    llm_workflow=workflow,
    model_updater=model_updater,
    simulation_runner=simulation_runner,
    study_name="integration_test",
    output_dir=integration_dir
)

results = runner.run_optimization(n_trials=3)
```
### Validate Results

Check that:

- All 3 trials completed successfully
- NX simulations ran
- OP2 extraction worked
- Calculations executed
- The objective function was computed
- Constraints were evaluated

Example output:

```
Integration Test Results (3 trials):
Trial 1: thickness=7.5mm, hole_dia=40mm → mass=2.1kg, max_stress=152MPa (constraint violated)
Trial 2: thickness=10mm, hole_dia=35mm → mass=2.6kg, max_stress=138MPa
Trial 3: thickness=8mm, hole_dia=45mm → mass=2.2kg, max_stress=148MPa

Pipeline validated - ready for full optimization
```
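A check like the one above can be scripted rather than eyeballed. The sketch below assumes a hypothetical per-trial record schema (`trial`, `status`, `metrics`) — adapt the field names to whatever your runner actually writes:

```python
def validate_trials(trials: list[dict], constraints: list[dict]) -> dict:
    """Summarize an integration run: completed trials and constraint violations."""
    completed = [t for t in trials if t.get("status") == "completed"]
    violations = []
    for t in completed:
        for c in constraints:
            if t["metrics"][c["metric"]] > c["limit"]:  # "less_than" constraints
                violations.append((t["trial"], c["metric"]))
    return {"n_completed": len(completed), "violations": violations}

constraints = [{"metric": "max_von_mises", "limit": 150.0, "type": "less_than"}]
trials = [
    {"trial": 1, "status": "completed", "metrics": {"max_von_mises": 152.0}},
    {"trial": 2, "status": "completed", "metrics": {"max_von_mises": 138.0}},
    {"trial": 3, "status": "completed", "metrics": {"max_von_mises": 148.0}},
]
summary = validate_trials(trials, constraints)
```

Note that a violated constraint in an integration trial (like Trial 1 above) is not itself a failure — the trials only exist to prove the pipeline runs end to end.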
## Phase 6: Full Optimization

### Create Optimization Substudy

```python
# Create the main optimization substudy
main_config = {
    "n_trials": 50,  # Full campaign
    # ... design variables, objectives, constraints
}

main_dir = creator.create_substudy(
    study_dir=study_dir,
    config=main_config  # substudy_1 (auto-numbered)
)
```

### Run Optimization

```python
runner = LLMOptimizationRunner(
    llm_workflow=workflow,
    model_updater=model_updater,
    simulation_runner=simulation_runner,
    study_name="substudy_1",
    output_dir=main_dir
)

results = runner.run_optimization(n_trials=50)
```
### Monitor Progress

Live tracking with incremental history:

```python
# Check progress in real time
history_file = main_dir / "optimization_history_incremental.json"
# The file is updated after each trial
```
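A small polling script can report the best trial so far while the campaign runs. The JSON schema below (a top-level `"trials"` list with `"objective"` per trial) is an assumption for illustration — inspect your own incremental history file for the actual field names:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def best_so_far(history_file: Path) -> dict:
    """Return the best (lowest-objective) trial recorded so far."""
    trials = json.loads(history_file.read_text())["trials"]
    return min(trials, key=lambda t: t["objective"])

# Simulate a partially written history file for demonstration
with TemporaryDirectory() as tmp:
    history = Path(tmp) / "optimization_history_incremental.json"
    history.write_text(json.dumps({"trials": [
        {"trial": 1, "params": {"thickness": 10.0}, "objective": 2.6},
        {"trial": 2, "params": {"thickness": 8.0}, "objective": 2.2},
    ]}))
    best = best_so_far(history)
```

Because the file is rewritten after each trial, re-running the check gives a live view of convergence without touching the running optimizer.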
## Phase 7: Comprehensive Reporting

### Auto-Generated Report

Atomizer generates complete documentation:

**Executive Summary**

- Best design found
- Objective value achieved
- Whether all constraints were satisfied
- Performance vs. baseline
- Improvement percentage

**Optimization History**

- Convergence plots (objective vs. trial)
- Parameter evolution plots
- Constraint violation tracking
- Pareto frontier (if multi-objective)

**Best Design Analysis**

- Design variable values
- All result metrics
- Stress contour plots (if available)
- Comparison table (baseline vs. optimized)

**Technical Documentation**

- Complete workflow specification
- All extractors used (with code)
- All hooks used (with code + references)
- Calculations performed (with formulas)
- Software versions and timestamps

**Reproducibility Package**

- Exact configuration files
- Random seeds used
- Environment specifications
- Instructions to reproduce
### Report Location

```
studies/cantilever_beam_optimization/
    substudies/
        substudy_1/
            OPTIMIZATION_REPORT.md       # Main report
            plots/                       # All visualizations
                convergence.png
                parameter_evolution.png
                pareto_frontier.png
            config/                      # Configurations used
            code/                        # All generated code
            results/                     # Raw data
```
## Advanced Features

### Multiple Substudies (Continuation)

Build on previous results:

```python
# Substudy 1: Coarse exploration
coarse_config = {
    "n_trials": 20,
    "design_variables": [
        {"parameter": "thickness", "min": 5.0, "max": 15.0}
    ]
}
coarse_dir = creator.create_substudy(study_dir, config=coarse_config)  # substudy_1

# Substudy 2: Fine-tuning (narrow ranges based on substudy_1)
fine_config = {
    "n_trials": 50,
    "continuation": {
        "enabled": True,
        "from_substudy": "substudy_1",
        "inherit_best_params": True
    },
    "design_variables": [
        {"parameter": "thickness", "min": 8.0, "max": 12.0}  # Narrowed
    ]
}
fine_dir = creator.create_substudy(study_dir, config=fine_config)  # substudy_2
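How much to narrow a range for the fine-tuning substudy is a judgment call. One simple recipe — centering a smaller window on the coarse study's best value, clipped to the original bounds — is sketched below (the helper is illustrative, not an Atomizer function):

```python
def narrow_range(best: float, original: tuple[float, float],
                 fraction: float = 0.20) -> tuple[float, float]:
    """Center a narrower range on the best value, clipped to the original bounds."""
    lo, hi = original
    half = fraction * (hi - lo)  # half-width as a fraction of the original span
    return (max(lo, best - half), min(hi, best + half))

# If substudy_1 found the best thickness near 10 mm in the 5-15 mm range:
narrowed = narrow_range(10.0, (5.0, 15.0))  # -> (8.0, 12.0)
```

The clipping matters when the best value sits near a bound: the narrowed window simply stops at the original limit instead of exploring infeasible values.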
### Custom Hooks

Add study-specific post-processing:

```python
# Create a custom hook in plugins/post_calculation/
custom_hook = study_dir / "plugins" / "post_calculation" / "custom_constraint.py"
# ... write hook code ...
# The hook is automatically loaded by LLMOptimizationRunner
```
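What such a hook might contain is sketched below. The `post_calculation(results)` signature and the metric names are assumptions for illustration — match whatever interface your LLMOptimizationRunner actually expects:

```python
# plugins/post_calculation/custom_constraint.py (illustrative sketch)

def post_calculation(results: dict) -> dict:
    """Add a derived safety-factor metric after the standard calculations.

    Assumes `results` carries max_von_mises in MPa; 300 MPa is the yield
    strength of the steel used in this study.
    """
    yield_strength_mpa = 300.0
    results["safety_factor"] = yield_strength_mpa / results["max_von_mises"]
    return results

results = post_calculation({"max_von_mises": 150.0})  # safety_factor -> 2.0
```

Derived metrics added this way appear alongside the extracted ones, so they can be reported or constrained like any other result.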
## Complete Example

### Step-by-Step Beam Optimization

```python
from pathlib import Path

from optimization_engine.study_creator import StudyCreator
from optimization_engine.llm_workflow_analyzer import LLMWorkflowAnalyzer
from optimization_engine.llm_optimization_runner import LLMOptimizationRunner
from optimization_engine.nx_updater import NXParameterUpdater
from optimization_engine.nx_solver import NXSolver

# 1. Create study
creator = StudyCreator()
study_dir = creator.create_study(
    "cantilever_beam_opt",
    "Minimize weight with stress/deflection constraints"
)

# 2. User adds NX files to the model/ directory
# (manual step)

# 3. Run benchmarking
prt_file = study_dir / "model" / "beam.prt"
sim_file = study_dir / "model" / "beam_sim1.sim"
benchmark_results = creator.run_benchmarking(study_dir, prt_file, sim_file)

# 4. Review the benchmark report
# Open substudies/benchmarking/BENCHMARK_REPORT.md

# 5. User provides configuration based on the proposals
user_request = """
Minimize mass while ensuring:
- Max stress < 150 MPa
- Max deflection < 5 mm

Design variables:
- thickness: 5-15 mm
- hole_diameter: 20-60 mm
- hole_position: 100-300 mm

Material: Steel (yield 300 MPa)
Use TPE algorithm with 50 trials.
"""

# 6. Generate the LLM workflow
analyzer = LLMWorkflowAnalyzer(api_key="your-key")
workflow = analyzer.analyze_request(user_request)

# 7. Create the integration-test substudy
integration_dir = creator.create_substudy(
    study_dir,
    substudy_name="integration_test"
)

# 8. Set up the model updater and solver
updater = NXParameterUpdater(prt_file_path=prt_file)

def model_updater(design_vars):
    updater.update_expressions(design_vars)
    updater.save()

solver = NXSolver(nastran_version='2412', use_journal=True)

def simulation_runner(design_vars):
    result = solver.run_simulation(sim_file, expression_updates=design_vars)
    return result['op2_file']

# 9. Run the integration test (3 trials)
integration_runner = LLMOptimizationRunner(
    workflow, model_updater, simulation_runner,
    "integration_test", integration_dir
)
integration_results = integration_runner.run_optimization(n_trials=3)

# 10. Validate the integration results
# (check that all 3 trials passed)

# 11. Create the main optimization substudy
main_dir = creator.create_substudy(study_dir)  # auto: substudy_1

# 12. Run the full optimization (50 trials)
main_runner = LLMOptimizationRunner(
    workflow, model_updater, simulation_runner,
    "substudy_1", main_dir
)
final_results = main_runner.run_optimization(n_trials=50)

# 13. Generate the comprehensive report
# (auto-generated in substudy_1/OPTIMIZATION_REPORT.md)

# 14. Review results and archive
print(f"Best design: {final_results['best_params']}")
print(f"Best objective: {final_results['best_value']}")
```
## Benefits of This Workflow

### For Users

- **No surprises**: Benchmarking validates everything before optimization
- **Clear guidance**: Proposals help configure the optimization correctly
- **Professional results**: Comprehensive reports with all documentation
- **Reproducible**: Complete specification of what was done
- **Traceable**: All code, formulas, and references documented

### For Engineering Rigor

- **Mandatory validation**: Can't run optimization on a broken setup
- **Complete documentation**: Every step is recorded
- **Audit trail**: Know exactly what was optimized and how
- **Scientific standards**: References, formulas, and validation included

### For Collaboration

- **Standard structure**: Everyone uses the same workflow
- **Self-documenting**: Reports explain the study
- **Easy handoff**: Complete package for review/approval
- **Knowledge capture**: Lessons learned are documented
## Troubleshooting

### Benchmarking Fails

**Problem**: Benchmarking validation fails

**Solutions**:

- Check that the NX model files are correct
- Ensure the simulation setup is valid
- Review the error messages in the benchmark report
- Fix the model issues and re-run benchmarking

### Substudy Creation Blocked

**Problem**: Cannot create a substudy

**Cause**: Benchmarking not completed

**Solution**: Run benchmarking first!

### Pipeline Validation Fails

**Problem**: Integration test trials fail

**Solutions**:

- Check the design variable ranges (too narrow/wide?)
- Verify that the extractors match the OP2 contents
- Ensure the constraints are feasible
- Review the error logs from failed trials
## References

- `BenchmarkingSubstudy` - Discovery and validation
- `StudyCreator` - Study management
- `LLMOptimizationRunner` - Optimization execution
- Phase 3.3 Wizard - Introspection tools

---

**Document Maintained By**: Antoine Letarte | **Last Updated**: 2025-11-17 | **Version**: 2.0 (Mandatory Benchmarking Workflow)