Create Optimization Study Skill

Last Updated: November 26, 2025
Version: 2.0 - Protocol Reference + Code Patterns (Centralized)

You are helping the user create a complete Atomizer optimization study from a natural language description.

CRITICAL: This skill is your SINGLE SOURCE OF TRUTH. DO NOT improvise or look at other studies for patterns. Use ONLY the extractors, patterns, and code templates documented here.


Protocol Reference (MUST USE)

This section defines ALL available components. When generating run_optimization.py, use ONLY these documented patterns.

PR.1 Extractor Catalog

| ID | Extractor | Module | Function | Input | Output | Returns |
|----|-----------|--------|----------|-------|--------|---------|
| E1 | Displacement | `optimization_engine.extractors.extract_displacement` | `extract_displacement(op2_file, subcase=1)` | .op2 | mm | `{'max_displacement': float, 'max_disp_node': int, 'max_disp_x/y/z': float}` |
| E2 | Frequency | `optimization_engine.extractors.extract_frequency` | `extract_frequency(op2_file, subcase=1, mode_number=1)` | .op2 | Hz | `{'frequency': float, 'mode_number': int, 'eigenvalue': float, 'all_frequencies': list}` |
| E3 | Von Mises Stress | `optimization_engine.extractors.extract_von_mises_stress` | `extract_solid_stress(op2_file, subcase=1, element_type='cquad4')` | .op2 | MPa | `{'max_von_mises': float, 'max_stress_element': int}` |
| E4 | BDF Mass | `optimization_engine.extractors.bdf_mass_extractor` | `extract_mass_from_bdf(bdf_file)` | .dat/.bdf | kg | `float` (mass in kg) |
| E5 | CAD Expression Mass | `optimization_engine.extractors.extract_mass_from_expression` | `extract_mass_from_expression(prt_file, expression_name='p173')` | .prt + _temp_mass.txt | kg | `float` (mass in kg) |
| E6 | Field Data | `optimization_engine.extractors.field_data_extractor` | `FieldDataExtractor(field_file, result_column, aggregation)` | .fld/.csv | varies | `{'value': float, 'stats': dict}` |
| E7 | Stiffness | `optimization_engine.extractors.stiffness_calculator` | `StiffnessCalculator(field_file, op2_file, force_component, displacement_component)` | .fld + .op2 | N/mm | `{'stiffness': float, 'displacement': float, 'force': float}` |
| E8 | Zernike WFE | `optimization_engine.extractors.extract_zernike` | `extract_zernike_from_op2(op2_file, bdf_file, subcase)` | .op2 + .bdf | nm | `{'global_rms_nm': float, 'filtered_rms_nm': float, 'coefficients': list, ...}` |
| E9 | Zernike Relative | `optimization_engine.extractors.extract_zernike` | `extract_zernike_relative_rms(op2_file, bdf_file, target_subcase, ref_subcase)` | .op2 + .bdf | nm | `{'relative_filtered_rms_nm': float, 'delta_coefficients': list, ...}` |
| E10 | Zernike Helpers | `optimization_engine.extractors.zernike_helpers` | `create_zernike_objective(op2_finder, subcase, metric)` | .op2 | nm | Callable returning metric value |

PR.2 Extractor Code Snippets (COPY-PASTE)

E1: Displacement Extraction

from optimization_engine.extractors.extract_displacement import extract_displacement

disp_result = extract_displacement(op2_file, subcase=1)
max_displacement = disp_result['max_displacement']  # mm

E2: Frequency Extraction

from optimization_engine.extractors.extract_frequency import extract_frequency

freq_result = extract_frequency(op2_file, subcase=1, mode_number=1)
frequency = freq_result['frequency']  # Hz

E3: Stress Extraction

from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress

# For shell elements (CQUAD4, CTRIA3)
stress_result = extract_solid_stress(op2_file, subcase=1, element_type='cquad4')
# For solid elements (CTETRA, CHEXA)
stress_result = extract_solid_stress(op2_file, subcase=1, element_type='ctetra')
max_stress = stress_result['max_von_mises']  # MPa

E4: BDF Mass Extraction

from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf

mass_kg = extract_mass_from_bdf(str(dat_file))  # kg

E5: CAD Expression Mass

from optimization_engine.extractors.extract_mass_from_expression import extract_mass_from_expression

mass_kg = extract_mass_from_expression(model_file, expression_name="p173")  # kg
# Note: Requires _temp_mass.txt to be written by solve journal

E7: Stiffness Calculation (k = F/δ)

# Simple stiffness from displacement
applied_force = 1000.0  # N - MUST MATCH YOUR MODEL'S APPLIED LOAD
stiffness = applied_force / max(abs(max_displacement), 1e-6)  # N/mm

E8: Zernike Wavefront Error Extraction (Telescope Mirrors)

from optimization_engine.extractors.extract_zernike import extract_zernike_from_op2

# Extract Zernike coefficients and RMS metrics for a single subcase
result = extract_zernike_from_op2(
    op2_file,
    bdf_file=None,  # Auto-detect from op2 location
    subcase="20",   # Subcase label (e.g., "20" = 20 deg elevation)
    displacement_unit="mm"
)
global_rms = result['global_rms_nm']       # Total surface RMS in nm
filtered_rms = result['filtered_rms_nm']   # RMS with low orders (piston, tip, tilt, defocus) removed
coefficients = result['coefficients']       # List of 50 Zernike coefficients

E9: Zernike Relative RMS (Between Subcases)

from optimization_engine.extractors.extract_zernike import extract_zernike_relative_rms

# Compare wavefront error between subcases (e.g., 40 deg vs 20 deg reference)
result = extract_zernike_relative_rms(
    op2_file,
    bdf_file=None,
    target_subcase="40",      # Target orientation
    reference_subcase="20",   # Reference (usually polishing orientation)
    displacement_unit="mm"
)
relative_rms = result['relative_filtered_rms_nm']  # Differential WFE in nm
delta_coeffs = result['delta_coefficients']        # Coefficient differences

E10: Zernike Objective Builder (Multi-Subcase Optimization)

from optimization_engine.extractors.zernike_helpers import ZernikeObjectiveBuilder

# Build objectives for multiple subcases in one extractor
builder = ZernikeObjectiveBuilder(
    op2_finder=lambda: model_dir / "ASSY_M1-solution_1.op2"
)

# Add relative objectives (target vs reference)
builder.add_relative_objective("40", "20", metric="relative_filtered_rms_nm", weight=5.0)
builder.add_relative_objective("60", "20", metric="relative_filtered_rms_nm", weight=5.0)

# Add absolute objective for polishing orientation
builder.add_subcase_objective("90", metric="rms_filter_j1to3", weight=1.0)

# Evaluate all at once (efficient - parses OP2 only once)
results = builder.evaluate_all()
# Returns: {'rel_40_vs_20': 4.2, 'rel_60_vs_20': 8.7, 'rms_90': 15.3}
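The weights passed to the builder suggest a scalar aggregation for single-objective studies. As a hedged sketch (the weighted-sum scheme and the helper name are illustrative, not part of the builder API), the `evaluate_all()` output can be collapsed like this:

```python
def weighted_scalar(results: dict, weights: dict) -> float:
    """Collapse per-subcase Zernike metrics into one scalar objective.

    `results` is the dict returned by builder.evaluate_all();
    `weights` maps the same keys to their relative importance.
    """
    return sum(weights[key] * value for key, value in results.items())

# Mirrors the example output above (values in nm)
results = {'rel_40_vs_20': 4.2, 'rel_60_vs_20': 8.7, 'rms_90': 15.3}
weights = {'rel_40_vs_20': 5.0, 'rel_60_vs_20': 5.0, 'rms_90': 1.0}
objective_value = weighted_scalar(results, weights)  # lower is better
```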

PR.3 NXSolver Interface

Module: optimization_engine.nx_solver

Constructor:

from optimization_engine.nx_solver import NXSolver

nx_solver = NXSolver(
    nastran_version="2412",      # NX version
    timeout=600,                  # Max solve time (seconds)
    use_journal=True,             # Use journal mode (recommended)
    enable_session_management=True,
    study_name="my_study"
)

Main Method - run_simulation():

result = nx_solver.run_simulation(
    sim_file=sim_file,           # Path to .sim file
    working_dir=model_dir,       # Working directory
    expression_updates=design_vars,  # Dict: {'param_name': value}
    solution_name=None,          # None = solve ALL solutions
    cleanup=True                 # Remove temp files after
)

# Returns:
# {
#     'success': bool,
#     'op2_file': Path,
#     'log_file': Path,
#     'elapsed_time': float,
#     'errors': list,
#     'solution_name': str
# }

CRITICAL: For multi-solution workflows (static + modal), set solution_name=None.

PR.4 Sampler Configurations

| Sampler | Use Case | Import | Config |
|---------|----------|--------|--------|
| NSGAIISampler | Multi-objective (2-3 objectives) | `from optuna.samplers import NSGAIISampler` | `NSGAIISampler(population_size=20, mutation_prob=0.1, crossover_prob=0.9, seed=42)` |
| TPESampler | Single-objective | `from optuna.samplers import TPESampler` | `TPESampler(seed=42)` |
| CmaEsSampler | Single-objective, continuous | `from optuna.samplers import CmaEsSampler` | `CmaEsSampler(seed=42)` |

PR.5 Study Creation Patterns

Multi-Objective (NSGA-II):

study = optuna.create_study(
    study_name=study_name,
    storage=f"sqlite:///{results_dir / 'study.db'}",
    sampler=NSGAIISampler(population_size=20, seed=42),
    directions=['minimize', 'maximize'],  # [obj1_dir, obj2_dir]
    load_if_exists=True
)

Single-Objective (TPE):

study = optuna.create_study(
    study_name=study_name,
    storage=f"sqlite:///{results_dir / 'study.db'}",
    sampler=TPESampler(seed=42),
    direction='minimize',  # or 'maximize'
    load_if_exists=True
)

PR.6 Objective Function Return Formats

Multi-Objective (directions=['minimize', 'minimize']):

def objective(trial) -> Tuple[float, float]:
    # ... extraction ...
    return (obj1, obj2)  # Both positive, framework handles direction

Multi-Objective with a maximize objective (keep directions=['minimize', 'minimize'] and negate, as in the PR.10 template):

def objective(trial) -> Tuple[float, float]:
    # ... extraction ...
    # Negate maximization objective for minimize direction
    return (-stiffness, mass)  # -stiffness so minimize → maximize

Single-Objective:

def objective(trial) -> float:
    # ... extraction ...
    return objective_value
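When every direction is declared 'minimize', maximize objectives must be negated by hand, as in the `(-stiffness, mass)` pattern above. A small helper (illustrative only, not a framework function) applies the sign flip uniformly from a per-objective goal list:

```python
def to_minimize(values, goals):
    """Negate objectives whose goal is 'maximize' so the study can
    declare directions=['minimize', 'minimize'] throughout."""
    return tuple(-v if goal == 'maximize' else v
                 for v, goal in zip(values, goals))

# stiffness should be maximized, mass minimized
objs = to_minimize((2450.0, 0.185), ['maximize', 'minimize'])
print(objs)  # (-2450.0, 0.185)
```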

PR.7 Hook System

Available Hook Points (from optimization_engine.plugins.hooks):

| Hook Point | When | Context Keys |
|------------|------|--------------|
| PRE_MESH | Before meshing | `trial_number`, `design_variables`, `sim_file` |
| POST_MESH | After meshing | `trial_number`, `design_variables`, `sim_file` |
| PRE_SOLVE | Before solve | `trial_number`, `design_variables`, `sim_file`, `working_dir` |
| POST_SOLVE | After solve | `trial_number`, `design_variables`, `op2_file`, `working_dir` |
| POST_EXTRACTION | After extraction | `trial_number`, `design_variables`, `results`, `working_dir` |
| POST_CALCULATION | After calculations | `trial_number`, `objectives`, `constraints`, `feasible` |
| CUSTOM_OBJECTIVE | Custom objectives | `trial_number`, `design_variables`, `extracted_results` |
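The registration API itself lives in `optimization_engine.plugins.hooks` and should be checked there; the sketch below is only a conceptual illustration of the dispatch model (all names hypothetical): a hook is a callable invoked with the context keys listed above.

```python
from collections import defaultdict

# Conceptual sketch only: the real hook points and registration call
# come from optimization_engine.plugins.hooks; names here are hypothetical.
HOOKS = defaultdict(list)

def register(hook_point, fn):
    HOOKS[hook_point].append(fn)

def fire(hook_point, **context):
    for fn in HOOKS[hook_point]:
        fn(context)

def log_results(ctx):
    # POST_EXTRACTION receives trial_number and extracted results
    print(f"trial {ctx['trial_number']}: {ctx['results']}")

register('POST_EXTRACTION', log_results)
fire('POST_EXTRACTION', trial_number=7, results={'max_displacement': 0.42})
```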

PR.8 Structured Logging (MANDATORY)

Always use structured logging:

from optimization_engine.logger import get_logger

logger = get_logger(study_name, study_dir=results_dir)

# Study lifecycle
logger.study_start(study_name, n_trials, "NSGAIISampler")
logger.study_complete(study_name, total_trials, successful_trials)

# Trial lifecycle
logger.trial_start(trial.number, design_vars)
logger.trial_complete(trial.number, objectives_dict, constraints_dict, feasible)
logger.trial_failed(trial.number, error_message)

# General logging
logger.info("message")
logger.warning("message")
logger.error("message", exc_info=True)

PR.9 Training Data Export (AtomizerField)

from optimization_engine.training_data_exporter import TrainingDataExporter

training_exporter = TrainingDataExporter(
    export_dir=export_dir,
    study_name=study_name,
    design_variable_names=['param1', 'param2'],
    objective_names=['stiffness', 'mass'],
    constraint_names=['mass_limit'],
    metadata={'atomizer_version': '2.0', 'optimization_algorithm': 'NSGA-II'}
)

# In objective function:
training_exporter.export_trial(
    trial_number=trial.number,
    design_variables=design_vars,
    results={'objectives': {...}, 'constraints': {...}},
    simulation_files={'dat_file': dat_path, 'op2_file': op2_path}
)

# After optimization:
training_exporter.finalize()

PR.10 Complete run_optimization.py Template

"""
{Study Name} Optimization
{Brief description}
"""

from pathlib import Path
import sys
import json
import argparse
from datetime import datetime
from typing import Optional, Tuple

project_root = Path(__file__).resolve().parents[2]
sys.path.insert(0, str(project_root))

import optuna
from optuna.samplers import NSGAIISampler  # or TPESampler

from optimization_engine.nx_solver import NXSolver
from optimization_engine.logger import get_logger

# Import extractors - USE ONLY FROM PR.2
from optimization_engine.extractors.extract_displacement import extract_displacement
from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf
# Add other extractors as needed from PR.2


def load_config(config_file: Path) -> dict:
    with open(config_file, 'r') as f:
        return json.load(f)


def objective(trial: optuna.Trial, config: dict, nx_solver: NXSolver,
              model_dir: Path, logger) -> Tuple[float, float]:
    """Multi-objective function. Returns (obj1, obj2)."""

    # 1. Sample design variables
    design_vars = {}
    for var in config['design_variables']:
        param_name = var['parameter']
        bounds = var['bounds']
        design_vars[param_name] = trial.suggest_float(param_name, bounds[0], bounds[1])

    logger.trial_start(trial.number, design_vars)

    try:
        # 2. Run simulation
        sim_file = model_dir / config['simulation']['sim_file']
        result = nx_solver.run_simulation(
            sim_file=sim_file,
            working_dir=model_dir,
            expression_updates=design_vars,
            solution_name=config['simulation'].get('solution_name'),
            cleanup=True
        )

        if not result['success']:
            logger.trial_failed(trial.number, f"Simulation failed: {result.get('errors')}")
            return (float('inf'), float('inf'))

        op2_file = result['op2_file']

        # 3. Extract results - USE PATTERNS FROM PR.2
        # Example: displacement and mass
        disp_result = extract_displacement(op2_file, subcase=1)
        max_displacement = disp_result['max_displacement']

        dat_file = model_dir / config['simulation']['dat_file']
        mass_kg = extract_mass_from_bdf(str(dat_file))

        # 4. Calculate objectives
        applied_force = 1000.0  # N - adjust to your model
        stiffness = applied_force / max(abs(max_displacement), 1e-6)

        # 5. Check constraints
        feasible = True
        constraint_results = {}
        for constraint in config.get('constraints', []):
            # Add constraint checking logic
            pass

        # 6. Set trial attributes
        trial.set_user_attr('stiffness', stiffness)
        trial.set_user_attr('mass', mass_kg)
        trial.set_user_attr('feasible', feasible)

        objectives = {'stiffness': stiffness, 'mass': mass_kg}
        logger.trial_complete(trial.number, objectives, constraint_results, feasible)

        # 7. Return objectives (negate maximize objectives if using minimize direction)
        return (-stiffness, mass_kg)

    except Exception as e:
        logger.trial_failed(trial.number, str(e))
        return (float('inf'), float('inf'))


def main():
    parser = argparse.ArgumentParser(
        description='{Study Name} Optimization',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Staged Workflow (recommended order):
  1. --discover    Clean old files, run ONE solve, discover outputs
  2. --validate    Run single trial to validate extraction works
  3. --test        Run 3 trials as integration test
  4. --train       Run FEA trials for training data collection
  5. --run         Launch official optimization (with --enable-nn for neural)
        """
    )

    # Workflow stage selection (mutually exclusive)
    stage_group = parser.add_mutually_exclusive_group()
    stage_group.add_argument('--discover', action='store_true',
                             help='Stage 1: Clean files, run ONE solve, discover outputs')
    stage_group.add_argument('--validate', action='store_true',
                             help='Stage 2: Run single validation trial')
    stage_group.add_argument('--test', action='store_true',
                             help='Stage 3: Run 3-trial integration test')
    stage_group.add_argument('--train', action='store_true',
                             help='Stage 4: Run FEA trials for training data')
    stage_group.add_argument('--run', action='store_true',
                             help='Stage 5: Launch official optimization')

    # Common options
    parser.add_argument('--trials', type=int, default=100)
    parser.add_argument('--resume', action='store_true')
    parser.add_argument('--enable-nn', action='store_true',
                        help='Enable neural surrogate')
    parser.add_argument('--clean', action='store_true',
                        help='Clean old Nastran files before running')

    args = parser.parse_args()

    # Require a workflow stage
    if not any([args.discover, args.validate, args.test, args.train, args.run]):
        print("No workflow stage specified. Use --discover, --validate, --test, --train, or --run")
        return 1

    study_dir = Path(__file__).parent
    config_path = study_dir / "1_setup" / "optimization_config.json"
    model_dir = study_dir / "1_setup" / "model"
    results_dir = study_dir / "2_results"
    results_dir.mkdir(exist_ok=True)

    study_name = "{study_name}"  # Replace with actual name

    logger = get_logger(study_name, study_dir=results_dir)
    config = load_config(config_path)
    nx_solver = NXSolver()

    # Handle staged workflow - see bracket_stiffness_optimization_atomizerfield for full implementation
    # Each stage calls specific helper functions:
    # - args.discover -> run_discovery(config, nx_solver, model_dir, results_dir, study_name, logger)
    # - args.validate -> run_validation(config, nx_solver, model_dir, results_dir, study_name, logger)
    # - args.test -> run_test(config, nx_solver, model_dir, results_dir, study_name, logger, n_trials=3)
    # - args.train/args.run -> continue to optimization below

    # For --run or --train stages, run full optimization
    storage = f"sqlite:///{results_dir / 'study.db'}"
    sampler = NSGAIISampler(population_size=20, seed=42)

    logger.study_start(study_name, args.trials, "NSGAIISampler")

    if args.resume:
        study = optuna.load_study(study_name=study_name, storage=storage, sampler=sampler)
    else:
        study = optuna.create_study(
            study_name=study_name,
            storage=storage,
            sampler=sampler,
            directions=['minimize', 'minimize'],  # Adjust per objectives
            load_if_exists=True
        )

    study.optimize(
        lambda trial: objective(trial, config, nx_solver, model_dir, logger),
        n_trials=args.trials,
        show_progress_bar=True
    )

    n_successful = len([t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE])
    logger.study_complete(study_name, len(study.trials), n_successful)

    # Print Pareto front
    for i, trial in enumerate(study.best_trials[:5]):
        logger.info(f"Pareto {i+1}: {trial.values}, params={trial.params}")


if __name__ == "__main__":
    sys.exit(main())

PR.11 Adding New Features to Atomizer Framework (REUSABILITY)

CRITICAL: When developing extractors, calculators, or post-processing logic for a study, ALWAYS add them to the Atomizer framework for reuse!

Why Reusability Matters

Each study should build on the framework, not duplicate code:

  • WRONG: Embed 500 lines of Zernike analysis in run_optimization.py
  • CORRECT: Create extract_zernike.py in optimization_engine/extractors/ and import it

When to Create a New Extractor

Create a new extractor when:

  1. The study needs result extraction not covered by existing extractors (E1-E10)
  2. The logic is reusable across different studies
  3. The extraction involves non-trivial calculations (>20 lines of code)

Workflow for Adding New Extractors

STEP 1: Check existing extractors in PR.1 Catalog
        ├── If exists → IMPORT and USE it (done!)
        └── If missing → Continue to STEP 2

STEP 2: Create extractor in optimization_engine/extractors/
        ├── File: extract_{feature}.py
        ├── Follow existing extractor patterns
        └── Include comprehensive docstrings

STEP 3: Add to __init__.py
        └── Export functions in optimization_engine/extractors/__init__.py

STEP 4: Update this skill (create-study.md)
        ├── Add to PR.1 Extractor Catalog table
        └── Add code snippet to PR.2

STEP 5: Document in CLAUDE.md (if major feature)
        └── Add to Available Extractors table

New Extractor Template

"""
{Feature} Extractor for Atomizer Optimization
=============================================

Extract {description} from {input_type} files.

Usage:
    from optimization_engine.extractors.extract_{feature} import extract_{feature}

    result = extract_{feature}(input_file, **params)
"""

from pathlib import Path
from typing import Dict, Any, Optional
import logging

logger = logging.getLogger(__name__)


def extract_{feature}(
    input_file: Path,
    param1: str = "default",
    **kwargs
) -> Dict[str, Any]:
    """
    Extract {feature} from {input_type}.

    Args:
        input_file: Path to input file
        param1: Description of param
        **kwargs: Additional parameters

    Returns:
        Dict with keys:
            - primary_result: Main extracted value
            - metadata: Additional extraction info

    Example:
        >>> result = extract_{feature}("model.op2")
        >>> print(result['primary_result'])
    """
    # Implementation here
    pass


# Export for __init__.py
__all__ = ['extract_{feature}']

Example: Adding Thermal Gradient Extractor

If a study needs thermal gradient analysis:

  1. Create: optimization_engine/extractors/extract_thermal_gradient.py
  2. Implement: Functions for parsing thermal OP2 data
  3. Export: Add to __init__.py
  4. Document: Add E11 to catalog here
  5. Use: Import in run_optimization.py

# In run_optimization.py - CORRECT
from optimization_engine.extractors.extract_thermal_gradient import extract_thermal_gradient

result = extract_thermal_gradient(op2_file, subcase=1)
max_gradient = result['max_gradient_K_per_mm']

NEVER Do This

# In run_optimization.py - WRONG!
def calculate_thermal_gradient(op2_file, subcase):
    """200 lines of thermal gradient calculation..."""
    # This should be in optimization_engine/extractors/!
    pass

result = calculate_thermal_gradient(op2_file, 1)  # Not reusable!

Updating This Skill After Adding Extractor

When you add a new extractor to the framework:

  1. PR.1: Add row to Extractor Catalog table with ID, name, module, function, input, output, returns
  2. PR.2: Add code snippet showing usage
  3. Common Patterns: Add new pattern if this creates a new optimization type

Document Philosophy

Two separate documents serve different purposes:

| Document | Purpose | When Created | Content Type |
|----------|---------|--------------|--------------|
| README.md | Study Blueprint | Before running | What the study IS |
| optimization_report.md | Results Report | After running | What the study FOUND |

README = Engineering Blueprint (THIS skill generates):

  • Mathematical formulation with LaTeX notation
  • Design space definition
  • Algorithm properties and complexity
  • Extraction methods with formulas
  • Where results WILL BE generated

Results Report = Scientific Findings (Generated after optimization):

  • Convergence history and plots
  • Pareto front analysis with all iterations
  • Parameter correlations
  • Neural surrogate performance metrics
  • Algorithm statistics (hypervolume, diversity)

README Workflow Standard

The README is a complete scientific/engineering blueprint of THIS study - a formal document with mathematical rigor.

Required Sections (11 numbered sections)

| # | Section | Purpose | Format |
|---|---------|---------|--------|
| 1 | Engineering Problem | Physical system context | 1.1 Objective, 1.2 Physical System |
| 2 | Mathematical Formulation | Rigorous problem definition | LaTeX: objectives, design space, constraints, Pareto dominance |
| 3 | Optimization Algorithm | Algorithm configuration | NSGA-II/TPE properties, complexity, return format |
| 4 | Simulation Pipeline | Trial execution flow | ASCII diagram with all steps + hooks |
| 5 | Result Extraction Methods | How each result is obtained | Formula, code, file sources per extraction |
| 6 | Neural Acceleration | AtomizerField configuration | Config table + training data location + expected performance |
| 7 | Study File Structure | Complete directory tree | Every file with description |
| 8 | Results Location | Where outputs go | File list + Results Report preview |
| 9 | Quick Start | Launch commands | validate, run, view, reset |
| 10 | Configuration Reference | Config file mapping | Key sections in optimization_config.json |
| 11 | References | Academic citations | Algorithms, tools, methods |

Mathematical Notation Requirements

Use LaTeX/markdown formulas throughout:

### Objectives
| Objective | Goal | Weight | Formula | Units |
|-----------|------|--------|---------|-------|
| Stiffness | maximize | 1.0 | $k = \frac{F}{\delta_{max}}$ | N/mm |
| Mass | minimize | 0.1 | $m = \sum_{e} \rho_e V_e$ | kg |

### Design Space
$$\mathbf{x} = [\theta, t]^T \in \mathbb{R}^2$$
$$20 \leq \theta \leq 70$$
$$30 \leq t \leq 60$$

### Constraints
$$g_1(\mathbf{x}) = m - m_{max} \leq 0$$

### Pareto Dominance
Solution $\mathbf{x}_1$ dominates $\mathbf{x}_2$ if:
- $f_1(\mathbf{x}_1) \geq f_1(\mathbf{x}_2)$ and $f_2(\mathbf{x}_1) \leq f_2(\mathbf{x}_2)$
- With at least one strict inequality
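The dominance definition above ($f_1$ maximized, $f_2$ minimized) translates directly into code; a minimal sketch:

```python
def dominates(a, b):
    """True if solution a dominates b for (f1 maximize, f2 minimize):
    no worse in both objectives, strictly better in at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

# (stiffness N/mm, mass kg): higher stiffness at equal mass dominates
print(dominates((2450.0, 0.185), (2320.0, 0.185)))  # True
print(dominates((2320.0, 0.168), (2450.0, 0.185)))  # False (trade-off)
```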

Algorithm Properties to Document

NSGA-II:

  • Fast non-dominated sorting: O(MN^2) where M = objectives, N = population
  • Crowding distance for diversity preservation
  • Binary tournament selection with crowding comparison

TPE:

  • Tree-structured Parzen Estimator
  • Models p(x|y) and p(y) separately
  • Expected Improvement acquisition
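The Results Report later tracks the hypervolume indicator for NSGA-II runs. For a two-objective minimization front it reduces to a sum of rectangle areas against a reference point; a minimal sketch (the reference point is a choice you must make, worse than every solution in both objectives):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a non-dominated 2-objective *minimization* front
    against reference point ref.

    Sort by f1 ascending (f2 then descends along the front); each point
    contributes a rectangle from its f2 up to the previous f2 level.
    """
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Two non-dominated points vs reference (4.0, 4.0)
print(hypervolume_2d([(1.0, 3.0), (2.0, 1.0)], (4.0, 4.0)))  # → 7.0
```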

Results Report Specification

After optimization completes, the system generates 2_results/reports/optimization_report.md containing:

Results Report Sections

| Section | Content | Visualizations |
|---------|---------|----------------|
| 1. Executive Summary | Best solutions, convergence status, key findings | - |
| 2. Pareto Front Analysis | All non-dominated solutions, trade-off analysis | Pareto plot with all iterations |
| 3. Convergence History | Objective values over trials | Line plots per objective |
| 4. Parameter Correlations | Design variable vs objective relationships | Scatter plots, correlation matrix |
| 5. Constraint Satisfaction | Feasibility statistics, violation distribution | Bar charts |
| 6. Neural Surrogate Performance | Training loss, validation R², prediction accuracy | Training curves, parity plots |
| 7. Algorithm Statistics | NSGA-II: hypervolume indicator, diversity metrics | Evolution plots |
| 8. Recommended Configurations | Top N solutions with engineering interpretation | Summary table |
| 9. Special Analysis | Study-specific hooks (e.g., Zernike for optical) | Domain-specific plots |

Example Results Report Content

# Optimization Results Report
**Study**: bracket_stiffness_optimization_atomizerfield
**Completed**: 2025-11-26 14:32:15
**Total Trials**: 100 (50 FEA + 50 Neural)

## 1. Executive Summary
- **Best Stiffness**: 2,450 N/mm (Trial 67)
- **Best Mass**: 0.142 kg (Trial 23)
- **Pareto Solutions**: 12 non-dominated designs
- **Convergence**: Hypervolume stabilized after trial 75

## 2. Pareto Front Analysis
| Rank | Stiffness (N/mm) | Mass (kg) | θ (deg) | t (mm) |
|------|------------------|-----------|---------|--------|
| 1 | 2,450 | 0.185 | 45.2 | 58.1 |
| 2 | 2,320 | 0.168 | 42.1 | 52.3 |
...

## 6. Neural Surrogate Performance
- **Stiffness R²**: 0.967
- **Mass R²**: 0.994
- **Mean Absolute Error**: Stiffness ±42 N/mm, Mass ±0.003 kg
- **Prediction Time**: 4.2 ms (vs 18 min FEA)

Study-Specific Results Analysis

For specialized studies, document custom analysis hooks:

| Study Type | Custom Analysis | Hook Point | Output |
|------------|-----------------|------------|--------|
| Optical Mirror | Zernike decomposition | POST_EXTRACTION | Aberration coefficients |
| Vibration | Mode shape correlation | POST_EXTRACTION | MAC values |
| Thermal | Temperature gradient analysis | POST_CALCULATION | ΔT distribution |

Example Zernike Hook Documentation:

### 9. Special Analysis: Zernike Decomposition

The POST_EXTRACTION hook runs Zernike polynomial decomposition on the mirror surface deformation.

**Zernike Modes Tracked**:
| Mode | Name | Physical Meaning |
|------|------|------------------|
| Z4 | Defocus | Power error |
| Z5/Z6 | Astigmatism | Cylindrical error |
| Z7/Z8 | Coma | Off-axis aberration |

**Results Table**:
| Trial | Z4 (nm) | Z5 (nm) | RMS (nm) |
|-------|---------|---------|----------|
| 1 | 12.4 | 5.2 | 14.1 |

How Atomizer Studies Work

Study Architecture

An Atomizer study is a self-contained optimization project that combines:

  • NX CAD/FEA Model - Parametric part with simulation
  • Configuration - Objectives, constraints, design variables
  • Execution Scripts - Python runner using Optuna
  • Results Database - SQLite storage for all trials

Study Structure

studies/{study_name}/
├── 1_setup/                          # INPUT: Configuration & Model
│   ├── model/                        # WORKING COPY of NX Files (AUTO-COPIED)
│   │   ├── {Model}.prt               # Parametric part (COPIED FROM SOURCE)
│   │   ├── {Model}_sim1.sim          # Simulation setup (COPIED FROM SOURCE)
│   │   ├── {Model}_fem1.fem          # FEM mesh (AUTO-GENERATED)
│   │   ├── {Model}_fem1_i.prt        # Idealized part (COPIED if Assembly FEM)
│   │   └── *.dat, *.op2, *.f06       # Solver outputs (AUTO-GENERATED)
│   ├── optimization_config.json      # Study configuration (SKILL GENERATES)
│   └── workflow_config.json          # Workflow metadata (SKILL GENERATES)
├── 2_results/                        # OUTPUT: Results (AUTO-CREATED)
│   ├── study.db                      # Optuna SQLite database
│   ├── optimization_history.json     # Trial history
│   └── *.png, *.json                 # Plots and summaries
├── run_optimization.py               # Main entry point (SKILL GENERATES)
├── reset_study.py                    # Database reset script (SKILL GENERATES)
└── README.md                         # Study documentation (SKILL GENERATES)

CRITICAL: Model File Protection

NEVER modify the user's original/master model files. Always work on copies.

Why This Matters:

  • Optimization iteratively modifies expressions, meshes, and geometry
  • NX saves changes automatically - corruption during iteration can damage master files
  • Broken geometry/mesh can make files unrecoverable
  • Users may have months of CAD work in master files

Mandatory Workflow:

User's Source Files              Study Working Copy
─────────────────────────────────────────────────────────────
C:/Projects/M1-Gigabit/Latest/   studies/{study_name}/1_setup/model/
├── M1_Blank.prt           ──►   ├── M1_Blank.prt
├── M1_Blank_fem1.fem      ──►   ├── M1_Blank_fem1.fem
├── M1_Blank_fem1_i.prt    ──►   ├── M1_Blank_fem1_i.prt
├── ASSY_*.afm             ──►   ├── ASSY_*.afm
└── *_sim1.sim             ──►   └── *_sim1.sim

                    ↓ COPY ALL FILES ↓

        OPTIMIZATION RUNS ON WORKING COPY ONLY

                    ↓ IF CORRUPTION ↓

        Delete working copy, re-copy from source
        No damage to master files

Copy Script (generated in run_optimization.py):

```python
import shutil
from pathlib import Path

def setup_working_copy(source_dir: Path, model_dir: Path, file_patterns: list):
    """
    Copy model files from user's source to study working directory.

    Args:
        source_dir: Path to user's original model files (NEVER MODIFY)
        model_dir: Path to study's 1_setup/model/ directory (WORKING COPY)
        file_patterns: List of glob patterns to copy (e.g., ['*.prt', '*.fem', '*.sim'])
    """
    model_dir.mkdir(parents=True, exist_ok=True)

    for pattern in file_patterns:
        for src_file in source_dir.glob(pattern):
            dst_file = model_dir / src_file.name
            if not dst_file.exists() or src_file.stat().st_mtime > dst_file.stat().st_mtime:
                print(f"Copying: {src_file.name}")
                shutil.copy2(src_file, dst_file)

    print(f"Working copy ready in: {model_dir}")
```

**Assembly FEM Files to Copy:**

| File Pattern | Purpose | Required |
|--------------|---------|----------|
| `*.prt` | Parametric geometry parts | Yes |
| `*_fem1.fem` | Component FEM meshes | Yes |
| `*_fem1_i.prt` | Idealized parts (geometry link) | Yes (if exists) |
| `*.afm` | Assembly FEM | Yes (if assembly) |
| `*_sim1.sim` | Simulation setup | Yes |
| `*.exp` | Expression files | If used |

When User Provides Source Path:

  1. ASK: "Where are your model files?"
  2. STORE: Record source path in optimization_config.json as "source_model_dir"
  3. COPY: All relevant NX files to 1_setup/model/
  4. NEVER: Point optimization directly at source files
  5. DOCUMENT: In README, show both source and working paths

## Optimization Trial Loop

Each optimization trial follows this execution flow:

┌──────────────────────────────────────────────────────────────────────┐
│                         SINGLE TRIAL EXECUTION                        │
├──────────────────────────────────────────────────────────────────────┤
│                                                                       │
│  1. SAMPLE DESIGN VARIABLES (Optuna)                                 │
│     ├── support_angle = trial.suggest_float("support_angle", 20, 70) │
│     └── tip_thickness = trial.suggest_float("tip_thickness", 30, 60) │
│                              │                                        │
│                              ▼                                        │
│  2. UPDATE NX MODEL (nx_updater.py)                                  │
│     ├── Open .prt file                                               │
│     ├── Modify expressions: support_angle=45, tip_thickness=40       │
│     └── Save changes                                                 │
│                              │                                        │
│                              ▼                                        │
│  3. EXECUTE HOOKS: PRE_SOLVE                                         │
│     └── Validate design, log start                                   │
│                              │                                        │
│                              ▼                                        │
│  4. RUN NX SIMULATION (solve_simulation.py)                          │
│     ├── Open .sim file                                               │
│     ├── Update FEM from modified part                                │
│     ├── Solve (Nastran SOL 101/103)                                  │
│     └── Generate: .dat, .op2, .f06                                   │
│                              │                                        │
│                              ▼                                        │
│  5. EXECUTE HOOKS: POST_SOLVE                                        │
│     └── Export field data, log completion                            │
│                              │                                        │
│                              ▼                                        │
│  6. EXTRACT RESULTS (extractors/)                                    │
│     ├── Mass: BDFMassExtractor(.dat) → kg                            │
│     └── Stiffness: StiffnessCalculator(.fld, .op2) → N/mm           │
│                              │                                        │
│                              ▼                                        │
│  7. EXECUTE HOOKS: POST_EXTRACTION                                   │
│     └── Export training data, validate results                       │
│                              │                                        │
│                              ▼                                        │
│  8. EVALUATE CONSTRAINTS                                             │
│     └── mass_limit: mass ≤ 0.2 kg → feasible/infeasible             │
│                              │                                        │
│                              ▼                                        │
│  9. RETURN TO OPTUNA                                                 │
│     ├── Single-objective: return stiffness                           │
│     └── Multi-objective: return (stiffness, mass)                    │
│                                                                       │
└──────────────────────────────────────────────────────────────────────┘
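The trial loop above can be condensed into a single Optuna-style objective function. The sketch below is illustrative only: `fake_solve` is a hypothetical stand-in for steps 2-7 (NX update, Nastran solve, result extraction), and only the loop's shape (sampling, constraint bookkeeping, tuple return) mirrors the real pipeline.

```python
import math

def fake_solve(support_angle, tip_thickness):
    """Stand-in for steps 2-7 (NX update, solve, extraction).

    Returns (max displacement in mm, mass in kg) from a toy analytic model.
    """
    delta = 1.0 / (0.01 * tip_thickness * math.sin(math.radians(support_angle)) + 0.1)
    mass = 0.002 * tip_thickness
    return delta, mass

def objective(trial, force_n=100.0, mass_limit=0.2):
    # 1. Sample design variables (Optuna)
    support_angle = trial.suggest_float("support_angle", 20, 70)
    tip_thickness = trial.suggest_float("tip_thickness", 30, 60)

    # 2-7. Update model, solve, extract results (stubbed here)
    delta, mass = fake_solve(support_angle, tip_thickness)
    stiffness = force_n / delta  # k = F / delta_max

    # 8. Evaluate constraints; record feasibility for later filtering
    trial.set_user_attr("feasible", mass <= mass_limit)

    # 9. Multi-objective return: plain tuple, no negation
    return stiffness, mass
```

In a real study, steps 2-7 are performed by `nx_updater.py`, `solve_simulation.py`, and the centralized extractors rather than a toy function.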

## Key Components

| Component | Module | Purpose |
|-----------|--------|---------|
| NX Updater | `optimization_engine/nx_updater.py` | Modify .prt expressions |
| Simulation Solver | `optimization_engine/solve_simulation.py` | Run NX/Nastran solve |
| Result Extractors | `optimization_engine/extractors/*.py` | Parse .op2, .dat, .fld files |
| Hook System | `optimization_engine/plugins/hooks.py` | Lifecycle callbacks |
| Optuna Runner | `optimization_engine/runner.py` | Orchestrate optimization |
| Validators | `optimization_engine/validators/` | Pre-flight checks |

## Hook Points

Hooks allow custom code execution at specific points in the trial:

| Hook | When | Common Uses |
|------|------|-------------|
| PRE_MESH | Before meshing | Modify mesh parameters |
| POST_MESH | After meshing | Validate mesh quality |
| PRE_SOLVE | Before solve | Log trial start, validate |
| POST_SOLVE | After solve | Export fields, cleanup |
| POST_EXTRACTION | After extraction | Export training data |
| POST_CALCULATION | After calculations | Apply constraint penalties |
| CUSTOM_OBJECTIVE | For custom objectives | User-defined calculations |

## Available Extractors

| Extractor | Input | Output | Use Case |
|-----------|-------|--------|----------|
| `bdf_mass_extractor` | .dat/.bdf | mass (kg) | FEM mass from element properties |
| `stiffness_calculator` | .fld + .op2 | stiffness (N/mm) | k = F/δ calculation |
| `field_data_extractor` | .fld/.csv | aggregated values | Any field result |
| `extract_displacement` | .op2 | displacement (mm) | Nodal displacements |
| `extract_frequency` | .op2 | frequency (Hz) | Modal frequencies |
| `extract_solid_stress` | .op2 | stress (MPa) | Von Mises stress |

## Protocol Selection

| Protocol | Use Case | Sampler | Output |
|----------|----------|---------|--------|
| Protocol 11 | 2-3 objectives | NSGAIISampler | Pareto front |
| Protocol 10 | 1 objective + constraints | TPESampler/CMA-ES | Single optimum |
| Legacy | Simple problems | TPESampler | Single optimum |

## AtomizerField Neural Acceleration

AtomizerField is a neural network surrogate model that predicts FEA results from design parameters, enabling ~2,200x speedup after initial training.

How it works:

  1. FEA Exploration Phase - Run N trials with full FEA simulation
  2. Training Data Export - Each trial exports: BDF (mesh + params), OP2 (results), metadata.json
  3. Auto-Training Trigger - When min_training_points reached, neural network trains automatically
  4. Neural Acceleration Phase - Use trained model instead of FEA (4.5ms vs 10-30min)

Training Data Structure:

atomizer_field_training_data/{study_name}/
├── trial_0001/
│   ├── input/model.bdf      # Mesh + design parameters
│   ├── output/model.op2     # FEA results
│   └── metadata.json        # {design_vars, objectives, timestamp}
├── trial_0002/
└── ...

**Neural Model Types:**

| Type | Input | Output | Use Case |
|------|-------|--------|----------|
| `parametric` | Design params only | Scalar objectives | Fast, simple problems |
| `mesh_based` | BDF mesh + params | Field predictions | Complex geometry changes |

Config Options:

"neural_acceleration": {
  "enabled": true,
  "min_training_points": 50,     // When to auto-train
  "auto_train": true,            // Trigger training automatically
  "epochs": 100,                 // Training epochs
  "validation_split": 0.2,       // 20% for validation
  "retrain_threshold": 25,       // Retrain after N new points
  "model_type": "parametric"     // or "mesh_based"
}
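The auto-train and retrain triggers can be expressed as a small decision helper. This is a sketch, not the actual AtomizerField API; the key names simply mirror the config options above.

```python
def training_action(n_fea_points, n_points_at_last_train, cfg):
    """Decide whether to train, retrain, or wait, from a neural_acceleration config dict."""
    if not cfg.get("enabled") or not cfg.get("auto_train"):
        return "wait"
    if n_points_at_last_train is None:
        # Never trained: wait until the initial FEA exploration phase completes
        return "train" if n_fea_points >= cfg["min_training_points"] else "wait"
    new_points = n_fea_points - n_points_at_last_train
    return "retrain" if new_points >= cfg["retrain_threshold"] else "wait"
```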

Accuracy Expectations:

  • Well-behaved problems: R² > 0.95 after 50-100 samples
  • Complex nonlinear: R² > 0.90 after 100-200 samples
  • Always validate on held-out test set before production use
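The held-out validation mentioned above uses the standard coefficient of determination. A minimal pure-Python version:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot
```

A reasonable gate (matching the expectations above) is to enable neural acceleration only once `r_squared` on the held-out set clears the target for the problem class.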

## Optimization Algorithms

NSGA-II (Multi-Objective):

  • Non-dominated Sorting Genetic Algorithm II
  • Maintains population diversity on Pareto front
  • Uses crowding distance for selection pressure
  • Returns set of Pareto-optimal solutions (trade-offs)

TPE (Single-Objective):

  • Tree-structured Parzen Estimator
  • Bayesian optimization approach
  • Models good/bad parameter distributions
  • Efficient for expensive black-box functions

CMA-ES (Single-Objective):

  • Covariance Matrix Adaptation Evolution Strategy
  • Self-adaptive population-based search
  • Good for continuous, non-convex problems
  • Learns correlation structure of design space
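The choice among these algorithms follows directly from the problem shape. A hypothetical selection helper (the strings are the real Optuna sampler class names; the rule itself mirrors the protocol table above):

```python
def select_sampler_name(n_objectives, prefer_evolutionary=False):
    """Pick an Optuna sampler class name from the problem shape."""
    if n_objectives >= 2:
        return "NSGAIISampler"   # Pareto front via non-dominated sorting
    if prefer_evolutionary:
        return "CmaEsSampler"    # continuous, non-convex single-objective search
    return "TPESampler"          # Bayesian default for expensive black-box evaluations
```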

## Engineering Result Types

| Result Type | Nastran SOL | Output File | Extractor |
|-------------|-------------|-------------|-----------|
| Static Stress | SOL 101 | .op2 | `extract_solid_stress` |
| Displacement | SOL 101 | .op2 | `extract_displacement` |
| Natural Frequency | SOL 103 | .op2 | `extract_frequency` |
| Buckling Load | SOL 105 | .op2 | `extract_buckling` |
| Modal Shapes | SOL 103 | .op2 | `extract_mode_shapes` |
| Mass | - | .dat/.bdf | `bdf_mass_extractor` |
| Stiffness | SOL 101 | .fld + .op2 | `stiffness_calculator` |

## Common Objective Formulations

Stiffness Maximization:

  • k = F/δ (force/displacement)
  • Maximize k or minimize 1/k (compliance)
  • Requires consistent load magnitude across trials

Mass Minimization:

  • Extract from BDF element properties + material density
  • Units: typically kg (NX uses kg-mm-s)

Stress Constraints:

  • Von Mises < σ_yield / safety_factor
  • Account for stress concentrations

Frequency Constraints:

  • f₁ > threshold (avoid resonance)
  • Often paired with mass minimization
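As a numeric sanity check of the stiffness and mass formulations above (illustrative values only, in NX's kg-mm-s units):

```python
# k = F / delta: a 500 N load producing 0.25 mm of peak deflection
force_n, delta_mm = 500.0, 0.25
stiffness = force_n / delta_mm  # N/mm

# m = sum(rho_e * V_e): two aluminum elements, density in kg/mm^3, volume in mm^3
elements = [(2.7e-6, 12000.0), (2.7e-6, 8000.0)]
mass = sum(rho * vol for rho, vol in elements)  # kg
```

Note that k = F/δ is only comparable across trials when the applied load magnitude is held constant, as stated above.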

## Your Role

Guide the user through an interactive conversation to:

  1. Understand their optimization problem
  2. Classify objectives, constraints, and design variables
  3. Create the complete study infrastructure
  4. Generate all required files with proper configuration
  5. Provide clear next steps for running the optimization

## Study Structure

A complete Atomizer study has this structure:

CRITICAL: All study files, including README.md and results, MUST be located within the study directory. NEVER create study documentation at the project root.

studies/{study_name}/
├── 1_setup/
│   ├── model/
│   │   ├── {Model}.prt              # NX Part file (user provides)
│   │   ├── {Model}_sim1.sim         # NX Simulation file (user provides)
│   │   └── {Model}_fem1.fem         # FEM mesh file (auto-generated by NX)
│   ├── optimization_config.json     # YOU GENERATE THIS
│   └── workflow_config.json         # YOU GENERATE THIS
├── 2_results/                       # Created automatically during optimization
│   ├── study.db                     # Optuna SQLite database
│   ├── optimization_history_incremental.json
│   └── [various analysis files]
├── run_optimization.py              # YOU GENERATE THIS
├── reset_study.py                   # YOU GENERATE THIS
├── README.md                        # YOU GENERATE THIS (INSIDE study directory!)
└── NX_FILE_MODIFICATIONS_REQUIRED.md  # YOU GENERATE THIS (if needed)

## Interactive Discovery Process

### Step 1: Problem Understanding

Ask clarifying questions to understand:

Engineering Context:

  • "What component are you optimizing?"
  • "What is the engineering application or scenario?"
  • "What are the real-world requirements or constraints?"

Objectives:

  • "What do you want to optimize?" (minimize/maximize)
  • "Is this single-objective or multi-objective?"
  • "What are the target values or acceptable ranges?"

Constraints:

  • "What limits must be satisfied?"
  • "What are the threshold values?"
  • "Are these hard constraints (must satisfy) or soft constraints (prefer to satisfy)?"

Design Variables:

  • "What parameters can be changed?"
  • "What are the min/max bounds for each parameter?"
  • "Are these NX expressions, geometry features, or material properties?"

Simulation Setup:

  • "What NX model files do you have?"
  • "What analysis types are needed?" (static, modal, thermal, etc.)
  • "What results need to be extracted?" (stress, displacement, frequency, mass, etc.)

### Step 2: Classification & Analysis

Use the analyze-workflow skill to classify the problem:

```python
# Invoke the analyze-workflow skill with the user's description.
# It returns JSON with classified engineering features, extractors, etc.
```

Review the classification with the user and confirm:

  • Are the objectives correctly identified?
  • Are constraints properly classified?
  • Are extractors mapped to the right result types?
  • Is the protocol selection appropriate?

### Step 3: Protocol Selection

Based on analysis, recommend protocol:

Protocol 11 (Multi-Objective NSGA-II):

  • Use when: 2-3 conflicting objectives
  • Algorithm: NSGAIISampler
  • Output: Pareto front of optimal trade-offs
  • Example: Minimize mass + Maximize frequency

Protocol 10 (Single-Objective with Intelligent Strategies):

  • Use when: 1 objective with constraints
  • Algorithm: TPE, CMA-ES, or adaptive
  • Output: Single optimal solution
  • Example: Minimize stress subject to displacement < 1.5mm

Legacy (Basic TPE):

  • Use when: Simple single-objective problem
  • Algorithm: TPE
  • Output: Single optimal solution
  • Example: Quick exploration or testing

### Step 4: Extractor Mapping

Map each result extraction to centralized extractors:

| User Need | Extractor | Parameters |
|-----------|-----------|------------|
| Displacement | `extract_displacement` | op2_file, subcase |
| Von Mises Stress | `extract_solid_stress` | op2_file, subcase, element_type |
| Natural Frequency | `extract_frequency` | op2_file, subcase, mode_number |
| FEM Mass | `extract_mass_from_bdf` | bdf_file |
| CAD Mass | `extract_mass_from_expression` | prt_file, expression_name |

### Step 5: Multi-Solution Detection

Check if multi-solution workflow is needed:

Indicators:

  • Extracting both static results (stress, displacement) AND modal results (frequency)
  • User mentions "static + modal analysis"
  • Objectives/constraints require different solution types

Action:

  • Set solution_name=None in run_optimization.py to solve all solutions
  • Document requirement in NX_FILE_MODIFICATIONS_REQUIRED.md
  • Use SolveAllSolutions() protocol (see NX_MULTI_SOLUTION_PROTOCOL.md)
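The indicators above can be automated by inspecting which extractors the study configures. A hypothetical helper (the extractor names are the ones documented in this skill):

```python
STATIC_EXTRACTORS = {"extract_solid_stress", "extract_displacement"}
MODAL_EXTRACTORS = {"extract_frequency", "extract_mode_shapes"}

def needs_multi_solution(extractor_names):
    """True when both static (SOL 101) and modal (SOL 103) results are requested."""
    names = set(extractor_names)
    return bool(names & STATIC_EXTRACTORS) and bool(names & MODAL_EXTRACTORS)
```

When this returns True, pass `solution_name=None` so the solver runs all solutions, per the action list above.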

## File Generation

### 1. optimization_config.json

```json
{
  "study_name": "{study_name}",
  "description": "{concise description}",
  "engineering_context": "{detailed real-world context}",

  "optimization_settings": {
    "protocol": "protocol_11_multi_objective",  // or protocol_10, etc.
    "n_trials": 30,
    "sampler": "NSGAIISampler",  // or "TPESampler"
    "pruner": null,
    "timeout_per_trial": 600
  },

  "design_variables": [
    {
      "parameter": "{nx_expression_name}",
      "bounds": [min, max],
      "description": "{what this controls}"
    }
  ],

  "objectives": [
    {
      "name": "{objective_name}",
      "goal": "minimize",  // or "maximize"
      "weight": 1.0,
      "description": "{what this measures}",
      "target": {target_value},
      "extraction": {
        "action": "extract_{type}",
        "domain": "result_extraction",
        "params": {
          "result_type": "{type}",
          "metric": "{specific_metric}"
        }
      }
    }
  ],

  "constraints": [
    {
      "name": "{constraint_name}",
      "type": "less_than",  // or "greater_than"
      "threshold": {value},
      "description": "{engineering justification}",
      "extraction": {
        "action": "extract_{type}",
        "domain": "result_extraction",
        "params": {
          "result_type": "{type}",
          "metric": "{specific_metric}"
        }
      }
    }
  ],

  "simulation": {
    "model_file": "{Model}.prt",
    "sim_file": "{Model}_sim1.sim",
    "fem_file": "{Model}_fem1.fem",
    "solver": "nastran",
    "analysis_types": ["static", "modal"]  // or just ["static"]
  },

  "reporting": {
    "generate_plots": true,
    "save_incremental": true,
    "llm_summary": false
  }
}
```

### 2. workflow_config.json

```json
{
  "workflow_id": "{study_name}_workflow",
  "description": "{workflow description}",
  "steps": []  // Can be empty for now, used by future intelligent workflow system
}
```

### 3. run_optimization.py

Generate a complete Python script based on protocol:

Key sections:

  • Import statements (centralized extractors, NXSolver, Optuna)
  • Configuration loading
  • Objective function with proper:
    • Design variable sampling
    • Simulation execution with multi-solution support
    • Result extraction using centralized extractors
    • Constraint checking
    • Return format (tuple for multi-objective, float for single-objective)
  • Study creation with proper:
    • Directions for multi-objective (['minimize', 'maximize'])
    • Sampler selection (NSGAIISampler or TPESampler)
    • Storage location
  • Results display and dashboard instructions

IMPORTANT: Always include structured logging from Phase 1.3:

  • Import: from optimization_engine.logger import get_logger
  • Initialize in main(): logger = get_logger("{study_name}", study_dir=results_dir)
  • Replace all print() with logger.info/warning/error
  • Use structured methods:
    • logger.study_start(study_name, n_trials, sampler)
    • logger.trial_start(trial.number, design_vars)
    • logger.trial_complete(trial.number, objectives, constraints, feasible)
    • logger.trial_failed(trial.number, error)
    • logger.study_complete(study_name, n_trials, n_successful)
  • Error handling: logger.error("message", exc_info=True) for tracebacks

Template: Use studies/drone_gimbal_arm_optimization/run_optimization.py as reference
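The structured-logging calls above follow this general shape. The sketch below uses Python's stdlib `logging` as a stand-in; the real `get_logger` lives in `optimization_engine.logger` and its structured methods (`trial_start`, `trial_complete`, etc.) are assumed to wrap similar calls.

```python
import logging

def get_stub_logger(name):
    """Stand-in for optimization_engine.logger.get_logger (signature assumed)."""
    logger = logging.getLogger(name)
    if not logger.handlers:
        logger.addHandler(logging.StreamHandler())
    logger.setLevel(logging.INFO)
    return logger

logger = get_stub_logger("bracket_study")  # hypothetical study name
design_vars = {"support_angle": 45.0}
logger.info("trial_start trial=%d vars=%s", 0, design_vars)
try:
    raise ValueError("solver exited early")  # simulate a failed solve
except ValueError:
    # exc_info=True attaches the traceback, matching the guidance above
    logger.error("trial_failed trial=%d", 0, exc_info=True)
```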

### 4. reset_study.py

Simple script to delete the Optuna database:

```python
"""Reset {study_name} optimization study by deleting database."""
import optuna
from pathlib import Path

study_dir = Path(__file__).parent
storage = f"sqlite:///{study_dir / '2_results' / 'study.db'}"
study_name = "{study_name}"

try:
    optuna.delete_study(study_name=study_name, storage=storage)
    print(f"[OK] Deleted study: {study_name}")
except KeyError:
    print(f"[WARNING] Study '{study_name}' not found (database may not exist)")
except Exception as e:
    print(f"[ERROR] Error: {e}")
```

### 5. README.md - SCIENTIFIC ENGINEERING BLUEPRINT

CRITICAL: ALWAYS place README.md INSIDE the study directory at studies/{study_name}/README.md

The README is a formal scientific/engineering blueprint - a rigorous document defining WHAT the study IS (not what it found).

Reference: See bracket_stiffness_optimization_atomizerfield/README.md for a complete example.

Use this template with 11 numbered sections:

# {Study Name}

{Brief description - 1-2 sentences}

**Created**: {date}
**Protocol**: {protocol} ({Algorithm Name})
**Status**: Ready to Run

---

## 1. Engineering Problem

### 1.1 Objective

{Clear statement of what you're trying to achieve - 1-2 sentences}

### 1.2 Physical System

- **Component**: {component name}
- **Material**: {material and key properties}
- **Loading**: {load description}
- **Boundary Conditions**: {BC description}
- **Analysis Type**: {Linear static/modal/etc.} (Nastran SOL {number})

---

## 2. Mathematical Formulation

### 2.1 Objectives

| Objective | Goal | Weight | Formula | Units |
|-----------|------|--------|---------|-------|
| {name} | maximize | 1.0 | $k = \frac{F}{\delta_{max}}$ | N/mm |
| {name} | minimize | 0.1 | $m = \sum_{e} \rho_e V_e$ | kg |

Where:
- $k$ = structural stiffness
- $F$ = applied force magnitude (N)
- $\delta_{max}$ = maximum absolute displacement (mm)
- $\rho_e$ = element material density (kg/mm³)
- $V_e$ = element volume (mm³)

### 2.2 Design Variables

| Parameter | Symbol | Bounds | Units | Description |
|-----------|--------|--------|-------|-------------|
| {param} | $\theta$ | [{min}, {max}] | {units} | {description} |

**Design Space**:
$$\mathbf{x} = [\theta, t]^T \in \mathbb{R}^n$$
$${min} \leq \theta \leq {max}$$

### 2.3 Constraints

| Constraint | Type | Formula | Threshold | Handling |
|------------|------|---------|-----------|----------|
| {name} | Inequality | $g_1(\mathbf{x}) = m - m_{max}$ | $m_{max} = {value}$ | Infeasible if violated |

**Feasible Region**:
$$\mathcal{F} = \{\mathbf{x} : g_1(\mathbf{x}) \leq 0\}$$

### 2.4 Multi-Objective Formulation

**Pareto Optimization Problem**:
$$\max_{\mathbf{x} \in \mathcal{F}} \quad f_1(\mathbf{x})$$
$$\min_{\mathbf{x} \in \mathcal{F}} \quad f_2(\mathbf{x})$$

**Pareto Dominance**: Solution $\mathbf{x}_1$ dominates $\mathbf{x}_2$ if:
- $f_1(\mathbf{x}_1) \geq f_1(\mathbf{x}_2)$ and $f_2(\mathbf{x}_1) \leq f_2(\mathbf{x}_2)$
- With at least one strict inequality

---

## 3. Optimization Algorithm

### 3.1 {Algorithm} Configuration

| Parameter | Value | Description |
|-----------|-------|-------------|
| Algorithm | {NSGA-II/TPE} | {Full algorithm name} |
| Population | auto | Managed by Optuna |
| Directions | `['{dir1}', '{dir2}']` | (obj1, obj2) |
| Sampler | `{Sampler}` | {Sampler description} |
| Trials | {n} | {breakdown if applicable} |

**Algorithm Properties**:
- {Property 1 with complexity if applicable: $O(...)$}
- {Property 2}
- {Property 3}

### 3.2 Return Format

```python
def objective(trial) -> Tuple[float, float]:
    # ... simulation and extraction ...
    return (obj1, obj2)  # Tuple, NOT negated
```

## 4. Simulation Pipeline

### 4.1 Trial Execution Flow

┌─────────────────────────────────────────────────────────────────────┐
│                         TRIAL n EXECUTION                            │
├─────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  1. OPTUNA SAMPLES ({Algorithm})                                    │
│     {param1} = trial.suggest_float("{param1}", {min}, {max})        │
│     {param2} = trial.suggest_float("{param2}", {min}, {max})        │
│                                                                      │
│  2. NX PARAMETER UPDATE                                             │
│     Module: optimization_engine/nx_updater.py                       │
│     Action: {Part}.prt expressions ← {{params}}                     │
│                                                                      │
│  3. HOOK: PRE_SOLVE                                                 │
│     → {hook action description}                                     │
│                                                                      │
│  4. NX SIMULATION (Nastran SOL {num})                               │
│     Module: optimization_engine/solve_simulation.py                 │
│     Input: {Part}_sim1.sim                                          │
│     Output: .dat, .op2, .f06                                        │
│                                                                      │
│  5. HOOK: POST_SOLVE                                                │
│     → {hook action description}                                     │
│                                                                      │
│  6. RESULT EXTRACTION                                               │
│     {Obj1} ← {extractor}(.{ext})                                    │
│     {Obj2} ← {extractor}(.{ext})                                    │
│                                                                      │
│  7. HOOK: POST_EXTRACTION                                           │
│     → {hook action description}                                     │
│                                                                      │
│  8. CONSTRAINT EVALUATION                                           │
│     {constraint} → feasible/infeasible                              │
│                                                                      │
│  9. RETURN TO OPTUNA                                                │
│     return ({obj1}, {obj2})                                         │
│                                                                      │
└─────────────────────────────────────────────────────────────────────┘

### 4.2 Hooks Configuration

| Hook Point | Function | Purpose |
|------------|----------|---------|
| PRE_SOLVE | {function}() | {purpose} |
| POST_SOLVE | {function}() | {purpose} |
| POST_EXTRACTION | {function}() | {purpose} |

## 5. Result Extraction Methods

### 5.1 {Objective 1} Extraction

| Attribute | Value |
|-----------|-------|
| Extractor | {extractor_name} |
| Module | {full.module.path} |
| Function | {function_name}() |
| Source | {source_file} |
| Output | {units} |

Algorithm:

{formula}

Where {variable definitions}.

**Code**:

```python
from {module} import {function}

result = {function}("{source_file}")
{variable} = result['{key}']  # {units}
```

{Repeat for each objective/extraction}


## 6. Neural Acceleration (AtomizerField)

### 6.1 Configuration

| Setting | Value | Description |
|---------|-------|-------------|
| enabled | {true/false} | Neural surrogate active |
| min_training_points | {n} | FEA trials before auto-training |
| auto_train | {true/false} | Trigger training automatically |
| epochs | {n} | Training epochs |
| validation_split | {n} | Holdout for validation |
| retrain_threshold | {n} | Retrain after N new FEA points |
| model_type | {type} | Input format |

### 6.2 Surrogate Model

**Input**: $\mathbf{x} = [{params}]^T \in \mathbb{R}^n$

**Output**: $\hat{\mathbf{y}} = [{outputs}]^T \in \mathbb{R}^m$

**Training Objective**:

$$\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$

### 6.3 Training Data Location

{training_data_path}/
├── trial_0001/
│   ├── input/model.bdf
│   ├── output/model.op2
│   └── metadata.json
├── trial_0002/
└── ...

### 6.4 Expected Performance

| Metric | Value |
|--------|-------|
| FEA time per trial | {time} |
| Neural time per trial | ~{time} |
| Speedup | ~{n}x |
| Expected R² | > {value} |

## 7. Study File Structure

{study_name}/
│
├── 1_setup/                              # INPUT CONFIGURATION
│   ├── model/                            # NX Model Files
│   │   ├── {Part}.prt                    # Parametric part
│   │   │   └── Expressions: {list}
│   │   ├── {Part}_sim1.sim               # Simulation
│   │   ├── {Part}_fem1.fem               # FEM mesh
│   │   ├── {output}.dat                  # Nastran BDF
│   │   ├── {output}.op2                  # Binary results
│   │   └── {other files}                 # {descriptions}
│   │
│   ├── optimization_config.json          # Study configuration
│   └── workflow_config.json              # Workflow metadata
│
├── 2_results/                            # OUTPUT (auto-generated)
│   ├── study.db                          # Optuna SQLite database
│   ├── optimization_history.json         # Trial history
│   ├── pareto_front.json                 # Pareto-optimal solutions
│   ├── optimization.log                  # Structured log
│   └── reports/                          # Generated reports
│       └── optimization_report.md        # Full results report
│
├── run_optimization.py                   # Entry point
├── reset_study.py                        # Database reset
└── README.md                             # This blueprint

## 8. Results Location

After optimization completes, results will be generated in `2_results/`:

| File | Description | Format |
|------|-------------|--------|
| study.db | Optuna database with all trials | SQLite |
| optimization_history.json | Full trial history | JSON |
| pareto_front.json | Pareto-optimal solutions | JSON |
| optimization.log | Execution log | Text |
| reports/optimization_report.md | Full Results Report | Markdown |

### 8.1 Results Report Contents

The generated optimization_report.md will contain:

  1. Optimization Summary - Best solutions, convergence status
  2. Pareto Front Analysis - All non-dominated solutions with trade-off visualization
  3. Parameter Correlations - Design variable vs objective relationships
  4. Convergence History - Objective values over trials
  5. Constraint Satisfaction - Feasibility statistics
  6. Neural Surrogate Performance - Training loss, validation R², prediction accuracy
  7. Algorithm Statistics - {Algorithm}-specific metrics
  8. Recommendations - Suggested optimal configurations

## 9. Quick Start

```bash
# STAGE 1: DISCOVER - Clean old files, run ONE solve, discover available outputs
python run_optimization.py --discover

# STAGE 2: VALIDATE - Run single trial to validate extraction works
python run_optimization.py --validate

# STAGE 3: TEST - Run 3-trial integration test
python run_optimization.py --test

# STAGE 4: TRAIN - Collect FEA training data for neural surrogate
python run_optimization.py --train --trials 50

# STAGE 5: RUN - Official optimization
python run_optimization.py --run --trials 100

# With neural acceleration (after training)
python run_optimization.py --run --trials 100 --enable-nn --resume
```

### Stage Descriptions

| Stage | Command | Purpose | When to Use |
|-------|---------|---------|-------------|
| DISCOVER | `--discover` | Clean old files, run 1 solve, report all output files | First-time setup, exploring model outputs |
| VALIDATE | `--validate` | Run 1 trial with full extraction pipeline | After discover, verify everything works |
| TEST | `--test` | Run 3 trials, check consistency | Before committing to long runs |
| TRAIN | `--train` | Collect FEA data for neural network | Building AtomizerField surrogate |
| RUN | `--run` | Official optimization | Production runs |

### Additional Options

```bash
# Clean old Nastran files before any stage
python run_optimization.py --discover --clean
python run_optimization.py --run --trials 100 --clean

# Resume from existing study
python run_optimization.py --run --trials 50 --resume

# Reset study (delete database)
python reset_study.py
```

### Dashboard Access

| Dashboard | URL | Purpose |
|-----------|-----|---------|
| Atomizer Dashboard | http://localhost:3003 | Live optimization monitoring, Pareto plots |
| Optuna Dashboard | http://localhost:8081 | Trial history, parameter importance |
| API Docs | http://localhost:8000/docs | Backend API documentation |

**Launch Dashboard** (from project root):

```bash
# Windows
launch_dashboard.bat

# Or manually:
# Terminal 1: cd atomizer-dashboard/backend && python -m uvicorn api.main:app --port 8000 --reload
# Terminal 2: cd atomizer-dashboard/frontend && npm run dev
```

## 10. Configuration Reference

**File**: `1_setup/optimization_config.json`

| Key | Value | Description |
|-----|-------|-------------|
| `optimization_settings.protocol` | {protocol} | Algorithm selection |
| `optimization_settings.sampler` | {Sampler} | Optuna sampler |
| `optimization_settings.n_trials` | {n} | Total trials |
| `design_variables[]` | [{params}] | Params to optimize |
| `objectives[]` | [{objectives}] | Objectives with goals |
| `constraints[]` | [{constraints}] | Constraints with thresholds |
| `result_extraction.*` | Extractor configs | How to get results |
| `neural_acceleration.*` | Neural settings | AtomizerField config |

## 11. References

  • {Author} ({year}). {Title}. {Journal}.
  • {Tool} Documentation: {description}

**Location**: `studies/{study_name}/README.md` (NOT at project root)

### 6. NX_FILE_MODIFICATIONS_REQUIRED.md (if needed)

If multi-solution workflow or specific NX setup is required:

```markdown
# NX File Modifications Required

Before running this optimization, you must modify the NX simulation files.

## Required Changes

### 1. Add Modal Analysis Solution (if needed)

Current: Only static analysis (SOL 101)
Required: Static + Modal (SOL 101 + SOL 103)

Steps:
1. Open `{Model}_sim1.sim` in NX
2. Solution → Create → Modal Analysis
3. Set frequency extraction parameters
4. Save simulation

### 2. Update Load Cases (if needed)

Current: [describe current loads]
Required: [describe required loads]

Steps: [specific instructions]

### 3. Verify Material Properties

Required: [material name and properties]

## Verification

After modifications:
1. Run simulation manually in NX
2. Verify OP2 files are generated
3. Check solution_1.op2 and solution_2.op2 exist (if multi-solution)

## User Interaction Best Practices

### Ask Before Generating

Always confirm with user:

  1. "Here's what I understand about your optimization problem: [summary]. Is this correct?"
  2. "I'll use Protocol {X} because [reasoning]. Does this sound right?"
  3. "I'll create extractors for: [list]. Are these the results you need?"
  4. "Should I generate the complete study structure now?"

### Provide Clear Next Steps

After generating files:

```
✓ Created study: studies/{study_name}/
✓ Generated optimization config
✓ Generated run_optimization.py with {protocol}
✓ Generated README.md with full documentation

Next Steps:
1. Place your NX files in studies/{study_name}/1_setup/model/
   - {Model}.prt
   - {Model}_sim1.sim
2. [If NX modifications needed] Read NX_FILE_MODIFICATIONS_REQUIRED.md
3. Test with 3 trials: cd studies/{study_name} && python run_optimization.py --trials 3
4. Monitor in dashboard: http://localhost:3003
5. Full run: python run_optimization.py --trials {n_trials}
```

### Handle Edge Cases

**User has incomplete information:**

- Suggest reasonable defaults based on similar studies
- Document assumptions clearly in README
- Mark as "REQUIRES USER INPUT" in generated files

**User wants custom extractors:**

- Explain the centralized extractor library
- If truly custom, guide them to create one in optimization_engine/extractors/
- Inherit from the OP2Extractor base class
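For illustration, a custom extractor might be shaped like the sketch below. This is hypothetical: the stand-in base class and the `extract` signature are assumptions, not the real `OP2Extractor` interface — check `optimization_engine/extractors/` for the actual base class before implementing.

```python
# Hypothetical sketch of a custom extractor. The real OP2Extractor base
# class in optimization_engine/extractors/ may expose a different
# interface; a stand-in base is defined here only for self-containment.

class OP2Extractor:  # stand-in, NOT the real Atomizer base class
    def extract(self, op2_path: str) -> float:
        raise NotImplementedError


class MaxTemperatureExtractor(OP2Extractor):
    """Example: reduce a parsed result set to a single scalar."""

    def __init__(self, results: dict):
        # In a real extractor this data would come from parsing the OP2 file
        self.results = results

    def extract(self, op2_path: str) -> float:
        temps = self.results.get("nodal_temperatures", [])
        if not temps:
            raise ValueError(f"No temperature results in {op2_path}")
        return max(temps)


extractor = MaxTemperatureExtractor({"nodal_temperatures": [293.1, 310.4, 305.0]})
print(extractor.extract("solution_1.op2"))  # → 310.4
```

The key conventions to preserve from the centralized library are a scalar return value and an explicit error when results are missing, so failed trials are reported rather than silently scored.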

**User unsure about bounds:**

- Recommend conservative bounds based on engineering judgment
- Suggest an iterative approach: "Start with [bounds], then refine based on initial results"

**User doesn't have NX files yet:**

- Generate all Python/JSON files anyway
- Create a placeholder model directory
- Provide clear instructions for adding NX files later

## Common Patterns

### Pattern 1: Mass Minimization with Constraints

```
Objective: Minimize mass
Constraints: Stress < limit, Displacement < limit, Frequency > limit
Protocol: Protocol 10 (single-objective TPE)
Extractors: extract_mass_from_expression, extract_solid_stress,
            extract_displacement, extract_frequency
Multi-Solution: Yes (static + modal)
```

### Pattern 2: Mass vs Frequency Trade-off

```
Objectives: Minimize mass, Maximize frequency
Constraints: Stress < limit, Displacement < limit
Protocol: Protocol 11 (multi-objective NSGA-II)
Extractors: extract_mass_from_expression, extract_frequency,
            extract_solid_stress, extract_displacement
Multi-Solution: Yes (static + modal)
```

### Pattern 3: Stress Minimization

```
Objective: Minimize stress
Constraints: Displacement < limit
Protocol: Protocol 10 (single-objective TPE)
Extractors: extract_solid_stress, extract_displacement
Multi-Solution: No (static only)
```
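Protocol 10's actual constraint handling is implemented by the protocol itself; as a standalone illustration of the idea behind these single-objective patterns, one common approach is to fold constraint violations into the scalar objective as a penalty. The limits and weight below are purely illustrative, not Atomizer defaults.

```python
# Standalone sketch of penalty-based constraint handling for a
# single-objective (Protocol 10-style) search. The real protocol's
# constraint handling may differ; limits and weights are illustrative.

def penalized_objective(mass, stress, displacement,
                        stress_limit=250.0, disp_limit=2.0,
                        penalty_weight=1e3):
    """Minimize mass; add a penalty proportional to constraint violation."""
    violation = 0.0
    if stress > stress_limit:
        violation += (stress - stress_limit) / stress_limit
    if displacement > disp_limit:
        violation += (displacement - disp_limit) / disp_limit
    return mass + penalty_weight * violation


# Feasible design: objective equals mass
print(penalized_objective(12.5, stress=200.0, displacement=1.5))  # → 12.5
# Infeasible design: heavily penalized
print(penalized_objective(10.0, stress=300.0, displacement=1.5))  # → 210.0
```

Normalizing each violation by its limit keeps penalties comparable across constraints with different units, so the weight does not have to be tuned per constraint.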

## Validation Integration

After generating files, always validate the study setup using the validator system:

### Config Validation

```python
from optimization_engine.validators import validate_config_file

result = validate_config_file("studies/{study_name}/1_setup/optimization_config.json")
if result.is_valid:
    print("[OK] Configuration is valid!")
else:
    for error in result.errors:
        print(f"[ERROR] {error}")
```

### Model Validation

```python
from optimization_engine.validators import validate_study_model

result = validate_study_model("{study_name}")
if result.is_valid:
    print("[OK] Model files valid!")
    print(f"    Part: {result.prt_file.name}")
    print(f"    Simulation: {result.sim_file.name}")
else:
    for error in result.errors:
        print(f"[ERROR] {error}")
```

### Complete Study Validation

```python
from optimization_engine.validators import validate_study

result = validate_study("{study_name}")
print(result)  # Shows complete health check
```

### Validation Checklist for Generated Studies

Before declaring a study complete, ensure:

1. **Config Validation Passes:**
   - All design variables have valid bounds (min < max)
   - All objectives have proper extraction methods
   - All constraints have thresholds defined
   - Protocol matches objective count
2. **Model Files Ready** (user must provide):
   - Part file (.prt) exists in the model directory
   - Simulation file (.sim) exists
   - FEM file (.fem) will be auto-generated
3. **Run Script Works:**
   - Test with `python run_optimization.py --trials 1`
   - Verify imports resolve correctly
   - Verify the NX solver can be reached
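To make the config checks in item 1 concrete, a minimal fragment is sketched below. Every field name here is an assumption for illustration only — the authoritative schema is whatever `validate_config_file` enforces. Note that the protocol number must match the objective count (Protocol 11 for the two objectives shown).

```json
{
  "protocol": 11,
  "design_variables": [
    {"name": "wall_thickness", "min": 2.0, "max": 8.0}
  ],
  "objectives": [
    {"name": "mass", "direction": "minimize", "extractor": "extract_mass_from_expression"},
    {"name": "frequency", "direction": "maximize", "extractor": "extract_frequency"}
  ],
  "constraints": [
    {"name": "max_stress", "extractor": "extract_solid_stress", "threshold": 250.0, "operator": "<"}
  ]
}
```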

### Automated Pre-Flight Check

Add this to run_optimization.py:

```python
import sys  # needed for sys.exit below


def preflight_check():
    """Validate study setup before running."""
    from optimization_engine.validators import validate_study

    # STUDY_NAME is defined at the top of run_optimization.py
    result = validate_study(STUDY_NAME)

    if not result.is_ready_to_run:
        print("[X] Study validation failed!")
        print(result)
        sys.exit(1)

    print("[OK] Pre-flight check passed!")
    return True


if __name__ == "__main__":
    preflight_check()
    # ... rest of optimization
```

## Critical Reminders

### Multi-Objective Return Format

```python
# ✅ CORRECT: Return a tuple of positive values with semantic directions
study = optuna.create_study(
    directions=['minimize', 'maximize'],  # Semantic directions
    sampler=NSGAIISampler()
)

def objective(trial):
    return (mass, frequency)  # Return positive values

# ❌ WRONG: Negating the maximized objective
return (mass, -frequency)  # Creates a degenerate Pareto front
```
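Why direction handling matters can be reasoned about without Optuna: Pareto dominance depends on the per-objective direction. The standalone sketch below (not Atomizer or Optuna code) shows a direction-aware dominance check and Pareto filter.

```python
# Standalone illustration of Pareto dominance with per-objective
# directions; not Atomizer or Optuna code.

def dominates(a, b, directions):
    """True if point a dominates point b given per-objective directions."""
    at_least_as_good = all(
        (x <= y) if d == "minimize" else (x >= y)
        for x, y, d in zip(a, b, directions)
    )
    strictly_better = any(
        (x < y) if d == "minimize" else (x > y)
        for x, y, d in zip(a, b, directions)
    )
    return at_least_as_good and strictly_better


def pareto_front(points, directions):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p, directions) for q in points)]


# (mass, frequency): minimize mass, maximize frequency
points = [(10.0, 120.0), (12.0, 150.0), (11.0, 110.0)]
print(pareto_front(points, ["minimize", "maximize"]))
# → [(10.0, 120.0), (12.0, 150.0)]  — (11.0, 110.0) is dominated
```

If the directions passed to the study do not match the sign convention of the returned values, this comparison is evaluated on the wrong axis, which is what collapses the front.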

### Multi-Solution NX Protocol

```python
# ✅ CORRECT: Solve all solutions
result = nx_solver.run_simulation(
    sim_file=sim_file,
    working_dir=model_dir,
    expression_updates=design_vars,
    solution_name=None  # None = solve ALL solutions
)

# ❌ WRONG: Only solves the first solution
solution_name="Solution 1"  # Multi-solution workflows will fail
```
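A cheap guard after a multi-solution solve is to confirm that every expected per-solution OP2 file was actually written. The `solution_N.op2` naming below is an assumption taken from the NX modification notes earlier in this skill; adjust it to your model's actual output names.

```python
# Sketch: verify all expected OP2 files exist after a multi-solution
# solve. The solution_N.op2 naming is an assumption; adjust to your
# model's actual output file names.
import tempfile
from pathlib import Path


def missing_op2_files(working_dir, n_solutions):
    """Return names of expected per-solution OP2 files that are absent."""
    expected = [Path(working_dir) / f"solution_{i}.op2"
                for i in range(1, n_solutions + 1)]
    return [p.name for p in expected if not p.exists()]


# Demonstration with a temporary directory standing in for the model dir
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "solution_1.op2").touch()  # only one of two solutions wrote output
    print(missing_op2_files(d, 2))  # → ['solution_2.op2']
```

Calling this right after `run_simulation` and failing the trial when the list is non-empty surfaces the `solution_name="Solution 1"` mistake immediately instead of during extraction.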

### Extractor Selection

Always use centralized extractors from optimization_engine/extractors/:

- Standardized error handling
- Consistent return formats
- Well-tested and documented
- No code duplication

## Output Format

After completing study creation, provide:

1. **Summary Table:**

   ```
   Study Created: {study_name}
   Protocol: {protocol}
   Objectives: {list}
   Constraints: {list}
   Design Variables: {list}
   Multi-Solution: {Yes/No}
   ```
2. **File Checklist (ALL MANDATORY):**

   ```
   ✓ studies/{study_name}/1_setup/optimization_config.json
   ✓ studies/{study_name}/1_setup/workflow_config.json
   ✓ studies/{study_name}/run_optimization.py
   ✓ studies/{study_name}/reset_study.py
   ✓ studies/{study_name}/README.md                         # Engineering blueprint
   ✓ studies/{study_name}/STUDY_REPORT.md                   # MANDATORY - Results report template
   [✓] studies/{study_name}/NX_FILE_MODIFICATIONS_REQUIRED.md (if needed)
   ```

## STUDY_REPORT.md - MANDATORY FOR ALL STUDIES

**CRITICAL**: Every study MUST have a STUDY_REPORT.md file. This is the living results document that gets updated as optimization progresses. Create it at study setup time with placeholder sections.

**Location**: `studies/{study_name}/STUDY_REPORT.md`

**Purpose**:

- Documents optimization results as they come in
- Tracks best solutions found
- Records convergence history
- Compares FEA vs Neural predictions (if applicable)
- Provides engineering recommendations

**Template**:

```markdown
# {Study Name} - Optimization Report

**Study**: {study_name}
**Created**: {date}
**Status**: 🔄 In Progress / ✅ Complete

---

## Executive Summary

| Metric | Value |
|--------|-------|
| Total Trials | - |
| FEA Trials | - |
| NN Trials | - |
| Best {Objective1} | - |
| Best {Objective2} | - |

*Summary will be updated as optimization progresses.*

---

## 1. Optimization Progress

### Trial History
| Trial | {Obj1} | {Obj2} | Source | Status |
|-------|--------|--------|--------|--------|
| - | - | - | - | - |

### Convergence
*Convergence plots will be added after sufficient trials.*

---

## 2. Best Designs Found

### Best {Objective1}
| Parameter | Value |
|-----------|-------|
| - | - |

### Best {Objective2}
| Parameter | Value |
|-----------|-------|
| - | - |

### Pareto Front (if multi-objective)
*Pareto solutions will be listed here.*

---

## 3. Neural Surrogate Performance (if applicable)

| Metric | {Obj1} | {Obj2} |
|--------|--------|--------|
| R² Score | - | - |
| MAE | - | - |
| Prediction Time | - | - |

---

## 4. Engineering Recommendations

*Recommendations based on optimization results.*

---

## 5. Next Steps

- [ ] Continue optimization
- [ ] Validate best design with detailed FEA
- [ ] Manufacturing review

---

*Report auto-generated by Atomizer. Last updated: {date}*
```
3. **Next Steps** (as shown earlier)

## Remember

- Be conversational and helpful
- Ask clarifying questions early
- Confirm understanding before generating
- Provide context for technical decisions
- Make next steps crystal clear
- Anticipate common mistakes
- Point users to existing studies for context, but generate code only from the patterns in this skill
- Mentally test-run your generated code before presenting it

The goal is for the user to have a COMPLETE, WORKING study that they can run immediately after placing their NX files.