
# Bracket Stiffness Optimization - AtomizerField

Multi-objective bracket geometry optimization with neural network acceleration.

**Created:** 2025-11-26 | **Protocol:** Protocol 11 (Multi-Objective NSGA-II) | **Status:** Ready to Run


## 1. Engineering Problem

### 1.1 Objective

Optimize an L-bracket mounting structure to maximize structural stiffness while minimizing mass, subject to manufacturing constraints.

### 1.2 Physical System

- **Component:** L-shaped mounting bracket
- **Material:** Steel (density ρ defined in the NX model)
- **Loading:** Static force applied in the Z-direction
- **Boundary Conditions:** Fixed support at the mounting face
- **Analysis Type:** Linear static (Nastran SOL 101)

## 2. Mathematical Formulation

### 2.1 Objectives

| Objective | Goal | Weight | Formula | Units |
|---|---|---|---|---|
| Stiffness | maximize | 1.0 | $k = \frac{F}{\delta_{max}}$ | N/mm |
| Mass | minimize | 0.1 | $m = \sum_{e} \rho_e V_e$ | kg |

Where:

- $k$ = structural stiffness (N/mm)
- $F$ = applied force magnitude (N)
- $\delta_{max}$ = maximum absolute displacement (mm)
- $\rho_e$ = element material density (kg/mm³)
- $V_e$ = element volume (mm³)
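As a concrete illustration of the two objective formulas, here is a direct transcription in Python. All numeric values are made-up examples, not values from the actual bracket model.

```python
# Illustrative calculation of the two objectives; the force, displacement,
# and element values below are made-up examples.

def stiffness(force_n: float, max_displacement_mm: float) -> float:
    """k = F / delta_max, in N/mm."""
    return force_n / max_displacement_mm

def mass_kg(elements: list[tuple[float, float]]) -> float:
    """m = sum(rho_e * V_e) over (density kg/mm^3, volume mm^3) pairs."""
    return sum(rho * vol for rho, vol in elements)

k = stiffness(1000.0, 0.09)            # 1000 N load, 0.09 mm peak deflection
m = mass_kg([(7.85e-6, 10000.0)] * 2)  # two steel elements of 10,000 mm^3 each
```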

### 2.2 Design Variables

| Parameter | Symbol | Bounds | Units | Description |
|---|---|---|---|---|
| Support Angle | $\theta$ | [20, 70] | degrees | Angle of the support arm relative to the base |
| Tip Thickness | $t$ | [30, 60] | mm | Thickness at the bracket tip |

**Design Space:**

$$\mathbf{x} = [\theta, t]^T \in \mathbb{R}^2, \quad 20 \leq \theta \leq 70, \quad 30 \leq t \leq 60$$

### 2.3 Constraints

| Constraint | Type | Formula | Threshold | Handling |
|---|---|---|---|---|
| Mass Limit | Inequality | $g_1(\mathbf{x}) = m - m_{max}$ | $m_{max} = 0.2$ kg | Infeasible if violated |

**Feasible Region:**

$$\mathcal{F} = \{\mathbf{x} : g_1(\mathbf{x}) \leq 0\}$$
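Putting the bounds and the single inequality constraint together, a feasibility check for this study reduces to a few comparisons (a minimal sketch; the engine's actual constraint-handling code is not shown in this README):

```python
# Feasibility check for this study: design-variable bounds plus the
# single inequality constraint g1(x) = m - m_max <= 0.

M_MAX_KG = 0.2

def g1(mass: float) -> float:
    """Constraint value in kg; the design is feasible when g1 <= 0."""
    return mass - M_MAX_KG

def is_feasible(theta_deg: float, t_mm: float, mass: float) -> bool:
    in_bounds = 20.0 <= theta_deg <= 70.0 and 30.0 <= t_mm <= 60.0
    return in_bounds and g1(mass) <= 0.0
```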

### 2.4 Multi-Objective Formulation

**Pareto Optimization Problem:**

$$\max_{\mathbf{x} \in \mathcal{F}} \; k(\mathbf{x}), \qquad \min_{\mathbf{x} \in \mathcal{F}} \; m(\mathbf{x})$$

**Pareto Dominance:** Solution $\mathbf{x}_1$ dominates $\mathbf{x}_2$ if:

- $k(\mathbf{x}_1) \geq k(\mathbf{x}_2)$ and $m(\mathbf{x}_1) \leq m(\mathbf{x}_2)$
- with at least one strict inequality.
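The dominance rule translates directly into code. A minimal sketch for this study's objective pair (stiffness maximized, mass minimized):

```python
def dominates(k1: float, m1: float, k2: float, m2: float) -> bool:
    """True if solution 1 Pareto-dominates solution 2
    (stiffness k maximized, mass m minimized)."""
    no_worse = k1 >= k2 and m1 <= m2
    strictly_better = k1 > k2 or m1 < m2
    return no_worse and strictly_better
```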

## 3. Optimization Algorithm

### 3.1 NSGA-II Configuration

| Parameter | Value | Description |
|---|---|---|
| Algorithm | NSGA-II | Non-dominated Sorting Genetic Algorithm II |
| Population | auto | Managed by Optuna |
| Directions | `['maximize', 'minimize']` | (stiffness, mass) |
| Sampler | `NSGAIISampler` | Multi-objective sampler |
| Trials | 100 | 50 FEA + 50 neural |

**NSGA-II Properties:**

- Fast non-dominated sorting: $O(MN^2)$, where $M$ = number of objectives and $N$ = population size
- Crowding distance for diversity preservation
- Binary tournament selection with crowding comparison
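Optuna's `NSGAIISampler` performs the non-dominated sorting internally; the pure-Python sketch below only illustrates the $O(MN^2)$ dominance-comparison structure for this study's two objectives, and is not the engine's implementation.

```python
# Fast non-dominated sorting sketch for (k, m) pairs:
# k (stiffness) is maximized, m (mass) is minimized.

def dominates(a, b):
    """a, b are (k, m) pairs."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def non_dominated_fronts(points):
    """Return point indices grouped into Pareto fronts (front 0 first)."""
    n = len(points)
    dominated_by = [0] * n                 # how many points dominate i
    dominates_set = [[] for _ in range(n)]  # points that i dominates
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominates_set[i].append(j)
                dominated_by[j] += 1
    fronts = []
    current = [i for i in range(n) if dominated_by[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominates_set[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts
```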

### 3.2 Return Format

```python
from typing import Tuple

def objective(trial) -> Tuple[float, float]:
    # ... simulation and extraction ...
    return (stiffness, mass)  # tuple of raw values, NOT negated
```

## 4. Simulation Pipeline

### 4.1 Trial Execution Flow

```
┌─────────────────────────────────────────────────────────────────────┐
│                         TRIAL n EXECUTION                            │
├─────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  1. OPTUNA SAMPLES (NSGA-II)                                        │
│     θ = trial.suggest_float("support_angle", 20, 70)                │
│     t = trial.suggest_float("tip_thickness", 30, 60)                │
│                                                                      │
│  2. NX PARAMETER UPDATE                                             │
│     Module: optimization_engine/nx_updater.py                       │
│     Action: Bracket.prt expressions ← {θ, t}                        │
│                                                                      │
│  3. HOOK: PRE_SOLVE                                                 │
│     → Log trial start, validate design bounds                       │
│                                                                      │
│  4. NX SIMULATION (Nastran SOL 101)                                 │
│     Module: optimization_engine/solve_simulation.py                 │
│     Input: Bracket_sim1.sim                                         │
│     Output: .dat, .op2, .f06                                        │
│                                                                      │
│  5. HOOK: POST_SOLVE                                                │
│     → Run export_displacement_field.py                              │
│     → Generate export_field_dz.fld                                  │
│                                                                      │
│  6. RESULT EXTRACTION                                               │
│     Mass ← bdf_mass_extractor(.dat)                                 │
│     Stiffness ← stiffness_calculator(.fld, .op2)                    │
│                                                                      │
│  7. HOOK: POST_EXTRACTION                                           │
│     → Export BDF/OP2 to training data directory                     │
│                                                                      │
│  8. CONSTRAINT EVALUATION                                           │
│     mass ≤ 0.2 kg → feasible/infeasible                             │
│                                                                      │
│  9. RETURN TO OPTUNA                                                │
│     return (stiffness, mass)                                        │
│                                                                      │
└─────────────────────────────────────────────────────────────────────┘
```

### 4.2 Hooks Configuration

| Hook Point | Function | Purpose |
|---|---|---|
| PRE_SOLVE | `log_trial_start()` | Log design variables and trial number |
| POST_SOLVE | `export_field_data()` | Run NX journal for `.fld` export |
| POST_EXTRACTION | `export_training_data()` | Save BDF/OP2 for neural training |
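The engine's actual hook API is not shown in this README; a registry of the kind the table describes might look like the following hypothetical sketch (`register_hook` and `fire` are illustrative names, not the real API):

```python
# Hypothetical hook registry sketch; names are illustrative only.
from collections import defaultdict

_hooks = defaultdict(list)

def register_hook(point: str, fn):
    """Attach a callback to a hook point such as 'PRE_SOLVE'."""
    _hooks[point].append(fn)

def fire(point: str, **context):
    """Invoke all callbacks registered for a hook point."""
    for fn in _hooks[point]:
        fn(**context)

log = []
register_hook("PRE_SOLVE", lambda **ctx: log.append(("start", ctx["trial"])))
register_hook("POST_SOLVE", lambda **ctx: log.append(("solved", ctx["trial"])))

fire("PRE_SOLVE", trial=7)
fire("POST_SOLVE", trial=7)
```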

## 5. Result Extraction Methods

### 5.1 Mass Extraction

| Attribute | Value |
|---|---|
| Extractor | `bdf_mass_extractor` |
| Module | `optimization_engine.extractors.bdf_mass_extractor` |
| Function | `extract_mass_from_bdf()` |
| Source | `bracket_sim1-solution_1.dat` |
| Output | kg |

**Algorithm:**

$$m = \sum_{e=1}^{N_{elem}} m_e = \sum_{e=1}^{N_{elem}} \rho_e \cdot V_e$$

Where element volume $V_e$ is computed from BDF geometry (CTETRA, CHEXA, etc.).

**Code:**

```python
from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf

mass_kg = extract_mass_from_bdf("1_setup/model/bracket_sim1-solution_1.dat")
```

### 5.2 Stiffness Extraction

| Attribute | Value |
|---|---|
| Extractor | `stiffness_calculator` |
| Module | `optimization_engine.extractors.stiffness_calculator` |
| Displacement Source | `export_field_dz.fld` |
| Force Source | `bracket_sim1-solution_1.op2` |
| Components | Force: $F_z$; Displacement: $\delta_z$ |
| Output | N/mm |

**Algorithm:**

$$k = \frac{F_z}{\delta_{max,z}}$$

Where:

- $F_z$ = applied force in the Z-direction (extracted from the OP2 OLOAD resultant)
- $\delta_{max,z} = \max_{i \in \text{nodes}} |u_{z,i}|$ (from the field export)

**Code:**

```python
from optimization_engine.extractors.stiffness_calculator import StiffnessCalculator

calculator = StiffnessCalculator(
    field_file="1_setup/model/export_field_dz.fld",
    op2_file="1_setup/model/bracket_sim1-solution_1.op2",
    force_component="fz",
    displacement_component="z",
)
result = calculator.calculate()
stiffness = result['stiffness']  # N/mm
```

### 5.3 Field Data Format

**NX Field Export (`.fld`):**

```
FIELD: [ResultProbe] : [TABLE]
RESULT TYPE: Displacement
COMPONENT: Z
START DATA
step, node_id, value
0, 396, -0.086716040968895
0, 397, -0.091234567890123
...
END DATA
```
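The `START DATA` / `END DATA` section is plain CSV, so extracting $\delta_{max,z}$ needs only a few lines. A minimal sketch (simplified header handling; `stiffness_calculator`'s real parser may differ):

```python
# Minimal parser for the .fld data section shown above; returns
# max |value| over all nodes, i.e. delta_max for the stiffness formula.

def max_abs_displacement(fld_text: str) -> float:
    in_data, peak = False, 0.0
    for line in fld_text.splitlines():
        line = line.strip()
        if line == "START DATA":
            in_data = True
            continue
        if line == "END DATA":
            break
        if in_data:
            parts = line.split(",")
            if len(parts) == 3:
                try:
                    peak = max(peak, abs(float(parts[2])))
                except ValueError:
                    pass  # skip the "step, node_id, value" header row
    return peak

sample = """START DATA
step, node_id, value
0, 396, -0.086716040968895
0, 397, -0.091234567890123
END DATA"""
```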

## 6. Neural Acceleration (AtomizerField)

### 6.1 Configuration

| Setting | Value | Description |
|---|---|---|
| `enabled` | true | Neural surrogate active |
| `min_training_points` | 50 | FEA trials before auto-training |
| `auto_train` | true | Trigger training automatically |
| `epochs` | 100 | Training epochs |
| `validation_split` | 0.2 | 20% holdout for validation |
| `retrain_threshold` | 25 | Retrain after N new FEA points |
| `model_type` | parametric | Input: design parameters only |

6.2 Surrogate Model

Input: \mathbf{x} = [\theta, t]^T \in \mathbb{R}^2

Output: \hat{\mathbf{y}} = [\hat{k}, \hat{m}]^T \in \mathbb{R}^2

Architecture: Parametric neural network (MLP)

Training Objective:

\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \left[ (k_i - \hat{k}_i)^2 + (m_i - \hat{m}_i)^2 \right]
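The loss above is plain mean squared error summed over the two outputs. A direct pure-Python transcription (sample values are made up; in practice the trainer presumably normalizes $k$ and $m$ first, since their scales differ by several orders of magnitude):

```python
# MSE over (k, m) target/prediction pairs, exactly as in the formula.

def mse_loss(targets, predictions):
    """targets/predictions: equal-length lists of (k, m) pairs."""
    n = len(targets)
    return sum((k - kh) ** 2 + (m - mh) ** 2
               for (k, m), (kh, mh) in zip(targets, predictions)) / n

loss = mse_loss([(100.0, 0.15), (120.0, 0.18)],
                [(98.0, 0.15), (121.0, 0.18)])  # (4 + 1) / 2 = 2.5
```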

6.3 Training Data Location

atomizer_field_training_data/bracket_stiffness_optimization_atomizerfield/
├── trial_0001/
│   ├── input/model.bdf       # Mesh + design parameters
│   ├── output/model.op2      # FEA displacement/stress results
│   └── metadata.json         # {support_angle, tip_thickness, stiffness, mass}
├── trial_0002/
└── ...

### 6.4 Expected Performance

| Metric | Value |
|---|---|
| FEA time per trial | 10-30 min |
| Neural time per trial | ~4.5 ms |
| Speedup | ~2,200x |
| Expected R² | > 0.95 (after 50 samples) |

7. Study File Structure

bracket_stiffness_optimization_atomizerfield/
│
├── 1_setup/                              # INPUT CONFIGURATION
│   ├── model/                            # NX Model Files
│   │   ├── Bracket.prt                   # Parametric part
│   │   │   └── Expressions: support_angle, tip_thickness
│   │   ├── Bracket_sim1.sim              # Simulation (SOL 101)
│   │   ├── Bracket_fem1.fem              # FEM mesh (auto-updated)
│   │   ├── bracket_sim1-solution_1.dat   # Nastran BDF input
│   │   ├── bracket_sim1-solution_1.op2   # Binary results
│   │   ├── bracket_sim1-solution_1.f06   # Text summary
│   │   ├── export_displacement_field.py  # Field export journal
│   │   └── export_field_dz.fld           # Z-displacement field
│   │
│   ├── optimization_config.json          # Study configuration
│   └── workflow_config.json              # Workflow metadata
│
├── 2_results/                            # OUTPUT (auto-generated)
│   ├── study.db                          # Optuna SQLite database
│   ├── optimization_history.json         # Trial history
│   ├── pareto_front.json                 # Pareto-optimal solutions
│   ├── optimization.log                  # Structured log
│   └── reports/                          # Generated reports
│       └── optimization_report.md        # Full results report
│
├── run_optimization.py                   # Entry point
├── reset_study.py                        # Database reset
└── README.md                             # This blueprint

## 8. Results Location

After the optimization completes, results are generated in `2_results/`:

| File | Description | Format |
|---|---|---|
| `study.db` | Optuna database with all trials | SQLite |
| `optimization_history.json` | Full trial history | JSON |
| `pareto_front.json` | Pareto-optimal solutions | JSON |
| `optimization.log` | Execution log | Text |
| `reports/optimization_report.md` | Full results report | Markdown |

### 8.1 Results Report Contents

The generated `optimization_report.md` will contain:

1. **Optimization Summary** - best solutions, convergence status
2. **Pareto Front Analysis** - all non-dominated solutions with trade-off visualization
3. **Parameter Correlations** - design variable vs. objective relationships
4. **Convergence History** - objective values over trials
5. **Constraint Satisfaction** - feasibility statistics
6. **Neural Surrogate Performance** - training loss, validation R², prediction accuracy
7. **Algorithm Statistics** - NSGA-II population diversity, hypervolume indicator
8. **Recommendations** - suggested optimal configurations

## 9. Quick Start

```bash
# STAGE 1: DISCOVER - clean old files, run ONE solve, discover available outputs
python run_optimization.py --discover

# STAGE 2: VALIDATE - run a single trial to validate that extraction works
python run_optimization.py --validate

# STAGE 3: TEST - run a 3-trial integration test
python run_optimization.py --test

# STAGE 4: TRAIN - collect FEA training data for the neural surrogate
python run_optimization.py --train --trials 50

# STAGE 5: RUN - official optimization
python run_optimization.py --run --trials 100

# With neural acceleration (after training)
python run_optimization.py --run --trials 100 --enable-nn --resume
```

### Stage Descriptions

| Stage | Command | Purpose | When to Use |
|---|---|---|---|
| DISCOVER | `--discover` | Scan model, clean files, run 1 solve, report outputs | First-time setup |
| VALIDATE | `--validate` | Run 1 trial with the full extraction pipeline | After discover |
| TEST | `--test` | Run 3 trials, check consistency | Before long runs |
| TRAIN | `--train` | Collect FEA data for the neural network | Building the surrogate |
| RUN | `--run` | Official optimization | Production runs |

### Additional Options

```bash
# Clean old Nastran files before any stage
python run_optimization.py --discover --clean

# Resume from an existing study
python run_optimization.py --run --trials 50 --resume

# Reset the study (delete database)
python reset_study.py
python reset_study.py --clean  # also clean Nastran files
```

## 10. Dashboard Access

### Live Monitoring

| Dashboard | URL | Purpose |
|---|---|---|
| Atomizer Dashboard | http://localhost:3003 | Live optimization monitoring, Pareto plots |
| Optuna Dashboard | http://localhost:8081 | Trial history, hyperparameter importance |

### Starting Dashboards

```bash
# Start the Atomizer Dashboard (from the project root)
cd atomizer-dashboard/frontend && npm run dev
cd atomizer-dashboard/backend && python -m uvicorn api.main:app --port 8000

# Start the Optuna Dashboard (for this study)
optuna-dashboard sqlite:///2_results/study.db --port 8081
```

### What You'll See

**Atomizer Dashboard** (localhost:3003):

- Real-time Pareto front visualization
- Parallel coordinates plot for design variables
- Trial progress and success/failure rates
- Study comparison across multiple optimizations

**Optuna Dashboard** (localhost:8081):

- Trial history with all parameters and objectives
- Hyperparameter importance analysis
- Optimization history plots
- Slice plots for parameter sensitivity

## 11. Configuration Reference

**File:** `1_setup/optimization_config.json`

| Key | Value | Description |
|---|---|---|
| `optimization_settings.protocol` | `protocol_11_multi_objective` | Algorithm selection |
| `optimization_settings.sampler` | `NSGAIISampler` | Optuna sampler |
| `optimization_settings.n_trials` | 100 | Total trials |
| `design_variables[]` | [support_angle, tip_thickness] | Parameters to optimize |
| `objectives[]` | [stiffness, mass] | Objectives with goals |
| `constraints[]` | [mass_limit] | Constraints with thresholds |
| `result_extraction.*` | Extractor configs | How to get results |
| `neural_acceleration.*` | Neural settings | AtomizerField config |
| `training_data_export.*` | Export settings | Training data location |
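The exact schema of `optimization_config.json` is not reproduced in this README; based on the keys and values in the table above, the core of the file plausibly looks something like the following sketch (field names beyond those in the table are assumptions):

```json
{
  "optimization_settings": {
    "protocol": "protocol_11_multi_objective",
    "sampler": "NSGAIISampler",
    "n_trials": 100
  },
  "design_variables": [
    {"name": "support_angle", "bounds": [20, 70], "units": "degrees"},
    {"name": "tip_thickness", "bounds": [30, 60], "units": "mm"}
  ],
  "objectives": [
    {"name": "stiffness", "goal": "maximize"},
    {"name": "mass", "goal": "minimize"}
  ],
  "constraints": [
    {"name": "mass_limit", "type": "inequality", "threshold": 0.2}
  ]
}
```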

## 12. References

- Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. *IEEE Transactions on Evolutionary Computation*, 6(2), 182-197.
- pyNastran documentation: BDF/OP2 parsing
- Optuna documentation: multi-objective optimization with NSGA-II