feat: Add MLP surrogate with Turbo Mode for 100x faster optimization

Neural Acceleration (MLP Surrogate):
- Add run_nn_optimization.py with hybrid FEA/NN workflow
- MLP architecture: 4-layer (64->128->128->64) with BatchNorm/Dropout
- Three workflow modes:
  - --all: Sequential export->train->optimize->validate
  - --hybrid-loop: Iterative Train->NN->Validate->Retrain cycle
  - --turbo: Aggressive single-best validation (RECOMMENDED)
- Turbo mode: 5000 NN trials + 50 FEA validations in ~12 minutes
- Separate nn_study.db to avoid overloading dashboard

Performance Results (bracket_pareto_3obj study):
- NN prediction errors: mass 1-5%, stress 1-4%, stiffness 5-15%
- Found minimum mass designs at boundary (angle~30deg, thick~30mm)
- 100x speedup vs pure FEA exploration

Protocol Operating System:
- Add .claude/skills/ with Bootstrap, Cheatsheet, Context Loader
- Add docs/protocols/ with operations (OP_01-06) and system (SYS_10-14)
- Update SYS_14_NEURAL_ACCELERATION.md with MLP Turbo Mode docs

NX Automation:
- Add optimization_engine/hooks/ for NX CAD/CAE automation
- Add study_wizard.py for guided study creation
- Fix FEM mesh update: load idealized part before UpdateFemodel()

New Study:
- bracket_pareto_3obj: 3-objective Pareto (mass, stress, stiffness)
- 167 FEA trials + 5000 NN trials completed
- Demonstrates full hybrid workflow

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Antoine
2025-12-06 20:01:59 -05:00
parent 0cb2808c44
commit 602560c46a
70 changed files with 31018 additions and 289 deletions


@@ -0,0 +1,289 @@
# Extractors Catalog Module
**Last Updated**: December 5, 2025
**Version**: 1.0
**Type**: Optional Module
This module documents all available extractors in the Atomizer framework. Load this when the user asks about result extraction or needs to understand what extractors are available.
---
## When to Load
- User asks "what extractors are available?"
- User needs to extract results from OP2/BDF files
- Setting up a new study with custom extraction needs
- Debugging extraction issues
---
## PR.1 Extractor Catalog
| ID | Extractor | Module | Function | Input | Output | Returns |
|----|-----------|--------|----------|-------|--------|---------|
| E1 | **Displacement** | `optimization_engine.extractors.extract_displacement` | `extract_displacement(op2_file, subcase=1)` | `.op2` | mm | `{'max_displacement': float, 'max_disp_node': int, 'max_disp_x/y/z': float}` |
| E2 | **Frequency** | `optimization_engine.extractors.extract_frequency` | `extract_frequency(op2_file, subcase=1, mode_number=1)` | `.op2` | Hz | `{'frequency': float, 'mode_number': int, 'eigenvalue': float, 'all_frequencies': list}` |
| E3 | **Von Mises Stress** | `optimization_engine.extractors.extract_von_mises_stress` | `extract_solid_stress(op2_file, subcase=1, element_type='cquad4')` | `.op2` | MPa | `{'max_von_mises': float, 'max_stress_element': int}` |
| E4 | **BDF Mass** | `optimization_engine.extractors.bdf_mass_extractor` | `extract_mass_from_bdf(bdf_file)` | `.dat`/`.bdf` | kg | `float` (mass in kg) |
| E5 | **CAD Expression Mass** | `optimization_engine.extractors.extract_mass_from_expression` | `extract_mass_from_expression(prt_file, expression_name='p173')` | `.prt` + `_temp_mass.txt` | kg | `float` (mass in kg) |
| E6 | **Field Data** | `optimization_engine.extractors.field_data_extractor` | `FieldDataExtractor(field_file, result_column, aggregation)` | `.fld`/`.csv` | varies | `{'value': float, 'stats': dict}` |
| E7 | **Stiffness** | `optimization_engine.extractors.stiffness_calculator` | `StiffnessCalculator(field_file, op2_file, force_component, displacement_component)` | `.fld` + `.op2` | N/mm | `{'stiffness': float, 'displacement': float, 'force': float}` |
| E11 | **Part Mass & Material** | `optimization_engine.extractors.extract_part_mass_material` | `extract_part_mass_material(prt_file)` | `.prt` | kg + dict | `{'mass_kg': float, 'volume_mm3': float, 'material': {'name': str}, ...}` |
**For Zernike extractors (E8-E10)**, see the [zernike-optimization module](./zernike-optimization.md).
---
## PR.2 Extractor Code Snippets (COPY-PASTE)
### E1: Displacement Extraction
```python
from optimization_engine.extractors.extract_displacement import extract_displacement
disp_result = extract_displacement(op2_file, subcase=1)
max_displacement = disp_result['max_displacement'] # mm
max_node = disp_result['max_disp_node'] # Node ID
```
**Return Dictionary**:
```python
{
    'max_displacement': 0.523,  # Maximum magnitude (mm)
    'max_disp_node': 1234,      # Node ID with max displacement
    'max_disp_x': 0.123,        # X component at max node
    'max_disp_y': 0.456,        # Y component at max node
    'max_disp_z': 0.234         # Z component at max node
}
```
### E2: Frequency Extraction
```python
from optimization_engine.extractors.extract_frequency import extract_frequency
# Get first mode frequency
freq_result = extract_frequency(op2_file, subcase=1, mode_number=1)
frequency = freq_result['frequency'] # Hz
# Get all frequencies
all_freqs = freq_result['all_frequencies'] # List of all mode frequencies
```
**Return Dictionary**:
```python
{
    'frequency': 125.4,    # Requested mode frequency (Hz)
    'mode_number': 1,      # Mode number requested
    'eigenvalue': 6.21e5,  # Eigenvalue (rad/s)^2
    'all_frequencies': [125.4, 234.5, 389.2, ...]  # All mode frequencies
}
```
### E3: Stress Extraction
```python
from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress
# For shell elements (CQUAD4, CTRIA3)
stress_result = extract_solid_stress(op2_file, subcase=1, element_type='cquad4')
# For solid elements (CTETRA, CHEXA)
stress_result = extract_solid_stress(op2_file, subcase=1, element_type='ctetra')
max_stress = stress_result['max_von_mises'] # MPa
```
**Return Dictionary**:
```python
{
    'max_von_mises': 187.5,       # Maximum von Mises stress (MPa)
    'max_stress_element': 5678,   # Element ID with max stress
    'mean_stress': 45.2,          # Mean stress across all elements
    'stress_distribution': {...}  # Optional: full distribution data
}
```
### E4: BDF Mass Extraction
```python
from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf
mass_kg = extract_mass_from_bdf(str(dat_file)) # kg
```
**Note**: Calculates mass from element properties and material density in the BDF/DAT file.
### E5: CAD Expression Mass
```python
from optimization_engine.extractors.extract_mass_from_expression import extract_mass_from_expression
mass_kg = extract_mass_from_expression(model_file, expression_name="p173") # kg
```
**Note**: Requires `_temp_mass.txt` to be written by the solve journal. The expression name is the NX expression that contains the mass value.
### E6: Field Data Extraction
```python
from optimization_engine.extractors.field_data_extractor import FieldDataExtractor
# Create extractor
extractor = FieldDataExtractor(
    field_file="results.fld",
    result_column="Temperature",
    aggregation="max"  # or "min", "mean", "sum"
)
result = extractor.extract()
value = result['value']  # Aggregated value
stats = result['stats']  # Full statistics
```
### E7: Stiffness Calculation
```python
# Simple stiffness from displacement (most common)
applied_force = 1000.0 # N - MUST MATCH YOUR MODEL'S APPLIED LOAD
stiffness = applied_force / max(abs(max_displacement), 1e-6) # N/mm
# Or using StiffnessCalculator for complex cases
from optimization_engine.extractors.stiffness_calculator import StiffnessCalculator
calc = StiffnessCalculator(
    field_file="displacement.fld",
    op2_file="results.op2",
    force_component="Fz",
    displacement_component="Tz"
)
result = calc.calculate()
stiffness = result['stiffness']  # N/mm
```
### E11: Part Mass & Material Extraction
```python
from optimization_engine.extractors import extract_part_mass_material, extract_part_mass
# Full extraction with all properties
result = extract_part_mass_material(prt_file)
mass_kg = result['mass_kg'] # kg
volume = result['volume_mm3'] # mm³
area = result['surface_area_mm2'] # mm²
cog = result['center_of_gravity_mm'] # [x, y, z] mm
material = result['material']['name'] # e.g., "Aluminum_2014"
# Simple mass-only extraction
mass_kg = extract_part_mass(prt_file) # kg
```
**Return Dictionary**:
```python
{
    'mass_kg': 0.1098,                        # Mass in kg
    'mass_g': 109.84,                         # Mass in grams
    'volume_mm3': 39311.99,                   # Volume in mm³
    'surface_area_mm2': 10876.71,             # Surface area in mm²
    'center_of_gravity_mm': [0, 42.3, 39.6],  # CoG in mm
    'material': {
        'name': 'Aluminum_2014',  # Material name (or None)
        'density': None,          # Density if available
        'density_unit': 'kg/mm^3'
    },
    'num_bodies': 1  # Number of solid bodies
}
```
**Prerequisites**: Run the NX journal first to create the temp file:
```bash
run_journal.exe nx_journals/extract_part_mass_material.py -args model.prt
```
---
## Extractor Selection Guide
| Need | Extractor | When to Use |
|------|-----------|-------------|
| Max deflection | E1 | Static analysis displacement check |
| Natural frequency | E2 | Modal analysis, resonance avoidance |
| Peak stress | E3 | Strength validation, fatigue life |
| FEM mass | E4 | When mass is from mesh elements |
| CAD mass | E5 | When mass is from NX expression |
| Temperature/Custom | E6 | Thermal or custom field results |
| k = F/δ | E7 | Stiffness maximization |
| Wavefront error | E8-E10 | Telescope/mirror optimization |
| Part mass + material | E11 | Direct from .prt file with material info |
---
## Engineering Result Types
| Result Type | Nastran SOL | Output File | Extractor |
|-------------|-------------|-------------|-----------|
| Static Stress | SOL 101 | `.op2` | E3: `extract_solid_stress` |
| Displacement | SOL 101 | `.op2` | E1: `extract_displacement` |
| Natural Frequency | SOL 103 | `.op2` | E2: `extract_frequency` |
| Buckling Load | SOL 105 | `.op2` | `extract_buckling` |
| Modal Shapes | SOL 103 | `.op2` | `extract_mode_shapes` |
| Mass | - | `.dat`/`.bdf` | E4: `bdf_mass_extractor` |
| Stiffness | SOL 101 | `.fld` + `.op2` | E7: `stiffness_calculator` |
---
## Common Objective Formulations
### Stiffness Maximization
- k = F/δ (force/displacement)
- Maximize k or minimize 1/k (compliance)
- Requires consistent load magnitude across trials
### Mass Minimization
- Extract from BDF element properties + material density
- Units: typically kg (NX uses kg-mm-s)
### Stress Constraints
- Von Mises < σ_yield / safety_factor
- Account for stress concentrations
### Frequency Constraints
- f₁ > threshold (avoid resonance)
- Often paired with mass minimization
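A minimal sketch of these formulations as helper functions; the names, yield value, and default safety factor are illustrative, not taken from the codebase:

```python
# Hedged sketch of the objective formulations above; thresholds and
# names are illustrative, not from the Atomizer codebase.

def stiffness_n_per_mm(force_n: float, displacement_mm: float) -> float:
    """k = F / delta, guarded against near-zero displacement."""
    return force_n / max(abs(displacement_mm), 1e-6)

def stress_ok(von_mises_mpa: float, yield_mpa: float,
              safety_factor: float = 1.5) -> bool:
    """Von Mises constraint: sigma_vm < sigma_yield / SF."""
    return von_mises_mpa < yield_mpa / safety_factor

def frequency_ok(f1_hz: float, threshold_hz: float) -> bool:
    """First natural frequency must clear the resonance-avoidance threshold."""
    return f1_hz > threshold_hz
```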
---
## Adding New Extractors
When the study needs result extraction not covered by existing extractors (E1-E11):
```
STEP 1: Check existing extractors in this catalog
├── If exists → IMPORT and USE it (done!)
└── If missing → Continue to STEP 2
STEP 2: Create extractor in optimization_engine/extractors/
├── File: extract_{feature}.py
├── Follow existing extractor patterns
└── Include comprehensive docstrings
STEP 3: Add to __init__.py
└── Export functions in optimization_engine/extractors/__init__.py
STEP 4: Update this module
├── Add to Extractor Catalog table
└── Add code snippet
STEP 5: Document in SYS_12_EXTRACTOR_LIBRARY.md
```
See [EXT_01_CREATE_EXTRACTOR](../../docs/protocols/extensions/EXT_01_CREATE_EXTRACTOR.md) for full guide.
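A minimal skeleton for STEP 2, with hypothetical names (`extract_reaction_force` is not an existing extractor; the parsing itself is left to the EXT_01 template):

```python
# Hypothetical skeleton for a new extractor file, e.g.
# optimization_engine/extractors/extract_reaction_force.py.
# Names and the return-dict shape are illustrative only.
from pathlib import Path
from typing import Dict

def extract_reaction_force(op2_file: str, subcase: int = 1) -> Dict:
    """Extract peak reaction force from an OP2 file.

    Returns a dict in the same style as E1-E11, e.g.:
    {'max_reaction_n': float, 'max_reaction_node': int}
    """
    path = Path(op2_file)
    if not path.exists():
        raise FileNotFoundError(f"OP2 file not found: {path}")
    # ... parse the OP2 here, following the pattern of E1-E3 ...
    raise NotImplementedError("parsing left to the EXT_01 template")
```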
---
## Cross-References
- **System Protocol**: [SYS_12_EXTRACTOR_LIBRARY](../../docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md)
- **Extension Guide**: [EXT_01_CREATE_EXTRACTOR](../../docs/protocols/extensions/EXT_01_CREATE_EXTRACTOR.md)
- **Zernike Extractors**: [zernike-optimization module](./zernike-optimization.md)
- **Core Skill**: [study-creation-core](../core/study-creation-core.md)


@@ -0,0 +1,340 @@
# Neural Acceleration Module
**Last Updated**: December 5, 2025
**Version**: 1.0
**Type**: Optional Module
This module provides guidance for AtomizerField neural network surrogate acceleration, enabling 1000x faster optimization by replacing expensive FEA evaluations with instant neural predictions.
---
## When to Load
- User needs >50 optimization trials
- User mentions "neural", "surrogate", "NN", "machine learning"
- User wants faster optimization
- Exporting training data for neural networks
---
## Overview
**Key Innovation**: Train once on FEA data, then explore 50,000+ designs in the time it takes to run 50 FEA trials.
| Metric | Traditional FEA | Neural Network | Improvement |
|--------|-----------------|----------------|-------------|
| Time per evaluation | 10-30 minutes | 4.5 milliseconds | **2,000-500,000x** |
| Trials per hour | 2-6 | 800,000+ | **1000x** |
| Design exploration | ~50 designs | ~50,000 designs | **1000x** |
---
## Training Data Export (PR.9)
Enable training data export in your optimization config:
```json
{
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/my_study"
  }
}
```
### Using TrainingDataExporter
```python
from optimization_engine.training_data_exporter import TrainingDataExporter
training_exporter = TrainingDataExporter(
    export_dir=export_dir,
    study_name=study_name,
    design_variable_names=['param1', 'param2'],
    objective_names=['stiffness', 'mass'],
    constraint_names=['mass_limit'],
    metadata={'atomizer_version': '2.0', 'optimization_algorithm': 'NSGA-II'}
)

# In objective function:
training_exporter.export_trial(
    trial_number=trial.number,
    design_variables=design_vars,
    results={'objectives': {...}, 'constraints': {...}},
    simulation_files={'dat_file': dat_path, 'op2_file': op2_path}
)
# After optimization:
training_exporter.finalize()
```
### Training Data Structure
```
atomizer_field_training_data/{study_name}/
├── trial_0001/
│ ├── input/model.bdf # Nastran input (mesh + params)
│ ├── output/model.op2 # Binary results
│ └── metadata.json # Design params + objectives
├── trial_0002/
│ └── ...
└── study_summary.json # Study-level metadata
```
**Recommended**: 100-500 FEA samples for good generalization.
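Given the layout above, collecting exported trials back into memory for training can be sketched as follows (the `metadata.json` keys shown in the test are illustrative guesses, not the exporter's documented schema):

```python
# Hedged sketch: gather exported trial metadata for training.
import json
from pathlib import Path

def load_training_trials(study_dir):
    """Collect metadata.json from each trial_* directory, in trial order."""
    trials = []
    for trial_dir in sorted(Path(study_dir).glob("trial_*")):
        meta_file = trial_dir / "metadata.json"
        if meta_file.exists():
            trials.append(json.loads(meta_file.read_text()))
    return trials
```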
---
## Neural Configuration
### Full Configuration Example
```json
{
  "study_name": "bracket_neural_optimization",
  "surrogate_settings": {
    "enabled": true,
    "model_type": "parametric_gnn",
    "model_path": "models/bracket_surrogate.pt",
    "confidence_threshold": 0.85,
    "validation_frequency": 10,
    "fallback_to_fea": true
  },
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/bracket_study",
    "export_bdf": true,
    "export_op2": true,
    "export_fields": ["displacement", "stress"]
  },
  "neural_optimization": {
    "initial_fea_trials": 50,
    "neural_trials": 5000,
    "retraining_interval": 500,
    "uncertainty_threshold": 0.15
  }
}
```
### Configuration Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `enabled` | bool | false | Enable neural surrogate |
| `model_type` | string | "parametric_gnn" | Model architecture |
| `model_path` | string | - | Path to trained model |
| `confidence_threshold` | float | 0.85 | Min confidence for predictions |
| `validation_frequency` | int | 10 | FEA validation every N trials |
| `fallback_to_fea` | bool | true | Use FEA when uncertain |
---
## Model Types
### Parametric Predictor GNN (Recommended)
Direct optimization objective prediction - fastest option.
```
Design Parameters (ND) → Design Encoder (MLP) → GNN Backbone → Scalar Heads
Output (objectives):
├── mass (grams)
├── frequency (Hz)
├── max_displacement (mm)
└── max_stress (MPa)
```
**Use When**: You only need scalar objectives, not full field predictions.
### Field Predictor GNN
Full displacement/stress field prediction.
```
Input Features (12D per node):
├── Node coordinates (x, y, z)
├── Material properties (E, nu, rho)
├── Boundary conditions (fixed/free per DOF)
└── Load information (force magnitude, direction)
Output (per node):
├── Displacement (6 DOF: Tx, Ty, Tz, Rx, Ry, Rz)
└── Von Mises stress (1 value)
```
**Use When**: You need field visualization or complex derived quantities.
### Ensemble Models
Multiple models for uncertainty quantification.
```python
import numpy as np

# Run N models
predictions = [model_i(x) for model_i in ensemble]
# Statistics
mean_prediction = np.mean(predictions)
uncertainty = np.std(predictions)
# Decision
if uncertainty > threshold:
    result = run_fea(x)  # Fall back to FEA
else:
    result = mean_prediction
```
---
## Hybrid FEA/Neural Workflow
### Phase 1: FEA Exploration (50-100 trials)
- Run standard FEA optimization
- Export training data automatically
- Build landscape understanding
### Phase 2: Neural Training
- Parse collected data
- Train parametric predictor
- Validate accuracy
### Phase 3: Neural Acceleration (1000s of trials)
- Use neural network for rapid exploration
- Periodic FEA validation
- Retrain if distribution shifts
### Phase 4: FEA Refinement (10-20 trials)
- Validate top candidates with FEA
- Ensure results are physically accurate
- Generate final Pareto front
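The four phases can be sketched as a single function; `sample_design`, `run_fea`, and `train_surrogate` are stand-ins for the real sampler, FEA call, and training step, so this is a workflow sketch, not the runner's actual API:

```python
# Hedged sketch of the four-phase hybrid workflow described above.
def hybrid_optimize(sample_design, run_fea, train_surrogate,
                    n_fea=50, n_nn=5000, n_refine=10):
    # Phase 1: FEA exploration (doubles as training data)
    data = []
    for _ in range(n_fea):
        x = sample_design()
        data.append((x, run_fea(x)))
    # Phase 2: train the surrogate on the collected trials
    predict = train_surrogate(data)
    # Phase 3: rapid neural exploration (cheap predictions only)
    candidates = []
    for _ in range(n_nn):
        x = sample_design()
        candidates.append((x, predict(x)))
    candidates.sort(key=lambda pair: pair[1])  # minimize the objective
    # Phase 4: FEA refinement of the top candidates
    return [(x, run_fea(x)) for x, _ in candidates[:n_refine]]
```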
---
## Training Pipeline
### Step 1: Collect Training Data
Run optimization with export enabled:
```bash
python run_optimization.py --train --trials 100
```
### Step 2: Parse to Neural Format
```bash
cd atomizer-field
python batch_parser.py ../atomizer_field_training_data/my_study
```
### Step 3: Train Model
**Parametric Predictor** (recommended):
```bash
python train_parametric.py \
--train_dir ../training_data/parsed \
--val_dir ../validation_data/parsed \
--epochs 200 \
--hidden_channels 128 \
--num_layers 4
```
**Field Predictor**:
```bash
python train.py \
--train_dir ../training_data/parsed \
--epochs 200 \
--model FieldPredictorGNN \
--hidden_channels 128 \
--num_layers 6 \
--physics_loss_weight 0.3
```
### Step 4: Validate
```bash
python validate.py --checkpoint runs/my_model/checkpoint_best.pt
```
Expected output:
```
Validation Results:
├── Mean Absolute Error: 2.3% (mass), 1.8% (frequency)
├── R² Score: 0.987
├── Inference Time: 4.5ms ± 0.8ms
└── Physics Violations: 0.2%
```
### Step 5: Deploy
Update config to use trained model:
```json
{
  "neural_surrogate": {
    "enabled": true,
    "model_checkpoint": "atomizer-field/runs/my_model/checkpoint_best.pt",
    "confidence_threshold": 0.85
  }
}
```
---
## Uncertainty Thresholds
| Uncertainty | Action |
|-------------|--------|
| < 5% | Use neural prediction |
| 5-15% | Use neural, flag for validation |
| > 15% | Fall back to FEA |
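The policy table translates directly into a small dispatch function; this is a minimal sketch of the thresholds above, not the runner's actual implementation:

```python
# Hedged sketch of the uncertainty policy table above.
def apply_uncertainty_policy(prediction, uncertainty, run_fea, design):
    """Return (result, flagged_for_validation) per the thresholds above."""
    if uncertainty < 0.05:
        return prediction, False       # trust the neural prediction
    if uncertainty <= 0.15:
        return prediction, True        # use it, but flag for FEA validation
    return run_fea(design), False      # too uncertain: fall back to FEA
```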
---
## Accuracy Expectations
| Problem Type | Expected R² | Samples Needed |
|--------------|-------------|----------------|
| Well-behaved | > 0.95 | 50-100 |
| Moderate nonlinear | > 0.90 | 100-200 |
| Highly nonlinear | > 0.85 | 200-500 |
---
## AtomizerField Components
```
atomizer-field/
├── neural_field_parser.py # BDF/OP2 parsing
├── field_predictor.py # Field GNN
├── parametric_predictor.py # Parametric GNN
├── train.py # Field training
├── train_parametric.py # Parametric training
├── validate.py # Model validation
├── physics_losses.py # Physics-informed loss
└── batch_parser.py # Batch data conversion
optimization_engine/
├── neural_surrogate.py # Atomizer integration
└── runner_with_neural.py # Neural runner
```
---
## Troubleshooting
| Symptom | Cause | Solution |
|---------|-------|----------|
| High prediction error | Insufficient training data | Collect more FEA samples |
| Out-of-distribution warnings | Design outside training range | Retrain with expanded range |
| Slow inference | Large mesh | Use parametric predictor instead |
| Physics violations | Low physics loss weight | Increase `physics_loss_weight` |
---
## Cross-References
- **System Protocol**: [SYS_14_NEURAL_ACCELERATION](../../docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md)
- **Operations**: [OP_05_EXPORT_TRAINING_DATA](../../docs/protocols/operations/OP_05_EXPORT_TRAINING_DATA.md)
- **Core Skill**: [study-creation-core](../core/study-creation-core.md)


@@ -0,0 +1,209 @@
# NX Documentation Lookup Module
## Overview
This module provides on-demand access to Siemens NX Open and Simcenter documentation via the Dalidou MCP server. Use these tools when building new extractors, NX automation scripts, or debugging NX-related issues.
## CRITICAL: When to AUTO-SEARCH Documentation
**You MUST call `siemens_docs_search` BEFORE writing any code that uses NX Open APIs.**
### Automatic Search Triggers
| User Request | Action Required |
|--------------|-----------------|
| "Create extractor for {X}" | → `siemens_docs_search("{X} NXOpen")` |
| "Get {property} from part" | → `siemens_docs_search("{property} NXOpen.Part")` |
| "Extract {data} from FEM" | → `siemens_docs_search("{data} NXOpen.CAE")` |
| "How do I {action} in NX" | → `siemens_docs_search("{action} NXOpen")` |
| Any code with `NXOpen.*` | → Search before writing |
### Example: User asks "Create an extractor for inertia values"
```
STEP 1: Immediately search
→ siemens_docs_search("inertia mass properties NXOpen")
STEP 2: Review results, fetch details
→ siemens_docs_fetch("NXOpen.MeasureManager")
STEP 3: Now write code with correct API calls
```
**DO NOT guess NX Open API names.** Always search first.
## When to Load
Load this module when:
- Creating new NX Open scripts or extractors
- Working with `NXOpen.*` namespaces
- Debugging NX automation errors
- User mentions "NX API", "NX Open", "Simcenter docs"
- Building features that interact with NX/Simcenter
## Available MCP Tools
### `siemens_docs_search`
**Purpose**: Search across NX Open, Simcenter, and Teamcenter documentation
**When to use**:
- Finding which class/method performs a specific task
- Discovering available APIs for a feature
- Looking up Nastran card references
**Examples**:
```
siemens_docs_search("get node coordinates FEM")
siemens_docs_search("CQUAD4 element properties")
siemens_docs_search("NXOpen.CAE mesh creation")
siemens_docs_search("extract stress results OP2")
```
### `siemens_docs_fetch`
**Purpose**: Fetch a specific documentation page with full content
**When to use**:
- Need complete class reference
- Getting detailed method signatures
- Reading full examples
**Examples**:
```
siemens_docs_fetch("NXOpen.CAE.FemPart")
siemens_docs_fetch("Nastran Quick Reference CQUAD4")
```
### `siemens_auth_status`
**Purpose**: Check if the Siemens SSO session is valid
**When to use**:
- Before a series of documentation lookups
- When fetch requests fail
- Debugging connection issues
### `siemens_login`
**Purpose**: Re-authenticate with Siemens if session expired
**When to use**:
- After `siemens_auth_status` shows expired
- When documentation fetches return auth errors
## Workflow: Building New Extractor
When creating a new extractor that uses NX Open APIs:
### Step 1: Search for Relevant APIs
```
→ siemens_docs_search("element stress results OP2")
```
Review results to identify candidate classes/methods.
### Step 2: Fetch Detailed Documentation
```
→ siemens_docs_fetch("NXOpen.CAE.Result")
```
Get full class documentation with method signatures.
### Step 3: Understand Data Formats
```
→ siemens_docs_search("CQUAD4 stress output format")
```
Understand Nastran output structure.
### Step 4: Build Extractor
Following EXT_01 template, create the extractor with:
- Proper API calls based on documentation
- Docstring referencing the APIs used
- Error handling for common NX exceptions
### Step 5: Document API Usage
In the extractor docstring:
```python
def extract_element_stress(op2_path: Path) -> Dict:
    """
    Extract element stress results from OP2 file.

    NX Open APIs Used:
    - NXOpen.CAE.Result.AskElementStress
    - NXOpen.CAE.ResultAccess.AskResultValues

    Nastran Cards:
    - CQUAD4, CTRIA3 (shell elements)
    - STRESS case control
    """
```
## Workflow: Debugging NX Errors
When encountering NX Open errors:
### Step 1: Search for Correct API
```
Error: AttributeError: 'FemPart' object has no attribute 'GetNodes'
→ siemens_docs_search("FemPart get nodes")
```
### Step 2: Fetch Correct Class Reference
```
→ siemens_docs_fetch("NXOpen.CAE.FemPart")
```
Find the actual method name and signature.
### Step 3: Apply Fix
Document the correction:
```python
# Wrong: femPart.GetNodes()
# Right: femPart.BaseFEModel.FemMesh.Nodes
```
## Common Search Patterns
| Task | Search Query |
|------|--------------|
| Mesh operations | `siemens_docs_search("NXOpen.CAE mesh")` |
| Result extraction | `siemens_docs_search("CAE result OP2")` |
| Geometry access | `siemens_docs_search("NXOpen.Features body")` |
| Material properties | `siemens_docs_search("Nastran MAT1 material")` |
| Load application | `siemens_docs_search("CAE load force")` |
| Constraint setup | `siemens_docs_search("CAE boundary condition")` |
| Expressions/Parameters | `siemens_docs_search("NXOpen Expression")` |
| Part manipulation | `siemens_docs_search("NXOpen.Part")` |
## Key NX Open Namespaces
| Namespace | Domain |
|-----------|--------|
| `NXOpen.CAE` | FEA, meshing, results |
| `NXOpen.Features` | Parametric features |
| `NXOpen.Assemblies` | Assembly operations |
| `NXOpen.Part` | Part-level operations |
| `NXOpen.UF` | User Function (legacy) |
| `NXOpen.GeometricUtilities` | Geometry helpers |
## Integration with Extractors
All extractors in `optimization_engine/extractors/` should:
1. **Search before coding**: Use `siemens_docs_search` to find correct APIs
2. **Document API usage**: List NX Open APIs in docstring
3. **Handle NX exceptions**: Catch `NXOpen.NXException` appropriately
4. **Follow 20-line rule**: If extraction is complex, check if existing extractor handles it
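Item 3 can be sketched as a small wrapper. `NXOpen` is only importable inside an NX session, so the import is guarded here purely for illustration; the fallback to `RuntimeError` is an assumption for running outside NX, not part of the real extractors:

```python
# Hedged sketch of NX exception handling (item 3 above).
def safe_nx_call(fn, *args):
    """Run an NX Open call, returning an error dict instead of raising."""
    try:
        import NXOpen  # only available inside an NX session
        nx_error = NXOpen.NXException
    except ImportError:
        nx_error = RuntimeError  # illustration-only fallback outside NX
    try:
        return fn(*args)
    except nx_error as exc:
        return {"error": str(exc)}
```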
## Troubleshooting
| Issue | Solution |
|-------|----------|
| Auth errors | Run `siemens_auth_status`, then `siemens_login` if needed |
| No results | Try broader search terms, check namespace spelling |
| Incomplete docs | Fetch the parent class for full context |
| Network errors | Verify Dalidou is accessible: `ping dalidou.local` |
---
*Module Version: 1.0*
*MCP Server: dalidou.local:5000*


@@ -0,0 +1,364 @@
# Zernike Optimization Module
**Last Updated**: December 5, 2025
**Version**: 1.0
**Type**: Optional Module
This module provides specialized guidance for telescope mirror and optical surface optimization using Zernike polynomial decomposition.
---
## When to Load
- User mentions "telescope", "mirror", "optical", "wavefront"
- Optimization involves surface deformation analysis
- Need to extract Zernike coefficients from FEA results
- Working with multi-subcase elevation angle comparisons
---
## Zernike Extractors (E8-E10)
| ID | Extractor | Function | Input | Output | Use Case |
|----|-----------|----------|-------|--------|----------|
| E8 | **Zernike WFE** | `extract_zernike_from_op2()` | `.op2` + `.bdf` | nm | Single subcase wavefront error |
| E9 | **Zernike Relative** | `extract_zernike_relative_rms()` | `.op2` + `.bdf` | nm | Compare target vs reference subcase |
| E10 | **Zernike Helpers** | `ZernikeObjectiveBuilder` | `.op2` | nm | Multi-subcase optimization builder |
---
## E8: Single Subcase Zernike Extraction
Extract Zernike coefficients and RMS metrics for a single subcase (e.g., one elevation angle).
```python
from optimization_engine.extractors.extract_zernike import extract_zernike_from_op2
# Extract Zernike coefficients and RMS metrics for a single subcase
result = extract_zernike_from_op2(
    op2_file,
    bdf_file=None,          # Auto-detect from op2 location
    subcase="20",           # Subcase label (e.g., "20" = 20 deg elevation)
    displacement_unit="mm"
)
global_rms = result['global_rms_nm'] # Total surface RMS in nm
filtered_rms = result['filtered_rms_nm'] # RMS with low orders removed
coefficients = result['coefficients'] # List of 50 Zernike coefficients
```
**Return Dictionary**:
```python
{
    'global_rms_nm': 45.2,             # Total surface RMS (nm)
    'filtered_rms_nm': 12.8,           # RMS with J1-J4 (piston, tip, tilt, defocus) removed
    'coefficients': [0.0, 12.3, ...],  # 50 Zernike coefficients (Noll indexing)
    'n_nodes': 5432,                   # Number of surface nodes
    'rms_per_mode': {...}              # RMS contribution per Zernike mode
}
```
**When to Use**:
- Single elevation angle analysis
- Polishing orientation (zenith) wavefront error
- Absolute surface quality metrics
---
## E9: Relative RMS Between Subcases
Compare wavefront error between two subcases (e.g., 40° vs 20° reference).
```python
from optimization_engine.extractors.extract_zernike import extract_zernike_relative_rms
# Compare wavefront error between subcases (e.g., 40 deg vs 20 deg reference)
result = extract_zernike_relative_rms(
    op2_file,
    bdf_file=None,
    target_subcase="40",     # Target orientation
    reference_subcase="20",  # Reference (usually polishing orientation)
    displacement_unit="mm"
)
relative_rms = result['relative_filtered_rms_nm'] # Differential WFE in nm
delta_coeffs = result['delta_coefficients'] # Coefficient differences
```
**Return Dictionary**:
```python
{
    'relative_filtered_rms_nm': 8.7,  # Differential WFE (target - reference)
    'delta_coefficients': [...],      # Coefficient differences
    'target_rms_nm': 52.3,            # Target subcase absolute RMS
    'reference_rms_nm': 45.2,         # Reference subcase absolute RMS
    'improvement_percent': -15.7      # Negative = worse than reference
}
```
**When to Use**:
- Comparing performance across elevation angles
- Minimizing deformation relative to polishing orientation
- Multi-angle telescope mirror optimization
---
## E10: Multi-Subcase Objective Builder
Build objectives for multiple subcases in a single extractor (most efficient for complex optimization).
```python
from optimization_engine.extractors.zernike_helpers import ZernikeObjectiveBuilder
# Build objectives for multiple subcases in one extractor
builder = ZernikeObjectiveBuilder(
    op2_finder=lambda: model_dir / "ASSY_M1-solution_1.op2"
)
# Add relative objectives (target vs reference)
builder.add_relative_objective(
    "40", "20",  # 40° vs 20° reference
    metric="relative_filtered_rms_nm",
    weight=5.0
)
builder.add_relative_objective(
    "60", "20",  # 60° vs 20° reference
    metric="relative_filtered_rms_nm",
    weight=5.0
)
# Add absolute objective for polishing orientation
builder.add_subcase_objective(
    "90",  # Zenith (polishing orientation)
    metric="rms_filter_j1to3",  # Only remove piston, tip, tilt
    weight=1.0
)
# Evaluate all at once (efficient - parses OP2 only once)
results = builder.evaluate_all()
# Returns: {'rel_40_vs_20': 4.2, 'rel_60_vs_20': 8.7, 'rms_90': 15.3}
```
**When to Use**:
- Multi-objective telescope optimization
- Multiple elevation angles to optimize
- Weighted combination of absolute and relative WFE
---
## Zernike Modes Reference
| Noll Index | Name | Physical Meaning | Correctability |
|------------|------|------------------|----------------|
| J1 | Piston | Constant offset | Easily corrected |
| J2 | Tip | X-tilt | Easily corrected |
| J3 | Tilt | Y-tilt | Easily corrected |
| J4 | Defocus | Power error | Easily corrected |
| J5 | Astigmatism (0°) | Cylindrical error | Correctable |
| J6 | Astigmatism (45°) | Cylindrical error | Correctable |
| J7 | Coma (x) | Off-axis aberration | Harder to correct |
| J8 | Coma (y) | Off-axis aberration | Harder to correct |
| J9-J10 | Trefoil | Triangular error | Hard to correct |
| J11+ | Higher order | Complex aberrations | Very hard to correct |
**Filtering Convention**:
- `filtered_rms`: Removes J1-J4 (piston, tip, tilt, defocus) - standard
- `rms_filter_j1to3`: Removes only J1-J3 (keeps defocus) - for focus-sensitive applications
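Assuming Noll-normalized coefficients (so surface RMS is the root-sum-square of the coefficients, with `coeffs[0]` holding J1), the two conventions can be sketched as:

```python
# Minimal sketch of the filtering convention, assuming Noll-normalized
# Zernike coefficients (RMS = root-sum-square of retained terms).
import math

def filtered_rms(coeffs, drop_first=4):
    """RMS with the first `drop_first` Noll modes removed.

    drop_first=4 -> 'filtered_rms'      (remove piston, tip, tilt, defocus)
    drop_first=3 -> 'rms_filter_j1to3'  (keep defocus)
    """
    return math.sqrt(sum(c * c for c in coeffs[drop_first:]))
```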
---
## Common Zernike Optimization Patterns
### Pattern 1: Minimize Relative WFE Across Elevations
```python
# Objective: Minimize max relative WFE across all elevation angles
objectives = [
    {"name": "rel_40_vs_20", "goal": "minimize"},
    {"name": "rel_60_vs_20", "goal": "minimize"},
]
# Use weighted sum or multi-objective
def objective(trial):
    results = builder.evaluate_all()
    return (results['rel_40_vs_20'], results['rel_60_vs_20'])
```
### Pattern 2: Single Elevation + Mass
```python
# Objective: Minimize WFE at 45° while minimizing mass
objectives = [
    {"name": "wfe_45", "goal": "minimize"},  # Wavefront error
    {"name": "mass", "goal": "minimize"},    # Mirror mass
]
```
### Pattern 3: Weighted Multi-Angle
```python
# Weighted combination of multiple angles
def combined_wfe(trial):
    results = builder.evaluate_all()
    weighted_wfe = (
        5.0 * results['rel_40_vs_20'] +
        5.0 * results['rel_60_vs_20'] +
        1.0 * results['rms_90']
    )
    return weighted_wfe
```
---
## Telescope Mirror Study Configuration
```json
{
"study_name": "m1_mirror_optimization",
"description": "Minimize wavefront error across elevation angles",
"objectives": [
{
"name": "wfe_40_vs_20",
"goal": "minimize",
"unit": "nm",
"extraction": {
"action": "extract_zernike_relative_rms",
"params": {
"target_subcase": "40",
"reference_subcase": "20"
}
}
}
],
"simulation": {
"analysis_types": ["static"],
"subcases": ["20", "40", "60", "90"],
"solution_name": null
}
}
```
---
## Performance Considerations
1. **Parse OP2 Once**: Use `ZernikeObjectiveBuilder` to parse the OP2 file only once per trial
2. **Subcase Labels**: Match exact subcase labels from NX simulation
3. **Node Selection**: Zernike extraction uses surface nodes only (auto-detected from BDF)
4. **Memory**: Large meshes (>50k nodes) may require chunked processing
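The node-based extraction in points 3-4 reduces to a least-squares fit of Zernike basis functions over the surface nodes. A minimal sketch for the first four Noll modes (the production extractor fits higher orders and auto-detects the surface nodes; this helper is illustrative only):

```python
import numpy as np

def fit_low_order_zernikes(x, y, w):
    """Least-squares fit of Noll J1-J4 over surface nodes.

    x, y: node coordinates normalized to the unit disk; w: sag/WFE at each node.
    Returns (coefficients for J1..J4, residual surface after subtraction).
    """
    r2 = x ** 2 + y ** 2
    # Noll-normalized basis: Z1 = 1, Z2 = 2x, Z3 = 2y, Z4 = sqrt(3)(2r^2 - 1)
    A = np.column_stack([
        np.ones_like(x),
        2.0 * x,
        2.0 * y,
        np.sqrt(3.0) * (2.0 * r2 - 1.0),
    ])
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    return coeffs, w - A @ coeffs
```

The residual is the surface error left once the easily corrected modes are removed, i.e. the quantity the standard `filtered_rms` metric reports.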
---
## Troubleshooting
| Symptom | Cause | Solution |
|---------|-------|----------|
| "Subcase not found" | Wrong subcase label | Check NX .sim for exact labels |
| High J1-J4 coefficients | Rigid body motion not constrained | Check boundary conditions |
| NaN in coefficients | Insufficient nodes for polynomial order | Reduce max Zernike order |
| Inconsistent RMS | Different node sets per subcase | Verify mesh consistency |
| "Billion nm" RMS values | Node merge failed in AFEM | Check `MergeOccurrenceNodes = True` |
| Corrupt OP2 data | All-zero displacements | Validate OP2 before processing |
---
## Assembly FEM (AFEM) Structure for Mirrors
Telescope mirror assemblies in NX typically consist of:
```
ASSY_M1.prt # Master assembly part
ASSY_M1_assyfem1.afm # Assembly FEM container
ASSY_M1_assyfem1_sim1.sim # Simulation file (solve this)
M1_Blank.prt # Mirror blank part
M1_Blank_fem1.fem # Mirror blank mesh
M1_Vertical_Support_Skeleton.prt # Support structure
```
```
**Key Point**: Expressions in the master `.prt` propagate through the assembly → the AFEM updates automatically.
---
## Multi-Subcase Gravity Analysis
For telescope mirrors, analyze multiple gravity orientations:
| Subcase | Elevation Angle | Purpose |
|---------|-----------------|---------|
| 1 | 90° (zenith) | Polishing orientation - manufacturing reference |
| 2 | 20° | Low elevation - reference for relative metrics |
| 3 | 40° | Mid-low elevation |
| 4 | 60° | Mid-high elevation |
**CRITICAL**: NX subcase numbers don't always match angle labels! Use explicit mapping:
```json
"subcase_labels": {
"1": "90deg",
"2": "20deg",
"3": "40deg",
"4": "60deg"
}
```
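That mapping is easiest to apply by inverting it once at extraction time, so objectives can reference physical angles instead of NX subcase numbers. A minimal sketch (the `subcase_labels` key is the one shown above; the helper itself is hypothetical):

```python
def subcase_for_angle(config: dict, angle_label: str) -> int:
    """Resolve a physical angle label (e.g. '40deg') to its NX subcase number."""
    inverse = {v: int(k) for k, v in config["subcase_labels"].items()}
    try:
        return inverse[angle_label]
    except KeyError:
        raise ValueError(
            f"No subcase labeled {angle_label!r}; available: {sorted(inverse)}"
        ) from None

config = {"subcase_labels": {"1": "90deg", "2": "20deg", "3": "40deg", "4": "60deg"}}
subcase_for_angle(config, "40deg")  # -> 3
```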
---
## Lessons Learned (M1 Mirror V1-V9)
### 1. TPE Sampler Seed Issue
**Problem**: Resuming study with fixed seed causes duplicate parameters.
**Solution**:
```python
from optuna.samplers import TPESampler

if is_new_study:
    sampler = TPESampler(seed=42)  # reproducible exploration for a fresh study
else:
    sampler = TPESampler()  # no seed on resume, avoids re-suggesting past parameters
```
### 2. OP2 Data Validation
**Always validate before processing**:
```python
import numpy as np

# disp_z: Z-displacements for all surface nodes, read from the OP2
unique_values = len(np.unique(disp_z))
if unique_values < 10:
    raise RuntimeError("CORRUPT OP2: insufficient unique displacement values")
if np.abs(disp_z).max() > 1e6:
    raise RuntimeError("CORRUPT OP2: unrealistic displacement magnitude")
```
### 3. Reference Subcase Selection
Use the lowest operational elevation (typically 20°) as the reference. Higher elevations then show positive relative WFE as gravity effects increase.
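Concretely, a relative metric subtracts the reference surface from the target surface node-by-node before computing RMS. A minimal sketch, assuming matched node ordering between subcases (the full extractor also removes tip/tilt/defocus via a Zernike fit; this version removes only piston):

```python
import numpy as np

def relative_wfe_rms(disp_target: np.ndarray, disp_ref: np.ndarray) -> float:
    """RMS of the surface change between two gravity orientations (input units)."""
    if disp_target.shape != disp_ref.shape:
        raise ValueError("Subcase node sets differ - re-check mesh consistency")
    delta = disp_target - disp_ref
    delta = delta - delta.mean()  # remove piston only; see filtering convention
    return float(np.sqrt(np.mean(delta ** 2)))
```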
### 4. Optical Convention
For mirror surface to wavefront error:
```python
WFE = 2 * surface_displacement # Reflection doubles path difference
wfe_nm = 2.0 * displacement_mm * 1e6 # Convert mm to nm
```
---
## Typical Mirror Design Variables
| Parameter | Description | Typical Range |
|-----------|-------------|---------------|
| `whiffle_min` | Whiffle tree minimum dimension | 35-55 mm |
| `whiffle_outer_to_vertical` | Whiffle arm angle | 68-80 deg |
| `inner_circular_rib_dia` | Rib diameter | 480-620 mm |
| `lateral_inner_angle` | Lateral support angle | 25-28.5 deg |
| `blank_backface_angle` | Mirror blank geometry | 3.5-5.0 deg |
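In an Optuna objective, the ranges above translate directly into `suggest_float` calls. A sketch using the parameter names and ranges from the table (assuming `trial` is an `optuna.trial.Trial`):

```python
def suggest_mirror_design(trial):
    """Draw one candidate mirror design from the typical ranges above."""
    return {
        "whiffle_min": trial.suggest_float("whiffle_min", 35.0, 55.0),  # mm
        "whiffle_outer_to_vertical": trial.suggest_float(
            "whiffle_outer_to_vertical", 68.0, 80.0),  # deg
        "inner_circular_rib_dia": trial.suggest_float(
            "inner_circular_rib_dia", 480.0, 620.0),  # mm
        "lateral_inner_angle": trial.suggest_float(
            "lateral_inner_angle", 25.0, 28.5),  # deg
        "blank_backface_angle": trial.suggest_float(
            "blank_backface_angle", 3.5, 5.0),  # deg
    }
```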
---
## Cross-References
- **Extractor Catalog**: [extractors-catalog module](./extractors-catalog.md)
- **System Protocol**: [SYS_12_EXTRACTOR_LIBRARY](../../docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md)
- **Core Skill**: [study-creation-core](../core/study-creation-core.md)