feat: Add MLP surrogate with Turbo Mode for 100x faster optimization

Neural Acceleration (MLP Surrogate):
- Add run_nn_optimization.py with hybrid FEA/NN workflow
- MLP architecture: 4-layer (64->128->128->64) with BatchNorm/Dropout
- Three workflow modes:
  - --all: Sequential export->train->optimize->validate
  - --hybrid-loop: Iterative Train->NN->Validate->Retrain cycle
  - --turbo: Aggressive single-best validation (RECOMMENDED)
- Turbo mode: 5000 NN trials + 50 FEA validations in ~12 minutes
- Separate nn_study.db to avoid overloading dashboard

Performance Results (bracket_pareto_3obj study):
- NN prediction errors: mass 1-5%, stress 1-4%, stiffness 5-15%
- Found minimum mass designs at boundary (angle~30deg, thick~30mm)
- 100x speedup vs pure FEA exploration

Protocol Operating System:
- Add .claude/skills/ with Bootstrap, Cheatsheet, Context Loader
- Add docs/protocols/ with operations (OP_01-06) and system (SYS_10-14)
- Update SYS_14_NEURAL_ACCELERATION.md with MLP Turbo Mode docs

NX Automation:
- Add optimization_engine/hooks/ for NX CAD/CAE automation
- Add study_wizard.py for guided study creation
- Fix FEM mesh update: load idealized part before UpdateFemodel()

New Study:
- bracket_pareto_3obj: 3-objective Pareto (mass, stress, stiffness)
- 167 FEA trials + 5000 NN trials completed
- Demonstrates full hybrid workflow

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Commit 602560c46a (parent 0cb2808c44) by Antoine, 2025-12-06 20:01:59 -05:00
70 changed files with 31018 additions and 289 deletions


@@ -0,0 +1,206 @@
# Atomizer LLM Bootstrap
**Version**: 1.0
**Purpose**: First file any LLM session reads. Provides instant orientation and task routing.
---
## Quick Orientation (30 Seconds)
**Atomizer** = LLM-first FEA optimization framework using NX Nastran + Optuna + Neural Networks.
**Your Role**: Help users set up, run, and analyze structural optimization studies through conversation.
**Core Philosophy**: "Talk, don't click." Users describe what they want; you configure and execute.
---
## Task Classification Tree
When a user request arrives, classify it:
```
User Request
├─► CREATE something?
│ ├─ "new study", "set up", "create", "optimize this"
│ └─► Load: OP_01_CREATE_STUDY.md + core/study-creation-core.md
├─► RUN something?
│ ├─ "start", "run", "execute", "begin optimization"
│ └─► Load: OP_02_RUN_OPTIMIZATION.md
├─► CHECK status?
│ ├─ "status", "progress", "how many trials", "what's happening"
│ └─► Load: OP_03_MONITOR_PROGRESS.md
├─► ANALYZE results?
│ ├─ "results", "best design", "compare", "pareto"
│ └─► Load: OP_04_ANALYZE_RESULTS.md
├─► DEBUG/FIX error?
│ ├─ "error", "failed", "not working", "crashed"
│ └─► Load: OP_06_TROUBLESHOOT.md
├─► CONFIGURE settings?
│ ├─ "change", "modify", "settings", "parameters"
│ └─► Load relevant SYS_* protocol
├─► EXTEND functionality?
│ ├─ "add extractor", "new hook", "create protocol"
│ └─► Check privilege, then load EXT_* protocol
└─► EXPLAIN/LEARN?
├─ "what is", "how does", "explain"
└─► Load relevant SYS_* protocol for reference
```
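The classification tree above can be sketched as first-match keyword routing. This is an illustrative sketch only: the protocol names come from the tree, but the function and its exact keyword lists are not part of the framework.

```python
# Hypothetical keyword router for the classification tree above.
# First matching branch wins, mirroring top-down traversal of the tree.
def classify_request(text: str) -> str:
    routes = [
        ("OP_01_CREATE_STUDY", ["new study", "set up", "create", "optimize this"]),
        ("OP_02_RUN_OPTIMIZATION", ["start", "run", "execute", "begin optimization"]),
        ("OP_03_MONITOR_PROGRESS", ["status", "progress", "how many trials"]),
        ("OP_04_ANALYZE_RESULTS", ["results", "best design", "compare", "pareto"]),
        ("OP_06_TROUBLESHOOT", ["error", "failed", "not working", "crashed"]),
    ]
    lowered = text.lower()
    for protocol, keywords in routes:
        if any(k in lowered for k in keywords):
            return protocol
    return "ASK_CLARIFYING_QUESTION"  # unclear request -> ask the user
```

Note that branch order matters: "my run crashed" would hit the RUN branch before DEBUG, which is why the tree (and a real router) should place more specific error signals with care.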
---
## Protocol Routing Table
| User Intent | Keywords | Protocol | Skill to Load | Privilege |
|-------------|----------|----------|---------------|-----------|
| Create study | "new", "set up", "create", "optimize" | OP_01 | **create-study-wizard.md** | user |
| Run optimization | "start", "run", "execute", "begin" | OP_02 | - | user |
| Monitor progress | "status", "progress", "trials", "check" | OP_03 | - | user |
| Analyze results | "results", "best", "compare", "pareto" | OP_04 | - | user |
| Export training data | "export", "training data", "neural" | OP_05 | modules/neural-acceleration.md | user |
| Debug issues | "error", "failed", "not working", "help" | OP_06 | - | user |
| Understand IMSO | "protocol 10", "IMSO", "adaptive" | SYS_10 | - | user |
| Multi-objective | "pareto", "NSGA", "multi-objective" | SYS_11 | - | user |
| Extractors | "extractor", "displacement", "stress" | SYS_12 | modules/extractors-catalog.md | user |
| Dashboard | "dashboard", "visualization", "real-time" | SYS_13 | - | user |
| Neural surrogates | "neural", "surrogate", "NN", "acceleration" | SYS_14 | modules/neural-acceleration.md | user |
| Add extractor | "create extractor", "new physics" | EXT_01 | - | power_user |
| Add hook | "create hook", "lifecycle", "callback" | EXT_02 | - | power_user |
| Add protocol | "create protocol", "new protocol" | EXT_03 | - | admin |
| Add skill | "create skill", "new skill" | EXT_04 | - | admin |
---
## Role Detection
Determine user's privilege level:
| Role | How to Detect | Can Do | Cannot Do |
|------|---------------|--------|-----------|
| **user** | Default for all sessions | Run studies, monitor, analyze, configure | Create extractors, modify protocols |
| **power_user** | User states they're a developer, or session context indicates elevated access | Create extractors, add hooks | Create protocols, modify skills |
| **admin** | Explicit declaration, admin config present | Full access | - |
**Default**: Assume `user` unless explicitly told otherwise.
---
## Context Loading Rules
After classifying the task, load context in this order:
### 1. Always Loaded (via CLAUDE.md)
- This file (00_BOOTSTRAP.md)
- Python environment rules
- Code reuse protocol
### 2. Load Per Task Type
See `02_CONTEXT_LOADER.md` for complete loading rules.
**Quick Reference**:
```
CREATE_STUDY → create-study-wizard.md (PRIMARY)
→ Use: from optimization_engine.study_wizard import StudyWizard, create_study
→ modules/extractors-catalog.md (if asks about extractors)
→ modules/zernike-optimization.md (if telescope/mirror)
→ modules/neural-acceleration.md (if >50 trials)
RUN_OPTIMIZATION → OP_02_RUN_OPTIMIZATION.md
→ SYS_10_IMSO.md (if adaptive)
→ SYS_13_DASHBOARD_TRACKING.md (if monitoring)
DEBUG → OP_06_TROUBLESHOOT.md
→ Relevant SYS_* based on error type
```
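The quick reference above amounts to an ordered file stack per task type. A minimal sketch, using the file names listed above (the resolver function itself is hypothetical):

```python
# Illustrative context-stack resolver for the loading rules above.
# `signals` is a set of detected keywords from the user's request.
def context_stack(task, signals):
    stack = []
    if task == "CREATE_STUDY":
        stack.append("core/study-creation-core.md")  # always PRIMARY
        if "extractors" in signals:
            stack.append("modules/extractors-catalog.md")
        if signals & {"telescope", "mirror"}:
            stack.append("modules/zernike-optimization.md")
        if "many_trials" in signals:  # stands in for ">50 trials"
            stack.append("modules/neural-acceleration.md")
    elif task == "RUN_OPTIMIZATION":
        stack.append("OP_02_RUN_OPTIMIZATION.md")
        if "adaptive" in signals:
            stack.append("SYS_10_IMSO.md")
    elif task == "DEBUG":
        stack.append("OP_06_TROUBLESHOOT.md")
    return stack
```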
---
## Execution Framework
For ANY task, follow this pattern:
```
1. ANNOUNCE → State what you're about to do
2. VALIDATE → Check prerequisites are met
3. EXECUTE → Perform the action
4. VERIFY → Confirm success
5. REPORT → Summarize what was done
6. SUGGEST → Offer logical next steps
```
See `PROTOCOL_EXECUTION.md` for detailed execution rules.
---
## Emergency Quick Paths
### "I just want to run an optimization"
1. Do you have a `.prt` and `.sim` file? → Yes: OP_01 → OP_02
2. Getting errors? → OP_06
3. Want to see progress? → OP_03
### "Something broke"
1. Read the error message
2. Load OP_06_TROUBLESHOOT.md
3. Follow diagnostic flowchart
### "What did my optimization find?"
1. Load OP_04_ANALYZE_RESULTS.md
2. Query the study database
3. Generate report
---
## Protocol Directory Map
```
docs/protocols/
├── operations/ # Layer 2: How-to guides
│ ├── OP_01_CREATE_STUDY.md
│ ├── OP_02_RUN_OPTIMIZATION.md
│ ├── OP_03_MONITOR_PROGRESS.md
│ ├── OP_04_ANALYZE_RESULTS.md
│ ├── OP_05_EXPORT_TRAINING_DATA.md
│ └── OP_06_TROUBLESHOOT.md
├── system/ # Layer 3: Core specifications
│ ├── SYS_10_IMSO.md
│ ├── SYS_11_MULTI_OBJECTIVE.md
│ ├── SYS_12_EXTRACTOR_LIBRARY.md
│ ├── SYS_13_DASHBOARD_TRACKING.md
│ └── SYS_14_NEURAL_ACCELERATION.md
└── extensions/ # Layer 4: Extensibility guides
├── EXT_01_CREATE_EXTRACTOR.md
├── EXT_02_CREATE_HOOK.md
├── EXT_03_CREATE_PROTOCOL.md
├── EXT_04_CREATE_SKILL.md
└── templates/
```
---
## Key Constraints (Always Apply)
1. **Python Environment**: Always use `conda activate atomizer`
2. **Never modify master files**: Copy NX files to study working directory first
3. **Code reuse**: Check `optimization_engine/extractors/` before writing new extraction code
4. **Validation**: Always validate config before running optimization
5. **Documentation**: Every study needs README.md and STUDY_REPORT.md
---
## Next Steps After Bootstrap
1. If you know the task type → Go to relevant OP_* or SYS_* protocol
2. If unclear → Ask user clarifying question
3. If complex task → Read `01_CHEATSHEET.md` for quick reference
4. If need detailed loading rules → Read `02_CONTEXT_LOADER.md`


@@ -0,0 +1,230 @@
# Atomizer Quick Reference Cheatsheet
**Version**: 1.0
**Purpose**: Rapid lookup for common operations. "I want X → Use Y"
---
## Task → Protocol Quick Lookup
| I want to... | Use Protocol | Key Command/Action |
|--------------|--------------|-------------------|
| Create a new optimization study | OP_01 | Generate `optimization_config.json` + `run_optimization.py` |
| Run an optimization | OP_02 | `conda activate atomizer && python run_optimization.py` |
| Check optimization progress | OP_03 | Query `study.db` or check dashboard at `localhost:3000` |
| See best results | OP_04 | `optuna-dashboard sqlite:///study.db` or dashboard |
| Export neural training data | OP_05 | `python run_optimization.py --export-training` |
| Fix an error | OP_06 | Read error log → follow diagnostic tree |
| Add custom physics extractor | EXT_01 | Create in `optimization_engine/extractors/` |
| Add lifecycle hook | EXT_02 | Create in `optimization_engine/plugins/` |
---
## Extractor Quick Reference
| Physics | Extractor | Function Call |
|---------|-----------|---------------|
| Max displacement | E1 | `extract_displacement(op2_file, subcase=1)` |
| Natural frequency | E2 | `extract_frequency(op2_file, subcase=1, mode_number=1)` |
| Von Mises stress | E3 | `extract_solid_stress(op2_file, subcase=1, element_type='cquad4')` |
| BDF mass | E4 | `extract_mass_from_bdf(bdf_file)` |
| CAD expression mass | E5 | `extract_mass_from_expression(prt_file, expression_name='p173')` |
| Field data | E6 | `FieldDataExtractor(field_file, result_column, aggregation)` |
| Stiffness (k=F/δ) | E7 | `StiffnessCalculator(...)` |
| Zernike WFE | E8 | `extract_zernike_from_op2(op2_file, bdf_file, subcase)` |
| Zernike relative | E9 | `extract_zernike_relative_rms(op2_file, bdf_file, target, ref)` |
| Zernike builder | E10 | `ZernikeObjectiveBuilder(op2_finder)` |
| Part mass + material | E11 | `extract_part_mass_material(prt_file)` → mass, volume, material |
**Full details**: See `SYS_12_EXTRACTOR_LIBRARY.md` or `modules/extractors-catalog.md`
---
## Protocol Selection Guide
### Single Objective Optimization
```
Question: Do you have ONE goal to minimize/maximize?
├─ Yes, simple problem (smooth, <10 params)
│ └─► Protocol 10 + CMA-ES or GP-BO sampler
├─ Yes, complex problem (noisy, many params)
│ └─► Protocol 10 + TPE sampler
└─ Not sure about problem characteristics?
└─► Protocol 10 with adaptive characterization (default)
```
### Multi-Objective Optimization
```
Question: Do you have 2-3 competing goals?
├─ Yes (e.g., minimize mass AND minimize stress)
│ └─► Protocol 11 + NSGA-II sampler
└─ Pareto front needed?
└─► Protocol 11 (returns best_trials, not best_trial)
```
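The reason Protocol 11 returns `best_trials` (plural) is Pareto dominance: with competing objectives, no single design minimizes everything at once. A library-free sketch of non-dominated filtering (all objectives minimized):

```python
def pareto_front(points):
    """Return the non-dominated subset of (obj1, obj2, ...) tuples,
    assuming every objective is minimized."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, with (mass, stress) pairs, a heavy low-stress design and a light high-stress design can both survive the filter; only designs beaten on every axis are dropped.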
### Neural Network Acceleration
```
Question: Do you need >50 trials OR surrogate model?
├─ Yes
│ └─► Protocol 14 (configure surrogate_settings in config)
└─ Training data export needed?
└─► OP_05_EXPORT_TRAINING_DATA.md
```
---
## Configuration Quick Reference
### optimization_config.json Structure
```json
{
"study_name": "my_study",
"design_variables": [
{"name": "thickness", "min": 1.0, "max": 10.0, "unit": "mm"}
],
"objectives": [
{"name": "mass", "goal": "minimize", "unit": "kg"}
],
"constraints": [
{"name": "max_stress", "type": "<=", "threshold": 250, "unit": "MPa"}
],
"optimization_settings": {
"protocol": "protocol_10_single_objective",
"sampler": "TPESampler",
"n_trials": 50
},
"simulation": {
"model_file": "model.prt",
"sim_file": "model.sim",
"solver": "nastran"
}
}
```
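Key constraint 4 says to validate the config before running. A minimal pre-flight check against the structure above; the field names match the JSON schema shown, but the specific checks are an illustrative subset, not the framework's actual validator:

```python
import json

def validate_config(path):
    """Return a list of error strings; empty list means the config passed."""
    with open(path) as f:
        cfg = json.load(f)
    errors = []
    for key in ("study_name", "design_variables", "objectives", "optimization_settings"):
        if key not in cfg:
            errors.append(f"missing required key: {key}")
    for dv in cfg.get("design_variables", []):
        if dv.get("min", 0) >= dv.get("max", 0):
            errors.append(f"design variable {dv.get('name')}: min must be < max")
    for obj in cfg.get("objectives", []):
        if obj.get("goal") not in ("minimize", "maximize"):
            errors.append(f"objective {obj.get('name')}: goal must be minimize/maximize")
    return errors
```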
### Sampler Quick Selection
| Sampler | Use When | Protocol |
|---------|----------|----------|
| `TPESampler` | Default, robust to noise | P10 |
| `CMAESSampler` | Smooth, unimodal problems | P10 |
| `GPSampler` | Expensive FEA, few trials | P10 |
| `NSGAIISampler` | Multi-objective (2-3 goals) | P11 |
| `RandomSampler` | Characterization phase only | P10 |
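One wrinkle when turning the config's `sampler` string into an object: Optuna's actual CMA-ES class is spelled `CmaEsSampler`, not `CMAESSampler`, so a factory should map names explicitly. The factory below is a hypothetical sketch, not the framework's own loader:

```python
import importlib

# Maps the config's sampler string to (module, class name) in Optuna.
# Note the CMA-ES spelling difference handled explicitly.
SAMPLER_CLASSES = {
    "TPESampler": ("optuna.samplers", "TPESampler"),
    "CMAESSampler": ("optuna.samplers", "CmaEsSampler"),
    "GPSampler": ("optuna.samplers", "GPSampler"),
    "NSGAIISampler": ("optuna.samplers", "NSGAIISampler"),
    "RandomSampler": ("optuna.samplers", "RandomSampler"),
}

def make_sampler(name, seed=None):
    module_name, class_name = SAMPLER_CLASSES[name]  # KeyError -> unknown sampler
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(seed=seed)  # all listed samplers accept a seed kwarg
```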
---
## Study File Structure
```
studies/{study_name}/
├── 1_setup/
│ ├── model/ # NX files (.prt, .sim, .fem)
│ └── optimization_config.json
├── 2_results/
│ ├── study.db # Optuna SQLite database
│ ├── optimizer_state.json # Real-time state (P13)
│ └── trial_logs/
├── README.md # MANDATORY: Engineering blueprint
├── STUDY_REPORT.md # MANDATORY: Results tracking
└── run_optimization.py # Entrypoint script
```
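The layout above can be scaffolded in a few lines; a sketch using only `pathlib` (directory and file names follow the tree, the helper itself is illustrative):

```python
from pathlib import Path

def scaffold_study(root, study_name):
    """Create the standard study skeleton and return its path.
    Mandatory docs are created empty; model files are copied in separately."""
    study = Path(root) / "studies" / study_name
    (study / "1_setup" / "model").mkdir(parents=True, exist_ok=True)
    (study / "2_results" / "trial_logs").mkdir(parents=True, exist_ok=True)
    for name in ("README.md", "STUDY_REPORT.md", "run_optimization.py"):
        (study / name).touch()
    return study
```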
---
## Common Commands
```bash
# Activate environment (ALWAYS FIRST)
conda activate atomizer
# Run optimization
python run_optimization.py
# Run with specific trial count
python run_optimization.py --n-trials 100
# Resume interrupted optimization
python run_optimization.py --resume
# Export training data for neural network
python run_optimization.py --export-training
# View results in Optuna dashboard
optuna-dashboard sqlite:///2_results/study.db
# Check study status
python -c "import optuna; s=optuna.load_study(study_name='my_study', storage='sqlite:///2_results/study.db'); print(f'Trials: {len(s.trials)}')"
```
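For a quick status check without spinning up a dashboard, `study.db` can be queried directly. This sketch assumes Optuna's default relational schema (a `trials` table whose `state` column holds values like `COMPLETE`, `RUNNING`, `FAIL`); verify against your Optuna version before relying on it:

```python
import sqlite3

def trial_counts(db_path):
    """Return {state: count} for all trials in an Optuna SQLite storage."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT state, COUNT(*) FROM trials GROUP BY state"
    ).fetchall()
    con.close()
    return dict(rows)
```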
---
## Error Quick Fixes
| Error | Likely Cause | Quick Fix |
|-------|--------------|-----------|
| "No module named optuna" | Wrong environment | `conda activate atomizer` |
| "NX session timeout" | Model too complex | Increase `timeout` in config |
| "OP2 file not found" | Solve failed | Check NX log for errors |
| "No feasible solutions" | Constraints too tight | Relax constraint thresholds |
| "NSGA-II requires >1 objective" | Wrong protocol | Use P10 for single-objective |
| "Expression not found" | Wrong parameter name | Verify expression names in NX |
| **All trials identical results** | **Missing `*_i.prt`** | **Copy idealized part to study folder!** |
**Full troubleshooting**: See `OP_06_TROUBLESHOOT.md`
---
## CRITICAL: NX FEM Mesh Update
**If all optimization trials produce identical results, the mesh is NOT updating!**
### Required Files for Mesh Updates
```
studies/{study}/1_setup/model/
├── Model.prt # Geometry
├── Model_fem1_i.prt # Idealized part ← MUST EXIST!
├── Model_fem1.fem # FEM
└── Model_sim1.sim # Simulation
```
### Why It Matters
The `*_i.prt` (idealized part) MUST be:
1. **Present** in the study folder
2. **Loaded** before `UpdateFemodel()` (already implemented in `solve_simulation.py`)
Without it, `UpdateFemodel()` runs but the mesh doesn't change!
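Because this failure mode is silent, it is worth a pre-flight check before launching trials. A sketch assuming the `{fem_stem}_i.prt` naming convention shown in the tree above:

```python
from pathlib import Path

def missing_idealized_parts(model_dir):
    """Return names of *_i.prt files that should exist but don't,
    one per .fem file in the model folder (e.g. Model_fem1.fem -> Model_fem1_i.prt)."""
    folder = Path(model_dir)
    missing = []
    for fem in folder.glob("*.fem"):
        idealized = folder / f"{fem.stem}_i.prt"
        if not idealized.exists():
            missing.append(idealized.name)
    return missing
```

Run this against `1_setup/model/` before OP_02; a non-empty result means every trial would silently reuse the same mesh.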
---
## Privilege Levels
| Level | Can Create Studies | Can Add Extractors | Can Add Protocols |
|-------|-------------------|-------------------|------------------|
| user | ✓ | ✗ | ✗ |
| power_user | ✓ | ✓ | ✗ |
| admin | ✓ | ✓ | ✓ |
---
## Dashboard URLs
| Service | URL | Purpose |
|---------|-----|---------|
| Atomizer Dashboard | `http://localhost:3000` | Real-time optimization monitoring |
| Optuna Dashboard | `http://localhost:8080` | Trial history, parameter importance |
| API Backend | `http://localhost:5000` | REST API for dashboard |
---
## Protocol Numbers Reference
| # | Name | Purpose |
|---|------|---------|
| 10 | IMSO | Intelligent Multi-Strategy Optimization (adaptive) |
| 11 | Multi-Objective | NSGA-II for Pareto optimization |
| 12 | - | (Reserved) |
| 13 | Dashboard | Real-time tracking and visualization |
| 14 | Neural | Surrogate model acceleration |


@@ -0,0 +1,308 @@
# Atomizer Context Loader
**Version**: 1.0
**Purpose**: Define what documentation to load based on task type. Ensures LLM sessions have exactly the context needed.
---
## Context Loading Philosophy
1. **Minimal by default**: Don't load everything; load what's needed
2. **Expand on demand**: Load additional modules when signals detected
3. **Single source of truth**: Each concept defined in ONE place
4. **Layer progression**: Bootstrap → Operations → System → Extensions
---
## Task-Based Loading Rules
### CREATE_STUDY
**Trigger Keywords**: "new", "set up", "create", "optimize", "study"
**Always Load**:
```
.claude/skills/core/study-creation-core.md
```
**Load If**:
| Condition | Load |
|-----------|------|
| User asks about extractors | `modules/extractors-catalog.md` |
| Telescope/mirror/optics mentioned | `modules/zernike-optimization.md` |
| >50 trials OR "neural" OR "surrogate" | `modules/neural-acceleration.md` |
| Multi-objective (2+ goals) | `docs/protocols/system/SYS_11_MULTI_OBJECTIVE.md` |
**Example Context Stack**:
```
# Simple bracket optimization
core/study-creation-core.md
# Mirror optimization with neural acceleration
core/study-creation-core.md
modules/zernike-optimization.md
modules/neural-acceleration.md
```
---
### RUN_OPTIMIZATION
**Trigger Keywords**: "start", "run", "execute", "begin", "launch"
**Always Load**:
```
docs/protocols/operations/OP_02_RUN_OPTIMIZATION.md
```
**Load If**:
| Condition | Load |
|-----------|------|
| "adaptive" OR "characterization" | `docs/protocols/system/SYS_10_IMSO.md` |
| "dashboard" OR "real-time" | `docs/protocols/system/SYS_13_DASHBOARD_TRACKING.md` |
| "resume" OR "continue" | OP_02 has resume section |
| Errors occur | `docs/protocols/operations/OP_06_TROUBLESHOOT.md` |
---
### MONITOR_PROGRESS
**Trigger Keywords**: "status", "progress", "how many", "trials", "check"
**Always Load**:
```
docs/protocols/operations/OP_03_MONITOR_PROGRESS.md
```
**Load If**:
| Condition | Load |
|-----------|------|
| Dashboard questions | `docs/protocols/system/SYS_13_DASHBOARD_TRACKING.md` |
| Pareto/multi-objective | `docs/protocols/system/SYS_11_MULTI_OBJECTIVE.md` |
---
### ANALYZE_RESULTS
**Trigger Keywords**: "results", "best", "compare", "pareto", "report"
**Always Load**:
```
docs/protocols/operations/OP_04_ANALYZE_RESULTS.md
```
**Load If**:
| Condition | Load |
|-----------|------|
| Multi-objective/Pareto | `docs/protocols/system/SYS_11_MULTI_OBJECTIVE.md` |
| Surrogate accuracy | `docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md` |
---
### EXPORT_TRAINING_DATA
**Trigger Keywords**: "export", "training data", "neural network data"
**Always Load**:
```
docs/protocols/operations/OP_05_EXPORT_TRAINING_DATA.md
modules/neural-acceleration.md
```
---
### TROUBLESHOOT
**Trigger Keywords**: "error", "failed", "not working", "crashed", "help"
**Always Load**:
```
docs/protocols/operations/OP_06_TROUBLESHOOT.md
```
**Load If**:
| Error Type | Load |
|------------|------|
| NX/solve errors | NX solver section of core skill |
| Extractor errors | `modules/extractors-catalog.md` |
| Dashboard errors | `docs/protocols/system/SYS_13_DASHBOARD_TRACKING.md` |
| Neural errors | `docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md` |
---
### UNDERSTAND_PROTOCOL
**Trigger Keywords**: "what is", "how does", "explain", "protocol"
**Load Based on Topic**:
| Topic | Load |
|-------|------|
| Protocol 10 / IMSO / adaptive | `docs/protocols/system/SYS_10_IMSO.md` |
| Protocol 11 / multi-objective / NSGA | `docs/protocols/system/SYS_11_MULTI_OBJECTIVE.md` |
| Extractors / physics extraction | `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` |
| Protocol 13 / dashboard / real-time | `docs/protocols/system/SYS_13_DASHBOARD_TRACKING.md` |
| Protocol 14 / neural / surrogate | `docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md` |
---
### EXTEND_FUNCTIONALITY
**Trigger Keywords**: "create extractor", "add hook", "new protocol", "extend"
**Requires**: Privilege check first (see 00_BOOTSTRAP.md)
| Extension Type | Load | Privilege |
|----------------|------|-----------|
| New extractor | `docs/protocols/extensions/EXT_01_CREATE_EXTRACTOR.md` | power_user |
| New hook | `docs/protocols/extensions/EXT_02_CREATE_HOOK.md` | power_user |
| New protocol | `docs/protocols/extensions/EXT_03_CREATE_PROTOCOL.md` | admin |
| New skill | `docs/protocols/extensions/EXT_04_CREATE_SKILL.md` | admin |
**Always Load for Extractors**:
```
modules/nx-docs-lookup.md # NX API documentation via MCP
```
---
### NX_DEVELOPMENT
**Trigger Keywords**: "NX Open", "NXOpen", "NX API", "Simcenter", "Nastran card", "NX script"
**Always Load**:
```
modules/nx-docs-lookup.md
```
**MCP Tools Available**:
| Tool | Purpose |
|------|---------|
| `siemens_docs_search` | Search NX Open, Simcenter, Teamcenter docs |
| `siemens_docs_fetch` | Fetch specific documentation page |
| `siemens_auth_status` | Check Siemens SSO session status |
| `siemens_login` | Re-authenticate if session expired |
**Use When**:
- Building new extractors that use NX Open APIs
- Debugging NX automation errors
- Looking up Nastran card formats
- Finding correct method signatures
---
## Signal Detection Patterns
Use these patterns to detect when to load additional modules:
### Zernike/Mirror Detection
```
Signals: "mirror", "telescope", "wavefront", "WFE", "Zernike",
"RMS", "polishing", "optical", "M1", "surface error"
Action: Load modules/zernike-optimization.md
```
### Neural Acceleration Detection
```
Signals: "neural", "surrogate", "NN", "machine learning",
"acceleration", ">50 trials", "fast", "GNN"
Action: Load modules/neural-acceleration.md
```
### Multi-Objective Detection
```
Signals: Two or more objectives with different goals,
"pareto", "tradeoff", "NSGA", "multi-objective",
"minimize X AND maximize Y"
Action: Load SYS_11_MULTI_OBJECTIVE.md
```
### High-Complexity Detection
```
Signals: >10 design variables, "complex", "many parameters",
"adaptive", "characterization", "landscape"
Action: Load SYS_10_IMSO.md
```
### NX Open / Simcenter Detection
```
Signals: "NX Open", "NXOpen", "NX API", "FemPart", "CAE.",
"Nastran", "CQUAD", "CTRIA", "MAT1", "PSHELL",
"mesh", "solver", "OP2", "BDF", "Simcenter"
Action: Load modules/nx-docs-lookup.md
Use MCP tools: siemens_docs_search, siemens_docs_fetch
```
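The signal patterns above map naturally onto compiled regexes. A sketch with the keyword lists trimmed for brevity and thresholds like ">50 trials" simplified away; the dictionary and function are illustrative, not the framework's detector:

```python
import re

# Module to load -> signal pattern (case-insensitive whole words).
SIGNALS = {
    "modules/zernike-optimization.md":
        re.compile(r"\b(mirror|telescope|wavefront|wfe|zernike|optical)\b", re.I),
    "modules/neural-acceleration.md":
        re.compile(r"\b(neural|surrogate|nn|gnn|acceleration)\b", re.I),
    "SYS_11_MULTI_OBJECTIVE.md":
        re.compile(r"\b(pareto|tradeoff|nsga|multi-objective)\b", re.I),
}

def detect_modules(text):
    """Return the modules whose signal pattern fires on the request text."""
    return [doc for doc, pattern in SIGNALS.items() if pattern.search(text)]
```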
---
## Context Stack Examples
### Example 1: Simple Bracket Optimization
```
User: "Help me optimize my bracket for minimum weight"
Load Stack:
1. core/study-creation-core.md # Core study creation logic
```
### Example 2: Telescope Mirror with Neural
```
User: "I need to optimize my M1 mirror's wavefront error with 200 trials"
Load Stack:
1. core/study-creation-core.md # Core study creation
2. modules/zernike-optimization.md # Zernike-specific patterns
3. modules/neural-acceleration.md # Neural acceleration for 200 trials
```
### Example 3: Multi-Objective Structural
```
User: "Minimize mass AND maximize stiffness for my beam"
Load Stack:
1. core/study-creation-core.md # Core study creation
2. SYS_11_MULTI_OBJECTIVE.md # Multi-objective protocol
```
### Example 4: Debug Session
```
User: "My optimization failed with NX timeout error"
Load Stack:
1. OP_06_TROUBLESHOOT.md # Troubleshooting guide
```
### Example 5: Create Custom Extractor
```
User: "I need to extract thermal gradients from my results"
Load Stack:
1. EXT_01_CREATE_EXTRACTOR.md # Extractor creation guide
2. modules/extractors-catalog.md # Reference existing patterns
```
---
## Loading Priority Order
When multiple modules could apply, load in this order:
1. **Core skill** (always first for creation tasks)
2. **Primary operation protocol** (OP_*)
3. **Required system protocols** (SYS_*)
4. **Optional modules** (modules/*)
5. **Extension protocols** (EXT_*) - only if extending
---
## Anti-Patterns (Don't Do)
1. **Don't load everything**: Only load what's needed for the task
2. **Don't load extensions for users**: Check privilege first
3. **Don't skip core skill**: For study creation, always load core first
4. **Don't mix incompatible protocols**: P10 (single-obj) vs P11 (multi-obj)
5. **Don't load deprecated docs**: Only use docs/protocols/* structure


@@ -0,0 +1,398 @@
# Developer Documentation Skill
**Version**: 1.0
**Purpose**: Self-documenting system for Atomizer development. Use this skill to systematically document new features, protocols, extractors, and changes.
---
## Overview
This skill enables **automatic documentation maintenance** during development. When you develop new features, use these commands to keep documentation in sync with code.
---
## Quick Commands for Developers
### Document New Feature
**Tell Claude**:
```
"Document the new {feature} I just added"
```
Claude will:
1. Analyze the code changes
2. Determine which docs need updating
3. Update protocol files
4. Update CLAUDE.md if needed
5. Bump version numbers
6. Create changelog entry
### Document New Extractor
**Tell Claude**:
```
"I created a new extractor: extract_thermal.py. Document it."
```
Claude will:
1. Read the extractor code
2. Add entry to SYS_12_EXTRACTOR_LIBRARY.md
3. Add to extractors-catalog.md module
4. Update __init__.py exports
5. Create test file template
### Document Protocol Change
**Tell Claude**:
```
"I modified Protocol 10 to add {feature}. Update docs."
```
Claude will:
1. Read the code changes
2. Update SYS_10_IMSO.md
3. Bump version number
4. Add to Version History
5. Update cross-references
### Full Documentation Audit
**Tell Claude**:
```
"Audit documentation for {component/study/protocol}"
```
Claude will:
1. Check all related docs
2. Identify stale content
3. Flag missing documentation
4. Suggest updates
---
## Documentation Workflow
### When You Add Code
```
┌─────────────────────────────────────────────────┐
│ 1. WRITE CODE │
│ - New extractor, hook, or feature │
└─────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────┐
│ 2. TELL CLAUDE │
│ "Document the new {feature} I added" │
└─────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────┐
│ 3. CLAUDE UPDATES │
│ - Protocol files │
│ - Skill modules │
│ - Version numbers │
│ - Cross-references │
└─────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────┐
│ 4. REVIEW & COMMIT │
│ - Review changes │
│ - Commit code + docs together │
└─────────────────────────────────────────────────┘
```
---
## Documentation Update Rules
### File → Document Mapping
| If You Change... | Update These Docs |
|------------------|-------------------|
| `optimization_engine/extractors/*` | SYS_12, extractors-catalog.md |
| `optimization_engine/intelligent_optimizer.py` | SYS_10_IMSO.md |
| `optimization_engine/plugins/*` | EXT_02_CREATE_HOOK.md |
| `atomizer-dashboard/*` | SYS_13_DASHBOARD_TRACKING.md |
| `atomizer-field/*` | SYS_14_NEURAL_ACCELERATION.md |
| Any multi-objective code | SYS_11_MULTI_OBJECTIVE.md |
| Study creation workflow | OP_01_CREATE_STUDY.md |
| Run workflow | OP_02_RUN_OPTIMIZATION.md |
### Version Bumping Rules
| Change Type | Version Bump | Example |
|-------------|--------------|---------|
| Bug fix | Patch (+0.0.1) | 1.0.0 → 1.0.1 |
| New feature (backwards compatible) | Minor (+0.1.0) | 1.0.0 → 1.1.0 |
| Breaking change | Major (+1.0.0) | 1.0.0 → 2.0.0 |
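The bump rules above, as a small helper (purely illustrative):

```python
def bump(version, change):
    """Apply semantic-version bump rules: breaking -> major,
    feature -> minor, anything else (bug fix) -> patch."""
    major, minor, patch = map(int, version.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```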
### Required Updates for New Extractor
1. **SYS_12_EXTRACTOR_LIBRARY.md**:
- Add to Quick Reference table (assign E{N} ID)
- Add detailed section with code example
2. **skills/modules/extractors-catalog.md** (when created):
- Add entry with copy-paste code snippet
3. **optimization_engine/extractors/__init__.py**:
- Add import and export
4. **Tests**:
- Create `tests/test_extract_{name}.py`
### Required Updates for New Protocol
1. **docs/protocols/system/SYS_{N}_{NAME}.md**:
- Create full protocol document
2. **docs/protocols/README.md**:
- Add to navigation tables
3. **.claude/skills/01_CHEATSHEET.md**:
- Add to quick lookup table
4. **.claude/skills/02_CONTEXT_LOADER.md**:
- Add loading rules
5. **CLAUDE.md**:
- Add reference if major feature
---
## Self-Documentation Commands
### "Document this change"
Claude analyzes recent changes and updates relevant docs.
**Input**: Description of what you changed
**Output**: Updated protocol files, version bumps, changelog
### "Create protocol for {feature}"
Claude creates a new protocol document following the template.
**Input**: Feature name and description
**Output**: New SYS_* or EXT_* document
### "Verify documentation for {component}"
Claude checks that docs match code.
**Input**: Component name
**Output**: List of discrepancies and suggested fixes
### "Generate changelog since {date/commit}"
Claude creates a changelog from git history.
**Input**: Date or commit reference
**Output**: Formatted changelog
---
## Protocol Document Template
When creating new protocols, use this structure:
```markdown
# {LAYER}_{NUMBER}_{NAME}.md
<!--
PROTOCOL: {Full Name}
LAYER: {Operations|System|Extensions}
VERSION: 1.0
STATUS: Active
LAST_UPDATED: {YYYY-MM-DD}
PRIVILEGE: {user|power_user|admin}
LOAD_WITH: [{dependencies}]
-->
## Overview
{1-3 sentence description}
## When to Use
| Trigger | Action |
|---------|--------|
## Quick Reference
{Tables, key parameters}
## Detailed Specification
{Full content}
## Examples
{Working examples}
## Troubleshooting
| Symptom | Cause | Solution |
## Cross-References
- Depends On: []
- Used By: []
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 1.0 | {DATE} | Initial release |
```
---
## Changelog Format
When updating protocols, add to Version History:
```markdown
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 1.2.0 | 2025-12-05 | Added thermal gradient support |
| 1.1.0 | 2025-12-01 | Improved error handling |
| 1.0.0 | 2025-11-20 | Initial release |
```
---
## Integration with Git
### Commit Message Format
When documenting:
```
docs: Update SYS_12 with thermal extractor
- Added E11 thermal gradient extractor
- Updated Quick Reference table
- Added code example in Detailed section
- Bumped version to 1.1.0
```
### Pre-Commit Checklist
Before committing code changes:
- [ ] Related docs updated
- [ ] Version numbers bumped
- [ ] Cross-references valid
- [ ] Examples tested
- [ ] CLAUDE.md updated if major feature
---
## Example: Document New Extractor
**You say**:
```
"I just created extract_thermal_gradient.py in optimization_engine/extractors/.
It extracts thermal gradients from temperature fields. Document it."
```
**Claude does**:
1. Reads `extract_thermal_gradient.py`
2. Updates `SYS_12_EXTRACTOR_LIBRARY.md`:
```markdown
| E11 | Thermal Gradient | `extract_thermal_gradient()` | .op2 | K/mm |
```
3. Adds detailed section with code example
4. Updates `__init__.py`:
```python
from .extract_thermal_gradient import extract_thermal_gradient
__all__ = [..., 'extract_thermal_gradient']
```
5. Bumps SYS_12 version: 1.0 → 1.1
6. Reports what was updated
---
## Example: Document Protocol Change
**You say**:
```
"I added a new stopping criterion to Protocol 10 based on improvement rate.
Update the documentation."
```
**Claude does**:
1. Reads relevant code changes
2. Updates `SYS_10_IMSO.md`:
- Adds to Configuration section
- Updates Architecture diagram if needed
- Adds to Quick Reference
3. Bumps version: 2.1 → 2.2
4. Adds Version History entry:
```markdown
| 2.2 | 2025-12-05 | Added improvement rate stopping criterion |
```
5. Updates cross-references if needed
---
## Keeping Docs in Sync
### Daily Development
```
Morning: Start coding
├─► Write new feature
├─► Test feature
├─► "Claude, document the {feature} I just added"
└─► Commit code + docs together
```
### Weekly Audit
```
Friday:
├─► "Claude, audit documentation for recent changes"
├─► Review flagged issues
└─► Fix any stale documentation
```
### Release Preparation
```
Before release:
├─► "Claude, generate changelog since last release"
├─► "Claude, verify all protocol versions are consistent"
└─► Final review and version bump
```
---
## Summary
**To keep documentation in sync**:
1. **After coding**: Tell Claude what you changed
2. **Be specific**: "I added X to Y" works better than "update docs"
3. **Commit together**: Code and docs in same commit
4. **Regular audits**: Weekly check for stale docs
**Claude handles**:
- Finding which docs need updates
- Following the template structure
- Version bumping
- Cross-reference updates
- Changelog generation
**You handle**:
- Telling Claude what changed
- Reviewing Claude's updates
- Final commit


@@ -0,0 +1,361 @@
# Protocol Execution Framework (PEF)
**Version**: 1.0
**Purpose**: Meta-protocol defining how LLM sessions execute Atomizer protocols. The "protocol for using protocols."
---
## Core Execution Pattern
For ANY task, follow this 6-step pattern:
```
┌─────────────────────────────────────────────────────────────┐
│ 1. ANNOUNCE │
│ State what you're about to do in plain language │
│ "I'll create an optimization study for your bracket..." │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ 2. VALIDATE │
│ Check prerequisites are met │
│ - Required files exist? │
│ - Environment ready? │
│ - User has confirmed understanding? │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ 3. EXECUTE │
│ Perform the action following protocol steps │
│ - Load required context per 02_CONTEXT_LOADER.md │
│ - Follow protocol step-by-step │
│ - Handle errors with OP_06 patterns │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ 4. VERIFY │
│ Confirm success │
│ - Files created correctly? │
│ - No errors in output? │
│ - Results make sense? │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ 5. REPORT │
│ Summarize what was done │
│ - List files created/modified │
│ - Show key results │
│ - Note any warnings │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ 6. SUGGEST │
│ Offer logical next steps │
│ - What should user do next? │
│ - Related operations available? │
│ - Dashboard URL if relevant? │
└─────────────────────────────────────────────────────────────┘
```
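As a sketch, the whole pattern can be driven by one function whose callables correspond to the six boxes above (the helper signatures here are illustrative, not part of the framework):

```python
from typing import Callable, Optional


def execute_pattern(
    announce: Callable[[], None],
    validate: Callable[[], bool],
    execute: Callable[[], object],
    verify: Callable[[object], bool],
    report: Callable[[object], None],
    suggest: Callable[[], None],
) -> Optional[object]:
    """Run the ANNOUNCE/VALIDATE/EXECUTE/VERIFY/REPORT/SUGGEST pattern."""
    announce()              # 1. state the intent in plain language
    if not validate():      # 2. check prerequisites before acting
        return None
    result = execute()      # 3. perform the protocol steps
    if not verify(result):  # 4. confirm the outcome makes sense
        return None
    report(result)          # 5. summarize what was done
    suggest()               # 6. offer logical next steps
    return result
```

Stopping at step 2 or step 4 (returning `None`) is what routes the session into the Error Recovery Protocol below.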
---
## Task Classification Rules
Before executing, classify the user's request:
### Step 1: Identify Task Category
```python
TASK_CATEGORIES = {
"CREATE": {
"keywords": ["new", "create", "set up", "optimize", "study", "build"],
"protocol": "OP_01_CREATE_STUDY",
"privilege": "user"
},
"RUN": {
"keywords": ["start", "run", "execute", "begin", "launch"],
"protocol": "OP_02_RUN_OPTIMIZATION",
"privilege": "user"
},
"MONITOR": {
"keywords": ["status", "progress", "check", "how many", "trials"],
"protocol": "OP_03_MONITOR_PROGRESS",
"privilege": "user"
},
"ANALYZE": {
"keywords": ["results", "best", "compare", "pareto", "report"],
"protocol": "OP_04_ANALYZE_RESULTS",
"privilege": "user"
},
"EXPORT": {
"keywords": ["export", "training data", "neural data"],
"protocol": "OP_05_EXPORT_TRAINING_DATA",
"privilege": "user"
},
"DEBUG": {
"keywords": ["error", "failed", "not working", "crashed", "help"],
"protocol": "OP_06_TROUBLESHOOT",
"privilege": "user"
},
"EXTEND": {
"keywords": ["add extractor", "create hook", "new protocol"],
"protocol": "EXT_*",
"privilege": "power_user+"
}
}
```
### Step 2: Check Privilege
```python
def check_privilege(task_category, user_role):
required = TASK_CATEGORIES[task_category]["privilege"]
privilege_hierarchy = ["user", "power_user", "admin"]
if privilege_hierarchy.index(user_role) >= privilege_hierarchy.index(required):
return True
else:
# Inform user they need higher privilege
return False
```
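A keyword-based classifier over these categories might look like the following sketch (trimmed to three categories for brevity; the most-matches scoring rule is an assumption, not the framework's actual router):

```python
TASK_KEYWORDS = {
    "CREATE": ["new", "create", "set up", "optimize", "study", "build"],
    "RUN": ["start", "run", "execute", "begin", "launch"],
    "MONITOR": ["status", "progress", "check", "how many", "trials"],
}


def classify_task(request: str) -> str:
    """Pick the category whose keywords appear most often in the request."""
    text = request.lower()
    scores = {
        category: sum(1 for kw in keywords if kw in text)
        for category, keywords in TASK_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "UNKNOWN"
```

Ambiguous or zero-score requests should fall through to a clarifying question rather than a guessed protocol.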
### Step 3: Load Context
Follow rules in `02_CONTEXT_LOADER.md` to load appropriate documentation.
---
## Validation Checkpoints
Before executing any protocol step, validate:
### Pre-Study Creation
- [ ] Model files exist (`.prt`, `.sim`)
- [ ] Working directory is writable
- [ ] User has described objectives clearly
- [ ] Conda environment is atomizer
### Pre-Run
- [ ] `optimization_config.json` exists and is valid
- [ ] `run_optimization.py` exists
- [ ] Model files copied to `1_setup/model/`
- [ ] No conflicting process running
### Pre-Analysis
- [ ] `study.db` exists with completed trials
- [ ] No optimization currently running
### Pre-Extension (power_user+)
- [ ] User has confirmed their role
- [ ] Extension doesn't duplicate existing functionality
- [ ] Tests can be written for new code
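The Pre-Run checkpoint, for instance, is easy to automate. This sketch checks the same files, using the study layout described elsewhere in this document:

```python
from pathlib import Path


def validate_pre_run(study_dir: Path) -> list:
    """Return a list of Pre-Run checkpoint failures (empty list = ready)."""
    problems = []
    if not (study_dir / "1_setup" / "optimization_config.json").exists():
        problems.append("missing 1_setup/optimization_config.json")
    if not (study_dir / "run_optimization.py").exists():
        problems.append("missing run_optimization.py")
    model_dir = study_dir / "1_setup" / "model"
    if not model_dir.is_dir() or not any(model_dir.iterdir()):
        problems.append("no model files in 1_setup/model/")
    return problems
```

Reporting all failures at once (instead of stopping at the first) gives the user one fix-up pass instead of several.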
---
## Error Recovery Protocol
When something fails during execution:
### Step 1: Identify Failure Point
```
Which step failed?
├─ File creation? → Check permissions, disk space
├─ NX solve? → Check NX log, timeout, expressions
├─ Extraction? → Check OP2 exists, subcase correct
├─ Database? → Check SQLite file, trial count
└─ Unknown? → Capture full error, check OP_06
```
### Step 2: Attempt Recovery
```python
RECOVERY_ACTIONS = {
"file_permission": "Check directory permissions, try different location",
"nx_timeout": "Increase timeout in config, simplify model",
"nx_expression_error": "Verify expression names match NX model",
"op2_missing": "Check NX solve completed successfully",
"extractor_error": "Verify correct subcase and element types",
"database_locked": "Wait for other process to finish, or kill stale process",
}
```
### Step 3: Escalate if Needed
If recovery fails:
1. Log the error with full context
2. Inform user of the issue
3. Suggest manual intervention if appropriate
4. Offer to retry after user fixes underlying issue
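A retry loop that consults the recovery table could be sketched as follows (the `classify_error` callable, which maps an exception to a recovery key, is a hypothetical helper):

```python
RECOVERY_ACTIONS = {
    "nx_timeout": "Increase timeout in config, simplify model",
    "database_locked": "Wait for other process to finish, or kill stale process",
}


def run_with_recovery(action, classify_error, max_attempts=3):
    """Try an action; on failure, look up a suggested fix and retry."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except Exception as exc:              # step 1: identify failure point
            last_error = exc
            key = classify_error(exc)         # e.g. "nx_timeout"
            hint = RECOVERY_ACTIONS.get(key, "Capture full error, check OP_06")
            print(f"Attempt {attempt} failed ({key}): {hint}")
    raise last_error                          # step 3: escalate
```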
---
## Protocol Combination Rules
Some protocols work together, others conflict:
### Valid Combinations
```
OP_01 + SYS_10 # Create study with IMSO
OP_01 + SYS_11 # Create multi-objective study
OP_01 + SYS_14 # Create study with neural acceleration
OP_02 + SYS_13 # Run with dashboard tracking
OP_04 + SYS_11 # Analyze multi-objective results
```
### Invalid Combinations
```
SYS_10 + SYS_11 # Single-obj IMSO with multi-obj NSGA (pick one)
TPESampler + SYS_11 # TPE is single-objective; use NSGAIISampler
EXT_* without privilege # Extensions require power_user or admin
```
### Automatic Protocol Inference
```
If objectives.length == 1:
→ Use Protocol 10 (single-objective)
→ Sampler: TPE, CMA-ES, or GP
If objectives.length > 1:
→ Use Protocol 11 (multi-objective)
→ Sampler: NSGA-II (mandatory)
If n_trials > 50 OR surrogate_settings present:
→ Add Protocol 14 (neural acceleration)
```
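The same inference rules, written out in Python (protocol and sampler names match the tables above):

```python
def infer_protocols(n_objectives: int, n_trials: int, surrogate: bool = False) -> dict:
    """Map study shape to protocol and sampler per the rules above."""
    if n_objectives == 1:
        plan = {"protocol": "SYS_10", "sampler": "TPESampler"}    # or CMA-ES / GP
    else:
        plan = {"protocol": "SYS_11", "sampler": "NSGAIISampler"}  # mandatory for multi-objective
    plan["neural_acceleration"] = n_trials > 50 or surrogate       # Protocol 14
    return plan
```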
---
## Execution Logging
During execution, maintain awareness of:
### Session State
```python
session_state = {
"current_study": None, # Active study name
"loaded_protocols": [], # Protocols currently loaded
"completed_steps": [], # Steps completed this session
"pending_actions": [], # Actions waiting for user
"last_error": None, # Most recent error if any
}
```
### User Communication
- Always explain what you're doing
- Show progress for long operations
- Warn before destructive actions
- Confirm before expensive operations (many trials)
---
## Confirmation Requirements
Some actions require explicit user confirmation:
### Always Confirm
- [ ] Deleting files or studies
- [ ] Overwriting existing study
- [ ] Running >100 trials
- [ ] Modifying master NX files (FORBIDDEN - but confirm user understands)
- [ ] Creating extension (power_user+)
### Confirm If Uncertain
- [ ] Ambiguous objective (minimize or maximize?)
- [ ] Multiple possible extractors
- [ ] Complex multi-solution setup
### No Confirmation Needed
- [ ] Creating new study in empty directory
- [ ] Running validation checks
- [ ] Reading/analyzing results
- [ ] Checking status
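These rules reduce to a small gate function (the action names are illustrative labels, not framework identifiers):

```python
ALWAYS_CONFIRM = {
    "delete_study",
    "overwrite_study",
    "modify_master_files",   # FORBIDDEN, but confirm the user understands
    "create_extension",      # power_user+
}


def needs_confirmation(action: str, n_trials: int = 0) -> bool:
    """True if the action requires explicit user confirmation."""
    if action in ALWAYS_CONFIRM:
        return True
    if action == "run_trials" and n_trials > 100:
        return True
    return False
```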
---
## Output Format Standards
When reporting results:
### Study Creation Output
```
Created study: {study_name}
Files generated:
- studies/{study_name}/1_setup/optimization_config.json
- studies/{study_name}/run_optimization.py
- studies/{study_name}/README.md
- studies/{study_name}/STUDY_REPORT.md
Configuration:
- Design variables: {count}
- Objectives: {list}
- Constraints: {list}
- Protocol: {protocol}
- Trials: {n_trials}
Next steps:
1. Copy your NX files to studies/{study_name}/1_setup/model/
2. Run: conda activate atomizer && python run_optimization.py
3. Monitor: http://localhost:3000
```
### Run Status Output
```
Study: {study_name}
Status: {running|completed|failed}
Trials: {completed}/{total}
Best value: {value} ({objective_name})
Elapsed: {time}
Dashboard: http://localhost:3000
```
### Error Output
```
Error: {error_type}
Message: {error_message}
Location: {file}:{line}
Diagnosis:
{explanation}
Recovery:
{steps to fix}
Reference: OP_06_TROUBLESHOOT.md
```
---
## Quality Checklist
Before considering any task complete:
### For Study Creation
- [ ] `optimization_config.json` validates successfully
- [ ] `run_optimization.py` has no syntax errors
- [ ] `README.md` has all 11 required sections
- [ ] `STUDY_REPORT.md` template created
- [ ] No code duplication (used extractors from library)
### For Execution
- [ ] Optimization started without errors
- [ ] Dashboard shows real-time updates (if enabled)
- [ ] Trials are progressing
### For Analysis
- [ ] Best result(s) identified
- [ ] Constraints satisfied
- [ ] Report generated if requested
### For Extensions
- [ ] New code added to correct location
- [ ] `__init__.py` updated with exports
- [ ] Documentation updated
- [ ] Tests written (or noted as TODO)
@@ -1,7 +1,7 @@
# Analyze Model Skill
-**Last Updated**: November 25, 2025
-**Version**: 1.0 - Model Analysis and Feature Extraction
+**Last Updated**: December 6, 2025
+**Version**: 2.0 - Added Comprehensive Model Introspection
You are helping the user understand their NX model's structure and identify optimization opportunities.
@@ -11,7 +11,8 @@ Extract and present information about an NX model to help the user:
1. Identify available parametric expressions (potential design variables)
2. Understand the simulation setup (analysis types, boundary conditions)
3. Discover material properties
-4. Recommend optimization strategies based on model characteristics
+4. Identify extractable results from OP2 files
+5. Recommend optimization strategies based on model characteristics
## Triggers
@@ -20,28 +21,107 @@ Extract and present information about an NX model to help the user:
- "show me the expressions"
- "look at my NX model"
- "what parameters are available"
- "introspect my model"
- "what results are available"
## Prerequisites
-- User must provide path to NX model files (.prt, .sim, .fem)
-- NX must be available on the system (configured in config.py)
-- Model files must be valid NX format
+- User must provide path to NX model files (.prt, .sim, .fem) or study directory
+- NX must be available on the system for part/sim introspection
+- OP2 introspection works without NX (pure Python)
## Information Gathering
Ask these questions if not already provided:
1. **Model Location**:
-- "Where is your NX model? (path to .prt file)"
+- "Where is your NX model? (path to .prt file or study directory)"
- Default: Look in `studies/*/1_setup/model/`
2. **Analysis Interest**:
- "What type of optimization are you considering?" (optional)
- This helps focus the analysis on relevant aspects
---
## MANDATORY: Model Introspection
**ALWAYS use the introspection module for comprehensive model analysis:**
```python
from optimization_engine.hooks.nx_cad.model_introspection import (
introspect_part,
introspect_simulation,
introspect_op2,
introspect_study
)
# Option 1: Introspect entire study directory (recommended)
study_info = introspect_study("studies/my_study/")
# Option 2: Introspect individual files
part_info = introspect_part("path/to/model.prt")
sim_info = introspect_simulation("path/to/model.sim")
op2_info = introspect_op2("path/to/results.op2")
```
### What Introspection Extracts
| Source | Information Extracted |
|--------|----------------------|
| `.prt` | Expressions (count, values, types), bodies, mass, material, features |
| `.sim` | Solutions, boundary conditions, loads, materials, mesh info, output requests |
| `.op2` | Available results (displacement, stress, strain, SPC forces, etc.), subcases |
### Introspection Report Generation
**MANDATORY**: Generate `MODEL_INTROSPECTION.md` for every study:
```python
# Generate and save introspection report
study_info = introspect_study(study_dir)
# Create markdown report
report = generate_introspection_report(study_info)
with open(study_dir / "MODEL_INTROSPECTION.md", "w") as f:
f.write(report)
```
---
## Execution Steps
-### Step 1: Validate Model Files
+### Step 1: Run Comprehensive Introspection
**Use the introspection module (MANDATORY)**:
```python
from optimization_engine.hooks.nx_cad.model_introspection import introspect_study
# Introspect the entire study
result = introspect_study("studies/my_study/")
if result["success"]:
# Part information
for part in result["data"]["parts"]:
print(f"Part: {part['file']}")
print(f" Expressions: {part['data'].get('expression_count', 0)}")
print(f" Bodies: {part['data'].get('body_count', 0)}")
# Simulation information
for sim in result["data"]["simulations"]:
print(f"Simulation: {sim['file']}")
print(f" Solutions: {sim['data'].get('solution_count', 0)}")
# OP2 results
for op2 in result["data"]["results"]:
print(f"OP2: {op2['file']}")
available = op2['data'].get('available_results', {})
print(f" Displacement: {available.get('displacement', False)}")
print(f" Stress: {available.get('stress', False)}")
```
### Step 2: Validate Model Files
Check that required files exist:
@@ -95,26 +175,21 @@ def validate_model_files(model_path: Path) -> dict:
return result
```
-### Step 2: Extract Expressions
+### Step 3: Extract Expressions (via Introspection)
-Use NX Python API to extract all parametric expressions:
+The introspection module extracts expressions automatically:
```python
-# This requires running a journal inside NX
-# Use the expression extractor from optimization_engine
-from optimization_engine.extractors.expression_extractor import extract_all_expressions
-expressions = extract_all_expressions(prt_file)
-# Returns: [{'name': 'thickness', 'value': 2.0, 'unit': 'mm', 'formula': None}, ...]
+from optimization_engine.hooks.nx_cad.model_introspection import introspect_part
+result = introspect_part("path/to/model.prt")
+if result["success"]:
+    expressions = result["data"].get("expressions", [])
+    for expr in expressions:
+        print(f"  {expr['name']}: {expr['value']} {expr.get('unit', '')}")
```
-**Manual Extraction Method** (if NX API not available):
-1. Read the .prt file header for expression metadata
-2. Look for common parameter naming patterns
-3. Ask user to provide expression names from NX
-### Step 3: Classify Expressions
+### Step 4: Classify Expressions
Categorize expressions by likely purpose:
@@ -154,53 +229,121 @@ Based on analysis, recommend:
## Output Format
-Present analysis in structured format:
+Present analysis using the **MODEL_INTROSPECTION.md** format:
```markdown
# Model Introspection Report
**Study**: {study_name}
**Generated**: {date}
**Introspection Version**: 1.0
---
## 1. Files Discovered
| Type | File | Status |
|------|------|--------|
| Part (.prt) | {prt_file} | ✓ Found |
| Simulation (.sim) | {sim_file} | ✓ Found |
| FEM (.fem) | {fem_file} | ✓ Found |
| Results (.op2) | {op2_file} | ✓ Found |
---
## 2. Part Information
### Expressions (Potential Design Variables)
| Name | Value | Unit | Type | Optimization Candidate |
|------|-------|------|------|------------------------|
| thickness | 2.0 | mm | User | ✓ High |
| hole_diameter | 10.0 | mm | User | ✓ High |
| p173_mass | 0.125 | kg | Reference | Read-only |
### Mass Properties
| Property | Value | Unit |
|----------|-------|------|
| Mass | 0.125 | kg |
| Material | Aluminum 6061-T6 | - |
---
## 3. Simulation Information
### Solutions
| Solution | Type | Nastran SOL | Status |
|----------|------|-------------|--------|
| Solution 1 | Static | SOL 101 | ✓ Active |
| Solution 2 | Modal | SOL 103 | ✓ Active |
### Boundary Conditions
| Name | Type | Applied To |
|------|------|------------|
| Fixed_Root | SPC | Face_1 |
### Loads
| Name | Type | Magnitude | Direction |
|------|------|-----------|-----------|
| Tip_Force | FORCE | 500 N | -Z |
---
## 4. Available Results (from OP2)
| Result Type | Available | Subcases |
|-------------|-----------|----------|
| Displacement | ✓ | 1 |
| SPC Forces | ✓ | 1 |
| Stress (CHEXA) | ✓ | 1 |
| Stress (CPENTA) | ✓ | 1 |
| Strain Energy | ✗ | - |
| Frequencies | ✓ | 2 |
---
## 5. Optimization Recommendations
### Suggested Objectives
| Objective | Extractor | Source |
|-----------|-----------|--------|
| Minimize mass | E4: `extract_mass_from_bdf` | .dat |
| Maximize stiffness | E1: `extract_displacement` → k=F/δ | .op2 |
### Suggested Constraints
| Constraint | Type | Threshold | Extractor |
|------------|------|-----------|-----------|
| Max stress | less_than | 250 MPa | E3: `extract_solid_stress` |
### Recommended Protocol
- **Protocol 11 (Multi-Objective NSGA-II)** - Multiple competing objectives
- Multi-Solution: **Yes** (static + modal)
---
*Ready to create optimization study? Say "create study" to proceed.*
-```
-MODEL ANALYSIS REPORT
-=====================
-Model: {model_name}
-Location: {model_path}
+### Saving the Report
-FILES FOUND
------------
-✓ Part file: {prt_file}
-✓ Simulation: {sim_file}
-✓ FEM mesh: {fem_file}
+**MANDATORY**: Save the introspection report to the study directory:
-PARAMETRIC EXPRESSIONS
-----------------------
-| Name | Current Value | Unit | Category | Optimization Candidate |
-|------|---------------|------|----------|----------------------|
-| thickness | 2.0 | mm | Structural | ✓ High |
-| hole_diameter | 10.0 | mm | Geometric | ✓ High |
-| fillet_radius | 3.0 | mm | Geometric | ✓ Medium |
-| length | 100.0 | mm | Dimensional | ? Check constraints |
+```python
+from pathlib import Path
-SIMULATION SETUP
-----------------
-Analysis Types: Static (SOL 101), Modal (SOL 103)
-Material: Aluminum 6061-T6 (E=68.9 GPa, ρ=2700 kg/m³)
-Loads:
-- Force: 500 N at tip
-- Constraint: Fixed at root
-RECOMMENDATIONS
----------------
-Suggested Objectives:
-- Minimize mass (extract from p173 expression or FEM)
-- Maximize first natural frequency
-Suggested Constraints:
-- Max von Mises stress < 276 MPa (Al 6061 yield)
-- Max displacement < {user to specify}
-Recommended Protocol: Protocol 11 (Multi-Objective NSGA-II)
-- Reason: Multiple competing objectives (mass vs frequency)
-Ready to create optimization study? Say "create study" to proceed.
-```
+def save_introspection_report(study_dir: Path, report_content: str):
+    """Save MODEL_INTROSPECTION.md to study directory."""
+    report_path = study_dir / "MODEL_INTROSPECTION.md"
+    with open(report_path, 'w') as f:
+        f.write(report_content)
+    print(f"Saved introspection report: {report_path}")
+```
## Error Handling
@@ -0,0 +1,738 @@
# Study Creation Core Skill
**Last Updated**: December 6, 2025
**Version**: 2.3 - Added Model Introspection
**Type**: Core Skill
You are helping the user create a complete Atomizer optimization study from a natural language description.
**CRITICAL**: This skill is your SINGLE SOURCE OF TRUTH. DO NOT improvise or look at other studies for patterns. Use ONLY the patterns documented here and in the loaded modules.
---
## Module Loading
This core skill is always loaded. Additional modules are loaded based on context:
| Module | Load When | Path |
|--------|-----------|------|
| **extractors-catalog** | Always (for reference) | `modules/extractors-catalog.md` |
| **zernike-optimization** | "telescope", "mirror", "optical", "wavefront" | `modules/zernike-optimization.md` |
| **neural-acceleration** | >50 trials, "neural", "surrogate", "fast" | `modules/neural-acceleration.md` |
---
## MANDATORY: Model Introspection at Study Creation
**ALWAYS run introspection when creating a study or when user asks:**
```python
from optimization_engine.hooks.nx_cad.model_introspection import (
introspect_part,
introspect_simulation,
introspect_op2,
introspect_study
)
# Introspect entire study directory (recommended)
study_info = introspect_study("studies/my_study/")
# Or introspect individual files
part_info = introspect_part("path/to/model.prt")
sim_info = introspect_simulation("path/to/model.sim")
op2_info = introspect_op2("path/to/results.op2")
```
### Introspection Extracts
| Source | Information |
|--------|-------------|
| `.prt` | Expressions (count, values, types), bodies, mass, material, features |
| `.sim` | Solutions, boundary conditions, loads, materials, mesh info, output requests |
| `.op2` | Available results (displacement, stress, strain, SPC forces, etc.), subcases |
### Generate Introspection Report
**MANDATORY**: Save `MODEL_INTROSPECTION.md` to study directory at creation:
```python
# After introspection, generate and save report
study_info = introspect_study(study_dir)
# Generate markdown report and save to studies/{study_name}/MODEL_INTROSPECTION.md
```
---
## MANDATORY DOCUMENTATION CHECKLIST
**EVERY study MUST have these files. A study is NOT complete without them:**
| File | Purpose | When Created |
|------|---------|--------------|
| `MODEL_INTROSPECTION.md` | **Model Analysis** - Expressions, solutions, available results | At study creation |
| `README.md` | **Engineering Blueprint** - Full mathematical formulation | At study creation |
| `STUDY_REPORT.md` | **Results Tracking** - Progress, best designs, recommendations | At study creation (template) |
**README.md Requirements (11 sections)**:
1. Engineering Problem (objective, physical system)
2. Mathematical Formulation (objectives, design variables, constraints with LaTeX)
3. Optimization Algorithm (config, properties, return format)
4. Simulation Pipeline (trial execution flow diagram)
5. Result Extraction Methods (extractor details, code snippets)
6. Neural Acceleration (surrogate config, expected performance)
7. Study File Structure (directory tree)
8. Results Location (output files)
9. Quick Start (commands)
10. Configuration Reference (config.json mapping)
11. References
**FAILURE MODE**: If you create a study without MODEL_INTROSPECTION.md, README.md, and STUDY_REPORT.md, the study is incomplete.
---
## PR.3 NXSolver Interface
**Module**: `optimization_engine.nx_solver`
```python
from optimization_engine.nx_solver import NXSolver
nx_solver = NXSolver(
nastran_version="2412", # NX version
timeout=600, # Max solve time (seconds)
use_journal=True, # Use journal mode (recommended)
enable_session_management=True,
study_name="my_study"
)
```
**Main Method - `run_simulation()`**:
```python
result = nx_solver.run_simulation(
sim_file=sim_file, # Path to .sim file
working_dir=model_dir, # Working directory
expression_updates=design_vars, # Dict: {'param_name': value}
solution_name=None, # None = solve ALL solutions
cleanup=True # Remove temp files after
)
# Returns:
# {
# 'success': bool,
# 'op2_file': Path,
# 'log_file': Path,
# 'elapsed_time': float,
# 'errors': list,
# 'solution_name': str
# }
```
**CRITICAL**: For multi-solution workflows (static + modal), set `solution_name=None`.
---
## PR.4 Sampler Configurations
| Sampler | Use Case | Import | Config |
|---------|----------|--------|--------|
| **NSGAIISampler** | Multi-objective (2-3 objectives) | `from optuna.samplers import NSGAIISampler` | `NSGAIISampler(population_size=20, mutation_prob=0.1, crossover_prob=0.9, seed=42)` |
| **TPESampler** | Single-objective | `from optuna.samplers import TPESampler` | `TPESampler(seed=42)` |
| **CmaEsSampler** | Single-objective, continuous | `from optuna.samplers import CmaEsSampler` | `CmaEsSampler(seed=42)` |
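A helper that applies this table might return the sampler's class name as a string, keeping the sketch import-free; real code would instantiate the class from `optuna.samplers`:

```python
def choose_sampler(n_objectives: int, continuous_only: bool = False) -> str:
    """Pick a sampler name per the table above."""
    if n_objectives >= 2:
        return "NSGAIISampler"   # multi-objective (2-3 objectives)
    if continuous_only:
        return "CmaEsSampler"    # single-objective, all-continuous variables
    return "TPESampler"          # single-objective default
```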
---
## PR.5 Study Creation Patterns
**Multi-Objective (NSGA-II)**:
```python
study = optuna.create_study(
study_name=study_name,
storage=f"sqlite:///{results_dir / 'study.db'}",
sampler=NSGAIISampler(population_size=20, seed=42),
directions=['minimize', 'maximize'], # [obj1_dir, obj2_dir]
load_if_exists=True
)
```
**Single-Objective (TPE)**:
```python
study = optuna.create_study(
study_name=study_name,
storage=f"sqlite:///{results_dir / 'study.db'}",
sampler=TPESampler(seed=42),
direction='minimize', # or 'maximize'
load_if_exists=True
)
```
---
## PR.6 Objective Function Return Formats
**Multi-Objective** (directions=['minimize', 'minimize']):
```python
def objective(trial) -> Tuple[float, float]:
# ... extraction ...
return (obj1, obj2) # Both positive, framework handles direction
```
**Multi-Objective with a maximize goal** (declare directions=['minimize', 'minimize'] and negate the quantity to maximize, as in the run_optimization.py template):
```python
def objective(trial) -> Tuple[float, float]:
    # ... extraction ...
    return (-stiffness, mass)  # minimizing -stiffness maximizes stiffness
```
**Single-Objective**:
```python
def objective(trial) -> float:
# ... extraction ...
return objective_value
```
---
## PR.7 Hook System
**Available Hook Points** (from `optimization_engine.plugins.hooks`):
| Hook Point | When | Context Keys |
|------------|------|--------------|
| `PRE_MESH` | Before meshing | `trial_number, design_variables, sim_file` |
| `POST_MESH` | After mesh | `trial_number, design_variables, sim_file` |
| `PRE_SOLVE` | Before solve | `trial_number, design_variables, sim_file, working_dir` |
| `POST_SOLVE` | After solve | `trial_number, design_variables, op2_file, working_dir` |
| `POST_EXTRACTION` | After extraction | `trial_number, design_variables, results, working_dir` |
| `POST_CALCULATION` | After calculations | `trial_number, objectives, constraints, feasible` |
| `CUSTOM_OBJECTIVE` | Custom objectives | `trial_number, design_variables, extracted_results` |
See [EXT_02_CREATE_HOOK](../../docs/protocols/extensions/EXT_02_CREATE_HOOK.md) for creating custom hooks.
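For intuition, the hook points above follow a standard registry pattern. The sketch below is NOT the real `optimization_engine.plugins.hooks` API (see EXT_02 for that); it only illustrates the register/fire flow and the context dicts listed in the table:

```python
from collections import defaultdict

_HOOKS = defaultdict(list)  # hook point name -> list of callbacks


def register_hook(point: str, fn):
    """Attach a callback to a hook point such as 'POST_SOLVE'."""
    _HOOKS[point].append(fn)
    return fn


def fire_hooks(point: str, context: dict):
    """Invoke every callback registered for the hook point, in order."""
    for fn in _HOOKS[point]:
        fn(context)


# Example: log the OP2 path after every solve
register_hook("POST_SOLVE",
              lambda ctx: print(f"trial {ctx['trial_number']}: {ctx['op2_file']}"))
```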
---
## PR.8 Structured Logging (MANDATORY)
**Always use structured logging**:
```python
from optimization_engine.logger import get_logger
logger = get_logger(study_name, study_dir=results_dir)
# Study lifecycle
logger.study_start(study_name, n_trials, "NSGAIISampler")
logger.study_complete(study_name, total_trials, successful_trials)
# Trial lifecycle
logger.trial_start(trial.number, design_vars)
logger.trial_complete(trial.number, objectives_dict, constraints_dict, feasible)
logger.trial_failed(trial.number, error_message)
# General logging
logger.info("message")
logger.warning("message")
logger.error("message", exc_info=True)
```
---
## Study Structure
```
studies/{study_name}/
├── 1_setup/ # INPUT: Configuration & Model
│ ├── model/ # WORKING COPY of NX Files
│ │ ├── {Model}.prt # Parametric part
│ │ ├── {Model}_sim1.sim # Simulation setup
│ │ └── *.dat, *.op2, *.f06 # Solver outputs
│ ├── optimization_config.json # Study configuration
│ └── workflow_config.json # Workflow metadata
├── 2_results/ # OUTPUT: Results
│ ├── study.db # Optuna SQLite database
│ └── optimization_history.json # Trial history
├── run_optimization.py # Main entry point
├── reset_study.py # Database reset
├── README.md # Engineering blueprint
└── STUDY_REPORT.md # Results report template
```
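A small helper can scaffold the directory side of this tree before any files are generated (file generation itself is covered in the File Generation section):

```python
from pathlib import Path


def create_study_skeleton(studies_root: Path, study_name: str) -> Path:
    """Create the empty 1_setup/model and 2_results directory tree."""
    study_dir = studies_root / study_name
    (study_dir / "1_setup" / "model").mkdir(parents=True, exist_ok=True)
    (study_dir / "2_results").mkdir(parents=True, exist_ok=True)
    return study_dir
```

`exist_ok=True` makes the call idempotent, so re-running study creation never clobbers an existing tree.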
---
## CRITICAL: Model File Protection
**NEVER modify the user's original/master model files.** Always work on copies.
```python
import shutil
from pathlib import Path
def setup_working_copy(source_dir: Path, model_dir: Path, file_patterns: list):
"""Copy model files from user's source to study working directory."""
model_dir.mkdir(parents=True, exist_ok=True)
for pattern in file_patterns:
for src_file in source_dir.glob(pattern):
dst_file = model_dir / src_file.name
if not dst_file.exists():
shutil.copy2(src_file, dst_file)
```
---
## Interactive Discovery Process
### Step 1: Problem Understanding
**Ask clarifying questions**:
- "What component are you optimizing?"
- "What do you want to optimize?" (minimize/maximize)
- "What limits must be satisfied?" (constraints)
- "What parameters can be changed?" (design variables)
- "Where are your NX files?"
### Step 2: Protocol Selection
| Scenario | Protocol | Sampler |
|----------|----------|---------|
| Single objective + constraints | Protocol 10 | TPE/CMA-ES |
| 2-3 objectives | Protocol 11 | NSGA-II |
| >50 trials, need speed | Protocol 14 | + Neural |
### Step 3: Extractor Mapping
Map user needs to extractors from [extractors-catalog module](../modules/extractors-catalog.md):
| Need | Extractor |
|------|-----------|
| Displacement | E1: `extract_displacement` |
| Stress | E3: `extract_solid_stress` |
| Frequency | E2: `extract_frequency` |
| Mass (FEM) | E4: `extract_mass_from_bdf` |
| Mass (CAD) | E5: `extract_mass_from_expression` |
### Step 4: Multi-Solution Detection
If user needs BOTH:
- Static results (stress, displacement)
- Modal results (frequency)
Then set `solution_name=None` to solve ALL solutions.
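That detection rule is a single conditional; a sketch (the returned solution name is illustrative, since real models may name solutions differently):

```python
from typing import Optional, Set


def pick_solution_name(needed_results: Set[str]) -> Optional[str]:
    """Return None (solve ALL solutions) when both static and modal results are needed."""
    needs_static = bool({"stress", "displacement"} & needed_results)
    needs_modal = "frequency" in needed_results
    if needs_static and needs_modal:
        return None          # multi-solution: static + modal
    return "Solution 1"      # a single named solution suffices
```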
---
## File Generation
### 1. optimization_config.json
```json
{
"study_name": "{study_name}",
"description": "{concise description}",
"optimization_settings": {
"protocol": "protocol_11_multi_objective",
"n_trials": 30,
"sampler": "NSGAIISampler",
"timeout_per_trial": 600
},
"design_variables": [
{
"parameter": "{nx_expression_name}",
"bounds": [min, max],
"description": "{what this controls}"
}
],
"objectives": [
{
"name": "{objective_name}",
"goal": "minimize",
"weight": 1.0,
"description": "{what this measures}"
}
],
"constraints": [
{
"name": "{constraint_name}",
"type": "less_than",
"threshold": value,
"description": "{engineering justification}"
}
],
"simulation": {
"model_file": "{Model}.prt",
"sim_file": "{Model}_sim1.sim",
"solver": "nastran"
}
}
```
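Before generating the runner, the config can be sanity-checked for the required top-level keys and sane bounds. This is a minimal check, not the framework's full validator:

```python
import json
from pathlib import Path

REQUIRED_KEYS = ["study_name", "optimization_settings", "design_variables",
                 "objectives", "simulation"]


def check_config(config_path: Path) -> list:
    """Return a list of problems found in optimization_config.json."""
    with open(config_path) as f:
        config = json.load(f)
    problems = [key for key in REQUIRED_KEYS if key not in config]
    for var in config.get("design_variables", []):
        lo, hi = var["bounds"]
        if lo >= hi:  # bounds must be [min, max] with min < max
            problems.append(f"bad bounds for {var['parameter']}")
    return problems
```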
### 2. run_optimization.py Template
```python
"""
{Study Name} Optimization
{Brief description}
"""
from pathlib import Path
import sys
import json
import argparse
from typing import Tuple
project_root = Path(__file__).resolve().parents[2]
sys.path.insert(0, str(project_root))
import optuna
from optuna.samplers import NSGAIISampler # or TPESampler
from optimization_engine.nx_solver import NXSolver
from optimization_engine.logger import get_logger
# Import extractors - USE ONLY FROM extractors-catalog module
from optimization_engine.extractors.extract_displacement import extract_displacement
from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf
def load_config(config_file: Path) -> dict:
with open(config_file, 'r') as f:
return json.load(f)
def objective(trial: optuna.Trial, config: dict, nx_solver: NXSolver,
model_dir: Path, logger) -> Tuple[float, float]:
"""Multi-objective function. Returns (obj1, obj2)."""
# 1. Sample design variables
design_vars = {}
for var in config['design_variables']:
param_name = var['parameter']
bounds = var['bounds']
design_vars[param_name] = trial.suggest_float(param_name, bounds[0], bounds[1])
logger.trial_start(trial.number, design_vars)
try:
# 2. Run simulation
sim_file = model_dir / config['simulation']['sim_file']
result = nx_solver.run_simulation(
sim_file=sim_file,
working_dir=model_dir,
expression_updates=design_vars,
solution_name=None, # Solve ALL solutions
cleanup=True
)
if not result['success']:
logger.trial_failed(trial.number, f"Simulation failed")
return (float('inf'), float('inf'))
op2_file = result['op2_file']
# 3. Extract results
disp_result = extract_displacement(op2_file, subcase=1)
max_displacement = disp_result['max_displacement']
dat_file = model_dir / config['simulation'].get('dat_file', 'model.dat')
mass_kg = extract_mass_from_bdf(str(dat_file))
# 4. Calculate objectives
applied_force = 1000.0 # N
stiffness = applied_force / max(abs(max_displacement), 1e-6)
# 5. Set trial attributes
trial.set_user_attr('stiffness', stiffness)
trial.set_user_attr('mass', mass_kg)
objectives = {'stiffness': stiffness, 'mass': mass_kg}
logger.trial_complete(trial.number, objectives, {}, True)
return (-stiffness, mass_kg) # Negate stiffness to maximize
except Exception as e:
logger.trial_failed(trial.number, str(e))
return (float('inf'), float('inf'))
def main():
parser = argparse.ArgumentParser(description='{Study Name} Optimization')
stage_group = parser.add_mutually_exclusive_group()
stage_group.add_argument('--discover', action='store_true')
stage_group.add_argument('--validate', action='store_true')
stage_group.add_argument('--test', action='store_true')
stage_group.add_argument('--train', action='store_true')
stage_group.add_argument('--run', action='store_true')
parser.add_argument('--trials', type=int, default=100)
parser.add_argument('--resume', action='store_true')
parser.add_argument('--enable-nn', action='store_true')
args = parser.parse_args()
study_dir = Path(__file__).parent
config_path = study_dir / "1_setup" / "optimization_config.json"
model_dir = study_dir / "1_setup" / "model"
results_dir = study_dir / "2_results"
results_dir.mkdir(exist_ok=True)
study_name = "{study_name}"
logger = get_logger(study_name, study_dir=results_dir)
config = load_config(config_path)
nx_solver = NXSolver()
storage = f"sqlite:///{results_dir / 'study.db'}"
sampler = NSGAIISampler(population_size=20, seed=42)
logger.study_start(study_name, args.trials, "NSGAIISampler")
if args.resume:
study = optuna.load_study(study_name=study_name, storage=storage, sampler=sampler)
else:
study = optuna.create_study(
study_name=study_name,
storage=storage,
sampler=sampler,
directions=['minimize', 'minimize'],
load_if_exists=True
)
study.optimize(
lambda trial: objective(trial, config, nx_solver, model_dir, logger),
n_trials=args.trials,
show_progress_bar=True
)
n_successful = len([t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE])
logger.study_complete(study_name, len(study.trials), n_successful)
if __name__ == "__main__":
main()
```
### 3. reset_study.py
```python
"""Reset {study_name} optimization study by deleting database."""
import optuna
from pathlib import Path
study_dir = Path(__file__).parent
storage = f"sqlite:///{study_dir / '2_results' / 'study.db'}"
study_name = "{study_name}"
try:
optuna.delete_study(study_name=study_name, storage=storage)
print(f"[OK] Deleted study: {study_name}")
except KeyError:
print(f"[WARNING] Study '{study_name}' not found")
except Exception as e:
print(f"[ERROR] Error: {e}")
```
---
## Common Patterns
### Pattern 1: Mass Minimization with Constraints
```
Objective: Minimize mass
Constraints: Stress < limit, Displacement < limit
Protocol: Protocol 10 (single-objective TPE)
Extractors: E4/E5, E3, E1
Multi-Solution: No (static only)
```
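Constraint handling in these patterns reduces to comparing an extracted result against its threshold. A minimal sketch (helper name hypothetical), using the `{"name", "type", "threshold"}` constraint schema from the wizard examples:

```python
def violated_constraints(constraints, results):
    """Return names of 'less_than' constraints whose extracted value meets or
    exceeds the threshold.

    `constraints` uses the {"name", "type", "threshold"} schema; `results` maps
    constraint name -> extracted value (same units as the threshold).
    """
    return [
        c["name"]
        for c in constraints
        if c["type"] == "less_than" and results[c["name"]] >= c["threshold"]
    ]
```

For example, a 260 MPa extracted stress against a 250 MPa threshold reports `["stress"]`.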
### Pattern 2: Mass vs Stiffness Trade-off
```
Objectives: Minimize mass, Maximize stiffness
Constraints: Stress < limit
Protocol: Protocol 11 (multi-objective NSGA-II)
Extractors: E4/E5, E1 (for stiffness = F/δ), E3
Multi-Solution: No (static only)
```
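The stiffness objective in this pattern is derived rather than extracted directly: divide the applied load by the E1 displacement, with a small floor to guard against division by zero. The same formula appears in the objective template above (function name here is illustrative):

```python
def stiffness_n_per_mm(applied_force_n: float, max_displacement_mm: float) -> float:
    """Stiffness k = F / delta. The 1e-6 floor avoids divide-by-zero on near-rigid results."""
    return applied_force_n / max(abs(max_displacement_mm), 1e-6)
```

For example, 1000 N over 0.5 mm gives 2000 N/mm. Remember that the force value must match the load actually applied in the model.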
### Pattern 3: Mass vs Frequency Trade-off
```
Objectives: Minimize mass, Maximize frequency
Constraints: Stress < limit, Displacement < limit
Protocol: Protocol 11 (multi-objective NSGA-II)
Extractors: E4/E5, E2, E3, E1
Multi-Solution: Yes (static + modal)
```
---
## Validation Integration
### Pre-Flight Check
```python
def preflight_check():
    """Validate study setup before running (assumes STUDY_NAME is defined at module level)."""
    import sys
    from optimization_engine.validators import validate_study
result = validate_study(STUDY_NAME)
if not result.is_ready_to_run:
print("[X] Study validation failed!")
print(result)
sys.exit(1)
print("[OK] Pre-flight check passed!")
return True
```
### Validation Checklist
- [ ] All design variables have valid bounds (min < max)
- [ ] All objectives have proper extraction methods
- [ ] All constraints have thresholds defined
- [ ] Protocol matches objective count
- [ ] Part file (.prt) exists in model directory
- [ ] Simulation file (.sim) exists
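The first checklist item can be automated in a few lines; a sketch (helper name hypothetical, using the design-variable schema from the wizard examples):

```python
def invalid_bounds(design_variables):
    """Return parameter names whose bounds are not strictly min < max."""
    bad = []
    for dv in design_variables:
        lo, hi = dv["bounds"]
        if not lo < hi:  # catches reversed and zero-width bounds
            bad.append(dv["parameter"])
    return bad
```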
---
## Output Format
After completing study creation, provide:
**Summary Table**:
```
Study Created: {study_name}
Protocol: {protocol}
Objectives: {list}
Constraints: {list}
Design Variables: {list}
Multi-Solution: {Yes/No}
```
**File Checklist**:
```
✓ studies/{study_name}/1_setup/optimization_config.json
✓ studies/{study_name}/1_setup/workflow_config.json
✓ studies/{study_name}/run_optimization.py
✓ studies/{study_name}/reset_study.py
✓ studies/{study_name}/MODEL_INTROSPECTION.md # MANDATORY - Model analysis
✓ studies/{study_name}/README.md
✓ studies/{study_name}/STUDY_REPORT.md
```
**Next Steps**:
```
1. Place your NX files in studies/{study_name}/1_setup/model/
2. Test with: python run_optimization.py --test
3. Monitor: http://localhost:3003
4. Full run: python run_optimization.py --run --trials {n_trials}
```
---
## Critical Reminders
1. **Multi-Objective Return Format**: Keep the returned tuple consistent with `directions`: either return positive values with `directions=['maximize', 'minimize']`, or negate maximized objectives and declare every direction `'minimize'` (as the objective template above does)
2. **Multi-Solution**: Set `solution_name=None` for static + modal workflows
3. **Always use centralized extractors** from `optimization_engine/extractors/`
4. **Never modify master model files** - always work on copies
5. **Structured logging is mandatory** - use `get_logger()`
---
## Assembly FEM (AFEM) Workflow
For complex assemblies with `.afm` files, the update sequence is critical:
```
.prt (geometry) → _fem1.fem (component mesh) → .afm (assembly mesh) → .sim (solution)
```
### The 4-Step Update Process
1. **Update Expressions in Geometry (.prt)**
- Open part, update expressions, DoUpdate(), Save
2. **Update ALL Linked Geometry Parts** (CRITICAL!)
- Open each linked part, DoUpdate(), Save
- **Skipping this causes corrupt results ("billion nm" RMS)**
3. **Update Component FEMs (.fem)**
- UpdateFemodel() regenerates mesh from updated geometry
4. **Update Assembly FEM (.afm)**
- UpdateFemodel(), merge coincident nodes at interfaces
### Assembly Configuration
```json
{
"nx_settings": {
"expression_part": "M1_Blank",
"component_fems": ["M1_Blank_fem1.fem", "M1_Support_fem1.fem"],
"afm_file": "ASSY_M1_assyfem1.afm"
}
}
```
---
## Multi-Solution Solve Protocol
When simulation has multiple solutions (static + modal), use `SolveAllSolutions` API:
### Critical: Foreground Mode Required
```python
# WRONG - Returns immediately, async
theCAESimSolveManager.SolveChainOfSolutions(
psolutions1,
SolveMode.Background # Returns before complete!
)
# CORRECT - Waits for completion
theCAESimSolveManager.SolveAllSolutions(
SolveOption.Solve,
SetupCheckOption.CompleteCheckAndOutputErrors,
SolveMode.Foreground, # Blocks until complete
False
)
```
### When to Use
- `solution_name=None` passed to `NXSolver.run_simulation()`
- Multiple solutions that must all complete
- Multi-objective requiring results from different analysis types
### Solution Monitor Control
Solution monitor is automatically disabled when solving multiple solutions to prevent window pile-up:
```python
propertyTable.SetBooleanPropertyValue("solution monitor", False)
```
### Verification
After solve, verify:
- Both `.dat` files written (one per solution)
- Both `.op2` files created with updated timestamps
- Results are unique per trial (frequency values vary)
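The timestamp part of this verification can be scripted; a minimal sketch assuming you record a wall-clock time just before launching the solve:

```python
from pathlib import Path

def op2_written_after(op2_path, solve_start_time):
    """True if the .op2 exists and was modified at or after the recorded solve start time."""
    p = Path(op2_path)
    return p.exists() and p.stat().st_mtime >= solve_start_time
```

A stale modification time means the solve silently reused old results, which is exactly the failure mode background solves produce.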
---
## Cross-References
- **Operations Protocol**: [OP_01_CREATE_STUDY](../../docs/protocols/operations/OP_01_CREATE_STUDY.md)
- **Extractors Module**: [extractors-catalog](../modules/extractors-catalog.md)
- **Zernike Module**: [zernike-optimization](../modules/zernike-optimization.md)
- **Neural Module**: [neural-acceleration](../modules/neural-acceleration.md)
- **System Protocols**: [SYS_10_IMSO](../../docs/protocols/system/SYS_10_IMSO.md), [SYS_11_MULTI_OBJECTIVE](../../docs/protocols/system/SYS_11_MULTI_OBJECTIVE.md)


@@ -0,0 +1,402 @@
# Create Study Wizard Skill
**Version**: 3.0 - StudyWizard Integration
**Last Updated**: 2025-12-06
You are helping the user create a complete Atomizer optimization study using the powerful `StudyWizard` class.
---
## Quick Reference
```python
from optimization_engine.study_wizard import StudyWizard, create_study, list_extractors
# Option 1: One-liner for simple studies
create_study(
study_name="my_study",
description="Optimize bracket for stiffness",
prt_file="path/to/model.prt",
design_variables=[
{"parameter": "thickness", "bounds": [5, 20], "units": "mm"}
],
objectives=[
{"name": "stiffness", "goal": "maximize", "extractor": "extract_displacement"}
],
constraints=[
{"name": "mass", "type": "less_than", "threshold": 0.5, "extractor": "extract_mass_from_bdf", "units": "kg"}
]
)
# Option 2: Step-by-step with full control
wizard = StudyWizard("my_study", "Optimize bracket")
wizard.set_model_files("path/to/model.prt")
wizard.introspect() # Discover expressions, solutions
wizard.add_design_variable("thickness", bounds=(5, 20), units="mm")
wizard.add_objective("mass", goal="minimize", extractor="extract_mass_from_bdf")
wizard.add_constraint("stress", constraint_type="less_than", threshold=250, extractor="extract_solid_stress", units="MPa")
wizard.generate()
```
---
## Trigger Phrases
Use this skill when user says:
- "create study", "new study", "set up study", "create optimization"
- "optimize my [part/model/bracket/component]"
- "help me minimize [mass/weight/cost]"
- "help me maximize [stiffness/strength/frequency]"
- "I want to find the best [design/parameters]"
---
## Workflow Steps
### Step 1: Gather Requirements
Ask the user (if not already provided):
1. **Model files**: "Where is your NX model? (path to .prt file)"
2. **Optimization goal**: "What do you want to optimize?"
- Minimize mass/weight
- Maximize stiffness
- Target a specific frequency
- Multi-objective trade-off
3. **Constraints**: "What limits must be respected?"
- Max stress < yield/safety factor
- Max displacement < tolerance
- Mass budget
### Step 2: Introspect Model
```python
from optimization_engine.study_wizard import StudyWizard
wizard = StudyWizard("study_name", "Description")
wizard.set_model_files("path/to/model.prt")
result = wizard.introspect()
# Show user what was found
print(f"Found {len(result.expressions)} expressions:")
for expr in result.expressions[:10]:
print(f" {expr['name']}: {expr.get('value', 'N/A')}")
print(f"\nFound {len(result.solutions)} solutions:")
for sol in result.solutions:
print(f" {sol['name']}")
# Suggest design variables
suggestions = result.suggest_design_variables()
for s in suggestions:
print(f" {s['name']}: {s['current_value']} -> bounds {s['suggested_bounds']}")
```
### Step 3: Configure Study
```python
# Add design variables from introspection suggestions
for dv in selected_design_variables:
wizard.add_design_variable(
parameter=dv['name'],
bounds=dv['bounds'],
units=dv.get('units', ''),
description=dv.get('description', '')
)
# Add objectives
wizard.add_objective(
name="mass",
goal="minimize",
extractor="extract_mass_from_bdf",
description="Minimize total bracket mass"
)
wizard.add_objective(
name="stiffness",
goal="maximize",
extractor="extract_displacement",
params={"invert_for_stiffness": True},
description="Maximize structural stiffness"
)
# Add constraints
wizard.add_constraint(
name="max_stress",
constraint_type="less_than",
threshold=250,
extractor="extract_solid_stress",
units="MPa",
description="Keep stress below yield/4"
)
# Set protocol based on objectives
if len(wizard.objectives) > 1:
wizard.set_protocol("protocol_11_multi") # NSGA-II
else:
wizard.set_protocol("protocol_10_single") # TPE
wizard.set_trials(100)
```
### Step 4: Generate Study
```python
files = wizard.generate()
print("Study generated successfully!")
print(f"Location: {wizard.study_dir}")
print("\nNext steps:")
print(" 1. cd", wizard.study_dir)
print(" 2. python run_optimization.py --discover")
print(" 3. python run_optimization.py --validate")
print(" 4. python run_optimization.py --run --trials 100")
```
---
## Available Extractors
| Extractor | What it extracts | Input | Output |
|-----------|------------------|-------|--------|
| `extract_mass_from_bdf` | Total mass | .dat/.bdf | kg |
| `extract_part_mass` | CAD mass | .prt | kg |
| `extract_displacement` | Max displacement | .op2 | mm |
| `extract_solid_stress` | Von Mises stress | .op2 | MPa |
| `extract_principal_stress` | Principal stresses | .op2 | MPa |
| `extract_strain_energy` | Strain energy | .op2 | J |
| `extract_spc_forces` | Reaction forces | .op2 | N |
| `extract_frequency` | Natural frequencies | .op2 | Hz |
| `get_first_frequency` | First mode frequency | .f06 | Hz |
| `extract_temperature` | Nodal temperatures | .op2 | K/°C |
| `extract_modal_mass` | Modal effective mass | .f06 | kg |
| `extract_zernike_from_op2` | Zernike WFE | .op2+.bdf | nm |
**List all extractors programmatically**:
```python
from optimization_engine.study_wizard import list_extractors
for name, info in list_extractors().items():
print(f"{name}: {info['description']}")
```
---
## Common Optimization Patterns
### Pattern 1: Minimize Mass with Stress Constraint
```python
create_study(
study_name="lightweight_bracket",
description="Minimize mass while keeping stress below yield",
prt_file="Bracket.prt",
design_variables=[
{"parameter": "wall_thickness", "bounds": [2, 10], "units": "mm"},
{"parameter": "rib_count", "bounds": [2, 8], "units": "count"}
],
objectives=[
{"name": "mass", "goal": "minimize", "extractor": "extract_mass_from_bdf"}
],
constraints=[
{"name": "stress", "type": "less_than", "threshold": 250,
"extractor": "extract_solid_stress", "units": "MPa"}
],
protocol="protocol_10_single"
)
```
### Pattern 2: Multi-Objective Stiffness vs Mass
```python
create_study(
study_name="pareto_bracket",
description="Trade-off between stiffness and mass",
prt_file="Bracket.prt",
design_variables=[
{"parameter": "thickness", "bounds": [5, 25], "units": "mm"},
{"parameter": "support_angle", "bounds": [20, 70], "units": "degrees"}
],
objectives=[
{"name": "stiffness", "goal": "maximize", "extractor": "extract_displacement"},
{"name": "mass", "goal": "minimize", "extractor": "extract_mass_from_bdf"}
],
constraints=[
{"name": "mass_limit", "type": "less_than", "threshold": 0.5,
"extractor": "extract_mass_from_bdf", "units": "kg"}
],
protocol="protocol_11_multi",
n_trials=150
)
```
### Pattern 3: Frequency-Targeted Modal Optimization
```python
create_study(
study_name="modal_bracket",
description="Tune first natural frequency to target",
prt_file="Bracket.prt",
design_variables=[
{"parameter": "thickness", "bounds": [3, 15], "units": "mm"},
{"parameter": "length", "bounds": [50, 150], "units": "mm"}
],
objectives=[
{"name": "frequency_error", "goal": "minimize",
"extractor": "get_first_frequency",
"params": {"target": 100}} # Target 100 Hz
],
constraints=[
{"name": "mass", "type": "less_than", "threshold": 0.3,
"extractor": "extract_mass_from_bdf", "units": "kg"}
]
)
```
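Pattern 3 minimizes distance to a target rather than a raw extracted value. Conceptually, the `frequency_error` objective amounts to the following (function name hypothetical; how the framework wires `params.target` into the extractor is internal):

```python
def frequency_error(first_frequency_hz: float, target_hz: float = 100.0) -> float:
    """Absolute deviation from the target frequency; 0.0 is a perfect hit."""
    return abs(first_frequency_hz - target_hz)
```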
### Pattern 4: Thermal Optimization
```python
create_study(
study_name="heat_sink",
description="Minimize max temperature",
prt_file="HeatSink.prt",
design_variables=[
{"parameter": "fin_height", "bounds": [10, 50], "units": "mm"},
{"parameter": "fin_count", "bounds": [5, 20], "units": "count"}
],
objectives=[
{"name": "max_temp", "goal": "minimize", "extractor": "get_max_temperature"}
],
constraints=[
{"name": "mass", "type": "less_than", "threshold": 0.2,
"extractor": "extract_mass_from_bdf", "units": "kg"}
]
)
```
---
## Protocol Selection Guide
| Scenario | Protocol | Sampler |
|----------|----------|---------|
| Single objective | `protocol_10_single` | TPESampler |
| Multiple objectives (Pareto) | `protocol_11_multi` | NSGAIISampler |
| Smooth design space | `protocol_10_single` | CmaEsSampler |
| Discrete variables | `protocol_10_single` | TPESampler |
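The first two rows of this table are exactly the rule applied in Step 3 above; as a standalone sketch (helper name hypothetical):

```python
def pick_protocol(n_objectives: int) -> str:
    """Single objective -> TPE-based protocol 10; Pareto front -> NSGA-II protocol 11."""
    return "protocol_11_multi" if n_objectives > 1 else "protocol_10_single"
```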
---
## Files Generated
The wizard generates a complete study structure:
```
studies/{study_name}/
├── 1_setup/
│ ├── model/ # NX model files (copied)
│ ├── optimization_config.json
│ └── workflow_config.json
├── 2_results/ # Created on first run
├── run_optimization.py # Main script with staged workflow
├── reset_study.py # Reset results
├── README.md # Engineering documentation
├── STUDY_REPORT.md # Results tracking template
└── MODEL_INTROSPECTION.md # Model analysis report
```
---
## Staged Workflow
After generation, guide user through staged validation:
```bash
# Stage 1: Discover model outputs
python run_optimization.py --discover
# Stage 2: Validate with single trial
python run_optimization.py --validate
# Stage 3: Integration test (3 trials)
python run_optimization.py --test
# Stage 4: Full optimization
python run_optimization.py --run --trials 100
# Resume if interrupted
python run_optimization.py --run --trials 50 --resume
```
---
## Error Handling
### Common Issues and Solutions
| Error | Cause | Solution |
|-------|-------|----------|
| "Part file not found" | Wrong path | Use absolute path or relative to studies/ |
| "No expressions found" | Part not parametric | Add expressions in NX Modeling |
| "Simulation failed" | NX not running | Start NX before running |
| "Extractor not found" | Typo in name | Use `list_extractors()` to see valid names |
---
## Example Conversation
**User**: "I want to optimize my bracket for minimum weight while keeping stress low"
**Claude**: "I'll help you create a weight optimization study. Let me gather some information:
1. Where is your NX model file?
2. What's the maximum allowable stress (e.g., yield/4)?
3. Are there specific parameters you want to vary (thickness, angles, etc.)?"
**User**: "The model is at studies/my_bracket/model/Bracket.prt. Max stress should be 250 MPa. I want to vary the wall_thickness and support_angle."
**Claude**: "I'll create the study and introspect your model to confirm the parameters exist."
```python
from optimization_engine.study_wizard import StudyWizard
wizard = StudyWizard(
study_name="my_bracket_optimization",
description="Minimize bracket mass with stress constraint"
)
wizard.set_model_files("studies/my_bracket/model/Bracket.prt")
# Introspect to verify parameters
result = wizard.introspect()
print(f"Found expressions: {result.get_expression_names()}")
# Configure
wizard.add_design_variable("wall_thickness", bounds=(2, 15), units="mm")
wizard.add_design_variable("support_angle", bounds=(20, 60), units="degrees")
wizard.add_objective("mass", goal="minimize", extractor="extract_mass_from_bdf")
wizard.add_constraint("stress", constraint_type="less_than", threshold=250,
extractor="extract_solid_stress", units="MPa")
wizard.set_trials(100)
files = wizard.generate()
```
**Claude**: "Study created! Here's what I generated:
- `studies/my_bracket_optimization/run_optimization.py`
- `studies/my_bracket_optimization/README.md`
- Configuration files in `1_setup/`
Next steps:
1. `cd studies/my_bracket_optimization`
2. `python run_optimization.py --discover`
3. `python run_optimization.py --validate`
4. `python run_optimization.py --run --trials 100`"
---
## Cross-References
- **Extractor Library**: `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md`
- **Protocol 10 (IMSO)**: `docs/protocols/system/SYS_10_IMSO.md`
- **Protocol 11 (Multi-Objective)**: `docs/protocols/system/SYS_11_MULTI_OBJECTIVE.md`
- **StudyWizard Source**: `optimization_engine/study_wizard.py`


@@ -1,7 +1,7 @@
# Create Optimization Study Skill
**Last Updated**: December 4, 2025
**Version**: 2.1 - Added Mandatory Documentation Requirements
**Last Updated**: December 6, 2025
**Version**: 2.2 - Added Model Introspection
You are helping the user create a complete Atomizer optimization study from a natural language description.
@@ -9,12 +9,50 @@ You are helping the user create a complete Atomizer optimization study from a na
---
## MANDATORY: Model Introspection
**ALWAYS run introspection when user provides NX files or asks for model analysis:**
```python
from optimization_engine.hooks.nx_cad.model_introspection import (
introspect_part,
introspect_simulation,
introspect_op2,
introspect_study
)
# Introspect entire study directory (recommended)
study_info = introspect_study("studies/my_study/")
# Or introspect individual files
part_info = introspect_part("path/to/model.prt")
sim_info = introspect_simulation("path/to/model.sim")
op2_info = introspect_op2("path/to/results.op2")
```
### What Introspection Provides
| Source | Information Extracted |
|--------|----------------------|
| `.prt` | Expressions (potential design variables), bodies, mass, material, features |
| `.sim` | Solutions (SOL types), boundary conditions, loads, materials, mesh info, output requests |
| `.op2` | Available results (displacement, stress, strain, SPC forces, frequencies), subcases |
### Generate MODEL_INTROSPECTION.md
**MANDATORY**: Save introspection report at study creation:
- Location: `studies/{study_name}/MODEL_INTROSPECTION.md`
- Contains: All expressions, solutions, available results, optimization recommendations
---
## MANDATORY DOCUMENTATION CHECKLIST
**EVERY study MUST have these files. A study is NOT complete without them:**
| File | Purpose | When Created |
|------|---------|--------------|
| `MODEL_INTROSPECTION.md` | **Model Analysis** - Expressions, solutions, available results | At study creation |
| `README.md` | **Engineering Blueprint** - Full mathematical formulation, design variables, objectives, algorithm config | At study creation |
| `STUDY_REPORT.md` | **Results Tracking** - Progress, best designs, surrogate accuracy, recommendations | At study creation (template) |
@@ -2053,6 +2091,7 @@ Multi-Solution: {Yes/No}
✓ studies/{study_name}/1_setup/workflow_config.json
✓ studies/{study_name}/run_optimization.py
✓ studies/{study_name}/reset_study.py
✓ studies/{study_name}/MODEL_INTROSPECTION.md # MANDATORY - Model analysis
✓ studies/{study_name}/README.md # Engineering blueprint
✓ studies/{study_name}/STUDY_REPORT.md # MANDATORY - Results report template
[✓] studies/{study_name}/NX_FILE_MODIFICATIONS_REQUIRED.md (if needed)


@@ -0,0 +1,325 @@
# Guided Study Creation Wizard
**Version**: 1.0
**Purpose**: Interactive conversational wizard for creating new optimization studies from scratch.
---
## Overview
This skill provides a step-by-step guided experience for users who want to create a new optimization study. It asks focused questions to gather requirements, then generates the complete study configuration.
---
## Wizard Flow
### Phase 1: Understanding the Problem (Discovery)
Start with open-ended questions to understand what the user wants to optimize:
**Opening Prompt:**
```
I'll help you set up a new optimization study. Let's start with the basics:
1. **What are you trying to optimize?**
- Describe the physical system (e.g., "a telescope mirror", "a UAV arm", "a bracket")
2. **What's your goal?**
- Minimize weight? Maximize stiffness? Minimize stress? Multiple objectives?
3. **Do you have an NX model ready?**
- If yes, where is it located?
- If no, we can discuss what's needed
```
### Phase 2: Model Analysis (If NX model provided)
If user provides a model path:
1. **Check the model exists**
```python
# Verify path
model_path = Path(user_provided_path)
if model_path.exists():
# Proceed with analysis
else:
# Ask for correct path
```
2. **Extract expressions (design parameters)**
- List all NX expressions that could be design variables
- Ask user to confirm which ones to optimize
3. **Identify simulation setup**
- What solution types are present? (static, modal, buckling)
- What results are available?
### Phase 3: Define Objectives & Constraints
Ask focused questions:
```
Based on your model, I can see these results are available:
- Displacement (from static solution)
- Von Mises stress (from static solution)
- Natural frequency (from modal solution)
- Mass (from geometry)
**Questions:**
1. **Primary Objective** - What do you want to minimize/maximize?
Examples: "minimize tip displacement", "minimize mass"
2. **Secondary Objectives** (optional) - Any other goals?
Examples: "also minimize stress", "maximize first frequency"
3. **Constraints** - What limits must be respected?
Examples: "stress < 200 MPa", "frequency > 50 Hz", "mass < 2 kg"
```
### Phase 4: Define Design Space
For each design variable identified:
```
For parameter `{param_name}` (current value: {current_value}):
- **Minimum value**: (default: -20% of current)
- **Maximum value**: (default: +20% of current)
- **Type**: continuous or discrete?
```
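The default bounds can be computed mechanically; a sketch (hypothetical helper, which also keeps the ordering correct when the current value is negative):

```python
def default_bounds(current_value: float, pct: float = 0.20):
    """Default search bounds at +/- pct around the current value, returned as (min, max)."""
    lo = current_value * (1 - pct)
    hi = current_value * (1 + pct)
    return (min(lo, hi), max(lo, hi))
```

For a current thickness of 10 mm this suggests roughly (8 mm, 12 mm); the user can then widen or narrow either side.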
### Phase 5: Optimization Settings
```
**Optimization Configuration:**
1. **Number of trials**: How thorough should the search be?
- Quick exploration: 50-100 trials
- Standard: 100-200 trials
- Thorough: 200-500 trials
- With neural acceleration: 500+ trials
2. **Protocol Selection** (I'll recommend based on your setup):
- Single objective → Protocol 10 (IMSO)
- Multi-objective (2-3 goals) → Protocol 11 (NSGA-II)
- Large-scale with NN → Protocol 12 (Hybrid)
3. **Neural Network Acceleration**:
- Enable if n_trials > 100 and you want faster iterations
```
### Phase 6: Summary & Confirmation
Present the complete configuration for user approval:
```
## Study Configuration Summary
**Study Name**: {study_name}
**Location**: studies/{study_name}/
**Model**: {model_path}
**Design Variables** ({n_vars} parameters):
| Parameter | Min | Max | Type |
|-----------|-----|-----|------|
| {name1} | {min1} | {max1} | continuous |
| ... | ... | ... | ... |
**Objectives**:
- {objective1}: {direction1}
- {objective2}: {direction2} (if multi-objective)
**Constraints**:
- {constraint1}
- {constraint2}
**Settings**:
- Protocol: {protocol}
- Trials: {n_trials}
- Sampler: {sampler}
- Neural Acceleration: {enabled/disabled}
---
Does this look correct?
- Type "yes" to generate the study files
- Type "change X" to modify a specific setting
- Type "start over" to begin again
```
### Phase 7: Generation
Once confirmed, generate:
1. Create study directory structure
2. Copy model files to working directory
3. Generate `optimization_config.json`
4. Generate `run_optimization.py`
5. Validate everything works
```
✓ Study created successfully!
**Next Steps:**
1. Review the generated files in studies/{study_name}/
2. Run a quick validation: `python run_optimization.py --validate`
3. Start optimization: `python run_optimization.py --start`
Or just tell me "start the optimization" and I'll handle it!
```
---
## Question Templates
### For Understanding Goals
- "What problem are you trying to solve?"
- "What makes a 'good' design for your application?"
- "Are there any hard limits that must not be exceeded?"
- "Is this a weight reduction study, a performance study, or both?"
### For Design Variables
- "Which dimensions or parameters should I vary?"
- "Are there any parameters that must stay fixed?"
- "What are reasonable bounds for {parameter}?"
- "Should {parameter} be continuous or discrete (specific values only)?"
### For Constraints
- "What's the maximum stress this component can handle?"
- "Is there a minimum stiffness requirement?"
- "Are there weight limits?"
- "What frequency should the structure avoid (resonance concerns)?"
### For Optimization Settings
- "How much time can you allocate to this study?"
- "Do you need a quick exploration or thorough optimization?"
- "Is this a preliminary study or final optimization?"
---
## Default Configurations by Use Case
### Structural Weight Minimization
```json
{
"objectives": [
{"name": "mass", "direction": "minimize", "target": null}
],
"constraints": [
{"name": "max_stress", "type": "<=", "value": 200e6, "unit": "Pa"},
{"name": "max_displacement", "type": "<=", "value": 0.001, "unit": "m"}
],
"n_trials": 150,
"sampler": "TPE"
}
```
### Multi-Objective (Weight vs Performance)
```json
{
"objectives": [
{"name": "mass", "direction": "minimize"},
{"name": "max_displacement", "direction": "minimize"}
],
"n_trials": 200,
"sampler": "NSGA-II"
}
```
### Modal Optimization (Frequency Tuning)
```json
{
"objectives": [
{"name": "first_frequency", "direction": "maximize"}
],
"constraints": [
{"name": "mass", "type": "<=", "value": 5.0, "unit": "kg"}
],
"n_trials": 150,
"sampler": "TPE"
}
```
### Telescope Mirror (Zernike WFE)
```json
{
"objectives": [
{"name": "filtered_rms", "direction": "minimize", "unit": "nm"}
],
"constraints": [
{"name": "mass", "type": "<=", "value": null}
],
"extractor": "ZernikeExtractor",
"n_trials": 200,
"sampler": "NSGA-II"
}
```
---
## Error Handling
### Model Not Found
```
I couldn't find a model at that path. Let's verify:
- Current directory: {cwd}
- You specified: {user_path}
Could you check the path and try again?
Tip: Use an absolute path like "C:/Users/.../model.prt"
```
### No Expressions Found
```
I couldn't find any parametric expressions in this model.
For optimization, we need parameters defined as NX expressions.
Would you like me to explain how to add expressions to your model?
```
### Invalid Constraint
```
That constraint doesn't match any available results.
Available results from your model:
- {result1}
- {result2}
Which of these would you like to constrain?
```
---
## Integration with Dashboard
When running from the Atomizer dashboard with a connected Claude terminal:
1. **No study selected** → Offer to create a new study
2. **Study selected** → Use that study's context, offer to modify or run
The dashboard will display the study once created, showing real-time progress.
---
## Quick Commands
For users who know what they want:
- `create study {name} from {model_path}` - Skip to model analysis
- `quick setup` - Use all defaults, just confirm
- `copy study {existing} as {new}` - Clone an existing study as starting point
---
## Remember
- **Be conversational** - This is a wizard, not a form
- **Offer sensible defaults** - Don't make users specify everything
- **Validate as you go** - Catch issues early
- **Explain decisions** - Say why you recommend certain settings
- **Keep it focused** - One question at a time, don't overwhelm


@@ -0,0 +1,289 @@
# Extractors Catalog Module
**Last Updated**: December 5, 2025
**Version**: 1.0
**Type**: Optional Module
This module documents all available extractors in the Atomizer framework. Load this when the user asks about result extraction or needs to understand what extractors are available.
---
## When to Load
- User asks "what extractors are available?"
- User needs to extract results from OP2/BDF files
- Setting up a new study with custom extraction needs
- Debugging extraction issues
---
## PR.1 Extractor Catalog
| ID | Extractor | Module | Function | Input | Output | Returns |
|----|-----------|--------|----------|-------|--------|---------|
| E1 | **Displacement** | `optimization_engine.extractors.extract_displacement` | `extract_displacement(op2_file, subcase=1)` | `.op2` | mm | `{'max_displacement': float, 'max_disp_node': int, 'max_disp_x/y/z': float}` |
| E2 | **Frequency** | `optimization_engine.extractors.extract_frequency` | `extract_frequency(op2_file, subcase=1, mode_number=1)` | `.op2` | Hz | `{'frequency': float, 'mode_number': int, 'eigenvalue': float, 'all_frequencies': list}` |
| E3 | **Von Mises Stress** | `optimization_engine.extractors.extract_von_mises_stress` | `extract_solid_stress(op2_file, subcase=1, element_type='cquad4')` | `.op2` | MPa | `{'max_von_mises': float, 'max_stress_element': int}` |
| E4 | **BDF Mass** | `optimization_engine.extractors.bdf_mass_extractor` | `extract_mass_from_bdf(bdf_file)` | `.dat`/`.bdf` | kg | `float` (mass in kg) |
| E5 | **CAD Expression Mass** | `optimization_engine.extractors.extract_mass_from_expression` | `extract_mass_from_expression(prt_file, expression_name='p173')` | `.prt` + `_temp_mass.txt` | kg | `float` (mass in kg) |
| E6 | **Field Data** | `optimization_engine.extractors.field_data_extractor` | `FieldDataExtractor(field_file, result_column, aggregation)` | `.fld`/`.csv` | varies | `{'value': float, 'stats': dict}` |
| E7 | **Stiffness** | `optimization_engine.extractors.stiffness_calculator` | `StiffnessCalculator(field_file, op2_file, force_component, displacement_component)` | `.fld` + `.op2` | N/mm | `{'stiffness': float, 'displacement': float, 'force': float}` |
| E11 | **Part Mass & Material** | `optimization_engine.extractors.extract_part_mass_material` | `extract_part_mass_material(prt_file)` | `.prt` | kg + dict | `{'mass_kg': float, 'volume_mm3': float, 'material': {'name': str}, ...}` |
**For Zernike extractors (E8-E10)**, see the [zernike-optimization module](./zernike-optimization.md).
---
## PR.2 Extractor Code Snippets (COPY-PASTE)
### E1: Displacement Extraction
```python
from optimization_engine.extractors.extract_displacement import extract_displacement
disp_result = extract_displacement(op2_file, subcase=1)
max_displacement = disp_result['max_displacement'] # mm
max_node = disp_result['max_disp_node'] # Node ID
```
**Return Dictionary**:
```python
{
'max_displacement': 0.523, # Maximum magnitude (mm)
'max_disp_node': 1234, # Node ID with max displacement
'max_disp_x': 0.123, # X component at max node
'max_disp_y': 0.456, # Y component at max node
'max_disp_z': 0.234 # Z component at max node
}
```
### E2: Frequency Extraction
```python
from optimization_engine.extractors.extract_frequency import extract_frequency
# Get first mode frequency
freq_result = extract_frequency(op2_file, subcase=1, mode_number=1)
frequency = freq_result['frequency'] # Hz
# Get all frequencies
all_freqs = freq_result['all_frequencies'] # List of all mode frequencies
```
**Return Dictionary**:
```python
{
'frequency': 125.4, # Requested mode frequency (Hz)
'mode_number': 1, # Mode number requested
'eigenvalue': 6.21e5, # Eigenvalue (rad/s)^2
'all_frequencies': [125.4, 234.5, 389.2, ...] # All mode frequencies
}
```
### E3: Stress Extraction
```python
from optimization_engine.extractors.extract_von_mises_stress import extract_solid_stress
# For shell elements (CQUAD4, CTRIA3)
stress_result = extract_solid_stress(op2_file, subcase=1, element_type='cquad4')
# For solid elements (CTETRA, CHEXA)
stress_result = extract_solid_stress(op2_file, subcase=1, element_type='ctetra')
max_stress = stress_result['max_von_mises'] # MPa
```
**Return Dictionary**:
```python
{
'max_von_mises': 187.5, # Maximum von Mises stress (MPa)
'max_stress_element': 5678, # Element ID with max stress
'mean_stress': 45.2, # Mean stress across all elements
'stress_distribution': {...} # Optional: full distribution data
}
```
### E4: BDF Mass Extraction
```python
from optimization_engine.extractors.bdf_mass_extractor import extract_mass_from_bdf
mass_kg = extract_mass_from_bdf(str(dat_file)) # kg
```
**Note**: Calculates mass from element properties and material density in the BDF/DAT file.
### E5: CAD Expression Mass
```python
from optimization_engine.extractors.extract_mass_from_expression import extract_mass_from_expression
mass_kg = extract_mass_from_expression(model_file, expression_name="p173") # kg
```
**Note**: Requires `_temp_mass.txt` to be written by the solve journal; `expression_name` is the NX expression that holds the mass value.
### E6: Field Data Extraction
```python
from optimization_engine.extractors.field_data_extractor import FieldDataExtractor
# Create extractor
extractor = FieldDataExtractor(
field_file="results.fld",
result_column="Temperature",
aggregation="max" # or "min", "mean", "sum"
)
result = extractor.extract()
value = result['value'] # Aggregated value
stats = result['stats'] # Full statistics
```
### E7: Stiffness Calculation
```python
# Simple stiffness from displacement (most common)
applied_force = 1000.0 # N - MUST MATCH YOUR MODEL'S APPLIED LOAD
stiffness = applied_force / max(abs(max_displacement), 1e-6) # N/mm
# Or using StiffnessCalculator for complex cases
from optimization_engine.extractors.stiffness_calculator import StiffnessCalculator
calc = StiffnessCalculator(
field_file="displacement.fld",
op2_file="results.op2",
force_component="Fz",
displacement_component="Tz"
)
result = calc.calculate()
stiffness = result['stiffness'] # N/mm
```
### E11: Part Mass & Material Extraction
```python
from optimization_engine.extractors import extract_part_mass_material, extract_part_mass
# Full extraction with all properties
result = extract_part_mass_material(prt_file)
mass_kg = result['mass_kg'] # kg
volume = result['volume_mm3'] # mm³
area = result['surface_area_mm2'] # mm²
cog = result['center_of_gravity_mm'] # [x, y, z] mm
material = result['material']['name'] # e.g., "Aluminum_2014"
# Simple mass-only extraction
mass_kg = extract_part_mass(prt_file) # kg
```
**Return Dictionary**:
```python
{
'mass_kg': 0.1098, # Mass in kg
'mass_g': 109.84, # Mass in grams
'volume_mm3': 39311.99, # Volume in mm³
'surface_area_mm2': 10876.71, # Surface area in mm²
'center_of_gravity_mm': [0, 42.3, 39.6], # CoG in mm
'material': {
'name': 'Aluminum_2014', # Material name (or None)
'density': None, # Density if available
'density_unit': 'kg/mm^3'
},
'num_bodies': 1 # Number of solid bodies
}
```
**Prerequisites**: Run the NX journal first to create the temp file:
```bash
run_journal.exe nx_journals/extract_part_mass_material.py -args model.prt
```
---
## Extractor Selection Guide
| Need | Extractor | When to Use |
|------|-----------|-------------|
| Max deflection | E1 | Static analysis displacement check |
| Natural frequency | E2 | Modal analysis, resonance avoidance |
| Peak stress | E3 | Strength validation, fatigue life |
| FEM mass | E4 | When mass is from mesh elements |
| CAD mass | E5 | When mass is from NX expression |
| Temperature/Custom | E6 | Thermal or custom field results |
| k = F/δ | E7 | Stiffness maximization |
| Wavefront error | E8-E10 | Telescope/mirror optimization |
| Part mass + material | E11 | Direct from .prt file with material info |
---
## Engineering Result Types
| Result Type | Nastran SOL | Output File | Extractor |
|-------------|-------------|-------------|-----------|
| Static Stress | SOL 101 | `.op2` | E3: `extract_solid_stress` |
| Displacement | SOL 101 | `.op2` | E1: `extract_displacement` |
| Natural Frequency | SOL 103 | `.op2` | E2: `extract_frequency` |
| Buckling Load | SOL 105 | `.op2` | `extract_buckling` |
| Modal Shapes | SOL 103 | `.op2` | `extract_mode_shapes` |
| Mass | - | `.dat`/`.bdf` | E4: `bdf_mass_extractor` |
| Stiffness | SOL 101 | `.fld` + `.op2` | E7: `stiffness_calculator` |
---
## Common Objective Formulations
### Stiffness Maximization
- k = F/δ (force/displacement)
- Maximize k or minimize 1/k (compliance)
- Requires consistent load magnitude across trials
### Mass Minimization
- Extract from BDF element properties + material density
- Units: typically kg (NX uses kg-mm-s)
### Stress Constraints
- Von Mises < σ_yield / safety_factor
- Account for stress concentrations
### Frequency Constraints
- f₁ > threshold (avoid resonance)
- Often paired with mass minimization
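These formulations can be combined in one evaluation step. A minimal sketch; the load, yield strength, safety factor, and frequency floor below are illustrative assumptions, not values from any particular study:

```python
def evaluate_objectives(max_displacement_mm, mass_kg, max_von_mises_mpa, f1_hz):
    """Combine extractor outputs into objectives and a feasibility flag.

    All limit values here are illustrative placeholders.
    """
    applied_force_n = 1000.0   # must match the model's applied load
    sigma_yield_mpa = 250.0    # assumed material yield strength
    safety_factor = 1.5
    f_min_hz = 100.0           # assumed resonance-avoidance floor

    stiffness_n_per_mm = applied_force_n / max(abs(max_displacement_mm), 1e-6)
    stress_ok = max_von_mises_mpa < sigma_yield_mpa / safety_factor
    freq_ok = f1_hz > f_min_hz
    return {
        "mass_kg": mass_kg,                      # minimize
        "compliance": 1.0 / stiffness_n_per_mm,  # minimize (maximizes stiffness)
        "feasible": stress_ok and freq_ok,
    }
```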
---
## Adding New Extractors
When the study needs result extraction not covered by existing extractors (E1-E11):
```
STEP 1: Check existing extractors in this catalog
├── If exists → IMPORT and USE it (done!)
└── If missing → Continue to STEP 2
STEP 2: Create extractor in optimization_engine/extractors/
├── File: extract_{feature}.py
├── Follow existing extractor patterns
└── Include comprehensive docstrings
STEP 3: Add to __init__.py
└── Export functions in optimization_engine/extractors/__init__.py
STEP 4: Update this module
├── Add to Extractor Catalog table
└── Add code snippet
STEP 5: Document in SYS_12_EXTRACTOR_LIBRARY.md
```
See [EXT_01_CREATE_EXTRACTOR](../../docs/protocols/extensions/EXT_01_CREATE_EXTRACTOR.md) for full guide.
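A minimal STEP 2 skeleton following the return-dictionary convention used by E1-E7; the function name, keys, and parsing stub are placeholders, not an existing extractor:

```python
from pathlib import Path
from typing import Dict


def extract_my_feature(op2_file, subcase: int = 1) -> Dict:
    """Extract {feature} results for one subcase.

    Follows the catalog convention: return a flat dict of scalars.
    The parsing body is a placeholder.
    """
    op2_file = Path(op2_file)
    if not op2_file.exists():
        raise FileNotFoundError(f"OP2 not found: {op2_file}")
    # ... parse op2_file for the requested subcase here ...
    value = 0.0  # placeholder result
    return {"value": value, "subcase": subcase}
```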
---
## Cross-References
- **System Protocol**: [SYS_12_EXTRACTOR_LIBRARY](../../docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md)
- **Extension Guide**: [EXT_01_CREATE_EXTRACTOR](../../docs/protocols/extensions/EXT_01_CREATE_EXTRACTOR.md)
- **Zernike Extractors**: [zernike-optimization module](./zernike-optimization.md)
- **Core Skill**: [study-creation-core](../core/study-creation-core.md)

# Neural Acceleration Module
**Last Updated**: December 5, 2025
**Version**: 1.0
**Type**: Optional Module
This module provides guidance for AtomizerField neural network surrogate acceleration, enabling 1000x faster optimization by replacing expensive FEA evaluations with instant neural predictions.
---
## When to Load
- User needs >50 optimization trials
- User mentions "neural", "surrogate", "NN", "machine learning"
- User wants faster optimization
- Exporting training data for neural networks
---
## Overview
**Key Innovation**: Train once on FEA data, then explore 50,000+ designs in the time it takes to run 50 FEA trials.
| Metric | Traditional FEA | Neural Network | Improvement |
|--------|-----------------|----------------|-------------|
| Time per evaluation | 10-30 minutes | 4.5 milliseconds | **~130,000-400,000x** |
| Trials per hour | 2-6 | 800,000+ | **>100,000x** |
| Design exploration | ~50 designs | ~50,000 designs | **1000x** |
---
## Training Data Export (PR.9)
Enable training data export in your optimization config:
```json
{
"training_data_export": {
"enabled": true,
"export_dir": "atomizer_field_training_data/my_study"
}
}
```
### Using TrainingDataExporter
```python
from optimization_engine.training_data_exporter import TrainingDataExporter
training_exporter = TrainingDataExporter(
export_dir=export_dir,
study_name=study_name,
design_variable_names=['param1', 'param2'],
objective_names=['stiffness', 'mass'],
constraint_names=['mass_limit'],
metadata={'atomizer_version': '2.0', 'optimization_algorithm': 'NSGA-II'}
)
# In objective function:
training_exporter.export_trial(
trial_number=trial.number,
design_variables=design_vars,
results={'objectives': {...}, 'constraints': {...}},
simulation_files={'dat_file': dat_path, 'op2_file': op2_path}
)
# After optimization:
training_exporter.finalize()
```
### Training Data Structure
```
atomizer_field_training_data/{study_name}/
├── trial_0001/
│ ├── input/model.bdf # Nastran input (mesh + params)
│ ├── output/model.op2 # Binary results
│ └── metadata.json # Design params + objectives
├── trial_0002/
│ └── ...
└── study_summary.json # Study-level metadata
```
**Recommended**: 100-500 FEA samples for good generalization.
---
## Neural Configuration
### Full Configuration Example
```json
{
"study_name": "bracket_neural_optimization",
"surrogate_settings": {
"enabled": true,
"model_type": "parametric_gnn",
"model_path": "models/bracket_surrogate.pt",
"confidence_threshold": 0.85,
"validation_frequency": 10,
"fallback_to_fea": true
},
"training_data_export": {
"enabled": true,
"export_dir": "atomizer_field_training_data/bracket_study",
"export_bdf": true,
"export_op2": true,
"export_fields": ["displacement", "stress"]
},
"neural_optimization": {
"initial_fea_trials": 50,
"neural_trials": 5000,
"retraining_interval": 500,
"uncertainty_threshold": 0.15
}
}
```
### Configuration Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `enabled` | bool | false | Enable neural surrogate |
| `model_type` | string | "parametric_gnn" | Model architecture |
| `model_path` | string | - | Path to trained model |
| `confidence_threshold` | float | 0.85 | Min confidence for predictions |
| `validation_frequency` | int | 10 | FEA validation every N trials |
| `fallback_to_fea` | bool | true | Use FEA when uncertain |
---
## Model Types
### Parametric Predictor GNN (Recommended)
Direct optimization objective prediction - fastest option.
```
Design Parameters (ND) → Design Encoder (MLP) → GNN Backbone → Scalar Heads
Output (objectives):
├── mass (grams)
├── frequency (Hz)
├── max_displacement (mm)
└── max_stress (MPa)
```
**Use When**: You only need scalar objectives, not full field predictions.
### Field Predictor GNN
Full displacement/stress field prediction.
```
Input Features (12D per node):
├── Node coordinates (x, y, z)
├── Material properties (E, nu, rho)
├── Boundary conditions (fixed/free per DOF)
└── Load information (force magnitude, direction)
Output (per node):
├── Displacement (6 DOF: Tx, Ty, Tz, Rx, Ry, Rz)
└── Von Mises stress (1 value)
```
**Use When**: You need field visualization or complex derived quantities.
### Ensemble Models
Multiple models for uncertainty quantification.
```python
import numpy as np

# Run all N ensemble models on the same design x
predictions = [model_i(x) for model_i in ensemble]
# Statistics
mean_prediction = np.mean(predictions)
uncertainty = np.std(predictions)
# Decision
if uncertainty > threshold:
result = run_fea(x) # Fall back to FEA
else:
result = mean_prediction
```
---
## Hybrid FEA/Neural Workflow
### Phase 1: FEA Exploration (50-100 trials)
- Run standard FEA optimization
- Export training data automatically
- Build landscape understanding
### Phase 2: Neural Training
- Parse collected data
- Train parametric predictor
- Validate accuracy
### Phase 3: Neural Acceleration (1000s of trials)
- Use neural network for rapid exploration
- Periodic FEA validation
- Retrain if distribution shifts
### Phase 4: FEA Refinement (10-20 trials)
- Validate top candidates with FEA
- Ensure results are physically accurate
- Generate final Pareto front
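The four phases can be sketched as a single driver loop. The callbacks (`run_fea`, `train_surrogate`, `nn_predict`) stand in for the project's own routines and are assumptions of this sketch:

```python
def hybrid_optimize(run_fea, train_surrogate, nn_predict,
                    n_fea=50, n_neural=5000, validate_every=10, n_refine=10):
    """Four-phase hybrid loop (sketch; callbacks are project-specific)."""
    # Phase 1: FEA exploration builds the training set
    fea_results = [run_fea(t) for t in range(n_fea)]
    # Phase 2: train the surrogate on collected data
    model = train_surrogate(fea_results)
    # Phase 3: cheap neural trials with periodic FEA validation
    nn_results = []
    for t in range(n_neural):
        nn_results.append((nn_predict(model, t), t))
        if t > 0 and t % validate_every == 0:
            fea_results.append(run_fea(t))  # keep the surrogate honest
    # Phase 4: re-validate the top neural candidates with FEA
    top = sorted(nn_results)[:n_refine]
    return [run_fea(t) for _, t in top]
```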
---
## Training Pipeline
### Step 1: Collect Training Data
Run optimization with export enabled:
```bash
python run_optimization.py --train --trials 100
```
### Step 2: Parse to Neural Format
```bash
cd atomizer-field
python batch_parser.py ../atomizer_field_training_data/my_study
```
### Step 3: Train Model
**Parametric Predictor** (recommended):
```bash
python train_parametric.py \
--train_dir ../training_data/parsed \
--val_dir ../validation_data/parsed \
--epochs 200 \
--hidden_channels 128 \
--num_layers 4
```
**Field Predictor**:
```bash
python train.py \
--train_dir ../training_data/parsed \
--epochs 200 \
--model FieldPredictorGNN \
--hidden_channels 128 \
--num_layers 6 \
--physics_loss_weight 0.3
```
### Step 4: Validate
```bash
python validate.py --checkpoint runs/my_model/checkpoint_best.pt
```
Expected output:
```
Validation Results:
├── Mean Absolute Error: 2.3% (mass), 1.8% (frequency)
├── R² Score: 0.987
├── Inference Time: 4.5ms ± 0.8ms
└── Physics Violations: 0.2%
```
### Step 5: Deploy
Update config to use trained model:
```json
{
"neural_surrogate": {
"enabled": true,
"model_checkpoint": "atomizer-field/runs/my_model/checkpoint_best.pt",
"confidence_threshold": 0.85
}
}
```
---
## Uncertainty Thresholds
| Uncertainty | Action |
|-------------|--------|
| < 5% | Use neural prediction |
| 5-15% | Use neural, flag for validation |
| > 15% | Fall back to FEA |
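The table maps directly to a small routing function; a sketch, assuming uncertainty is expressed as a relative fraction:

```python
def route_prediction(uncertainty: float) -> str:
    """Map relative prediction uncertainty to the action in the table above."""
    if uncertainty < 0.05:
        return "neural"           # trust the surrogate
    if uncertainty <= 0.15:
        return "neural_flagged"   # use prediction, queue FEA validation
    return "fea"                  # fall back to full simulation
```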
---
## Accuracy Expectations
| Problem Type | Expected R² | Samples Needed |
|--------------|-------------|----------------|
| Well-behaved | > 0.95 | 50-100 |
| Moderate nonlinear | > 0.90 | 100-200 |
| Highly nonlinear | > 0.85 | 200-500 |
---
## AtomizerField Components
```
atomizer-field/
├── neural_field_parser.py # BDF/OP2 parsing
├── field_predictor.py # Field GNN
├── parametric_predictor.py # Parametric GNN
├── train.py # Field training
├── train_parametric.py # Parametric training
├── validate.py # Model validation
├── physics_losses.py # Physics-informed loss
└── batch_parser.py # Batch data conversion
optimization_engine/
├── neural_surrogate.py # Atomizer integration
└── runner_with_neural.py # Neural runner
```
---
## Troubleshooting
| Symptom | Cause | Solution |
|---------|-------|----------|
| High prediction error | Insufficient training data | Collect more FEA samples |
| Out-of-distribution warnings | Design outside training range | Retrain with expanded range |
| Slow inference | Large mesh | Use parametric predictor instead |
| Physics violations | Low physics loss weight | Increase `physics_loss_weight` |
---
## Cross-References
- **System Protocol**: [SYS_14_NEURAL_ACCELERATION](../../docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md)
- **Operations**: [OP_05_EXPORT_TRAINING_DATA](../../docs/protocols/operations/OP_05_EXPORT_TRAINING_DATA.md)
- **Core Skill**: [study-creation-core](../core/study-creation-core.md)

# NX Documentation Lookup Module
## Overview
This module provides on-demand access to Siemens NX Open and Simcenter documentation via the Dalidou MCP server. Use these tools when building new extractors, NX automation scripts, or debugging NX-related issues.
## CRITICAL: When to AUTO-SEARCH Documentation
**You MUST call `siemens_docs_search` BEFORE writing any code that uses NX Open APIs.**
### Automatic Search Triggers
| User Request | Action Required |
|--------------|-----------------|
| "Create extractor for {X}" | → `siemens_docs_search("{X} NXOpen")` |
| "Get {property} from part" | → `siemens_docs_search("{property} NXOpen.Part")` |
| "Extract {data} from FEM" | → `siemens_docs_search("{data} NXOpen.CAE")` |
| "How do I {action} in NX" | → `siemens_docs_search("{action} NXOpen")` |
| Any code with `NXOpen.*` | → Search before writing |
### Example: User asks "Create an extractor for inertia values"
```
STEP 1: Immediately search
→ siemens_docs_search("inertia mass properties NXOpen")
STEP 2: Review results, fetch details
→ siemens_docs_fetch("NXOpen.MeasureManager")
STEP 3: Now write code with correct API calls
```
**DO NOT guess NX Open API names.** Always search first.
## When to Load
Load this module when:
- Creating new NX Open scripts or extractors
- Working with `NXOpen.*` namespaces
- Debugging NX automation errors
- User mentions "NX API", "NX Open", "Simcenter docs"
- Building features that interact with NX/Simcenter
## Available MCP Tools
### `siemens_docs_search`
**Purpose**: Search across NX Open, Simcenter, and Teamcenter documentation
**When to use**:
- Finding which class/method performs a specific task
- Discovering available APIs for a feature
- Looking up Nastran card references
**Examples**:
```
siemens_docs_search("get node coordinates FEM")
siemens_docs_search("CQUAD4 element properties")
siemens_docs_search("NXOpen.CAE mesh creation")
siemens_docs_search("extract stress results OP2")
```
### `siemens_docs_fetch`
**Purpose**: Fetch a specific documentation page with full content
**When to use**:
- Need complete class reference
- Getting detailed method signatures
- Reading full examples
**Examples**:
```
siemens_docs_fetch("NXOpen.CAE.FemPart")
siemens_docs_fetch("Nastran Quick Reference CQUAD4")
```
### `siemens_auth_status`
**Purpose**: Check if the Siemens SSO session is valid
**When to use**:
- Before a series of documentation lookups
- When fetch requests fail
- Debugging connection issues
### `siemens_login`
**Purpose**: Re-authenticate with Siemens if session expired
**When to use**:
- After `siemens_auth_status` shows expired
- When documentation fetches return auth errors
## Workflow: Building New Extractor
When creating a new extractor that uses NX Open APIs:
### Step 1: Search for Relevant APIs
```
→ siemens_docs_search("element stress results OP2")
```
Review results to identify candidate classes/methods.
### Step 2: Fetch Detailed Documentation
```
→ siemens_docs_fetch("NXOpen.CAE.Result")
```
Get full class documentation with method signatures.
### Step 3: Understand Data Formats
```
→ siemens_docs_search("CQUAD4 stress output format")
```
Understand Nastran output structure.
### Step 4: Build Extractor
Following EXT_01 template, create the extractor with:
- Proper API calls based on documentation
- Docstring referencing the APIs used
- Error handling for common NX exceptions
### Step 5: Document API Usage
In the extractor docstring:
```python
def extract_element_stress(op2_path: Path) -> Dict:
"""
Extract element stress results from OP2 file.
NX Open APIs Used:
- NXOpen.CAE.Result.AskElementStress
- NXOpen.CAE.ResultAccess.AskResultValues
Nastran Cards:
- CQUAD4, CTRIA3 (shell elements)
- STRESS case control
"""
```
## Workflow: Debugging NX Errors
When encountering NX Open errors:
### Step 1: Search for Correct API
```
Error: AttributeError: 'FemPart' object has no attribute 'GetNodes'
→ siemens_docs_search("FemPart get nodes")
```
### Step 2: Fetch Correct Class Reference
```
→ siemens_docs_fetch("NXOpen.CAE.FemPart")
```
Find the actual method name and signature.
### Step 3: Apply Fix
Document the correction:
```python
# Wrong: femPart.GetNodes()
# Right: femPart.BaseFEModel.FemMesh.Nodes
```
## Common Search Patterns
| Task | Search Query |
|------|--------------|
| Mesh operations | `siemens_docs_search("NXOpen.CAE mesh")` |
| Result extraction | `siemens_docs_search("CAE result OP2")` |
| Geometry access | `siemens_docs_search("NXOpen.Features body")` |
| Material properties | `siemens_docs_search("Nastran MAT1 material")` |
| Load application | `siemens_docs_search("CAE load force")` |
| Constraint setup | `siemens_docs_search("CAE boundary condition")` |
| Expressions/Parameters | `siemens_docs_search("NXOpen Expression")` |
| Part manipulation | `siemens_docs_search("NXOpen.Part")` |
## Key NX Open Namespaces
| Namespace | Domain |
|-----------|--------|
| `NXOpen.CAE` | FEA, meshing, results |
| `NXOpen.Features` | Parametric features |
| `NXOpen.Assemblies` | Assembly operations |
| `NXOpen.Part` | Part-level operations |
| `NXOpen.UF` | User Function (legacy) |
| `NXOpen.GeometricUtilities` | Geometry helpers |
## Integration with Extractors
All extractors in `optimization_engine/extractors/` should:
1. **Search before coding**: Use `siemens_docs_search` to find correct APIs
2. **Document API usage**: List NX Open APIs in docstring
3. **Handle NX exceptions**: Catch `NXOpen.NXException` appropriately
4. **Follow 20-line rule**: If extraction is complex, check whether an existing extractor already handles it
## Troubleshooting
| Issue | Solution |
|-------|----------|
| Auth errors | Run `siemens_auth_status`, then `siemens_login` if needed |
| No results | Try broader search terms, check namespace spelling |
| Incomplete docs | Fetch the parent class for full context |
| Network errors | Verify Dalidou is accessible: `ping dalidou.local` |
---
*Module Version: 1.0*
*MCP Server: dalidou.local:5000*

# Zernike Optimization Module
**Last Updated**: December 5, 2025
**Version**: 1.0
**Type**: Optional Module
This module provides specialized guidance for telescope mirror and optical surface optimization using Zernike polynomial decomposition.
---
## When to Load
- User mentions "telescope", "mirror", "optical", "wavefront"
- Optimization involves surface deformation analysis
- Need to extract Zernike coefficients from FEA results
- Working with multi-subcase elevation angle comparisons
---
## Zernike Extractors (E8-E10)
| ID | Extractor | Function | Input | Output | Use Case |
|----|-----------|----------|-------|--------|----------|
| E8 | **Zernike WFE** | `extract_zernike_from_op2()` | `.op2` + `.bdf` | nm | Single subcase wavefront error |
| E9 | **Zernike Relative** | `extract_zernike_relative_rms()` | `.op2` + `.bdf` | nm | Compare target vs reference subcase |
| E10 | **Zernike Helpers** | `ZernikeObjectiveBuilder` | `.op2` | nm | Multi-subcase optimization builder |
---
## E8: Single Subcase Zernike Extraction
Extract Zernike coefficients and RMS metrics for a single subcase (e.g., one elevation angle).
```python
from optimization_engine.extractors.extract_zernike import extract_zernike_from_op2
# Extract Zernike coefficients and RMS metrics for a single subcase
result = extract_zernike_from_op2(
op2_file,
bdf_file=None, # Auto-detect from op2 location
subcase="20", # Subcase label (e.g., "20" = 20 deg elevation)
displacement_unit="mm"
)
global_rms = result['global_rms_nm'] # Total surface RMS in nm
filtered_rms = result['filtered_rms_nm'] # RMS with low orders removed
coefficients = result['coefficients'] # List of 50 Zernike coefficients
```
**Return Dictionary**:
```python
{
'global_rms_nm': 45.2, # Total surface RMS (nm)
'filtered_rms_nm': 12.8, # RMS with J1-J4 (piston, tip, tilt, defocus) removed
'coefficients': [0.0, 12.3, ...], # 50 Zernike coefficients (Noll indexing)
'n_nodes': 5432, # Number of surface nodes
'rms_per_mode': {...} # RMS contribution per Zernike mode
}
```
**When to Use**:
- Single elevation angle analysis
- Polishing orientation (zenith) wavefront error
- Absolute surface quality metrics
---
## E9: Relative RMS Between Subcases
Compare wavefront error between two subcases (e.g., 40° vs 20° reference).
```python
from optimization_engine.extractors.extract_zernike import extract_zernike_relative_rms
# Compare wavefront error between subcases (e.g., 40 deg vs 20 deg reference)
result = extract_zernike_relative_rms(
op2_file,
bdf_file=None,
target_subcase="40", # Target orientation
reference_subcase="20", # Reference (usually polishing orientation)
displacement_unit="mm"
)
relative_rms = result['relative_filtered_rms_nm'] # Differential WFE in nm
delta_coeffs = result['delta_coefficients'] # Coefficient differences
```
**Return Dictionary**:
```python
{
'relative_filtered_rms_nm': 8.7, # Differential WFE (target - reference)
'delta_coefficients': [...], # Coefficient differences
'target_rms_nm': 52.3, # Target subcase absolute RMS
'reference_rms_nm': 45.2, # Reference subcase absolute RMS
'improvement_percent': -15.7 # Negative = worse than reference
}
```
**When to Use**:
- Comparing performance across elevation angles
- Minimizing deformation relative to polishing orientation
- Multi-angle telescope mirror optimization
---
## E10: Multi-Subcase Objective Builder
Build objectives for multiple subcases in a single extractor (most efficient for complex optimization).
```python
from optimization_engine.extractors.zernike_helpers import ZernikeObjectiveBuilder
# Build objectives for multiple subcases in one extractor
builder = ZernikeObjectiveBuilder(
op2_finder=lambda: model_dir / "ASSY_M1-solution_1.op2"
)
# Add relative objectives (target vs reference)
builder.add_relative_objective(
"40", "20", # 40° vs 20° reference
metric="relative_filtered_rms_nm",
weight=5.0
)
builder.add_relative_objective(
"60", "20", # 60° vs 20° reference
metric="relative_filtered_rms_nm",
weight=5.0
)
# Add absolute objective for polishing orientation
builder.add_subcase_objective(
"90", # Zenith (polishing orientation)
metric="rms_filter_j1to3", # Only remove piston, tip, tilt
weight=1.0
)
# Evaluate all at once (efficient - parses OP2 only once)
results = builder.evaluate_all()
# Returns: {'rel_40_vs_20': 4.2, 'rel_60_vs_20': 8.7, 'rms_90': 15.3}
```
**When to Use**:
- Multi-objective telescope optimization
- Multiple elevation angles to optimize
- Weighted combination of absolute and relative WFE
---
## Zernike Modes Reference
| Noll Index | Name | Physical Meaning | Correctability |
|------------|------|------------------|----------------|
| J1 | Piston | Constant offset | Easily corrected |
| J2 | Tip | X-tilt | Easily corrected |
| J3 | Tilt | Y-tilt | Easily corrected |
| J4 | Defocus | Power error | Easily corrected |
| J5 | Astigmatism (0°) | Cylindrical error | Correctable |
| J6 | Astigmatism (45°) | Cylindrical error | Correctable |
| J7 | Coma (x) | Off-axis aberration | Harder to correct |
| J8 | Coma (y) | Off-axis aberration | Harder to correct |
| J9-J10 | Trefoil | Triangular error | Hard to correct |
| J11+ | Higher order | Complex aberrations | Very hard to correct |
**Filtering Convention**:
- `filtered_rms`: Removes J1-J4 (piston, tip, tilt, defocus) - standard
- `rms_filter_j1to3`: Removes only J1-J3 (keeps defocus) - for focus-sensitive applications
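Assuming RMS-normalized Noll coefficients (each coefficient is that mode's RMS contribution), the filtered RMS is the quadrature sum of the surviving modes; a sketch:

```python
import math


def filtered_rms(coefficients, remove_up_to=4):
    """RMS of an RMS-normalized Noll coefficient list with modes J1..Jk removed.

    coefficients[0] is J1. remove_up_to=4 drops piston, tip, tilt, defocus
    (the standard `filtered_rms`); remove_up_to=3 keeps defocus.
    """
    kept = coefficients[remove_up_to:]
    return math.sqrt(sum(c * c for c in kept))
```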
---
## Common Zernike Optimization Patterns
### Pattern 1: Minimize Relative WFE Across Elevations
```python
# Objective: Minimize max relative WFE across all elevation angles
objectives = [
{"name": "rel_40_vs_20", "goal": "minimize"},
{"name": "rel_60_vs_20", "goal": "minimize"},
]
# Use weighted sum or multi-objective
def objective(trial):
results = builder.evaluate_all()
return (results['rel_40_vs_20'], results['rel_60_vs_20'])
```
### Pattern 2: Single Elevation + Mass
```python
# Objective: Minimize WFE at 45° while minimizing mass
objectives = [
{"name": "wfe_45", "goal": "minimize"}, # Wavefront error
{"name": "mass", "goal": "minimize"}, # Mirror mass
]
```
### Pattern 3: Weighted Multi-Angle
```python
# Weighted combination of multiple angles
def combined_wfe(trial):
results = builder.evaluate_all()
weighted_wfe = (
5.0 * results['rel_40_vs_20'] +
5.0 * results['rel_60_vs_20'] +
1.0 * results['rms_90']
)
return weighted_wfe
```
---
## Telescope Mirror Study Configuration
```json
{
"study_name": "m1_mirror_optimization",
"description": "Minimize wavefront error across elevation angles",
"objectives": [
{
"name": "wfe_40_vs_20",
"goal": "minimize",
"unit": "nm",
"extraction": {
"action": "extract_zernike_relative_rms",
"params": {
"target_subcase": "40",
"reference_subcase": "20"
}
}
}
],
"simulation": {
"analysis_types": ["static"],
"subcases": ["20", "40", "60", "90"],
"solution_name": null
}
}
```
---
## Performance Considerations
1. **Parse OP2 Once**: Use `ZernikeObjectiveBuilder` to parse the OP2 file only once per trial
2. **Subcase Labels**: Match exact subcase labels from NX simulation
3. **Node Selection**: Zernike extraction uses surface nodes only (auto-detected from BDF)
4. **Memory**: Large meshes (>50k nodes) may require chunked processing
---
## Troubleshooting
| Symptom | Cause | Solution |
|---------|-------|----------|
| "Subcase not found" | Wrong subcase label | Check NX .sim for exact labels |
| High J1-J4 coefficients | Rigid body motion not constrained | Check boundary conditions |
| NaN in coefficients | Insufficient nodes for polynomial order | Reduce max Zernike order |
| Inconsistent RMS | Different node sets per subcase | Verify mesh consistency |
| "Billion nm" RMS values | Node merge failed in AFEM | Check `MergeOccurrenceNodes = True` |
| Corrupt OP2 data | All-zero displacements | Validate OP2 before processing |
---
## Assembly FEM (AFEM) Structure for Mirrors
Telescope mirror assemblies in NX typically consist of:
```
ASSY_M1.prt # Master assembly part
ASSY_M1_assyfem1.afm # Assembly FEM container
ASSY_M1_assyfem1_sim1.sim # Simulation file (solve this)
M1_Blank.prt # Mirror blank part
M1_Blank_fem1.fem # Mirror blank mesh
M1_Vertical_Support_Skeleton.prt # Support structure
```
**Key Point**: Expressions in the master `.prt` propagate through the assembly, so the AFEM updates automatically.
---
## Multi-Subcase Gravity Analysis
For telescope mirrors, analyze multiple gravity orientations:
| Subcase | Elevation Angle | Purpose |
|---------|-----------------|---------|
| 1 | 90° (zenith) | Polishing orientation - manufacturing reference |
| 2 | 20° | Low elevation - reference for relative metrics |
| 3 | 40° | Mid-low elevation |
| 4 | 60° | Mid-high elevation |
**CRITICAL**: NX subcase numbers don't always match angle labels! Use explicit mapping:
```json
"subcase_labels": {
"1": "90deg",
"2": "20deg",
"3": "40deg",
"4": "60deg"
}
```
---
## Lessons Learned (M1 Mirror V1-V9)
### 1. TPE Sampler Seed Issue
**Problem**: Resuming study with fixed seed causes duplicate parameters.
**Solution**:
```python
from optuna.samplers import TPESampler

if is_new_study:
sampler = TPESampler(seed=42)
else:
sampler = TPESampler() # No seed for resume
```
### 2. OP2 Data Validation
**Always validate before processing**:
```python
import numpy as np

unique_values = len(np.unique(disp_z))
if unique_values < 10:
raise RuntimeError("CORRUPT OP2: insufficient unique values")
if np.abs(disp_z).max() > 1e6:
raise RuntimeError("CORRUPT OP2: unrealistic displacement")
```
### 3. Reference Subcase Selection
Use the lowest operational elevation (typically 20°) as the reference. Higher elevations show positive relative WFE as gravity effects increase.
### 4. Optical Convention
For mirror surface to wavefront error:
```python
WFE = 2 * surface_displacement # Reflection doubles path difference
wfe_nm = 2.0 * displacement_mm * 1e6 # Convert mm to nm
```
---
## Typical Mirror Design Variables
| Parameter | Description | Typical Range |
|-----------|-------------|---------------|
| `whiffle_min` | Whiffle tree minimum dimension | 35-55 mm |
| `whiffle_outer_to_vertical` | Whiffle arm angle | 68-80 deg |
| `inner_circular_rib_dia` | Rib diameter | 480-620 mm |
| `lateral_inner_angle` | Lateral support angle | 25-28.5 deg |
| `blank_backface_angle` | Mirror blank geometry | 3.5-5.0 deg |
---
## Cross-References
- **Extractor Catalog**: [extractors-catalog module](./extractors-catalog.md)
- **System Protocol**: [SYS_12_EXTRACTOR_LIBRARY](../../docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md)
- **Core Skill**: [study-creation-core](../core/study-creation-core.md)