# Atomizer: Intelligent FEA Optimization & NX Configuration Framework
## Complete Technical Briefing Document for Podcast Generation
**Document Version:** 2.0
**Generated:** December 31, 2025
**Purpose:** NotebookLM/AI Podcast Source Material
---
# PART 1: PROJECT OVERVIEW & PHILOSOPHY
## What is Atomizer?
Atomizer is an **intelligent optimization engine and NX configurator** designed to bridge the gap between state-of-the-art simulation methods and performant, production-ready FEA workflows. It's not about CAD manipulation or mesh generation - those are setup concerns. Atomizer focuses on what matters: **making advanced simulation methods accessible and effective**.
### The Core Problem We Solve
State-of-the-art optimization algorithms exist in academic papers. Performant FEA simulations exist in commercial tools like NX Nastran. But bridging these two worlds requires:
- Deep knowledge of optimization theory (TPE, CMA-ES, Bayesian methods)
- Understanding of simulation physics and solver behavior
- Experience with what works for different problem types
- Infrastructure for running hundreds of automated trials
Most engineers don't have time to become experts in all these domains. **Atomizer is that bridge.**
### The Core Philosophy: "Optimize Smarter, Not Harder"
Traditional structural optimization is painful because:
- Engineers pick algorithms without knowing which is best for their problem
- Every new study starts from scratch - no accumulated knowledge
- Commercial tools offer generic methods, not physics-appropriate ones
- Simulation expertise and optimization expertise rarely coexist
Atomizer solves this by:
1. **Characterizing each study** to understand its optimization landscape
2. **Selecting methods automatically** based on problem characteristics
3. **Learning from every study** what works and what doesn't
4. **Building a knowledge base** of parameter-performance relationships
### What Atomizer Is NOT
- It's not a CAD tool - geometry modeling happens in NX
- It's not a mesh generator - meshing is handled by NX Pre/Post
- It's not replacing the engineer's judgment - it's amplifying it
- It's not a black box - every decision is traceable and explainable
### Target Audience
- **FEA Engineers** who want to run serious optimization campaigns
- **Simulation specialists** tired of manual trial-and-error
- **Research teams** exploring design spaces systematically
- **Anyone** who needs to find optimal designs faster
### Key Differentiators from Commercial Tools
| Feature | OptiStruct/HEEDS | optiSLang | Atomizer |
|---------|------------------|-----------|----------|
| Algorithm selection | Manual | Manual | **Automatic (IMSO)** |
| Learning from history | None | None | **LAC persistent memory** |
| Study characterization | Basic | Basic | **Full landscape analysis** |
| Neural acceleration | Limited | Basic | **GNN + MLP + Gradient** |
| Protocol validation | None | None | **Research → Review → Approve** |
| Documentation source | Static manuals | Static manuals | **MCP-first, live lookups** |
---
# PART 2: STUDY CHARACTERIZATION & PERFORMANCE LEARNING
## The Heart of Atomizer: Understanding What Works
The most valuable thing Atomizer does is **learn what makes studies succeed**. This isn't just recording results - it's building a deep understanding of the relationship between:
- **Study parameters** (geometry type, design variable count, constraint complexity)
- **Optimization methods** (which algorithm, what settings)
- **Performance outcomes** (convergence speed, solution quality, feasibility rate)
### Study Characterization Process
When Atomizer runs an optimization, it doesn't just optimize - it **characterizes**:
```
┌─────────────────────────────────────────────────────────────────┐
│ STUDY CHARACTERIZATION │
├─────────────────────────────────────────────────────────────────┤
│ │
│ PROBLEM FINGERPRINT: │
│ • Geometry type (bracket, beam, mirror, shell, assembly) │
│ • Number of design variables (1-5, 6-10, 11+) │
│ • Objective physics (stress, frequency, displacement, WFE) │
│ • Constraint types (upper/lower bounds, ratios) │
│ • Solver type (SOL 101, 103, 105, 111, 112) │
│ │
│ LANDSCAPE METRICS (computed during characterization phase): │
│ • Smoothness score (0-1): How continuous is the response? │
│ • Multimodality: How many distinct good regions exist? │
│ • Parameter correlations: Which variables matter most? │
│ • Noise level: How much solver variation exists? │
│ • Dimensionality impact: How does space grow with variables? │
│ │
│ PERFORMANCE OUTCOME: │
│ • Trials to convergence │
│ • Best objective achieved │
│ • Constraint satisfaction rate │
│ • Algorithm that won (if IMSO used) │
│ │
└─────────────────────────────────────────────────────────────────┘
```
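As a concrete sketch, the problem fingerprint above could be captured in a small record like the following; the class and field names are illustrative assumptions, not Atomizer's actual schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StudyFingerprint:
    """Hypothetical fingerprint record mirroring the diagram above."""
    geometry_type: str            # e.g. "bracket", "beam", "mirror"
    n_design_vars: int
    objective_physics: List[str]  # e.g. ["stress", "mass"]
    constraint_types: List[str]   # e.g. ["upper_bound"]
    solver: str                   # e.g. "SOL 101"

    def var_bucket(self) -> str:
        """Bucket the variable count the way the briefing groups it."""
        if self.n_design_vars <= 5:
            return "1-5"
        if self.n_design_vars <= 10:
            return "6-10"
        return "11+"

fp = StudyFingerprint("bracket", 5, ["stress", "mass"], ["upper_bound"], "SOL 101")
print(fp.var_bucket())  # → 1-5
```

A record like this is what would be matched against historical studies when LAC is queried later.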
### Learning What Works: The LAC System
LAC (Learning Atomizer Core) stores the relationship between study characteristics and outcomes:
```
knowledge_base/lac/
├── optimization_memory/ # Performance by geometry type
│ ├── bracket.jsonl # "For brackets with 4-6 vars, TPE converges in ~60 trials"
│ ├── beam.jsonl # "Beam frequency problems are smooth - CMA-ES works well"
│ └── mirror.jsonl # "Zernike objectives need GP-BO for sample efficiency"
├── session_insights/
│ ├── success_pattern.jsonl # What configurations led to fast convergence
│ ├── failure.jsonl # What configurations failed and why
│ └── workaround.jsonl # Fixes for common issues
└── method_performance/
└── algorithm_selection.jsonl # Which algorithm won for which problem type
```
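Because each memory file is plain JSONL, appending and filtering records is straightforward. A minimal sketch, assuming illustrative record keys rather than LAC's real schema:

```python
import json
from pathlib import Path

def record_outcome(path: Path, outcome: dict) -> None:
    """Append one study outcome as a single JSON line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(outcome) + "\n")

def query_outcomes(path: Path, **filters) -> list:
    """Return records whose fields match all given filters."""
    if not path.exists():
        return []
    records = []
    with path.open(encoding="utf-8") as f:
        for line in f:
            if line.strip():
                records.append(json.loads(line))
    return [r for r in records if all(r.get(k) == v for k, v in filters.items())]

mem = Path("bracket.jsonl")
record_outcome(mem, {"n_design_vars": 5, "method": "TPE", "trials_to_convergence": 62})
print(query_outcomes(mem, method="TPE"))
```

The append-only format means knowledge accumulates without schema migrations, and any record is grep-able by hand.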
### Querying Historical Performance
Before starting a new study, Atomizer queries LAC:
```python
# What worked for similar problems?
similar_studies = lac.query_similar_optimizations(
    geometry_type="bracket",
    n_objectives=2,
    n_design_vars=5,
    physics=["stress", "mass"],
)
# Result: "For 2-objective bracket problems with 5 vars,
# NSGA-II with 80 trials typically finds a good Pareto front.
# GP-BO is overkill - the landscape is usually rugged."

# Get the recommended method
recommendation = lac.get_best_method_for(
    geometry_type="bracket",
    n_objectives=2,
    constraint_types=["upper_bound"],
)
# Result: {"method": "NSGA-II", "n_trials": 80, "confidence": 0.87}
```
### Why This Matters
Commercial tools treat every optimization as if it's the first one ever run. **Atomizer treats every optimization as an opportunity to learn.**
After 100 studies:
- Atomizer knows that mirror problems need sample-efficient methods
- Atomizer knows that bracket stress problems are often rugged
- Atomizer knows that frequency optimization is usually smooth
- Atomizer knows which constraint formulations cause infeasibility
This isn't AI magic - it's **structured knowledge accumulation** that makes every future study faster and more reliable.
---
# PART 3: THE PROTOCOL OPERATING SYSTEM
## Structured, Traceable Operations
Atomizer operates through a 4-layer protocol system that ensures every action is:
- **Documented** - what should happen is written down
- **Traceable** - what actually happened is logged
- **Validated** - outcomes are checked against expectations
- **Improvable** - protocols can be updated based on experience
```
┌─────────────────────────────────────────────────────────────────┐
│ Layer 0: BOOTSTRAP │
│ Purpose: Task routing, session initialization │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Layer 1: OPERATIONS (OP_01 - OP_07) │
│ Create Study | Run Optimization | Monitor | Analyze | Export │
│ Troubleshoot | Disk Optimization │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Layer 2: SYSTEM (SYS_10 - SYS_17) │
│ IMSO | Multi-objective | Extractors | Dashboard │
│ Neural Acceleration | Method Selector | Study Insights │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Layer 3: EXTENSIONS (EXT_01 - EXT_04) │
│ Create Extractor | Create Hook | Create Protocol | Create Skill │
└─────────────────────────────────────────────────────────────────┘
```
## Protocol Evolution: Research → Review → Approve
**What happens when no protocol exists for your use case?**
This is where Atomizer's extensibility shines. The system has a structured workflow for adding new capabilities:
### The Protocol Evolution Workflow
```
┌─────────────────────────────────────────────────────────────────┐
│ STEP 1: IDENTIFY GAP │
│ ───────────────────────────────────────────────────────────── │
│ User: "I need to extract buckling load factors" │
│ Atomizer: "No existing extractor for buckling. Initiating │
│ new capability development." │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 2: RESEARCH PHASE │
│ ───────────────────────────────────────────────────────────── │
│ 1. Query MCP Siemens docs: "How does NX store buckling?" │
│ 2. Check pyNastran docs: "OP2 buckling result format" │
│ 3. Search NX Open TSE: Example journals for SOL 105 │
│ 4. Draft extractor implementation │
│ 5. Create test cases │
│ │
│ Output: Draft protocol + implementation + tests │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 3: PUSH TO APPROVAL BUCKET │
│ ───────────────────────────────────────────────────────────── │
│ Location: docs/protocols/pending/ │
│ │
│ Contents: │
│ • Protocol document (EXT_XX_BUCKLING_EXTRACTOR.md) │
│ • Implementation (extract_buckling.py) │
│ • Test suite (test_buckling_extractor.py) │
│ • Validation evidence (example outputs) │
│ │
│ Status: PENDING_REVIEW │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 4: PRIVILEGED REVIEW │
│ ───────────────────────────────────────────────────────────── │
│ Reviewer with "power_user" or "admin" privilege: │
│ │
│ Checks: │
│ ☐ Implementation follows extractor patterns │
│ ☐ Tests pass on multiple SOL 105 models │
│ ☐ Documentation is complete │
│ ☐ Error handling is robust │
│ ☐ No security concerns │
│ │
│ Decision: APPROVE / REQUEST_CHANGES / REJECT │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ STEP 5: INTEGRATION │
│ ───────────────────────────────────────────────────────────── │
│ On APPROVE: │
│ • Move to docs/protocols/system/ │
│ • Add to optimization_engine/extractors/__init__.py │
│ • Update SYS_12_EXTRACTOR_LIBRARY.md │
│ • Update .claude/skills/01_CHEATSHEET.md │
│ • Commit with: "feat: Add E23 buckling extractor" │
│ │
│ Status: ACTIVE - Now part of Atomizer ecosystem │
└─────────────────────────────────────────────────────────────────┘
```
### Privilege Levels
| Level | Can Do | Cannot Do |
|-------|--------|-----------|
| **user** | Use all OP_* protocols | Create/modify protocols |
| **power_user** | Use OP_* + EXT_01, EXT_02 | Approve new system protocols |
| **admin** | Everything | - |
This ensures:
- Anyone can propose new capabilities
- Only validated code enters the ecosystem
- Quality standards are maintained
- The system grows safely over time
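A privilege gate matching the table above can be sketched as a prefix check; the mapping and function below are illustrative, not Atomizer's implementation:

```python
# Illustrative privilege mapping; protocol-ID prefixes follow the
# OP_/SYS_/EXT_ naming used in the layer diagram above.
ALLOWED_PREFIXES = {
    "user": ("OP_",),
    "power_user": ("OP_", "EXT_01", "EXT_02"),
    "admin": ("OP_", "SYS_", "EXT_"),
}

def can_run(level: str, protocol_id: str) -> bool:
    """True if this privilege level may invoke the given protocol."""
    return any(protocol_id.startswith(p) for p in ALLOWED_PREFIXES.get(level, ()))

print(can_run("user", "OP_03_MONITOR"))      # → True
print(can_run("user", "EXT_01_CREATE"))      # → False
print(can_run("power_user", "EXT_02_HOOK"))  # → True
```

Keeping the gate declarative makes the privilege table auditable in one place rather than scattered through the codebase.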
---
# PART 4: MCP-FIRST DEVELOPMENT APPROACH
## When Functions Don't Exist: How Atomizer Develops New Capabilities
When Atomizer encounters a task without an existing extractor or protocol, it follows a **documentation-first development approach** using MCP (Model Context Protocol) tools.
### The Documentation Hierarchy
```
PRIMARY SOURCE (Always check first):
┌─────────────────────────────────────────────────────────────────┐
│ MCP Siemens Documentation Tools │
│ ───────────────────────────────────────────────────────────── │
│ • mcp__siemens-docs__nxopen_get_class │
│ → Get official NX Open class documentation │
│ → Example: Query "CaeResultType" for result access patterns │
│ │
│ • mcp__siemens-docs__nxopen_get_index │
│ → Browse class/function indexes │
│ → Find related classes for a capability │
│ │
│ • mcp__siemens-docs__siemens_docs_list │
│ → List all available documentation resources │
│ │
│ WHY PRIMARY: This is the official, up-to-date source. │
│ API calls verified against actual NX Open signatures. │
└─────────────────────────────────────────────────────────────────┘
SECONDARY SOURCES (Use when MCP doesn't have the answer):
┌─────────────────────────────────────────────────────────────────┐
│ pyNastran Documentation │
│ ───────────────────────────────────────────────────────────── │
│ For OP2/F06 result parsing patterns │
│ Example: How to access buckling eigenvalues from OP2 │
│ Location: pyNastran GitHub, readthedocs │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ NX Open TSE (Technical Support Examples) │
│ ───────────────────────────────────────────────────────────── │
│ Community examples and Siemens support articles │
│ Example: Working journal for exporting specific result types │
│ Location: Siemens Community, support articles │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Existing Atomizer Extractors │
│ ───────────────────────────────────────────────────────────── │
│ Pattern reference from similar implementations │
│ Example: How extract_frequency.py handles modal results │
│ Location: optimization_engine/extractors/ │
└─────────────────────────────────────────────────────────────────┘
```
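The hierarchy reduces to a fixed lookup order: try the primary source first, then each secondary source in turn. A schematic sketch, where the resolver function and source keys are illustrative:

```python
# Source order mirrors the documentation hierarchy above.
SOURCES = [
    "mcp_siemens_docs",     # PRIMARY: official NX Open documentation
    "pynastran_docs",       # OP2/F06 parsing patterns
    "nx_open_tse",          # community/support examples
    "existing_extractors",  # local pattern reference
]

def resolve(query: str, lookup: dict) -> tuple:
    """Return (source, answer) from the first source that can answer."""
    for source in SOURCES:
        answer = lookup.get(source, {}).get(query)
        if answer is not None:
            return source, answer
    return None, None

# The primary source wins even when a secondary source also has an answer.
lookup = {
    "mcp_siemens_docs": {"buckling result access": "CaeResultType docs"},
    "pynastran_docs": {"buckling result access": "OP2 eigenvalue tables"},
}
print(resolve("buckling result access", lookup))
```

The ordering encodes the policy stated above: secondary sources only fill gaps, never override the primary.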
### Example: Developing a New Extractor
User request: "I need to extract heat flux from thermal analysis results"
**Step 1: Query MCP First**
```python
# Query NX Open documentation
mcp__siemens-docs__nxopen_get_class("CaeResultComponent")
# Returns: Official documentation for result component access
mcp__siemens-docs__nxopen_get_class("HeatFluxComponent")
# Returns: Specific heat flux result access patterns
```
**Step 2: Check pyNastran for OP2 Parsing**
```python
# How does pyNastran represent thermal results?
# Check: model.thermalFlux or model.heatFlux structures
```
**Step 3: Reference Existing Extractors**
```python
# Look at extract_temperature.py for thermal result patterns
# Adapt the OP2 access pattern for heat flux
```
**Step 4: Implement with Verified API Calls**
```python
def extract_heat_flux(op2_file: Path, subcase: int = 1) -> Dict:
    """
    Extract heat flux from SOL 153/159 thermal results.

    API Reference: NX Open CaeResultComponent (via MCP)
    OP2 Format: pyNastran thermal flux structures
    """
    # Implementation using verified patterns
```
### Why This Matters
- **No guessing** - Every API call is verified against documentation
- **Maintainable** - When NX updates, we check official docs first
- **Traceable** - Each extractor documents its sources
- **Reliable** - Secondary sources only fill gaps, never override primary
---
# PART 5: SIMULATION-FOCUSED OPTIMIZATION
## Bridging State-of-the-Art Methods and Performant Simulations
Atomizer's core mission is making advanced optimization methods work seamlessly with NX Nastran simulations. The CAD and mesh are setup concerns - **our focus is on the simulation loop.**
### The Simulation Optimization Loop
```
┌─────────────────────────────────────────────────────────────────┐
│ SIMULATION-CENTRIC WORKFLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ │
│ │ OPTIMIZER │ ← State-of-the-art algorithms │
│ │ (Atomizer) │ TPE, CMA-ES, GP-BO, NSGA-II │
│ └──────┬──────┘ + Neural surrogates │
│ │ │
│ ▼ Design Variables │
│ ┌─────────────┐ │
│ │ NX CONFIG │ ← Expression updates via .exp files │
│ │ UPDATER │ Automated, no GUI interaction │
│ └──────┬──────┘ │
│ │ │
│ ▼ Updated Model │
│ ┌─────────────┐ │
│ │ NX NASTRAN │ ← SOL 101, 103, 105, 111, 112 │
│ │ SOLVER │ Batch mode execution │
│ └──────┬──────┘ │
│ │ │
│ ▼ Results (OP2, F06) │
│ ┌─────────────┐ │
│ │ EXTRACTORS │ ← 24 physics extractors │
│ │ (pyNastran) │ Stress, displacement, frequency, etc. │
│ └──────┬──────┘ │
│ │ │
│ ▼ Objectives & Constraints │
│ ┌─────────────┐ │
│ │ OPTIMIZER │ ← Learning: What parameters → What results │
│ │ (Atomizer) │ Building surrogate models │
│ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
### Supported Nastran Solution Types
| SOL | Type | What Atomizer Optimizes |
|-----|------|-------------------------|
| 101 | Linear Static | Stress, displacement, stiffness |
| 103 | Normal Modes | Frequencies, mode shapes, modal mass |
| 105 | Buckling | Critical load factors, stability margins |
| 111 | Frequency Response | Transfer functions, resonance peaks |
| 112 | Transient Response | Peak dynamic response, settling time |
### NX Expression Management
Atomizer updates NX models through the expression system - no manual CAD editing:
Expression file format (`.exp`):
```
[MilliMeter]rib_thickness=12.5
[MilliMeter]flange_width=25.0
[Degrees]support_angle=45.0
```
Atomizer generates this file, NX imports it, and the geometry updates automatically.
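The generation side can be sketched in a few lines; the unit map and helper name below are illustrative assumptions, not Atomizer's API:

```python
from pathlib import Path

# Hypothetical unit map for this example's design variables.
UNITS = {
    "rib_thickness": "MilliMeter",
    "flange_width": "MilliMeter",
    "support_angle": "Degrees",
}

def write_exp_file(path: Path, design_vars: dict) -> None:
    """Write one [Unit]name=value line per design variable."""
    lines = [f"[{UNITS[name]}]{name}={value}" for name, value in design_vars.items()]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")

write_exp_file(
    Path("trial_001.exp"),
    {"rib_thickness": 12.5, "flange_width": 25.0, "support_angle": 45.0},
)
print(Path("trial_001.exp").read_text())
```

Each trial gets its own `.exp` file, so failed runs are reproducible from the file alone.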
This keeps the optimization loop fast:
- No interactive sessions
- No license seat occupation during solver runs
- Batch processing of hundreds of trials
---
# PART 6: OPTIMIZATION ALGORITHMS
## IMSO: Intelligent Multi-Strategy Optimization
Instead of asking "which algorithm should I use?", IMSO **characterizes your problem and selects automatically**.
### The Two-Phase Process
**Phase 1: Characterization (10-30 trials)**
- Unbiased sampling (Random or Sobol)
- Compute landscape metrics every 5 trials
- Stop when confidence reaches 85%
**Phase 2: Optimized Search**
- Algorithm selected based on landscape type:
- Smooth unimodal → CMA-ES or GP-BO
- Smooth multimodal → GP-BO
- Rugged → TPE
- Noisy → TPE (most robust)
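The Phase 2 mapping above can be sketched as a simple decision rule; the thresholds and function name are illustrative assumptions, not IMSO's actual logic:

```python
# Illustrative selection rule: metrics in, algorithm family out.
def select_algorithm(smoothness: float, multimodal: bool, noisy: bool) -> str:
    """Map landscape metrics to the algorithm families named above."""
    if noisy:
        return "TPE"      # most robust to solver noise
    if smoothness < 0.5:
        return "TPE"      # rugged landscape
    if multimodal:
        return "GP-BO"    # smooth multimodal
    return "CMA-ES"       # smooth unimodal (GP-BO also viable)

print(select_algorithm(0.9, False, False))  # → CMA-ES
print(select_algorithm(0.3, True, False))   # → TPE
```

In practice the characterization metrics carry confidence estimates, so the real decision is softer than this hard threshold sketch.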
### Performance Comparison
| Problem Type | Random Search | TPE Alone | IMSO |
|--------------|--------------|-----------|------|
| Smooth unimodal | 150 trials | 80 trials | **45 trials** |
| Rugged multimodal | 200 trials | 95 trials | **70 trials** |
| Mixed landscape | 180 trials | 100 trials | **56 trials** |
**Average improvement: 40% fewer trials to convergence**
## Multi-Objective: NSGA-II
For problems with competing objectives (mass vs. stiffness, cost vs. performance):
- Full Pareto front discovery
- Hypervolume tracking for solution quality
- Interactive Pareto visualization in dashboard
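Hypervolume tracking for two minimized objectives reduces to a sweep over the sorted front; a simplified 2-D sketch, not Atomizer's implementation:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D Pareto front (minimization) up to ref.

    Assumes `front` is a non-dominated set: sorted ascending in the
    first objective, it is descending in the second.
    """
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # one horizontal strip
        prev_f2 = f2
    return hv

# e.g. mass vs. compliance trade-off, reference point (5, 5)
front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # → 12.0
```

A growing hypervolume across generations is the signal that the Pareto front is still improving; a plateau suggests convergence.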
---
# PART 7: NEURAL NETWORK ACCELERATION
## When FEA is Too Slow
- Single FEA evaluation: 10-30 minutes
- Exploring 1000 designs at that rate: 7-20 days
**Neural surrogates change this equation entirely.**
### Performance Comparison
| Metric | FEA | Neural Network | Speedup |
|--------|-----|----------------|---------|
| Time per evaluation | 20 min | **4.5 ms** | **266,000x** |
| Trials per day | 72 | **19 million** | **263,000x** |
| Design exploration | Limited | **Comprehensive** | - |
### Two Approaches
**1. MLP Surrogate (Simple, Fast to Train)**
- 4-layer network, ~34K parameters
- Train on 50-100 FEA samples
- 1-5% error for most objectives
- Best for: Quick studies, smooth objectives
**2. Zernike GNN (Physics-Aware, High Accuracy)**
- Graph neural network with 1.2M parameters
- Predicts full displacement fields
- Differentiable Zernike fitting
- Best for: Mirror optimization, optical surfaces
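To make the MLP-surrogate idea concrete, here is a minimal single-hidden-layer sketch trained on synthetic samples with plain NumPy; the quadratic function stands in for a solver response, and the real surrogate is a 4-layer network, not this toy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "FEA" samples: 80 designs, 3 design variables, one smooth objective.
X = rng.uniform(-1, 1, size=(80, 3))
y = (X ** 2).sum(axis=1, keepdims=True)  # stand-in for a solver response

# One hidden layer of 32 tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)              # forward pass
    pred = H @ W2 + b2
    err = pred - y                        # gradient of 0.5 * MSE
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)      # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(((pred - y) ** 2).mean()))
print(f"training RMSE: {rmse:.3f}")
```

Once trained, a forward pass is a handful of matrix multiplies, which is why surrogate evaluation lands in the millisecond range.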
### Turbo Mode Workflow
```
REPEAT until converged:
1. Run 5,000 neural predictions (~1 second)
2. Select top 5 diverse candidates
3. FEA validate those 5 (~25 minutes)
4. Retrain neural network with new data
5. Check for convergence
```
**Result:** 50 FEA runs explore what would take 1000+ trials traditionally.
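The loop above can be sketched schematically; the surrogate (a nearest-neighbour stand-in), the instant "FEA" stub, and the candidate selection are all placeholders, not Atomizer's implementation:

```python
import random

random.seed(42)

def fea_evaluate(x):
    """Stand-in for a 20-minute Nastran run (1-D toy objective)."""
    return (x - 0.3) ** 2

def surrogate_predict(x, data):
    """Stand-in surrogate: value of the nearest observed design."""
    nearest = min(data, key=lambda d: abs(d[0] - x))
    return nearest[1]

data = [(x, fea_evaluate(x)) for x in (0.0, 0.5, 1.0)]  # seed FEA samples
for cycle in range(5):
    # 1. Cheap predictions over many candidates
    candidates = [random.random() for _ in range(5000)]
    scored = sorted(candidates, key=lambda x: surrogate_predict(x, data))
    # 2.-3. FEA-validate the top few candidates
    for x in scored[:5]:
        data.append((x, fea_evaluate(x)))
    # 4. "Retraining" here is just growing the neighbour set

best_x, best_y = min(data, key=lambda d: d[1])
print(round(best_x, 2), round(best_y, 4))
```

The structure is the point: thousands of cheap predictions per cycle, but only a handful of expensive validations, with every validation improving the surrogate.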
---
# PART 8: THE EXTRACTOR LIBRARY
## 24 Physics Extractors
Every extractor follows the same pattern: verified API calls, robust error handling, documented sources.
| ID | Physics | Function | Output |
|----|---------|----------|--------|
| E1 | Displacement | `extract_displacement()` | mm |
| E2 | Frequency | `extract_frequency()` | Hz |
| E3 | Von Mises Stress | `extract_solid_stress()` | MPa |
| E4-E5 | Mass | BDF or CAD-based | kg |
| E8-E10 | Zernike WFE | Standard, relative, builder | nm |
| E12-E14 | Advanced Stress | Principal, strain energy, SPC | MPa, J, N |
| E15-E17 | Thermal | Temperature, gradient, flux | K, K/mm, W/mm² |
| E18 | Modal Mass | From F06 | kg |
| E19 | Part Introspection | Full part analysis | dict |
| E20-E22 | Zernike OPD | Analytic, comparison, figure | nm |
### The 20-Line Rule
If you're writing more than 20 lines of extraction code in your study, you are probably either:
1. Duplicating existing functionality, or
2. Working around a missing extractor that should be created properly
**Always check the library first. If it doesn't exist, propose a new extractor through the protocol evolution workflow.**
---
# PART 9: DASHBOARD & VISUALIZATION
## Real-Time Monitoring
**React + TypeScript + Plotly.js**
### Features
- **Parallel coordinates:** See all design variables and objectives simultaneously
- **Pareto front:** 2D/3D visualization of multi-objective trade-offs
- **Convergence tracking:** Best-so-far with individual trial scatter
- **WebSocket updates:** Live as optimization runs
### Report Generation
Automatic markdown reports with:
- Study configuration and objectives
- Best result with performance metrics
- Convergence plots (300 DPI, publication-ready)
- Top trials table
- Full history (collapsible)
---
# PART 10: STATISTICS & METRICS
## Codebase
| Component | Lines of Code |
|-----------|---------------|
| Optimization Engine (Python) | **66,204** |
| Dashboard (TypeScript) | **54,871** |
| Documentation | 999 files |
| **Total** | **~120,000+** |
## Performance
| Metric | Value |
|--------|-------|
| Neural inference | **4.5 ms** per trial |
| Turbo throughput | **5,000-7,000 trials/sec** |
| GNN R² accuracy | **0.95-0.99** |
| IMSO improvement | **40% fewer trials** |
## Coverage
- **24 physics extractors**
- **6+ optimization algorithms**
- **7 Nastran solution types** (SOL 101, 103, 105, 106, 111, 112, 153/159)
- **3 neural surrogate types** (MLP, GNN, Ensemble)
---
# PART 11: KEY TAKEAWAYS
## What Makes Atomizer Different
1. **Study characterization** - Learn what works for each problem type
2. **Persistent memory (LAC)** - Never start from scratch
3. **Protocol evolution** - Safe, validated extensibility
4. **MCP-first development** - Documentation-driven, not guessing
5. **Simulation focus** - Not CAD, not mesh - optimization of simulation performance
## Sound Bites for Podcast
- "Atomizer learns what works. After 100 studies, it knows that mirror problems need GP-BO, not TPE."
- "When we don't have an extractor, we query official NX documentation first - no guessing."
- "New capabilities go through research, review, and approval - just like engineering change orders."
- "4.5 milliseconds per prediction means we can explore 50,000 designs before lunch."
- "Every study makes the system smarter. That's not marketing - that's LAC."
## The Core Message
Atomizer is an **intelligent optimization platform** that:
- **Bridges** state-of-the-art algorithms and production FEA workflows
- **Learns** what works for different problem types
- **Grows** through structured protocol evolution
- **Accelerates** design exploration with neural surrogates
- **Documents** every decision for traceability
This isn't just automation - it's **accumulated engineering intelligence**.
---
*Atomizer: Where simulation expertise meets optimization science.*
---
**Document Statistics:**
- Sections: 11
- Focus: Simulation optimization (not CAD/mesh)
- Key additions: Study characterization, protocol evolution, MCP-first development
- Positioning: Optimizer & NX configurator, not "LLM-first"
**Prepared for NotebookLM/AI Podcast Generation**