Atomizer: Intelligent FEA Optimization & NX Configuration Framework

Complete Technical Briefing Document for Podcast Generation

Document Version: 3.0
Generated: January 8, 2026
Purpose: NotebookLM/AI Podcast Source Material


PART 1: PROJECT OVERVIEW & PHILOSOPHY

What is Atomizer?

Atomizer is an intelligent optimization engine and NX configurator designed to bridge the gap between state-of-the-art simulation methods and performant, production-ready FEA workflows. It's not about CAD manipulation or mesh generation - those are setup concerns. Atomizer focuses on what matters: making advanced simulation methods accessible and effective.

The Core Problem We Solve

State-of-the-art optimization algorithms exist in academic papers. Performant FEA simulations exist in commercial tools like NX Nastran. But bridging these two worlds requires:

  • Deep knowledge of optimization theory (TPE, CMA-ES, Bayesian methods)
  • Understanding of simulation physics and solver behavior
  • Experience with what works for different problem types
  • Infrastructure for running hundreds of automated trials

Most engineers don't have time to become experts in all these domains. Atomizer is that bridge.

The Core Philosophy: "Optimize Smarter, Not Harder"

Traditional structural optimization is painful because:

  • Engineers pick algorithms without knowing which is best for their problem
  • Every new study starts from scratch - no accumulated knowledge
  • Commercial tools offer generic methods, not physics-appropriate ones
  • Simulation expertise and optimization expertise rarely coexist

Atomizer solves this by:

  1. Characterizing each study to understand its optimization landscape
  2. Selecting methods automatically based on problem characteristics
  3. Learning from every study what works and what doesn't
  4. Building a knowledge base of parameter-performance relationships
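In code, this loop can be sketched with a toy knowledge base (all names here are hypothetical illustrations, not Atomizer's actual API):

```python
def characterize(study):
    """Minimal fingerprint: just enough to look up past experience."""
    return (study["geometry"], len(study["design_vars"]))

class KnowledgeBase:
    def __init__(self):
        self.memory = {}                      # fingerprint -> method that worked

    def recommend(self, fingerprint):
        return self.memory.get(fingerprint)   # None on a never-seen problem

    def record(self, fingerprint, method):
        self.memory[fingerprint] = method

kb = KnowledgeBase()
study = {"geometry": "bracket", "design_vars": ["thickness", "rib_height", "width"]}
fp = characterize(study)
method = kb.recommend(fp) or "TPE"            # no history yet -> sensible default
kb.record(fp, method)
assert kb.recommend(fp) == "TPE"              # a similar study now starts smarter
```

The point is the shape of the loop: characterize, consult history, optimize, record.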

What Atomizer Is NOT

  • It's not a CAD tool - geometry modeling happens in NX
  • It's not a mesh generator - meshing is handled by NX Pre/Post
  • It's not replacing the engineer's judgment - it's amplifying it
  • It's not a black box - every decision is traceable and explainable

Target Audience

  • FEA Engineers who want to run serious optimization campaigns
  • Simulation specialists tired of manual trial-and-error
  • Research teams exploring design spaces systematically
  • Anyone who needs to find optimal designs faster

Key Differentiators from Commercial Tools

| Feature | OptiStruct/HEEDS | optiSLang | Atomizer |
|---|---|---|---|
| Algorithm selection | Manual | Manual | Automatic (IMSO) |
| Learning from history | None | None | LAC persistent memory |
| Study characterization | Basic | Basic | Full landscape analysis |
| Neural acceleration | Limited | Basic | GNN + MLP + Gradient |
| Protocol validation | None | None | Research → Review → Approve |
| Documentation source | Static manuals | Static manuals | MCP-first, live lookups |
| User interface | GUI wizards | GUI wizards | Conversational + MCP tools |
| Extensibility | Closed | Limited API | Power Mode: Full code access |

PART 2: STUDY CHARACTERIZATION & PERFORMANCE LEARNING

The Heart of Atomizer: Understanding What Works

The most valuable thing Atomizer does is learn what makes studies succeed. This isn't just recording results - it's building a deep understanding of the relationship between:

  • Study parameters (geometry type, design variable count, constraint complexity)
  • Optimization methods (which algorithm, what settings)
  • Performance outcomes (convergence speed, solution quality, feasibility rate)

Study Characterization Process

When Atomizer runs an optimization, it doesn't just optimize - it characterizes:

┌─────────────────────────────────────────────────────────────────┐
│  STUDY CHARACTERIZATION                                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  PROBLEM FINGERPRINT:                                           │
│  • Geometry type (bracket, beam, mirror, shell, assembly)       │
│  • Number of design variables (1-5, 6-10, 11+)                  │
│  • Objective physics (stress, frequency, displacement, WFE)     │
│  • Constraint types (upper/lower bounds, ratios)                │
│  • Solver type (SOL 101, 103, 105, 111, 112)                    │
│                                                                  │
│  LANDSCAPE METRICS (computed during characterization phase):    │
│  • Smoothness score (0-1): How continuous is the response?      │
│  • Multimodality: How many distinct good regions exist?         │
│  • Parameter correlations: Which variables matter most?         │
│  • Noise level: How much solver variation exists?               │
│  • Dimensionality impact: How does space grow with variables?   │
│                                                                  │
│  PERFORMANCE OUTCOME:                                           │
│  • Trials to convergence                                        │
│  • Best objective achieved                                      │
│  • Constraint satisfaction rate                                 │
│  • Algorithm that won (if IMSO used)                            │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
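As a concrete illustration of one landscape metric, a toy smoothness score might compare neighbor-to-neighbor objective jumps against the total objective range. This formula is an assumption for illustration, not Atomizer's actual metric:

```python
import math, random

def smoothness_score(xs, ys):
    """Rough 0-1 smoothness: do nearby designs give nearby objectives?
    (Illustrative formula, not Atomizer's actual metric.)"""
    pairs = sorted(zip(xs, ys))                      # 1-D case for clarity
    jumps = [abs(pairs[i + 1][1] - pairs[i][1]) for i in range(len(pairs) - 1)]
    total = (max(ys) - min(ys)) or 1.0               # guard constant responses
    return max(0.0, 1.0 - (sum(jumps) / len(jumps)) / total)

random.seed(0)
xs = [i / 20 for i in range(21)]
smooth = smoothness_score(xs, [math.sin(6 * x) for x in xs])       # gentle curve
noisy = smoothness_score(xs, [random.uniform(-1, 1) for _ in xs])  # pure noise
assert smooth > noisy
```

A smooth response scores near 1, pure noise scores much lower, and that single number already suggests whether a gradient-friendly method is safe.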

Learning What Works: The LAC System

LAC (Learning Atomizer Core) stores the relationship between study characteristics and outcomes:

knowledge_base/lac/
├── optimization_memory/           # Performance by geometry type
│   ├── bracket.jsonl             # "For brackets with 4-6 vars, TPE converges in ~60 trials"
│   ├── beam.jsonl                # "Beam frequency problems are smooth - CMA-ES works well"
│   └── mirror.jsonl              # "Zernike objectives need GP-BO for sample efficiency"
├── session_insights/
│   ├── success_pattern.jsonl     # What configurations led to fast convergence
│   ├── failure.jsonl             # What configurations failed and why
│   └── workaround.jsonl          # Fixes for common issues
└── method_performance/
    └── algorithm_selection.jsonl # Which algorithm won for which problem type
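Each .jsonl file above is append-only, one JSON record per line. A minimal sketch of writing and reading a record (the field names are assumed for illustration; the real LAC schema may differ):

```python
import json, os, tempfile

# Hypothetical record shape; the real LAC schema may differ.
record = {
    "geometry_type": "bracket",
    "n_design_vars": 5,
    "method": "TPE",
    "trials_to_convergence": 62,
    "best_objective": 1.84,
}

path = os.path.join(tempfile.mkdtemp(), "bracket.jsonl")
with open(path, "a") as f:                 # append-only: history is never rewritten
    f.write(json.dumps(record) + "\n")

with open(path) as f:
    rows = [json.loads(line) for line in f]
assert rows[0]["trials_to_convergence"] == 62
```

Append-only JSONL keeps the memory simple to grow and trivial to grep.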

Querying Historical Performance

Before starting a new study, Atomizer queries LAC:

# What worked for similar problems?
similar_studies = lac.query_similar_optimizations(
    geometry_type="bracket",
    n_objectives=2,
    n_design_vars=5,
    physics=["stress", "mass"]
)

# Result: "For 2-objective bracket problems with 5 vars,
#          NSGA-II with 80 trials typically finds a good Pareto front.
#          GP-BO is overkill - the landscape is usually rugged."

# Get the recommended method
recommendation = lac.get_best_method_for(
    geometry_type="bracket",
    n_objectives=2,
    constraint_types=["upper_bound"]
)
# Result: {"method": "NSGA-II", "n_trials": 80, "confidence": 0.87}
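Under the hood, a query like get_best_method_for could be as simple as a filtered vote over past records. A sketch under that assumption (not the actual LAC implementation):

```python
from collections import Counter

def best_method_for(records, geometry_type, n_design_vars, tol=2):
    """Hypothetical sketch of lac.get_best_method_for(): vote among past
    studies with the same geometry and a similar variable count."""
    similar = [r for r in records
               if r["geometry_type"] == geometry_type
               and abs(r["n_design_vars"] - n_design_vars) <= tol]
    if not similar:
        return None
    method, wins = Counter(r["method"] for r in similar).most_common(1)[0]
    return {"method": method, "confidence": wins / len(similar)}

records = [
    {"geometry_type": "bracket", "n_design_vars": 5, "method": "NSGA-II"},
    {"geometry_type": "bracket", "n_design_vars": 6, "method": "NSGA-II"},
    {"geometry_type": "bracket", "n_design_vars": 4, "method": "TPE"},
    {"geometry_type": "mirror",  "n_design_vars": 9, "method": "GP-BO"},
]
rec = best_method_for(records, "bracket", 5)
assert rec["method"] == "NSGA-II"
```

The confidence value is just the vote share among similar studies, which is why it grows as the knowledge base accumulates evidence.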

Why This Matters

Commercial tools treat every optimization as if it's the first one ever run. Atomizer treats every optimization as an opportunity to learn.

After 100 studies:

  • Atomizer knows that mirror problems need sample-efficient methods
  • Atomizer knows that bracket stress problems are often rugged
  • Atomizer knows that frequency optimization is usually smooth
  • Atomizer knows which constraint formulations cause infeasibility

This isn't AI magic - it's structured knowledge accumulation that makes every future study faster and more reliable.


PART 3: THE PROTOCOL OPERATING SYSTEM

Structured, Traceable Operations

Atomizer operates through a 4-layer protocol system that ensures every action is:

  • Documented - what should happen is written down
  • Traceable - what actually happened is logged
  • Validated - outcomes are checked against expectations
  • Improvable - protocols can be updated based on experience
┌─────────────────────────────────────────────────────────────────┐
│ Layer 0: BOOTSTRAP                                               │
│ Purpose: Task routing, session initialization                    │
└─────────────────────────────────────────────────────────────────┘
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 1: OPERATIONS (OP_01 - OP_07)                              │
│ Create Study | Run Optimization | Monitor | Analyze | Export    │
│ Troubleshoot | Disk Optimization                                 │
└─────────────────────────────────────────────────────────────────┘
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 2: SYSTEM (SYS_10 - SYS_17)                               │
│ IMSO | Multi-objective | Extractors | Dashboard                  │
│ Neural Acceleration | Method Selector | Study Insights           │
└─────────────────────────────────────────────────────────────────┘
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 3: EXTENSIONS (EXT_01 - EXT_04)                           │
│ Create Extractor | Create Hook | Create Protocol | Create Skill │
└─────────────────────────────────────────────────────────────────┘

Protocol Evolution: Research → Review → Approve

What happens when no protocol exists for your use case?

This is where Atomizer's extensibility shines. The system has a structured workflow for adding new capabilities:

The Protocol Evolution Workflow

┌─────────────────────────────────────────────────────────────────┐
│  STEP 1: IDENTIFY GAP                                           │
│  ─────────────────────────────────────────────────────────────  │
│  User: "I need to extract buckling load factors"                │
│  Atomizer: "No existing extractor for buckling. Initiating      │
│            new capability development."                          │
└─────────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│  STEP 2: RESEARCH PHASE                                         │
│  ─────────────────────────────────────────────────────────────  │
│  1. Query MCP Siemens docs: "How does NX store buckling?"       │
│  2. Check pyNastran docs: "OP2 buckling result format"          │
│  3. Search NX Open TSE: Example journals for SOL 105            │
│  4. Draft extractor implementation                               │
│  5. Create test cases                                            │
│                                                                  │
│  Output: Draft protocol + implementation + tests                 │
└─────────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│  STEP 3: PUSH TO APPROVAL BUCKET                                │
│  ─────────────────────────────────────────────────────────────  │
│  Location: docs/protocols/pending/                               │
│                                                                  │
│  Contents:                                                       │
│  • Protocol document (EXT_XX_BUCKLING_EXTRACTOR.md)             │
│  • Implementation (extract_buckling.py)                         │
│  • Test suite (test_buckling_extractor.py)                      │
│  • Validation evidence (example outputs)                         │
│                                                                  │
│  Status: PENDING_REVIEW                                          │
└─────────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│  STEP 4: PRIVILEGED REVIEW                                      │
│  ─────────────────────────────────────────────────────────────  │
│  Reviewer with "power_user" or "admin" privilege:               │
│                                                                  │
│  Checks:                                                         │
│  ☐ Implementation follows extractor patterns                    │
│  ☐ Tests pass on multiple SOL 105 models                        │
│  ☐ Documentation is complete                                    │
│  ☐ Error handling is robust                                     │
│  ☐ No security concerns                                         │
│                                                                  │
│  Decision: APPROVE / REQUEST_CHANGES / REJECT                   │
└─────────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│  STEP 5: INTEGRATION                                            │
│  ─────────────────────────────────────────────────────────────  │
│  On APPROVE:                                                     │
│  • Move to docs/protocols/system/                               │
│  • Add to optimization_engine/extractors/__init__.py            │
│  • Update SYS_12_EXTRACTOR_LIBRARY.md                           │
│  • Update .claude/skills/01_CHEATSHEET.md                       │
│  • Commit with: "feat: Add E23 buckling extractor"              │
│                                                                  │
│  Status: ACTIVE - Now part of Atomizer ecosystem                │
└─────────────────────────────────────────────────────────────────┘

Privilege Levels

| Level | Can Do | Cannot Do |
|---|---|---|
| user | Use all OP_* protocols | Create/modify protocols |
| power_user | Use OP_* + EXT_01, EXT_02 | Approve new system protocols |
| admin | Everything | - |
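The gate implied by this table can be sketched as a small lookup (a hypothetical helper, not Atomizer's actual permission code):

```python
# Hypothetical privilege gate mirroring the table above.
GRANTS = {
    "user":       {"OP_*"},
    "power_user": {"OP_*", "EXT_01", "EXT_02"},
    "admin":      {"*"},
}

def can_run(level, protocol):
    grants = GRANTS[level]
    if "*" in grants:
        return True
    prefix = protocol.split("_")[0] + "_*"     # e.g. OP_03 -> OP_*
    return protocol in grants or prefix in grants

assert can_run("user", "OP_03")
assert not can_run("user", "EXT_01")
assert can_run("power_user", "EXT_02")
assert not can_run("power_user", "EXT_03")
assert can_run("admin", "SYS_10")
```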

This ensures:

  • Anyone can propose new capabilities
  • Only validated code enters the ecosystem
  • Quality standards are maintained
  • The system grows safely over time

PART 4: STUDY INTERVIEW MODE - INTELLIGENT STUDY CREATION

The Problem: Configuration Complexity

Creating an optimization study traditionally requires:

  • Understanding optimization_config.json schema
  • Knowing which extractor (E1-E24) maps to which physics
  • Setting appropriate bounds for design variables
  • Choosing the right sampler and trial count
  • Avoiding common anti-patterns (mass optimization without constraints)

Most engineers aren't optimization experts. They know their physics, not Optuna samplers.

The Solution: Guided Interview

Instead of asking users to fill out JSON files, Atomizer now interviews them through natural conversation.

How It Works

┌─────────────────────────────────────────────────────────────────┐
│  STUDY INTERVIEW MODE (DEFAULT for all study creation)          │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  User: "I want to create a study for my bracket"                │
│                                                                  │
│  Atomizer: "I'll help you set up your optimization study.       │
│            Let me ask a few questions..."                        │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │  PHASE 1: INTROSPECTION (automatic)                       │   │
│  │  • Analyze NX model expressions                           │   │
│  │  • Detect materials from simulation                       │   │
│  │  • Identify candidate design variables                    │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │  PHASE 2: PROBLEM DEFINITION                              │   │
│  │  Q: "What are you trying to optimize?"                    │   │
│  │  A: "Minimize mass while keeping stress low"              │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │  PHASE 3: OBJECTIVES (auto-mapped to extractors)         │   │
│  │  • Mass → E4 (BDF mass extractor)                        │   │
│  │  • Stress → E3 (Von Mises stress)                        │   │
│  │  • No manual extractor selection needed!                  │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │  PHASE 4: CONSTRAINTS (material-aware validation)        │   │
│  │  Q: "What's the maximum stress limit?"                   │   │
│  │  A: "200 MPa"                                            │   │
│  │                                                           │   │
│  │  ⚠️ "Your model uses Aluminum 6061-T6 (yield: 276 MPa).  │   │
│  │     200 MPa is close to yield. Consider 184 MPa (SF=1.5)"│   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │  PHASE 5: DESIGN VARIABLES (from introspection)          │   │
│  │  "I found these expressions in your model:                │   │
│  │   • thickness (current: 5mm)                              │   │
│  │   • rib_height (current: 10mm)                            │   │
│  │   Which should we optimize?"                              │   │
│  │                                                           │   │
│  │  → Auto-suggests bounds: 2.5-7.5mm (±50% of current)     │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │  PHASE 6: REVIEW & GENERATE                               │   │
│  │  Shows complete blueprint, asks for confirmation          │   │
│  │  → Generates optimization_config.json                     │   │
│  │  → Generates run_optimization.py                          │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘

Anti-Pattern Detection

The interview includes an Engineering Validator that catches common mistakes:

| Anti-Pattern | Detection | Warning |
|---|---|---|
| mass_no_constraint | Mass objective without stress/displacement limit | "This typically produces paper-thin designs" |
| stress_over_yield | Stress limit > material yield | "Consider safety factor 1.5-2.0" |
| bounds_too_wide | Variable range > 100x | "Wide bounds = slow convergence" |
| too_many_objectives | >3 objectives | "Focus on key goals for tractable optimization" |
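Two of these checks can be sketched as plain config inspection (field names are assumptions for illustration; the real detector reads its 12 patterns from anti_patterns.json):

```python
def detect_anti_patterns(config):
    """Sketch of two Engineering Validator checks (field names are assumptions;
    the real detector reads 12 patterns from anti_patterns.json)."""
    warnings = []
    objectives = {o["name"] for o in config["objectives"]}
    constraints = {c["name"] for c in config.get("constraints", [])}
    if "mass" in objectives and not ({"stress", "displacement"} & constraints):
        warnings.append("mass_no_constraint: typically produces paper-thin designs")
    for var in config["design_vars"]:
        if var["upper"] / var["lower"] > 100:          # range wider than 100x
            warnings.append(f"bounds_too_wide ({var['name']}): slow convergence")
    return warnings

config = {
    "objectives": [{"name": "mass"}],
    "constraints": [],
    "design_vars": [{"name": "thickness", "lower": 0.01, "upper": 50.0}],
}
found = detect_anti_patterns(config)
assert any(w.startswith("mass_no_constraint") for w in found)
assert any(w.startswith("bounds_too_wide") for w in found)
```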

Materials Database

Built-in knowledge of engineering materials:

  • 12 common materials (aluminum, steel, titanium, composites)
  • Fuzzy name matching: "Al 6061", "6061-T6", "aluminum" → all work
  • Safety factors by application (static, fatigue, impact)
  • Yield/ultimate stress validation

Key Benefits

  1. Zero configuration knowledge needed - Just describe what you want
  2. Material-aware validation - Catches stress limits vs. yield
  3. Auto extractor mapping - Goals → E1-E24 automatically
  4. Anti-pattern detection - Warns about common mistakes
  5. State persistence - Resume interrupted interviews
  6. Blueprint validation - Complete config before generation

Trigger Phrases

Any of these start Interview Mode (now the DEFAULT):

  • "Create a study", "new study", "set up study"
  • "Optimize this", "optimize my model"
  • "I want to minimize mass"

To skip Interview Mode (power users only):

  • "Quick setup", "skip interview", "manual config"

Technical Implementation

optimization_engine/interview/
├── study_interview.py       # Main orchestrator (StudyInterviewEngine)
├── question_engine.py       # Conditional logic, dynamic options
├── interview_state.py       # Persistent state, JSON serialization
├── interview_presenter.py   # ClaudePresenter, DashboardPresenter
├── engineering_validator.py # Materials DB, anti-pattern detector
├── study_blueprint.py       # Validated configuration generation
└── schemas/
    ├── interview_questions.json  # 17 questions, 7 phases
    ├── materials_database.json   # 12 materials with properties
    └── anti_patterns.json        # 12 anti-pattern definitions

All 129 tests passing.


PART 5: MCP-FIRST DEVELOPMENT APPROACH

When Functions Don't Exist: How Atomizer Develops New Capabilities

When Atomizer encounters a task without an existing extractor or protocol, it follows a documentation-first development approach using MCP (Model Context Protocol) tools.

The Documentation Hierarchy

PRIMARY SOURCE (Always check first):
┌─────────────────────────────────────────────────────────────────┐
│  MCP Siemens Documentation Tools                                 │
│  ─────────────────────────────────────────────────────────────  │
│  • mcp__siemens-docs__nxopen_get_class                          │
│    → Get official NX Open class documentation                   │
│    → Example: Query "CaeResultType" for result access patterns  │
│                                                                  │
│  • mcp__siemens-docs__nxopen_get_index                          │
│    → Browse class/function indexes                               │
│    → Find related classes for a capability                       │
│                                                                  │
│  • mcp__siemens-docs__siemens_docs_list                         │
│    → List all available documentation resources                  │
│                                                                  │
│  WHY PRIMARY: This is the official, up-to-date source.          │
│  API calls verified against actual NX Open signatures.           │
└─────────────────────────────────────────────────────────────────┘
                            │
                            ▼
SECONDARY SOURCES (Use when MCP doesn't have the answer):
┌─────────────────────────────────────────────────────────────────┐
│  pyNastran Documentation                                         │
│  ─────────────────────────────────────────────────────────────  │
│  For OP2/F06 result parsing patterns                             │
│  Example: How to access buckling eigenvalues from OP2            │
│  Location: pyNastran GitHub, readthedocs                         │
└─────────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│  NX Open TSE (Technical Support Examples)                        │
│  ─────────────────────────────────────────────────────────────  │
│  Community examples and Siemens support articles                 │
│  Example: Working journal for exporting specific result types    │
│  Location: Siemens Community, support articles                   │
└─────────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────────┐
│  Existing Atomizer Extractors                                    │
│  ─────────────────────────────────────────────────────────────  │
│  Pattern reference from similar implementations                  │
│  Example: How extract_frequency.py handles modal results         │
│  Location: optimization_engine/extractors/                       │
└─────────────────────────────────────────────────────────────────┘

Example: Developing a New Extractor

User request: "I need to extract heat flux from thermal analysis results"

Step 1: Query MCP First

# Query NX Open documentation
mcp__siemens-docs__nxopen_get_class("CaeResultComponent")
# Returns: Official documentation for result component access

mcp__siemens-docs__nxopen_get_class("HeatFluxComponent")
# Returns: Specific heat flux result access patterns

Step 2: Check pyNastran for OP2 Parsing

# How does pyNastran represent thermal results?
# Check: model.thermalFlux or model.heatFlux structures

Step 3: Reference Existing Extractors

# Look at extract_temperature.py for thermal result patterns
# Adapt the OP2 access pattern for heat flux

Step 4: Implement with Verified API Calls

from pathlib import Path
from typing import Dict

def extract_heat_flux(op2_file: Path, subcase: int = 1) -> Dict:
    """
    Extract heat flux from SOL 153/159 thermal results.

    API Reference: NX Open CaeResultComponent (via MCP)
    OP2 Format: pyNastran thermal flux structures
    """
    # Implementation using verified patterns
    ...

Why This Matters

  • No guessing - Every API call is verified against documentation
  • Maintainable - When NX updates, we check official docs first
  • Traceable - Each extractor documents its sources
  • Reliable - Secondary sources only fill gaps, never override primary

PART 6: SIMULATION-FOCUSED OPTIMIZATION

Bridging State-of-the-Art Methods and Performant Simulations

Atomizer's core mission is making advanced optimization methods work seamlessly with NX Nastran simulations. The CAD and mesh are setup concerns - our focus is on the simulation loop.

The Simulation Optimization Loop

┌─────────────────────────────────────────────────────────────────┐
│  SIMULATION-CENTRIC WORKFLOW                                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌─────────────┐                                                │
│  │ OPTIMIZER   │ ← State-of-the-art algorithms                  │
│  │ (Atomizer)  │   TPE, CMA-ES, GP-BO, NSGA-II                  │
│  └──────┬──────┘   + Neural surrogates                          │
│         │                                                        │
│         ▼ Design Variables                                       │
│  ┌─────────────┐                                                │
│  │ NX CONFIG   │ ← Expression updates via .exp files           │
│  │ UPDATER     │   Automated, no GUI interaction                │
│  └──────┬──────┘                                                │
│         │                                                        │
│         ▼ Updated Model                                          │
│  ┌─────────────┐                                                │
│  │ NX NASTRAN  │ ← SOL 101, 103, 105, 111, 112                  │
│  │ SOLVER      │   Batch mode execution                          │
│  └──────┬──────┘                                                │
│         │                                                        │
│         ▼ Results (OP2, F06)                                     │
│  ┌─────────────┐                                                │
│  │ EXTRACTORS  │ ← 24 physics extractors                        │
│  │ (pyNastran) │   Stress, displacement, frequency, etc.        │
│  └──────┬──────┘                                                │
│         │                                                        │
│         ▼ Objectives & Constraints                               │
│  ┌─────────────┐                                                │
│  │ OPTIMIZER   │ ← Learning: What parameters → What results     │
│  │ (Atomizer)  │   Building surrogate models                     │
│  └─────────────┘                                                │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘

Supported Nastran Solution Types

| SOL | Type | What Atomizer Optimizes |
|---|---|---|
| 101 | Linear Static | Stress, displacement, stiffness |
| 103 | Normal Modes | Frequencies, mode shapes, modal mass |
| 105 | Buckling | Critical load factors, stability margins |
| 111 | Frequency Response | Transfer functions, resonance peaks |
| 112 | Transient Response | Peak dynamic response, settling time |

NX Expression Management

Atomizer updates NX models through the expression system - no manual CAD editing:

# Expression file format (.exp)
[MilliMeter]rib_thickness=12.5
[MilliMeter]flange_width=25.0
[Degrees]support_angle=45.0

# Atomizer generates this, NX imports it, geometry updates automatically

This keeps the optimization loop fast:

  • No interactive sessions
  • No license seat occupation during solver runs
  • Batch processing of hundreds of trials
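Generating such a file from a trial's design variables is a few lines (write_exp_file is a hypothetical helper for illustration, not Atomizer's API):

```python
import os, tempfile

def write_exp_file(path, design_vars):
    """Write an NX expression (.exp) file in the format shown above.
    design_vars: {name: (unit, value)}. Hypothetical helper, not Atomizer's API."""
    lines = [f"[{unit}]{name}={value}"
             for name, (unit, value) in design_vars.items()]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

path = os.path.join(tempfile.mkdtemp(), "trial_042.exp")
lines = write_exp_file(path, {
    "rib_thickness": ("MilliMeter", 12.5),
    "flange_width": ("MilliMeter", 25.0),
    "support_angle": ("Degrees", 45.0),
})
assert lines[0] == "[MilliMeter]rib_thickness=12.5"
```

Each trial simply writes a fresh .exp file, and NX batch import does the rest.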

PART 7: OPTIMIZATION ALGORITHMS

IMSO: Intelligent Multi-Strategy Optimization

Instead of asking "which algorithm should I use?", IMSO characterizes your problem and selects automatically.

The Two-Phase Process

Phase 1: Characterization (10-30 trials)

  • Unbiased sampling (Random or Sobol)
  • Compute landscape metrics every 5 trials
  • Stop when confidence reaches 85%

Phase 2: Optimized Search

  • Algorithm selected based on landscape type:
    • Smooth unimodal → CMA-ES or GP-BO
    • Smooth multimodal → GP-BO
    • Rugged → TPE
    • Noisy → TPE (most robust)

Performance Comparison

| Problem Type | Random Search | TPE Alone | IMSO |
|---|---|---|---|
| Smooth unimodal | 150 trials | 80 trials | 45 trials |
| Rugged multimodal | 200 trials | 95 trials | 70 trials |
| Mixed landscape | 180 trials | 100 trials | 56 trials |

Average improvement: 40% fewer trials to convergence

Multi-Objective: NSGA-II

For problems with competing objectives (mass vs. stiffness, cost vs. performance):

  • Full Pareto front discovery
  • Hypervolume tracking for solution quality
  • Interactive Pareto visualization in dashboard

PART 8: NEURAL NETWORK ACCELERATION

When FEA is Too Slow

Single FEA evaluation: 10-30 minutes
Exploring 1000 designs: 7-20 days

Neural surrogates change this equation entirely.

Performance Comparison

| Metric | FEA | Neural Network | Speedup |
|---|---|---|---|
| Time per evaluation | 20 min | 4.5 ms | 266,000x |
| Trials per day | 72 | 19 million | 263,000x |
| Design exploration | Limited | Comprehensive | - |

Two Approaches

1. MLP Surrogate (Simple, Fast to Train)

  • 4-layer network, ~34K parameters
  • Train on 50-100 FEA samples
  • 1-5% error for most objectives
  • Best for: Quick studies, smooth objectives
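For scale, the parameter count of a fully connected network is easy to compute by hand; the layer sizes below are illustrative, not Atomizer's exact architecture:

```python
def mlp_param_count(layer_sizes):
    """Weights + biases of a fully connected net: (n_in + 1) * n_out per layer."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# e.g. 6 design variables -> two hidden layers -> 1 objective
assert mlp_param_count([6, 128, 128, 1]) == 17537
```

Networks this small train in seconds, which is why retraining inside the optimization loop is cheap.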

2. Zernike GNN (Physics-Aware, High Accuracy)

  • Graph neural network with 1.2M parameters
  • Predicts full displacement fields
  • Differentiable Zernike fitting
  • Best for: Mirror optimization, optical surfaces

Turbo Mode Workflow

REPEAT until converged:
  1. Run 5,000 neural predictions (~1 second)
  2. Select top 5 diverse candidates
  3. FEA validate those 5 (~25 minutes)
  4. Retrain neural network with new data
  5. Check for convergence

Result: 50 FEA runs explore what would take 1000+ trials traditionally.
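One iteration of this loop can be sketched with stand-in surrogate and FEA functions (purely illustrative; the real loop also retrains the neural network on each batch of validated designs):

```python
import random

def turbo_step(surrogate, fea, bounds, n_pred=5000, k=5):
    """One Turbo Mode iteration (illustrative sketch)."""
    # 1. Cheap screening: thousands of surrogate predictions in ~a second
    candidates = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
                  for _ in range(n_pred)]
    candidates.sort(key=surrogate)
    # 2. Keep the top k, enforcing diversity so FEA runs aren't near-duplicates
    selected = []
    for c in candidates:
        if all(max(abs(a - b) for a, b in zip(c, s)) > 0.05 for s in selected):
            selected.append(c)
        if len(selected) == k:
            break
    # 3. Expensive validation: FEA on just those k designs
    return [(c, fea(c)) for c in selected]

random.seed(1)
truth = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2   # stand-in for FEA
cheap = lambda x: truth(x) + 0.01 * random.random()       # imperfect surrogate
validated = turbo_step(cheap, truth, bounds=[(0, 1), (0, 1)])
best = min(v for _, v in validated)
assert len(validated) == 5 and best < 0.05
```

The diversity filter is the key design choice: without it, all k FEA runs would cluster at the surrogate's single favorite point.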


PART 9: SELF-AWARE TURBO (SAT) - VALIDATED BREAKTHROUGH

The Problem: Surrogates That Don't Know When They're Wrong

Traditional neural surrogates have a fatal flaw: they're confidently wrong in unexplored regions.

In V5, we trained an MLP on 129 FEA samples and ran L-BFGS gradient descent on the surrogate. It found a "minimum" at WS=280. We ran FEA. The actual result: WS=376 - a 30%+ error.

The surrogate had descended to a region with no training data and predicted with perfect confidence. L-BFGS loves smooth surfaces, and the MLP happily provided one - completely fabricated.

Root cause: The surrogate doesn't know what it doesn't know.

The Solution: Self-Aware Turbo (SAT)

SAT v3 achieved WS=205.58, beating all previous methods (V7 TPE: 218.26, V6 TPE: 225.41).

Core Principles

  1. Never trust a point prediction - Always require uncertainty bounds
  2. High uncertainty = run FEA - Don't optimize where you don't know
  3. Actively fill gaps - Prioritize FEA in high-uncertainty regions
  4. Validate gradient solutions - Check L-BFGS results before trusting

Key Innovations

1. Ensemble Surrogate (Epistemic Uncertainty)

Instead of one MLP, train 5 independent models with different initializations:

import numpy as np

class EnsembleSurrogate:
    def __init__(self, models):
        self.models = models  # e.g. 5 MLPs trained from different random seeds

    def predict(self, x):
        preds = [m.predict(x) for m in self.models]
        mean = np.mean(preds, axis=0)
        std = np.std(preds, axis=0)  # Epistemic uncertainty!
        return mean, std

Why this works: Models trained on different seeds agree in well-sampled regions but disagree wildly in extrapolation regions.
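A toy 1D illustration of this effect, using bootstrap-resampled linear fits in place of MLPs (everything here is illustrative, not Atomizer code):

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

random.seed(42)
xs = [i / 10 for i in range(20)]                     # training inputs in [0, 1.9]
ys = [2 * x + 1 + random.gauss(0, 0.1) for x in xs]  # noisy ground truth

# Each ensemble member sees a different bootstrap resample of the data
models = []
for _ in range(5):
    idx = [random.randrange(len(xs)) for _ in xs]
    models.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))

def predict(x):
    preds = [a + b * x for a, b in models]
    mean = sum(preds) / len(preds)
    std = (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5
    return mean, std

_, std_in = predict(1.0)    # inside the training range: members agree
_, std_out = predict(10.0)  # far extrapolation: members disagree
```

The ensemble spread stays small near the training data and grows sharply in the extrapolation region, which is exactly the signal SAT uses to decide where FEA is needed.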

2. Distance-Based Out-of-Distribution Detection

Track training data distribution and flag points that are "too far":

def is_in_distribution(self, x, threshold=2.0):
    """Check if point is within 2 std of training data."""
    # self.mean / self.std are per-dimension statistics of the training set
    z_scores = np.abs((x - self.mean) / (self.std + 1e-6))
    return z_scores.max() < threshold

3. Adaptive Exploration Schedule

def get_exploration_weight(trial_num):
    if trial_num <= 30:      return 0.15  # Phase 1: 15% exploration
    elif trial_num <= 80:    return 0.08  # Phase 2: 8% exploration
    else:                    return 0.03  # Phase 3: 3% exploitation

4. Soft Mass Constraints in Acquisition

mass_penalty = max(0, pred_mass - 118.0) * 5.0  # Soft threshold at 118 kg
acquisition = norm_ws - exploration_weight * norm_dist + norm_mass_penalty
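Putting the pieces together, a self-contained sketch of the acquisition score (the normalization scale and the cap on the exploration bonus are illustrative assumptions, not Atomizer's exact values):

```python
def acquisition(pred_ws, pred_mass, dist_to_data, exploration_weight,
                ws_scale=400.0, mass_limit=118.0, penalty_rate=5.0):
    """Candidate score, lower is better: low predicted WS, a bonus for being
    far from existing training data, and a soft penalty above the mass limit.
    ws_scale and the distance cap are assumed values for illustration."""
    norm_ws = pred_ws / ws_scale
    norm_dist = min(dist_to_data, 1.0)  # capped distance-to-data bonus
    mass_penalty = max(0.0, pred_mass - mass_limit) * penalty_rate
    return norm_ws - exploration_weight * norm_dist + mass_penalty / ws_scale
```

The exploration weight is the schedule value from the previous section, so early trials reward novelty more heavily than late ones.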

SAT Version History

| Version | Training Data | Key Fix | Best WS |
|---------|---------------|---------|---------|
| v1 | 129 samples | - | 218.26 |
| v2 | 196 samples | Duplicate prevention | 271.38 (regression!) |
| v3 | 556 samples (V5-V8) | Adaptive exploration + mass targeting | 205.58 |

V9 Results (SAT v3)

| Phase | Trials | Best WS | Mean WS |
|-------|--------|---------|---------|
| Phase 1 (explore) | 30 | 232.00 | 394.48 |
| Phase 2 (balanced) | 50 | 222.01 | 360.51 |
| Phase 3 (exploit) | 57+ | 205.58 | 262.57 |

Key metrics:

  • 100% feasibility rate
  • 100% unique designs (no duplicates)
  • Surrogate R² = 0.99

When to Use SAT vs Pure TPE

| Scenario | Recommendation |
|----------|----------------|
| < 100 existing samples | Pure TPE (not enough for good surrogate) |
| 100-500 samples | SAT Phase 1-2 only (no L-BFGS) |
| > 500 samples | Full SAT with L-BFGS refinement |
| High-dimensional (>20 params) | Pure TPE (curse of dimensionality) |
| Noisy FEA | Pure TPE (surrogates struggle with noise) |

The Core Insight

"A surrogate that knows when it doesn't know is infinitely more valuable than one that's confidently wrong."

SAT doesn't just optimize faster - it optimizes safer. Every prediction comes with uncertainty bounds. Every gradient step is validated. Every extrapolation is flagged.

This is the difference between a tool that works in demos and a system that works in production.


PART 9: THE EXTRACTOR LIBRARY

24 Physics Extractors

Every extractor follows the same pattern: verified API calls, robust error handling, documented sources.

| ID | Physics | Function | Output |
|----|---------|----------|--------|
| E1 | Displacement | extract_displacement() | mm |
| E2 | Frequency | extract_frequency() | Hz |
| E3 | Von Mises Stress | extract_solid_stress() | MPa |
| E4-E5 | Mass | BDF or CAD-based | kg |
| E8-E10 | Zernike WFE | Standard, relative, builder | nm |
| E12-E14 | Advanced Stress | Principal, strain energy, SPC | MPa, J, N |
| E15-E17 | Thermal | Temperature, gradient, flux | K, K/mm, W/mm² |
| E18 | Modal Mass | From F06 | kg |
| E19 | Part Introspection | Full part analysis | dict |
| E20-E22 | Zernike OPD | Analytic, comparison, figure | nm |

The 20-Line Rule

If you're writing more than 20 lines of extraction code in your study, you're probably either:

  1. Duplicating existing functionality, or
  2. Missing an extractor that should exist in the library

Always check the library first. If it doesn't exist, propose a new extractor through the protocol evolution workflow.
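For illustration, here is a hypothetical extractor skeleton showing the pattern the library enforces: explicit units, fail-loudly error handling, no silent garbage (the function name and parsing are placeholders, not the real E-series API):

```python
def extract_max_displacement(f06_path):
    """Hypothetical extractor skeleton (name and parsing are placeholders;
    real extractors wrap verified pyNastran / NX Open calls).
    Returns the maximum displacement in mm, or fails loudly."""
    try:
        with open(f06_path, "r", errors="replace") as fh:
            text = fh.read()
    except OSError as exc:
        raise RuntimeError(f"Cannot read F06 file {f06_path}: {exc}") from exc
    if "F A T A L" in text:
        raise RuntimeError("Nastran reported a fatal error; no valid results.")
    # ... verified parsing of the displacement table would go here ...
    return 0.0  # placeholder value, mm
```

The key property: a bad solver run raises an exception with a clear message rather than returning a plausible-looking number to the optimizer.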


PART 10: DASHBOARD & VISUALIZATION

Real-Time Monitoring

React + TypeScript + Plotly.js

Features

  • Parallel coordinates: See all design variables and objectives simultaneously
  • Pareto front: 2D/3D visualization of multi-objective trade-offs
  • Convergence tracking: Best-so-far with individual trial scatter
  • WebSocket updates: Live as optimization runs

Report Generation

Automatic markdown reports with:

  • Study configuration and objectives
  • Best result with performance metrics
  • Convergence plots (300 DPI, publication-ready)
  • Top trials table
  • Full history (collapsible)

PART 11: MCP-POWERED CONVERSATIONAL INTERFACE (NEW)

The Evolution: From Terminal to Intelligent Assistant

The Atomizer Dashboard now features an MCP-powered conversational interface that brings full Claude Code capabilities directly into the browser. This isn't a simple chatbot - it's a complete integration layer that can create studies, run optimizations, and even modify Atomizer itself.

The Two-Mode Architecture

┌─────────────────────────────────────────────────────────────────────┐
│                    ATOMIZER ASSISTANT                                │
├─────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  ┌──────────────────────────┐  ┌──────────────────────────┐        │
│  │      👤 USER MODE        │  │     ⚡ POWER MODE        │        │
│  │     (Default - Safe)     │  │  (Full System Access)    │        │
│  ├──────────────────────────┤  ├──────────────────────────┤        │
│  │ ✓ Create studies         │  │ ✓ All User Mode tools    │        │
│  │ ✓ Run optimizations      │  │ ✓ Edit any file          │        │
│  │ ✓ Monitor progress       │  │ ✓ Create extractors      │        │
│  │ ✓ Analyze results        │  │ ✓ Modify protocols       │        │
│  │ ✓ Generate reports       │  │ ✓ Run shell commands     │        │
│  │ ✓ Query trial data       │  │ ✓ Git operations         │        │
│  │ ✓ Explain FEA concepts   │  │ ✓ Dashboard modifications│        │
│  │                          │  │                          │        │
│  │ ✗ No code modification   │  │ (Requires confirmation)  │        │
│  │ ✗ No shell access        │  │                          │        │
│  └──────────────────────────┘  └──────────────────────────┘        │
│                                                                      │
└─────────────────────────────────────────────────────────────────────┘

Why MCP (Model Context Protocol)?

Instead of building a custom integration, Atomizer uses MCP - Claude's official protocol for tool integration:

┌─────────────────────────────────────────────────────────────────────┐
│  Traditional Approach (Fragile)                                      │
│  ─────────────────────────────────────────────────────────────────  │
│  Chat → Parse intent → Custom code → Maybe works → Hope it's right  │
└─────────────────────────────────────────────────────────────────────┘
                              vs
┌─────────────────────────────────────────────────────────────────────┐
│  MCP Approach (Robust)                                               │
│  ─────────────────────────────────────────────────────────────────  │
│  Chat → Claude understands → Calls defined tool → Verified result   │
│                                                                      │
│  Tools are:                                                          │
│  • Strongly typed (Zod schemas)                                     │
│  • Self-documenting                                                 │
│  • Version controlled                                               │
│  • Testable independently                                           │
└─────────────────────────────────────────────────────────────────────┘

The Atomizer MCP Server

A dedicated MCP server exposes Atomizer capabilities as tools:

Study Management Tools:

| Tool | Description |
|------|-------------|
| list_studies | List all studies with status and trial counts |
| get_study_status | Get detailed study configuration and results |
| create_study | Create study from natural language description |

Optimization Tools:

| Tool | Description |
|------|-------------|
| run_optimization | Start optimization with specified parameters |
| stop_optimization | Gracefully stop running optimization |
| get_trial_data | Query trials with filters (best, pareto, recent) |

Analysis Tools:

| Tool | Description |
|------|-------------|
| analyze_convergence | Compute convergence metrics and trends |
| compare_trials | Side-by-side trial comparison |
| get_best_design | Get optimal design with full parameters |
| generate_report | Create markdown report with plots |

Physics Tools:

| Tool | Description |
|------|-------------|
| explain_physics | Explain FEA concepts in context |
| recommend_method | Suggest optimization method for problem |
| query_extractors | List available physics extractors |

Power Mode Tools (Additional):

| Tool | Description |
|------|-------------|
| edit_file | Modify files in Atomizer codebase |
| create_extractor | Generate new physics extractor |
| run_shell_command | Execute shell commands |
| search_codebase | Search files with patterns |
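The mode split boils down to simple tool filtering. A Python sketch (tool names come from the tables above; the actual MCP server implements this in TypeScript with Zod-validated schemas):

```python
# Tool sets taken from the User Mode / Power Mode tables above.
USER_TOOLS = {
    "list_studies", "get_study_status", "create_study",
    "run_optimization", "stop_optimization", "get_trial_data",
    "analyze_convergence", "compare_trials", "get_best_design",
    "generate_report", "explain_physics", "recommend_method",
    "query_extractors",
}
POWER_ONLY_TOOLS = {
    "edit_file", "create_extractor", "run_shell_command", "search_codebase",
}

def available_tools(mode):
    """User Mode hides system-modifying tools; Power Mode exposes everything."""
    return USER_TOOLS | POWER_ONLY_TOOLS if mode == "power" else set(USER_TOOLS)
```

Because filtering happens at tool-registration time, User Mode sessions never even see the dangerous tools - there is nothing to accidentally invoke.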

Session Persistence

Conversations persist across page navigation:

┌─────────────────────────────────────────────────────────────────────┐
│  CONVERSATION CONTINUITY                                             │
├─────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  1. User on Home: "Create a bracket optimization study"             │
│     └─→ Atomizer creates study, stores session_id                   │
│                                                                      │
│  2. User clicks into new study                                       │
│     └─→ Same session continues, context updated                     │
│                                                                      │
│  3. User: "Now add a stress constraint"                             │
│     └─→ Atomizer knows which study, continues setup                 │
│                                                                      │
│  4. User refreshes page                                              │
│     └─→ Session restored from SQLite, conversation intact           │
│                                                                      │
└─────────────────────────────────────────────────────────────────────┘
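A minimal sketch of the SQLite-backed persistence behind step 4 (the schema and class name are assumptions; the real conversation store lives in the FastAPI backend):

```python
import json
import sqlite3

class ConversationStore:
    """Minimal sketch: append-only message log keyed by session_id."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages ("
            " session_id TEXT, seq INTEGER, payload TEXT,"
            " PRIMARY KEY (session_id, seq))"
        )

    def append(self, session_id, message):
        # Next sequence number for this session (0 if the session is new)
        (seq,) = self.db.execute(
            "SELECT COALESCE(MAX(seq), -1) + 1 FROM messages WHERE session_id = ?",
            (session_id,),
        ).fetchone()
        self.db.execute(
            "INSERT INTO messages VALUES (?, ?, ?)",
            (session_id, seq, json.dumps(message)),
        )
        self.db.commit()

    def restore(self, session_id):
        rows = self.db.execute(
            "SELECT payload FROM messages WHERE session_id = ? ORDER BY seq",
            (session_id,),
        )
        return [json.loads(p) for (p,) in rows]
```

On page refresh, the backend calls `restore(session_id)` and replays the message list to the browser, so the conversation survives navigation and reloads.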

Context-Aware Responses

The assistant automatically adapts to context:

On Dashboard Home (no study selected):

Available actions:
• Create new study
• List existing studies
• Compare studies
• Explain Atomizer concepts

Inside a Study:

Study context loaded: bracket_mass_v3
• 47 trials completed
• Best: 2.34 kg (trial #32)
• Status: Running

Available actions:
• Check progress
• Analyze convergence
• Stop optimization
• Generate report

Tool Call Visualization

When the assistant uses tools, users see exactly what's happening:

┌─────────────────────────────────────────────────────────────────────┐
│  User: "Show me the best 5 designs"                                  │
│                                                                      │
│  ┌─────────────────────────────────────────────────────────────┐    │
│  │  🔧 get_trial_data                                    ✓     │    │
│  │  ┌─────────────────────────────────────────────────────┐   │    │
│  │  │ Arguments:                                          │   │    │
│  │  │   study_name: "bracket_mass_v3"                    │   │    │
│  │  │   query: "best"                                     │   │    │
│  │  │   limit: 5                                          │   │    │
│  │  └─────────────────────────────────────────────────────┘   │    │
│  │  ┌─────────────────────────────────────────────────────┐   │    │
│  │  │ Result:                                             │   │    │
│  │  │   [Trial #32: 2.34 kg, Trial #41: 2.38 kg, ...]    │   │    │
│  │  └─────────────────────────────────────────────────────┘   │    │
│  └─────────────────────────────────────────────────────────────┘    │
│                                                                      │
│  Here are the top 5 designs from your optimization:                 │
│  | Rank | Trial | Mass (kg) | Stress (MPa) |                        │
│  |------|-------|-----------|--------------|                        │
│  | 1    | #32   | 2.34      | 198.2        |                        │
│  | ...                                                               │
└─────────────────────────────────────────────────────────────────────┘

Power Mode Use Cases

Creating New Extractors:

User: "I need to extract buckling load factors"

[Power Mode]
1. Query NX Open docs (MCP siemens-docs)
2. Check pyNastran buckling structures
3. Generate extract_buckling.py from template
4. Update __init__.py exports
5. Update SYS_12_EXTRACTOR_LIBRARY.md

Result: New E25 buckling extractor ready to use

Extending Atomizer:

User: "Add a new report section for design sensitivity"

[Power Mode]
1. Read current report generator
2. Add sensitivity analysis section
3. Update visualization code
4. Test with existing study
5. Commit changes

Result: Reports now include sensitivity plots

Security Model

| Action | User Mode | Power Mode | Notes |
|--------|-----------|------------|-------|
| Read study files | ✓ | ✓ | Within studies/ directory |
| Run Python scripts | ✓ | ✓ | Predefined operations only |
| Edit study configs | ✓ | ✓ | Via tools, validated |
| Edit Atomizer code | ✗ | ✓ | Requires mode switch |
| Run shell commands | ✗ | ✓ | Logged, cwd restricted |
| Git operations | ✗ | ✓ | Commit/push to remotes |

Power Mode activation requires explicit confirmation - users can't accidentally modify system files.

Why This Matters for Atomizer

  1. Zero learning curve - Engineers talk, Atomizer acts
  2. Full capability access - Everything CLI can do, browser can too
  3. Safe by default - User mode prevents accidents
  4. Extensible - Power mode for growing the system
  5. Transparent - Every tool call is visible and logged
  6. Persistent - Conversations don't disappear

Technical Stack

Frontend:
├── React + TypeScript
├── WebSocket for streaming
├── Tool call visualization components
└── Mode toggle with confirmation

Backend:
├── FastAPI
├── Session manager (subprocess lifecycle)
├── Conversation store (SQLite)
├── Context builder (study + global)
└── MCP config generator

MCP Server:
├── Node.js + TypeScript
├── @modelcontextprotocol/sdk
├── Zod schemas for validation
├── Python subprocess for heavy ops
└── Mode-based tool filtering

PART 12: STATISTICS & METRICS

Codebase

| Component | Lines of Code |
|-----------|---------------|
| Optimization Engine (Python) | 66,204 |
| Dashboard (TypeScript) | 54,871 |
| Documentation | 999 files |
| Total | ~120,000+ |

Performance

| Metric | Value |
|--------|-------|
| Neural inference | 4.5 ms per trial |
| Turbo throughput | 5,000-7,000 trials/sec |
| GNN R² accuracy | 0.95-0.99 |
| IMSO improvement | 40% fewer trials |

Coverage

  • 24 physics extractors
  • 6+ optimization algorithms
  • 7 Nastran solution types (SOL 101, 103, 105, 106, 111, 112, 153/159)
  • 3 neural surrogate types (MLP, GNN, Ensemble)

PART 13: KEY TAKEAWAYS

What Makes Atomizer Different

  1. Study characterization - Learn what works for each problem type
  2. Persistent memory (LAC) - Never start from scratch
  3. Protocol evolution - Safe, validated extensibility
  4. MCP-first development - Documentation-driven, not guessing
  5. Simulation focus - Not CAD, not mesh - optimization of simulation performance
  6. Self-aware surrogates (SAT) - Know when predictions are uncertain, validated WS=205.58
  7. Interview Mode - Zero-config study creation through natural conversation
  8. MCP-Powered Chat (NEW) - Full Atomizer capabilities via conversational interface with User/Power modes

Sound Bites for Podcast

  • "Atomizer learns what works. After 100 studies, it knows that mirror problems need GP-BO, not TPE."
  • "When we don't have an extractor, we query official NX documentation first - no guessing."
  • "New capabilities go through research, review, and approval - just like engineering change orders."
  • "4.5 milliseconds per prediction means we can explore 50,000 designs before lunch."
  • "Every study makes the system smarter. That's not marketing - that's LAC."
  • "SAT knows when it doesn't know. A surrogate that's confidently wrong is worse than no surrogate at all."
  • "V5 surrogate said WS=280. FEA said WS=376. That's a 30% error from extrapolating into the unknown. SAT v3 fixed that - WS=205.58."
  • "Just say 'create a study' and Atomizer interviews you. No JSON, no manuals, just conversation."
  • "User mode for running optimizations safely. Power mode for extending Atomizer itself. Same conversation, different permissions."
  • "Every tool call is visible. You see exactly what's happening - no black boxes."
  • "Create a study from the dashboard, navigate into it, and the conversation continues. Context follows you."

The Core Message

Atomizer is an intelligent optimization platform that:

  • Bridges state-of-the-art algorithms and production FEA workflows
  • Learns what works for different problem types
  • Grows through structured protocol evolution
  • Accelerates design exploration with neural surrogates
  • Documents every decision for traceability
  • Converses naturally through MCP-powered tools with appropriate permission levels

This isn't just automation - it's accumulated engineering intelligence with a natural interface.


Atomizer: Where simulation expertise meets optimization science.


Document Statistics:

  • Sections: 14 (including new MCP Chat Architecture)
  • Focus: Simulation optimization (not CAD/mesh)
  • Key additions: Study characterization, protocol evolution, MCP-first development, SAT v3, Study Interview Mode, MCP-Powered Chat
  • Positioning: Optimizer & NX configurator with conversational interface
  • SAT Performance: Validated WS=205.58 (best ever, beating V7 TPE at 218.26)
  • Interview Mode: 129 tests passing, 12 materials, 12 anti-patterns, 7 phases
  • MCP Chat: User/Power modes, 15+ tools, WebSocket streaming, session persistence

Prepared for NotebookLM/AI Podcast Generation