# Atomizer
> **LLM-driven structural optimization framework** for Siemens NX with neural network acceleration.

[Python](https://www.python.org/downloads/) · [Siemens NX](https://www.plm.automation.siemens.com/global/en/products/nx/) · [License](LICENSE) · [Physics Docs](docs/physics/)

---
## What is Atomizer?
Atomizer is an **LLM-first optimization framework** that transforms how engineers interact with FEA optimization. Instead of manually configuring JSON files and writing extraction scripts, you describe what you want in natural language, and Atomizer handles the rest.
```
Engineer: "Optimize the M1 mirror support structure to minimize wavefront error
across elevation angles 20-90 degrees. Keep mass under 15kg."
Atomizer: Creates study, configures extractors, runs optimization, reports results.
```
### Core Capabilities
| Capability | Description |
|------------|-------------|
| **LLM-Driven Workflow** | Describe optimizations in plain English. Claude interprets, configures, and executes. |
| **Neural Acceleration** | GNN surrogates achieve 2,000-500,000x speedup over FEA (4.5 ms vs 10-30 min). |
| **Physics Insights** | Real-time Zernike wavefront error, stress fields, and modal analysis visualizations. |
| **Multi-Objective** | Pareto optimization with NSGA-II and interactive parallel coordinates plots. |
| **NX Integration** | Seamless journal-based control of Siemens NX Simcenter. |
| **Extensible** | Plugin system with hooks for pre/post mesh, solve, and extraction phases. |
---
## Architecture Overview
```
┌─────────────────────────────────┐
│ LLM Interface Layer │
│ Claude Code + Natural Lang │
└───────────────┬─────────────────┘
│
┌─────────────────────────┼─────────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────────┐ ┌─────────────────┐
│ Traditional │ │ Neural Path │ │ Dashboard │
│ FEA Path │ │ (GNN Surrogate) │ │ (React) │
│ ~10-30 min │ │ ~4.5 ms │ │ Real-time │
└────────┬────────┘ └──────────┬──────────┘ └────────┬────────┘
│ │ │
└─────────────────────────┼─────────────────────────┘
│
┌──────────────┴──────────────┐
│ Extractors & Insights │
│ 20+ physics extractors │
│ 8 visualization types │
└─────────────────────────────┘
```
---
## Key Features
### 1. Physics Extractors (20+)
Atomizer includes a comprehensive library of validated physics extractors:

| Category | Extractors | Notes |
|----------|------------|-------|
| **Displacement** | `extract_displacement()` | mm, nodal |
| **Stress** | `extract_von_mises_stress()`, `extract_principal_stress()` | Shell (CQUAD4) & solid (CTETRA) |
| **Modal** | `extract_frequency()`, `extract_modal_mass()` | Hz, kg |
| **Mass** | `extract_mass_from_bdf()`, `extract_mass_from_expression()` | kg |
| **Thermal** | `extract_temperature()` | K |
| **Energy** | `extract_strain_energy()` | J |
| **Optics** | `extract_zernike_*()` (Standard, Analytic, **OPD**) | nm RMS |
**Zernike OPD Method**: The recommended extractor for mirror optimization. It correctly accounts for lateral displacement when computing wavefront error, which is critical for tilted-mirror analysis.
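To make the fitting step concrete, here is a minimal least-squares Zernike fit in NumPy. This is an illustrative sketch of the general technique (Noll-normalized terms Z1-Z4 fit to axial displacements only), not the OPD extractor itself, which additionally projects lateral motion into the optical path difference:

```python
# Illustrative sketch only: a plain least-squares Zernike fit of axial
# surface displacements sampled on the unit disk. The real OPD extractor
# also handles lateral displacement; this shows just the fitting step.
import numpy as np

def fit_zernike(r, theta, w):
    """Fit Noll terms Z1-Z4 (piston, tip, tilt, defocus) to displacements w."""
    basis = np.column_stack([
        np.ones_like(r),                # Z1: piston
        2.0 * r * np.cos(theta),        # Z2: tip
        2.0 * r * np.sin(theta),        # Z3: tilt
        np.sqrt(3) * (2 * r**2 - 1),    # Z4: defocus
    ])
    coeffs, *_ = np.linalg.lstsq(basis, w, rcond=None)
    residual = w - basis @ coeffs       # wavefront the fit cannot explain
    return coeffs, float(np.sqrt(np.mean(residual**2)))

# Synthetic check: a pure 5 nm defocus deformation is recovered.
rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(0, 1, 500))     # uniform sampling over the disk
theta = rng.uniform(0, 2 * np.pi, 500)
w = 5.0 * np.sqrt(3) * (2 * r**2 - 1)   # 5 nm of defocus, nothing else
coeffs, rms_residual = fit_zernike(r, theta, w)
print(coeffs[3], rms_residual)          # defocus ≈ 5.0, residual ≈ 0
```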
### 2. Study Insights (8 Types)
Interactive physics visualizations generated on-demand:

| Insight | Purpose |
|---------|---------|
| `zernike_wfe` | Wavefront error decomposition with Zernike coefficients |
| `zernike_opd_comparison` | Compare Standard vs OPD methods across subcases |
| `msf_zernike` | Mid-spatial frequency analysis |
| `stress_field` | 3D stress field visualization |
| `modal_analysis` | Mode shapes and frequencies |
| `thermal_field` | Temperature distribution |
| `design_space` | Parameter sensitivity exploration |
### 3. Neural Network Acceleration
The GNN surrogate system (`optimization_engine/gnn/`) provides:
- **PolarMirrorGraph**: Fixed 3000-node polar grid for consistent predictions
- **ZernikeGNN**: Design-conditioned graph convolutions
- **Differentiable Zernike fitting**: GPU-accelerated coefficient computation
- **Hybrid optimization**: Automatic switching between FEA and NN based on confidence

**Performance**: 4.5 ms per prediction vs 10-30 minutes for FEA (2,000x+ speedup)
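The confidence-based switching can be pictured as a thin wrapper around two evaluators. The sketch below is hypothetical: the names `surrogate`, `fea`, and `confidence` are illustrative stand-ins, not Atomizer's actual API.

```python
# Hypothetical sketch of confidence-based FEA/NN switching.
# All names here are illustrative, not Atomizer's real interfaces.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class HybridEvaluator:
    surrogate: Callable[[Dict], float]   # ~ms-scale GNN prediction
    fea: Callable[[Dict], float]         # ~minutes-scale Nastran run
    confidence: Callable[[Dict], float]  # surrogate trust in [0, 1]
    threshold: float = 0.9

    def evaluate(self, design: Dict) -> Tuple[float, str]:
        # Trust the surrogate only when its confidence clears the threshold;
        # otherwise fall back to the expensive full FEA solve.
        if self.confidence(design) >= self.threshold:
            return self.surrogate(design), "nn"
        return self.fea(design), "fea"

# Toy usage with stand-in callables: in-distribution designs use the
# surrogate, out-of-distribution designs fall back to FEA.
ev = HybridEvaluator(
    surrogate=lambda d: d["t"] * 2.0,
    fea=lambda d: d["t"] * 2.0 + 0.01,
    confidence=lambda d: 1.0 if 1.0 <= d["t"] <= 5.0 else 0.0,
)
print(ev.evaluate({"t": 3.0}))   # in training range  -> surrogate path
print(ev.evaluate({"t": 9.0}))   # out of range       -> FEA fallback
```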
### 4. Real-Time Dashboard
React-based monitoring with:
- Live trial progress tracking
- Pareto front visualization
- Parallel coordinates for multi-objective analysis
- Insights tab for physics visualizations
- Interactive Zernike decomposition with OPD/Standard toggle
```bash
# Start the dashboard
python launch_dashboard.py
# Opens at http://localhost:3003
```
---
## Current Studies
Studies are organized by geometry type:
```
studies/
├── M1_Mirror/ # Telescope primary mirror optimization
│ ├── m1_mirror_adaptive_V15/ # Latest: Zernike OPD + GNN turbo
│ └── m1_mirror_cost_reduction_V12/
├── Simple_Bracket/ # Structural bracket studies
├── UAV_Arm/ # UAV arm frequency optimization
├── Drone_Gimbal/ # Gimbal assembly
├── Simple_Beam/ # Beam topology studies
└── _Other/ # Experimental
```
### Study Structure
Each study follows a standardized structure:
```
study_name/
├── optimization_config.json # Problem definition
├── run_optimization.py # FEA optimization script
├── run_nn_optimization.py # Neural turbo mode (optional)
├── README.md # Study documentation
├── 1_setup/
│ └── model/ # NX part, sim, fem files
├── 2_iterations/ # Trial folders (iter1, iter2, ...)
├── 3_results/
│ ├── study.db # Optuna database
│ └── optimization.log # Execution logs
└── 3_insights/ # Generated visualizations
└── zernike_*.html
```
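A minimal `optimization_config.json` might look like the sketch below. The field names are illustrative assumptions, not the authoritative schema; consult the study templates in `optimization_engine/templates/` for the real format.

```python
# Hypothetical minimal optimization_config.json, expressed as a Python dict.
# Field names are illustrative assumptions, not Atomizer's actual schema.
import json

config = {
    "study_name": "m1_mirror_demo",
    "design_variables": [
        {"name": "rib_thickness_mm", "type": "float", "low": 2.0, "high": 10.0},
    ],
    "optimization": {
        "method": "TPE",
        "n_trials": 50,
        "objectives": [{"name": "wfe_rms_nm", "direction": "minimize"}],
        "constraints": [{"name": "mass_kg", "max": 15.0}],
    },
}

# Serialize to the JSON file the study would carry.
print(json.dumps(config, indent=2))
```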
---
## Quick Start
### Prerequisites
- **Siemens NX 2506+** with NX Nastran solver
- **Python 3.10+** (Anaconda recommended)
- **Atomizer conda environment** (pre-configured)
### Run an Optimization
```bash
# Activate the environment
conda activate atomizer
# Navigate to a study
cd studies/M1_Mirror/m1_mirror_adaptive_V15
# Run optimization (50 FEA trials)
python run_optimization.py --start --trials 50
# Or run with neural turbo mode (5000 GNN trials)
python run_nn_optimization.py --turbo --nn-trials 5000
```
### Monitor Progress
```bash
# Start the dashboard
python launch_dashboard.py
# Or check status from command line
python -c "from optimization_engine.study_state import get_study_status; print(get_study_status('.'))"
```
---
## Optimization Methods
Atomizer supports multiple optimization strategies:

| Method | Use Case | Protocol |
|--------|----------|----------|
| **TPE** | Single-objective, <50 trials | SYS_10 (IMSO) |
| **NSGA-II** | Multi-objective, Pareto optimization | SYS_11 |
| **CMA-ES** | Continuous parameters, >100 trials | SYS_10 |
| **GNN Turbo** | >50 FEA trials available for training | SYS_14 |
| **Hybrid** | Confidence-based FEA/NN switching | SYS_15 |
The **Method Selector** automatically recommends the best approach based on your problem:
```bash
python -m optimization_engine.method_selector config.json study.db
```
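The selection logic in the table above can be restated as a simple heuristic. The function below is a plain-Python paraphrase of the table for illustration, not the actual Method Selector code, and the precedence between rules is an assumption:

```python
# Paraphrase of the method-selection table as a heuristic.
# Illustrative only: not the real Method Selector, and the rule
# ordering (surrogate first, then objectives, then budget) is assumed.
def recommend_method(n_objectives: int, n_trials: int,
                     all_continuous: bool, fea_history: int) -> str:
    if fea_history > 50:
        return "GNN Turbo"   # enough FEA data to train a surrogate
    if n_objectives > 1:
        return "NSGA-II"     # Pareto front for multi-objective problems
    if all_continuous and n_trials > 100:
        return "CMA-ES"      # continuous search with a large trial budget
    return "TPE"             # robust default for small single-objective runs

print(recommend_method(n_objectives=2, n_trials=80,
                       all_continuous=True, fea_history=0))
```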
---
## Protocol System
Atomizer uses a layered protocol system for consistent operations:
```
Layer 0: Bootstrap → Task routing, quick reference
Layer 1: Operations → OP_01-06: Create, Run, Monitor, Analyze, Export, Debug
Layer 2: System → SYS_10-16: IMSO, Multi-obj, Extractors, Dashboard, Neural, Insights
Layer 3: Extensions → EXT_01-04: Create extractors, hooks, protocols, skills
```
### Key Protocols
| Protocol | Purpose |
|----------|---------|
| **OP_01** | Create new study from description |
| **OP_02** | Run optimization |
| **OP_06** | Troubleshoot issues |
| **SYS_12** | Extractor library reference |
| **SYS_14** | Neural network acceleration |
| **SYS_16** | Study insights |
---
## Development Roadmap
### Current Status (Dec 2025)
- Core FEA optimization engine
- 20+ physics extractors including Zernike OPD
- GNN surrogate for mirror optimization
- React dashboard with live tracking
- Multi-objective Pareto optimization
- Study insights visualization system
### Planned
| Feature | Status |
|---------|--------|
| Dynamic response (random vibration, PSD) | Planning |
| Code reorganization (modular structure) | Planning |
| Ensemble uncertainty quantification | Planned |
| Auto-documentation generator | Implemented |
| MCP server integration | Partial |
---
## Project Structure
```
Atomizer/
├── .claude/ # LLM configuration
│ ├── skills/ # Claude skill definitions
│ └── commands/ # Slash commands
├── optimization_engine/ # Core Python modules
│ ├── extractors/ # Physics extraction (20+ extractors)
│ ├── insights/ # Visualization generators (8 types)
│ ├── gnn/ # Graph neural network surrogate
│ ├── hooks/ # NX automation hooks
│ ├── validators/ # Config validation
│ └── templates/ # Study templates
├── atomizer-dashboard/ # React frontend + FastAPI backend
├── studies/ # Optimization studies by geometry
├── docs/ # Documentation
│ ├── protocols/ # Protocol specifications
│ └── physics/ # Physics domain docs
├── knowledge_base/ # LAC persistent learning
│ └── lac/ # Session insights, failures, patterns
└── nx_journals/ # NX Open automation scripts
```
---
## Key Principles
1. **Conversation first** - Don't ask users to edit JSON manually
2. **Validate everything** - Catch errors before expensive FEA runs
3. **Explain decisions** - Say why a sampler or method was chosen
4. **Never modify master files** - Copy NX files into the study directory
5. **Reuse code** - Check existing extractors before writing new ones
6. **Document proactively** - Update docs after code changes
---
## Documentation
| Document | Purpose |
|----------|---------|
| [CLAUDE.md](CLAUDE.md) | System instructions for Claude |
| [.claude/ATOMIZER_CONTEXT.md](.claude/ATOMIZER_CONTEXT.md) | Session context loader |
| [docs/protocols/](docs/protocols/) | Protocol specifications |
| [docs/physics/](docs/physics/) | Physics domain documentation |
### Physics Documentation
- [ZERNIKE_FUNDAMENTALS.md](docs/physics/ZERNIKE_FUNDAMENTALS.md) - Zernike polynomial basics
- [ZERNIKE_OPD_METHOD.md](docs/physics/ZERNIKE_OPD_METHOD.md) - OPD method for lateral displacement
---
## For AI Assistants
Atomizer is designed for LLM-first interaction. Key resources:
- **[CLAUDE.md](CLAUDE.md)** - System instructions for Claude Code
- **[.claude/skills/](.claude/skills/)** - LLM skill modules
- **[docs/protocols/](docs/protocols/)** - Protocol Operating System
### Knowledge Base (LAC)
The Learning Atomizer Core (`knowledge_base/lac/`) accumulates optimization knowledge:
- `session_insights/` - Learnings from past sessions
- `optimization_memory/` - Optimization outcomes by geometry type
- `playbook.json` - ACE framework knowledge store
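As a sketch of how a session insight might be persisted, the snippet below appends a record to a playbook-style JSON file. The record fields and helper function are hypothetical; the actual LAC schema lives in `knowledge_base/lac/`.

```python
# Hypothetical sketch: persisting a session insight to a playbook-style
# JSON store. Field names and layout are illustrative, not the LAC schema.
import json
import tempfile
from pathlib import Path

def record_insight(playbook: Path, insight: dict) -> None:
    # Load the existing store (or start fresh), append, and write back.
    data = json.loads(playbook.read_text()) if playbook.exists() else {"insights": []}
    data["insights"].append(insight)
    playbook.write_text(json.dumps(data, indent=2))

# Toy usage against a temporary file.
pb = Path(tempfile.mkdtemp()) / "playbook.json"
record_insight(pb, {"geometry": "M1_Mirror", "lesson": "use OPD for tilted mirrors"})
print(json.loads(pb.read_text())["insights"][0]["geometry"])  # M1_Mirror
```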
For detailed AI interaction guidance, see CLAUDE.md.
---
## Environment
**Critical**: Always use the `atomizer` conda environment:
```bash
conda activate atomizer
```
Python and dependencies are pre-configured. Do not install additional packages.

---
## Support
- **Documentation**: [docs/](docs/)
- **Issue Tracker**: GitHub Issues
- **Email**: antoine@atomaste.com
---
## License
2025-11-15 08:10:05 -05:00
2026-01-20 10:03:45 -05:00
Proprietary - Atomaste 2026

---
*Atomizer: LLM-driven structural optimization for engineering.*