# Atomizer
> **LLM-driven structural optimization framework** for Siemens NX with neural network acceleration.
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![NX 2506+](https://img.shields.io/badge/NX-2506+-orange.svg)](https://www.plm.automation.siemens.com/global/en/products/nx/)
[![License](https://img.shields.io/badge/license-Proprietary-red.svg)](LICENSE)
[![Neural](https://img.shields.io/badge/neural-GNN%20powered-purple.svg)](docs/physics/)
---
## What is Atomizer?
Atomizer is an **LLM-first optimization framework** that transforms how engineers interact with FEA optimization. Instead of manually configuring JSON files and writing extraction scripts, you describe what you want in natural language, and Atomizer handles the rest.
```
Engineer: "Optimize the M1 mirror support structure to minimize wavefront error
across elevation angles 20-90 degrees. Keep mass under 15kg."
Atomizer: Creates study, configures extractors, runs optimization, reports results.
```
### Core Capabilities
| Capability | Description |
|------------|-------------|
| **LLM-Driven Workflow** | Describe optimizations in plain English. Claude interprets, configures, and executes. |
| **Neural Acceleration** | GNN surrogates achieve 2000-500,000x speedup over FEA (4.5ms vs 10-30min) |
| **Physics Insights** | Real-time Zernike wavefront error, stress fields, modal analysis visualizations |
| **Multi-Objective** | Pareto optimization with NSGA-II, interactive parallel coordinates plots |
| **NX Integration** | Seamless journal-based control of Siemens NX Simcenter |
| **Extensible** | Plugin system with hooks for pre/post mesh, solve, and extraction phases |
---
## Architecture Overview
```
               ┌─────────────────────────────────┐
               │       LLM Interface Layer       │
               │  Claude Code + Natural Language │
               └────────────────┬────────────────┘
         ┌──────────────────────┼──────────────────────┐
         ▼                      ▼                      ▼
┌─────────────────┐  ┌─────────────────────┐  ┌─────────────────┐
│   Traditional   │  │     Neural Path     │  │    Dashboard    │
│    FEA Path     │  │   (GNN Surrogate)   │  │     (React)     │
│   ~10-30 min    │  │       ~4.5 ms       │  │    Real-time    │
└────────┬────────┘  └──────────┬──────────┘  └────────┬────────┘
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                ▼
                 ┌─────────────────────────────┐
                 │    Extractors & Insights    │
                 │   20+ physics extractors    │
                 │    8 visualization types    │
                 └─────────────────────────────┘
```
---
## Key Features
### 1. Physics Extractors (20+)
Atomizer includes a comprehensive library of validated physics extractors:
| Category | Extractors | Notes |
|----------|------------|-------|
| **Displacement** | `extract_displacement()` | mm, nodal |
| **Stress** | `extract_von_mises_stress()`, `extract_principal_stress()` | Shell (CQUAD4) & Solid (CTETRA) |
| **Modal** | `extract_frequency()`, `extract_modal_mass()` | Hz, kg |
| **Mass** | `extract_mass_from_bdf()`, `extract_mass_from_expression()` | kg |
| **Thermal** | `extract_temperature()` | K |
| **Energy** | `extract_strain_energy()` | J |
| **Optics** | `extract_zernike_*()` (Standard, Analytic, **OPD**) | nm RMS |
**Zernike OPD Method**: The recommended extractor for mirror optimization. Correctly accounts for lateral displacement when computing wavefront error - critical for tilted mirror analysis.
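The distinction can be illustrated with a small, self-contained sketch (conceptual only — this is not Atomizer's extractor code, and the tilt angle and displacement values are made up). A reflected ray's path change is twice the surface displacement along the local normal, not twice the z-displacement, so a sag-only estimate underreports wavefront error on a tilted mirror:

```python
import math

# Conceptual sketch: OPD method vs sag-only wavefront estimate
# for a single node on a tilted mirror surface.
theta = math.radians(30.0)                          # mirror tilt from vertical
normal = (math.sin(theta), 0.0, math.cos(theta))    # unit surface normal

u = (2.0e-6, 0.0, 1.0e-6)                           # nodal displacement [m]

# OPD method: reflection doubles the path change along the normal.
opd = 2.0 * sum(ui * ni for ui, ni in zip(u, normal))
# Standard (sag-only) method: z-component only.
sag_only = 2.0 * u[2]

print(f"OPD: {opd:.3e} m, sag-only: {sag_only:.3e} m")
```

For an untilted mirror (`theta = 0`) the two agree; the larger the tilt and the lateral motion, the more the sag-only estimate diverges.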
### 2. Study Insights (8 Types)
Interactive physics visualizations generated on-demand:
| Insight | Purpose |
|---------|---------|
| `zernike_wfe` | Wavefront error decomposition with Zernike coefficients |
| `zernike_opd_comparison` | Compare Standard vs OPD methods across subcases |
| `msf_zernike` | Mid-spatial frequency analysis |
| `stress_field` | 3D stress field visualization |
| `modal_analysis` | Mode shapes and frequencies |
| `thermal_field` | Temperature distribution |
| `design_space` | Parameter sensitivity exploration |
### 3. Neural Network Acceleration
The GNN surrogate system (`optimization_engine/gnn/`) provides:
- **PolarMirrorGraph**: Fixed 3000-node polar grid for consistent predictions
- **ZernikeGNN**: Design-conditioned graph convolutions
- **Differentiable Zernike fitting**: GPU-accelerated coefficient computation
- **Hybrid optimization**: Automatic switching between FEA and NN based on confidence
**Performance**: 4.5ms per prediction vs 10-30 minutes for FEA (2000x+ speedup)
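The confidence-gated routing idea behind hybrid optimization can be sketched as follows (a hypothetical illustration — the actual policy in `optimization_engine/gnn/` may use different uncertainty signals and thresholds):

```python
# Hypothetical sketch: route each trial to the fast GNN surrogate when
# its predictive uncertainty is low, otherwise fall back to full FEA.
# The threshold value and uncertainty source are illustrative assumptions.

def choose_evaluator(nn_uncertainty: float, threshold: float = 0.05) -> str:
    """Return 'gnn' when the surrogate is confident, else 'fea'."""
    return "gnn" if nn_uncertainty < threshold else "fea"

def run_hybrid(trials):
    """trials: iterable of (design_vars, nn_uncertainty) pairs."""
    routed = {"gnn": 0, "fea": 0}
    for _design_vars, sigma in trials:
        routed[choose_evaluator(sigma)] += 1
    return routed

print(run_hybrid([({}, 0.01), ({}, 0.20), ({}, 0.03)]))
```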
### 4. Real-Time Dashboard
React-based monitoring with:
- Live trial progress tracking
- Pareto front visualization
- Parallel coordinates for multi-objective analysis
- Insights tab for physics visualizations
- Interactive Zernike decomposition with OPD/Standard toggle
```bash
# Start the dashboard
python launch_dashboard.py
# Opens at http://localhost:3003
```
---
## Current Studies
Studies are organized by geometry type:
```
studies/
├── M1_Mirror/                        # Telescope primary mirror optimization
│   ├── m1_mirror_adaptive_V15/       # Latest: Zernike OPD + GNN turbo
│   └── m1_mirror_cost_reduction_V12/
├── Simple_Bracket/                   # Structural bracket studies
├── UAV_Arm/                          # UAV arm frequency optimization
├── Drone_Gimbal/                     # Gimbal assembly
├── Simple_Beam/                      # Beam topology studies
└── _Other/                           # Experimental
```
### Study Structure
Each study follows a standardized structure:
```
study_name/
├── optimization_config.json   # Problem definition
├── run_optimization.py        # FEA optimization script
├── run_nn_optimization.py     # Neural turbo mode (optional)
├── README.md                  # Study documentation
├── 1_setup/
│   └── model/                 # NX part, sim, fem files
├── 2_iterations/              # Trial folders (iter1, iter2, ...)
├── 3_results/
│   ├── study.db               # Optuna database
│   └── optimization.log       # Execution logs
└── 3_insights/                # Generated visualizations
    └── zernike_*.html
```
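For orientation, a hypothetical `optimization_config.json` fragment showing the general shape of a problem definition (the field names here are illustrative, not a schema reference — consult an existing study's config for the authoritative layout):

```json
{
  "study_name": "example_bracket",
  "objective": {"name": "max_displacement", "direction": "minimize"},
  "constraints": [{"name": "mass", "max": 15.0, "units": "kg"}],
  "design_variables": [
    {"name": "rib_thickness", "type": "float", "low": 2.0, "high": 8.0}
  ],
  "sampler": "TPE",
  "n_trials": 50
}
```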
---
## Quick Start
### Prerequisites
- **Siemens NX 2506+** with NX Nastran solver
- **Python 3.10+** (Anaconda recommended)
- **Atomizer conda environment** (pre-configured)
### Run an Optimization
```bash
# Activate the environment
conda activate atomizer
# Navigate to a study
cd studies/M1_Mirror/m1_mirror_adaptive_V15
# Run optimization (50 FEA trials)
python run_optimization.py --start --trials 50
# Or run with neural turbo mode (5000 GNN trials)
python run_nn_optimization.py --turbo --nn-trials 5000
```
### Monitor Progress
```bash
# Start the dashboard
python launch_dashboard.py
# Or check status from command line
python -c "from optimization_engine.study_state import get_study_status; print(get_study_status('.'))"
```
---
## Optimization Methods
Atomizer supports multiple optimization strategies:
| Method | Use Case | Protocol |
|--------|----------|----------|
| **TPE** | Single-objective, <50 trials | SYS_10 (IMSO) |
| **NSGA-II** | Multi-objective, Pareto optimization | SYS_11 |
| **CMA-ES** | Continuous parameters, >100 trials | SYS_10 |
| **GNN Turbo** | >50 FEA trials available for training | SYS_14 |
| **Hybrid** | Confidence-based FEA/NN switching | SYS_15 |
The **Method Selector** automatically recommends the best approach based on your problem:
```bash
python -m optimization_engine.method_selector config.json study.db
```
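The table's decision logic can be sketched as a simple heuristic (illustrative only — the real Method Selector inspects the config and study database and may weigh additional factors):

```python
def recommend_method(n_objectives: int, n_trials_budget: int,
                     completed_fea_trials: int = 0) -> str:
    """Toy heuristic mirroring the optimization-methods table above."""
    if n_objectives > 1:
        return "NSGA-II"            # Pareto optimization
    if completed_fea_trials > 50:
        return "GNN Turbo"          # enough data to train a surrogate
    if n_trials_budget > 100:
        return "CMA-ES"             # continuous parameters, large budgets
    return "TPE"                    # default for small single-objective runs

print(recommend_method(n_objectives=2, n_trials_budget=50))
```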
---
## Protocol System
Atomizer uses a layered protocol system for consistent operations:
```
Layer 0: Bootstrap  → Task routing, quick reference
Layer 1: Operations → OP_01-06:  Create, Run, Monitor, Analyze, Export, Debug
Layer 2: System     → SYS_10-16: IMSO, Multi-obj, Extractors, Dashboard, Neural, Insights
Layer 3: Extensions → EXT_01-04: Create extractors, hooks, protocols, skills
```
### Key Protocols
| Protocol | Purpose |
|----------|---------|
| **OP_01** | Create new study from description |
| **OP_02** | Run optimization |
| **OP_06** | Troubleshoot issues |
| **SYS_12** | Extractor library reference |
| **SYS_14** | Neural network acceleration |
| **SYS_16** | Study insights |
---
## Development Roadmap
### Current Status (Dec 2025)
- Core FEA optimization engine
- 20+ physics extractors including Zernike OPD
- GNN surrogate for mirror optimization
- React dashboard with live tracking
- Multi-objective Pareto optimization
- Study insights visualization system
### Planned & In Progress
| Feature | Status |
|---------|--------|
| Dynamic response (random vibration, PSD) | Planning |
| Code reorganization (modular structure) | Planning |
| Ensemble uncertainty quantification | Planned |
| Auto-documentation generator | Implemented |
| MCP server integration | Partial |
---
## Project Structure
```
Atomizer/
├── .claude/                 # LLM configuration
│   ├── skills/              # Claude skill definitions
│   └── commands/            # Slash commands
├── optimization_engine/     # Core Python modules
│   ├── extractors/          # Physics extraction (20+ extractors)
│   ├── insights/            # Visualization generators (8 types)
│   ├── gnn/                 # Graph neural network surrogate
│   ├── hooks/               # NX automation hooks
│   ├── validators/          # Config validation
│   └── templates/           # Study templates
├── atomizer-dashboard/      # React frontend + FastAPI backend
├── studies/                 # Optimization studies by geometry
├── docs/                    # Documentation
│   ├── protocols/           # Protocol specifications
│   └── physics/             # Physics domain docs
├── knowledge_base/          # LAC persistent learning
│   └── lac/                 # Session insights, failures, patterns
└── nx_journals/             # NX Open automation scripts
```
---
## Key Principles
1. **Conversation first** - Don't ask users to edit JSON manually
2. **Validate everything** - Catch errors before expensive FEA runs
3. **Explain decisions** - Say why a sampler/method was chosen
4. **Never modify master files** - Copy NX files to study directory
5. **Reuse code** - Check existing extractors before writing new ones
6. **Document proactively** - Update docs after code changes
---
## Documentation
| Document | Purpose |
|----------|---------|
| [CLAUDE.md](CLAUDE.md) | System instructions for Claude |
| [.claude/ATOMIZER_CONTEXT.md](.claude/ATOMIZER_CONTEXT.md) | Session context loader |
| [docs/protocols/](docs/protocols/) | Protocol specifications |
| [docs/physics/](docs/physics/) | Physics domain documentation |
### Physics Documentation
- [ZERNIKE_FUNDAMENTALS.md](docs/physics/ZERNIKE_FUNDAMENTALS.md) - Zernike polynomial basics
- [ZERNIKE_OPD_METHOD.md](docs/physics/ZERNIKE_OPD_METHOD.md) - OPD method for lateral displacement
---
## For AI Assistants
Atomizer is designed for LLM-first interaction. Key resources:
- **[CLAUDE.md](CLAUDE.md)** - System instructions for Claude Code
- **[.claude/skills/](.claude/skills/)** - LLM skill modules
- **[docs/protocols/](docs/protocols/)** - Protocol Operating System
### Knowledge Base (LAC)
The Learning Atomizer Core (`knowledge_base/lac/`) accumulates optimization knowledge:
- `session_insights/` - Learnings from past sessions
- `optimization_memory/` - Optimization outcomes by geometry type
- `playbook.json` - ACE framework knowledge store
For detailed AI interaction guidance, see CLAUDE.md.
---
## Environment
**Critical**: Always use the `atomizer` conda environment:
```bash
conda activate atomizer
```
Python and dependencies are pre-configured. Do not install additional packages.
---
## Support
- **Documentation**: [docs/](docs/)
- **Issue Tracker**: GitHub Issues
- **Email**: antoine@atomaste.com
---
## License
Proprietary - Atomaste 2026
---
*Atomizer: LLM-driven structural optimization for engineering.*