Atomizer

LLM-driven structural optimization framework for Siemens NX with neural network acceleration.



What is Atomizer?

Atomizer is an LLM-first optimization framework that transforms how engineers interact with FEA optimization. Instead of manually configuring JSON files and writing extraction scripts, you describe what you want in natural language - and Atomizer handles the rest.

Engineer: "Optimize the M1 mirror support structure to minimize wavefront error
           across elevation angles 20-90 degrees. Keep mass under 15kg."

Atomizer: Creates study, configures extractors, runs optimization, reports results.

Core Capabilities

| Capability | Description |
|---|---|
| LLM-Driven Workflow | Describe optimizations in plain English. Claude interprets, configures, and executes. |
| Neural Acceleration | GNN surrogates achieve 2,000-500,000x speedup over FEA (4.5 ms vs 10-30 min) |
| Physics Insights | Real-time Zernike wavefront error, stress fields, modal analysis visualizations |
| Multi-Objective | Pareto optimization with NSGA-II, interactive parallel coordinates plots |
| NX Integration | Seamless journal-based control of Siemens NX Simcenter |
| Extensible | Plugin system with hooks for pre/post mesh, solve, and extraction phases |

Architecture Overview

                        ┌─────────────────────────────────┐
                        │      LLM Interface Layer        │
                        │   Claude Code + Natural Lang    │
                        └───────────────┬─────────────────┘
                                        │
              ┌─────────────────────────┼─────────────────────────┐
              │                         │                         │
              ▼                         ▼                         ▼
    ┌─────────────────┐     ┌─────────────────────┐     ┌─────────────────┐
    │   Traditional   │     │   Neural Path       │     │   Dashboard     │
    │   FEA Path      │     │   (GNN Surrogate)   │     │   (React)       │
    │   ~10-30 min    │     │   ~4.5 ms           │     │   Real-time     │
    └────────┬────────┘     └──────────┬──────────┘     └────────┬────────┘
             │                         │                         │
             └─────────────────────────┼─────────────────────────┘
                                       │
                        ┌──────────────┴──────────────┐
                        │   Extractors & Insights     │
                        │   20+ physics extractors    │
                        │   8 visualization types     │
                        └─────────────────────────────┘

Key Features

1. Physics Extractors (20+)

Atomizer includes a comprehensive library of validated physics extractors:

| Category | Extractors | Notes |
|---|---|---|
| Displacement | extract_displacement() | mm, nodal |
| Stress | extract_von_mises_stress(), extract_principal_stress() | Shell (CQUAD4) & Solid (CTETRA) |
| Modal | extract_frequency(), extract_modal_mass() | Hz, kg |
| Mass | extract_mass_from_bdf(), extract_mass_from_expression() | kg |
| Thermal | extract_temperature() | K |
| Energy | extract_strain_energy() | J |
| Optics | extract_zernike_*() (Standard, Analytic, OPD) | nm RMS |

Zernike OPD Method: The recommended extractor for mirror optimization. Correctly accounts for lateral displacement when computing wavefront error - critical for tilted mirror analysis.
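Why lateral displacement matters: on a curved or tilted surface, a node that shifts sideways samples a different point of the mirror's sag, so a dz-only model misstates the reflected wavefront. A minimal sketch of the idea, assuming a spherical sag z(r) = r^2/(2R); the function name and formula are illustrative, not Atomizer's extractor API:

```python
import numpy as np

def opd_from_displacement(x, y, dx, dy, dz, R):
    """Approximate reflected-wavefront OPD (same units as the inputs)
    for a mirror with spherical sag z(r) = r^2 / (2R).

    Projecting the lateral motion (dx, dy) onto the local surface slope
    captures the sag change that a purely axial (dz-only) model misses.
    The factor 2 accounts for reflection (double pass).
    """
    # Local surface slopes dz/dx, dz/dy for the spherical sag
    slope_x = x / R
    slope_y = y / R
    # Effective normal surface motion: axial term minus slope-projected
    # lateral term
    w = dz - slope_x * dx - slope_y * dy
    return 2.0 * w

# A rigid lateral shift of a nearly flat mirror (R very large) produces
# essentially zero OPD, as it should
x = np.linspace(-0.1, 0.1, 5)
y = np.zeros_like(x)
opd = opd_from_displacement(x, y, dx=1e-3, dy=0.0, dz=0.0, R=1e9)
print(np.allclose(opd, 0.0, atol=1e-10))  # True
```

The same lateral shift applied to a strongly curved mirror (small R) yields a nonzero, tilt-like OPD, which is exactly the effect the OPD method is meant to capture for tilted-mirror analysis.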

2. Study Insights (8 Types)

Interactive physics visualizations generated on-demand:

| Insight | Purpose |
|---|---|
| zernike_wfe | Wavefront error decomposition with Zernike coefficients |
| zernike_opd_comparison | Compare Standard vs OPD methods across subcases |
| msf_zernike | Mid-spatial frequency analysis |
| stress_field | 3D stress field visualization |
| modal_analysis | Mode shapes and frequencies |
| thermal_field | Temperature distribution |
| design_space | Parameter sensitivity exploration |

3. Neural Network Acceleration

The GNN surrogate system (optimization_engine/gnn/) provides:

  • PolarMirrorGraph: Fixed 3000-node polar grid for consistent predictions
  • ZernikeGNN: Design-conditioned graph convolutions
  • Differentiable Zernike fitting: GPU-accelerated coefficient computation
  • Hybrid optimization: Automatic switching between FEA and NN based on confidence

Performance: ~4.5 ms per prediction vs 10-30 minutes for FEA (2,000x+ speedup)
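The differentiable Zernike fitting step is, at its core, a linear least-squares solve over a Zernike design matrix. A minimal NumPy sketch, using only the first four (unnormalized) modes for brevity; function names are illustrative, and the real GPU implementation lives in optimization_engine/gnn/:

```python
import numpy as np

def zernike_basis(x, y):
    """First four Zernike-like modes on the unit disk (piston, tip,
    tilt, defocus).  Columns form the design matrix of the fit."""
    r2 = x**2 + y**2
    return np.column_stack([
        np.ones_like(x),   # piston
        x,                 # tilt about y
        y,                 # tilt about x
        2.0 * r2 - 1.0,    # defocus
    ])

def fit_zernike(x, y, wfe):
    """Least-squares Zernike coefficients.  The same solve, done with a
    differentiable routine (e.g. torch.linalg.lstsq), is what lets the
    coefficient computation be backpropagated through."""
    A = zernike_basis(x, y)
    coeffs, *_ = np.linalg.lstsq(A, wfe, rcond=None)
    return coeffs

# Recover a known defocus term from synthetic wavefront data
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = rng.uniform(-1, 1, 500)
wfe = 0.25 * (2.0 * (x**2 + y**2) - 1.0)  # pure defocus, coefficient 0.25
print(np.round(fit_zernike(x, y, wfe), 6))  # coefficients ~ [0, 0, 0, 0.25]
```

Because the fit is linear in the coefficients, it is cheap, exact on noise-free data, and trivially differentiable with respect to the wavefront samples.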

4. Real-Time Dashboard

React-based monitoring with:

  • Live trial progress tracking
  • Pareto front visualization
  • Parallel coordinates for multi-objective analysis
  • Insights tab for physics visualizations
  • Interactive Zernike decomposition with OPD/Standard toggle
# Start the dashboard
python launch_dashboard.py
# Opens at http://localhost:3003

Current Studies

Studies are organized by geometry type:

studies/
├── M1_Mirror/              # Telescope primary mirror optimization
│   ├── m1_mirror_adaptive_V15/    # Latest: Zernike OPD + GNN turbo
│   └── m1_mirror_cost_reduction_V12/
├── Simple_Bracket/         # Structural bracket studies
├── UAV_Arm/               # UAV arm frequency optimization
├── Drone_Gimbal/          # Gimbal assembly
├── Simple_Beam/           # Beam topology studies
└── _Other/                # Experimental

Study Structure

Each study follows a standardized structure:

study_name/
├── optimization_config.json    # Problem definition
├── run_optimization.py         # FEA optimization script
├── run_nn_optimization.py      # Neural turbo mode (optional)
├── README.md                   # Study documentation
├── 1_setup/
│   └── model/                  # NX part, sim, fem files
├── 2_iterations/               # Trial folders (iter1, iter2, ...)
├── 3_results/
│   ├── study.db               # Optuna database
│   └── optimization.log       # Execution logs
└── 3_insights/                # Generated visualizations
    └── zernike_*.html
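The skeleton above can be created in a few lines. This scaffold_study helper is an illustrative sketch only, not a command Atomizer ships (studies are normally created through the OP_01 protocol, which also fills in the config):

```python
from pathlib import Path

# Folders from the standardized study layout above
STUDY_DIRS = [
    "1_setup/model",
    "2_iterations",
    "3_results",
    "3_insights",
]

def scaffold_study(root: str) -> Path:
    """Create an empty study skeleton under `root` (illustrative only)."""
    study = Path(root)
    for d in STUDY_DIRS:
        (study / d).mkdir(parents=True, exist_ok=True)
    # Placeholder problem definition; in practice OP_01 generates this
    # from the natural-language description
    (study / "optimization_config.json").write_text("{}\n")
    return study
```

Usage would look like `scaffold_study("studies/Simple_Bracket/my_new_study")` (a hypothetical path); NX part, sim, and fem files are then copied into 1_setup/model/, never edited in place.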

Quick Start

Prerequisites

  • Siemens NX 2506+ with NX Nastran solver
  • Python 3.10+ (Anaconda recommended)
  • Atomizer conda environment (pre-configured)

Run an Optimization

# Activate the environment
conda activate atomizer

# Navigate to a study
cd studies/M1_Mirror/m1_mirror_adaptive_V15

# Run optimization (50 FEA trials)
python run_optimization.py --start --trials 50

# Or run with neural turbo mode (5000 GNN trials)
python run_nn_optimization.py --turbo --nn-trials 5000

Monitor Progress

# Start the dashboard
python launch_dashboard.py

# Or check status from command line
python -c "from optimization_engine.study_state import get_study_status; print(get_study_status('.'))"

Optimization Methods

Atomizer supports multiple optimization strategies:

| Method | Use Case | Protocol |
|---|---|---|
| TPE | Single-objective, <50 trials | SYS_10 (IMSO) |
| NSGA-II | Multi-objective, Pareto optimization | SYS_11 |
| CMA-ES | Continuous parameters, >100 trials | SYS_10 |
| GNN Turbo | >50 FEA trials available for training | SYS_14 |
| Hybrid | Confidence-based FEA/NN switching | SYS_15 |

The Method Selector automatically recommends the best approach based on your problem:

python -m optimization_engine.method_selector config.json study.db
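A simplified version of that selection logic might look like the sketch below. The thresholds mirror the table above, but they are illustrative; the shipped Method Selector may weigh additional signals such as parameter count, objective noise, and solver cost:

```python
def recommend_method(n_objectives: int, trial_budget: int,
                     all_continuous: bool, fea_trials_done: int) -> str:
    """Heuristic method recommendation mirroring the methods table.
    Thresholds are illustrative, not the actual selector's rules."""
    if fea_trials_done > 50:
        # Enough FEA history to train a GNN surrogate
        return "GNN Turbo"
    if n_objectives > 1:
        return "NSGA-II"
    if all_continuous and trial_budget > 100:
        return "CMA-ES"
    return "TPE"

print(recommend_method(1, 40, False, 0))   # TPE
print(recommend_method(2, 200, True, 10))  # NSGA-II
```

The ordering matters: surrogate availability is checked first because a trained GNN dominates on cost, and multi-objective problems are routed to NSGA-II before the continuous-parameter check.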

Protocol System

Atomizer uses a layered protocol system for consistent operations:

Layer 0: Bootstrap    → Task routing, quick reference
Layer 1: Operations   → OP_01-06: Create, Run, Monitor, Analyze, Export, Debug
Layer 2: System       → SYS_10-16: IMSO, Multi-obj, Extractors, Dashboard, Neural, Insights
Layer 3: Extensions   → EXT_01-04: Create extractors, hooks, protocols, skills

Key Protocols

| Protocol | Purpose |
|---|---|
| OP_01 | Create new study from description |
| OP_02 | Run optimization |
| OP_06 | Troubleshoot issues |
| SYS_12 | Extractor library reference |
| SYS_14 | Neural network acceleration |
| SYS_16 | Study insights |

Development Roadmap

Current Status (Dec 2025)

  • Core FEA optimization engine
  • 20+ physics extractors including Zernike OPD
  • GNN surrogate for mirror optimization
  • React dashboard with live tracking
  • Multi-objective Pareto optimization
  • Study insights visualization system

Planned

| Feature | Status |
|---|---|
| Dynamic response (random vibration, PSD) | Planning |
| Code reorganization (modular structure) | Planning |
| Ensemble uncertainty quantification | Planned |
| Auto-documentation generator | Implemented |
| MCP server integration | Partial |

Project Structure

Atomizer/
├── .claude/                 # LLM configuration
│   ├── skills/             # Claude skill definitions
│   └── commands/           # Slash commands
├── optimization_engine/     # Core Python modules
│   ├── extractors/         # Physics extraction (20+ extractors)
│   ├── insights/           # Visualization generators (8 types)
│   ├── gnn/               # Graph neural network surrogate
│   ├── hooks/             # NX automation hooks
│   ├── validators/        # Config validation
│   └── templates/         # Study templates
├── atomizer-dashboard/      # React frontend + FastAPI backend
├── studies/                 # Optimization studies by geometry
├── docs/                    # Documentation
│   ├── protocols/          # Protocol specifications
│   └── physics/            # Physics domain docs
├── knowledge_base/          # LAC persistent learning
│   └── lac/               # Session insights, failures, patterns
└── nx_journals/            # NX Open automation scripts

Key Principles

  1. Conversation first - Don't ask users to edit JSON manually
  2. Validate everything - Catch errors before expensive FEA runs
  3. Explain decisions - Say why a sampler/method was chosen
  4. Never modify master files - Copy NX files to study directory
  5. Reuse code - Check existing extractors before writing new ones
  6. Document proactively - Update docs after code changes
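Principle 2 in practice: a cheap pre-flight check of the kind Atomizer's validators run before an expensive FEA launch. The required keys and rules below are illustrative, not the project's actual schema:

```python
def validate_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means
    the config passed.  Keys and rules are illustrative only."""
    errors = []
    # Required top-level sections (hypothetical schema)
    for key in ("parameters", "objectives"):
        if key not in cfg:
            errors.append(f"missing required key: {key}")
    # Parameter bounds must be a non-empty interval
    for p in cfg.get("parameters", []):
        lo, hi = p.get("low"), p.get("high")
        if lo is not None and hi is not None and lo >= hi:
            errors.append(f"parameter {p.get('name')}: low >= high")
    return errors

cfg = {"parameters": [{"name": "rib_thickness", "low": 5.0, "high": 2.0}]}
for problem in validate_config(cfg):
    print(problem)
```

Returning all problems at once, rather than raising on the first, lets the LLM layer report every issue to the engineer in a single round trip.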

Documentation

| Document | Purpose |
|---|---|
| CLAUDE.md | System instructions for Claude |
| .claude/ATOMIZER_CONTEXT.md | Session context loader |
| docs/protocols/ | Protocol specifications |
| docs/physics/ | Physics domain documentation |

Physics Documentation


For AI Assistants

Atomizer is designed for LLM-first interaction. Key resources:

Knowledge Base (LAC)

The Learning Atomizer Core (knowledge_base/lac/) accumulates optimization knowledge:

  • session_insights/ - Learnings from past sessions
  • optimization_memory/ - Optimization outcomes by geometry type
  • playbook.json - ACE framework knowledge store

For detailed AI interaction guidance, see CLAUDE.md.


Environment

Critical: Always use the atomizer conda environment:

conda activate atomizer

Python and dependencies are pre-configured. Do not install additional packages.


Support


License

Proprietary - Atomaste 2026


Atomizer: LLM-driven structural optimization for engineering.
