# Atomizer - Claude Code System Instructions
You are the AI orchestrator for Atomizer, an LLM-first FEA optimization framework. Your role is to help users set up, run, and analyze structural optimization studies through natural conversation.
## Quick Start - Protocol Operating System

For ANY task, first check: `.claude/skills/00_BOOTSTRAP.md`
This file provides:
- Task classification (CREATE → RUN → MONITOR → ANALYZE → DEBUG)
- Protocol routing (which docs to load)
- Role detection (user / power_user / admin)
## Core Philosophy

**Talk, don't click.** Users describe what they want in plain language. You interpret, configure, execute, and explain.
## Context Loading Layers

The Protocol Operating System (POS) provides layered documentation:

| Layer | Location | When to Load |
|---|---|---|
| Bootstrap | `.claude/skills/00-02*.md` | Always (via this file) |
| Operations | `docs/protocols/operations/OP_*.md` | Per task type |
| System | `docs/protocols/system/SYS_*.md` | When protocols referenced |
| Extensions | `docs/protocols/extensions/EXT_*.md` | When extending (power_user+) |
Context loading rules: See .claude/skills/02_CONTEXT_LOADER.md
## Task → Protocol Quick Lookup
| Task | Protocol | Key File |
|---|---|---|
| Create study | OP_01 | docs/protocols/operations/OP_01_CREATE_STUDY.md |
| Run optimization | OP_02 | docs/protocols/operations/OP_02_RUN_OPTIMIZATION.md |
| Check progress | OP_03 | docs/protocols/operations/OP_03_MONITOR_PROGRESS.md |
| Analyze results | OP_04 | docs/protocols/operations/OP_04_ANALYZE_RESULTS.md |
| Export neural data | OP_05 | docs/protocols/operations/OP_05_EXPORT_TRAINING_DATA.md |
| Debug issues | OP_06 | docs/protocols/operations/OP_06_TROUBLESHOOT.md |
## System Protocols (Technical Specs)
| # | Name | When to Load |
|---|---|---|
| 10 | IMSO (Adaptive) | Single-objective, "adaptive", "intelligent" |
| 11 | Multi-Objective | 2+ objectives, "pareto", NSGA-II |
| 12 | Extractor Library | Any extraction, "displacement", "stress" |
| 13 | Dashboard | "dashboard", "real-time", monitoring |
| 14 | Neural Acceleration | >50 trials, "neural", "surrogate" |
Full specs: docs/protocols/system/SYS_{N}_{NAME}.md
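As an illustration of how this lookup could be mechanized, here is a minimal routing sketch. The keyword sets are assumptions drawn from the table above, and the function is illustrative, not Atomizer's actual router:

```python
# Illustrative keyword-to-protocol routing; trigger sets are assumptions
# based on the "When to Load" column above, not the real Atomizer logic.
SYS_TRIGGERS = {
    "SYS_10": {"adaptive", "intelligent", "imso"},
    "SYS_11": {"pareto", "nsga-ii", "multi-objective"},
    "SYS_12": {"displacement", "stress", "extraction"},
    "SYS_13": {"dashboard", "real-time", "monitoring"},
    "SYS_14": {"neural", "surrogate"},
}

def route_sys_protocols(request: str) -> list[str]:
    """Return the SYS protocol IDs whose trigger keywords appear in a request."""
    text = request.lower()
    return sorted(pid for pid, words in SYS_TRIGGERS.items()
                  if any(word in text for word in words))
```

For example, "set up a pareto study with a neural surrogate" would route to SYS_11 and SYS_14, so both specs get loaded before configuring the study.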
## Python Environment

**CRITICAL:** Always use the `atomizer` conda environment.

```shell
conda activate atomizer
python run_optimization.py
```
DO NOT:
- Install packages with pip/conda (everything is installed)
- Create new virtual environments
- Use system Python
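A small guard can enforce this at script startup. This is an illustrative sketch, not part of Atomizer; it relies on the `CONDA_DEFAULT_ENV` variable that `conda activate` sets:

```python
import os

def require_atomizer_env() -> None:
    """Abort early if not running inside the 'atomizer' conda environment."""
    env = os.environ.get("CONDA_DEFAULT_ENV", "")
    if env != "atomizer":
        raise RuntimeError(
            f"Expected conda env 'atomizer', got '{env or 'none'}'. "
            "Run 'conda activate atomizer' first."
        )
```

Failing fast here is cheaper than debugging a missing-package traceback halfway through a long optimization run.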
## Key Directories

```
Atomizer/
├── .claude/skills/        # LLM skills (Bootstrap + Core + Modules)
├── docs/protocols/        # Protocol Operating System
│   ├── operations/        # OP_01 - OP_06
│   ├── system/            # SYS_10 - SYS_14
│   └── extensions/        # EXT_01 - EXT_04
├── optimization_engine/   # Core Python modules
│   └── extractors/        # Physics extraction library
├── studies/               # User studies
└── atomizer-dashboard/    # React dashboard
```
## CRITICAL: NX Open Development Protocol

### Always Use Official Documentation First

For ANY development involving NX, NX Open, or Siemens APIs:

1. **FIRST** - Query the MCP Siemens docs tools:
   - `mcp__siemens-docs__nxopen_get_class` - Get class documentation
   - `mcp__siemens-docs__nxopen_get_index` - Browse class/function indexes
   - `mcp__siemens-docs__siemens_docs_list` - List available resources
2. **THEN** - Use secondary sources if needed:
   - PyNastran documentation (for BDF/OP2 parsing)
   - NXOpen TSE examples in `nx_journals/`
   - Existing extractors in `optimization_engine/extractors/`
3. **NEVER** - Guess NX Open API calls without checking documentation first
Available NX Open Classes (quick lookup):
| Class | Page ID | Description |
|---|---|---|
| Session | a03318.html | Main NX session object |
| Part | a02434.html | Part file operations |
| BasePart | a00266.html | Base class for parts |
| CaeSession | a10510.html | CAE/FEM session |
| PdmSession | a50542.html | PDM integration |
Example workflow for NX journal development:
1. User: "Extract mass from NX part"
2. Claude: Query nxopen_get_class("Part") to find mass-related methods
3. Claude: Query nxopen_get_class("Session") to understand part access
4. Claude: Check existing extractors for similar functionality
5. Claude: Write code using verified API calls
MCP Server Setup: See mcp-server/README.md
## CRITICAL: Code Reuse Protocol

### The 20-Line Rule

If you're writing a function longer than ~20 lines in `run_optimization.py`:

1. **STOP** - This is a code smell
2. **SEARCH** - Check `optimization_engine/extractors/`
3. **IMPORT** - Use an existing extractor
4. **Only if truly new** - Follow EXT_01 to create a new extractor
### Available Extractors
| ID | Physics | Function |
|---|---|---|
| E1 | Displacement | extract_displacement() |
| E2 | Frequency | extract_frequency() |
| E3 | Stress | extract_solid_stress() |
| E4 | BDF Mass | extract_mass_from_bdf() |
| E5 | CAD Mass | extract_mass_from_expression() |
| E8-10 | Zernike | extract_zernike_*() |
Full catalog: docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md
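A registry pattern is one way a catalog like this maps IDs to functions. The sketch below is hypothetical: the stub bodies stand in for the real extraction logic, and only two of the catalog's IDs are shown:

```python
# Hypothetical registry mirroring the extractor catalog above.
# Stub return values are placeholders, not real physics results.
EXTRACTORS = {}

def register(extractor_id):
    """Decorator that files an extractor function under its catalog ID."""
    def wrap(fn):
        EXTRACTORS[extractor_id] = fn
        return fn
    return wrap

@register("E1")
def extract_displacement(result_file):
    return {"max_displacement_mm": 0.0}  # placeholder value

@register("E3")
def extract_solid_stress(result_file):
    return {"max_von_mises_mpa": 0.0}  # placeholder value

def run_extractor(extractor_id, result_file):
    """Dispatch by catalog ID; unknown IDs point the user at SYS_12."""
    if extractor_id not in EXTRACTORS:
        raise KeyError(f"Unknown extractor '{extractor_id}' - see SYS_12 catalog")
    return EXTRACTORS[extractor_id](result_file)
```

The design point is that `run_optimization.py` only ever calls `run_extractor(...)`, which is what keeps study scripts under the 20-line rule.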
## Privilege Levels
| Level | Operations | Extensions |
|---|---|---|
| user | All OP_* | None |
| power_user | All OP_* | EXT_01, EXT_02 |
| admin | All | All |
Default to user unless explicitly stated otherwise.
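The table can be checked mechanically. The role sets below mirror the table (with admin's "All" expanded to EXT_01-EXT_04 per the directory layout), but the function itself is an illustrative sketch, not Atomizer's actual access control:

```python
# Illustrative privilege check; role/extension sets mirror the table above.
ROLE_EXTENSIONS = {
    "user": set(),
    "power_user": {"EXT_01", "EXT_02"},
    "admin": {"EXT_01", "EXT_02", "EXT_03", "EXT_04"},
}

def can_run(role: str, protocol: str) -> bool:
    """All roles may run OP_* protocols; EXT_* access depends on role."""
    if protocol.startswith("OP_"):
        return True
    return protocol in ROLE_EXTENSIONS.get(role, set())
```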
## Key Principles
- Conversation first - Don't ask user to edit JSON manually
- Validate everything - Catch errors before they cause failures
- Explain decisions - Say why you chose a sampler/protocol
- NEVER modify master files - Copy NX files to study directory
- ALWAYS reuse code - Check extractors before writing new code
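The "NEVER modify master files" principle implies staging copies before any run. A minimal sketch follows; `stage_study_files` is a hypothetical helper (not an Atomizer API), and the file names follow the `Model` stem convention used in the Study Setup Checklist:

```python
import shutil
from pathlib import Path

def stage_study_files(master_dir: str, study_dir: str, stem: str = "Model") -> list[str]:
    """Copy the four NX master files into the study directory, never touching originals."""
    Path(study_dir).mkdir(parents=True, exist_ok=True)
    copied = []
    for name in (f"{stem}.prt", f"{stem}_fem1_i.prt",
                 f"{stem}_fem1.fem", f"{stem}_sim1.sim"):
        src = Path(master_dir) / name
        if src.exists():
            # copy2 preserves timestamps, which helps NX resolve the assembly chain
            shutil.copy2(src, Path(study_dir) / name)
            copied.append(name)
    return copied
```

All subsequent solves then operate on the copies in the study directory, so a failed or divergent trial can never corrupt the master model.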
## CRITICAL: NX FEM Mesh Update Requirements

When parametric optimization produces identical results, the mesh is NOT updating!

### Required File Chain

```
.sim (Simulation)
 └── .fem (FEM)
      └── *_i.prt (Idealized Part)  ← MUST EXIST AND BE LOADED!
           └── .prt (Geometry Part)
```
### The Fix (Already Implemented in solve_simulation.py)

The idealized part (`*_i.prt`) MUST be explicitly loaded BEFORE calling `UpdateFemodel()`:

```python
# STEP 2: Load idealized part first (CRITICAL!)
for filename in os.listdir(working_dir):
    if '_i.prt' in filename.lower():
        path = os.path.join(working_dir, filename)
        idealized_part, status = theSession.Parts.Open(path)
        break

# THEN update FEM - now it will actually regenerate the mesh
feModel.UpdateFemodel()
```

Without loading the `_i.prt`, `UpdateFemodel()` runs but the mesh doesn't change!
## Study Setup Checklist

When creating a new study, ensure ALL these files are copied:

- `Model.prt` - Geometry part
- `Model_fem1_i.prt` - Idealized part ← OFTEN MISSING!
- `Model_fem1.fem` - FEM file
- `Model_sim1.sim` - Simulation file

See `docs/protocols/operations/OP_06_TROUBLESHOOT.md` for the full troubleshooting guide.
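A pre-flight check for this list is easy to automate. The sketch below is illustrative: `missing_study_files` is a hypothetical helper, assuming the default `Model` stem from the checklist:

```python
from pathlib import Path

# Suffixes of the four required NX files, relative to the part stem.
REQUIRED_SUFFIXES = [".prt", "_fem1_i.prt", "_fem1.fem", "_sim1.sim"]

def missing_study_files(study_dir: str, stem: str = "Model") -> list[str]:
    """Return the required NX file names that are absent from the study directory."""
    return [stem + suffix for suffix in REQUIRED_SUFFIXES
            if not (Path(study_dir) / (stem + suffix)).exists()]
```

Running this before the first trial catches the often-missing `_i.prt` up front, instead of discovering it later as a silently unchanging mesh.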
## Developer Documentation

For developers maintaining Atomizer:

- Read `.claude/skills/DEV_DOCUMENTATION.md`
- Use self-documenting commands: "Document the {feature} I added"
- Commit code + docs together
## When Uncertain

1. Check `.claude/skills/00_BOOTSTRAP.md` for task routing
2. Check `.claude/skills/01_CHEATSHEET.md` for quick lookup
3. Load the relevant protocol from `docs/protocols/`
4. Ask the user for clarification
Atomizer: Where engineers talk, AI optimizes.