Atomizer Session Context
What is Atomizer?
Atomizer is an LLM-first FEA (Finite Element Analysis) optimization framework. Users describe optimization problems in natural language, and Claude orchestrates the entire workflow: model introspection, config generation, optimization execution, and results analysis.
Philosophy: Talk, don't click. Engineers describe what they want; AI handles the rest.
Session Initialization Checklist
On EVERY new session, perform these steps:
Step 1: Identify Working Directory
If in: c:\Users\Antoine\Atomizer\ → Project root (full capabilities)
If in: c:\Users\Antoine\Atomizer\studies\* → Inside a study (load study context)
If elsewhere: → Limited context (warn user)
Step 2: Detect Study Context
If working directory contains optimization_config.json:
- Read the config to understand the study
- Check `3_results/study.db` for optimization status
- Summarize study state to user
Python utility for study detection:
# Get study state for current directory
python -m optimization_engine.study_state .
# Get all studies in Atomizer
python -c "from optimization_engine.study_state import get_all_studies; from pathlib import Path; [print(f'{s[\"study_name\"]}: {s[\"status\"]}') for s in get_all_studies(Path('.'))]"
Step 3: Route to Task Protocol
Use keyword matching to load appropriate context:
| User Intent | Keywords | Load Protocol | Action |
|---|---|---|---|
| Create study | "create", "new", "set up", "optimize" | OP_01 + SYS_12 | Launch study builder |
| Run optimization | "run", "start", "execute", "trials" | OP_02 + SYS_15 | Execute optimization |
| Check progress | "status", "progress", "how many" | OP_03 | Query study.db |
| Analyze results | "results", "best", "Pareto", "analyze" | OP_04 | Generate analysis |
| Neural acceleration | "neural", "surrogate", "turbo", "NN" | SYS_14 + SYS_15 | Method selection |
| NX/CAD help | "NX", "model", "mesh", "expression" | MCP + nx-docs | Use Siemens MCP |
| Physics insights | "zernike", "stress view", "insight" | SYS_16 | Generate insights |
| Troubleshoot | "error", "failed", "fix", "debug" | OP_06 | Diagnose issues |
Quick Reference
Core Commands
# Optimization workflow
python run_optimization.py --start --trials 50 # Run optimization
python run_optimization.py --start --resume # Continue interrupted run
python run_optimization.py --test # Single trial test
# Neural acceleration
python run_nn_optimization.py --turbo --nn-trials 5000 # Fast NN exploration
python -m optimization_engine.method_selector config.json study.db # Get recommendation
# Dashboard
cd atomizer-dashboard && npm run dev # Start at http://localhost:3003
When to Use --resume
| Scenario | Use --resume? |
|---|---|
| First run of new study | NO |
| First run with seeding (e.g., V15 from V14) | NO - seeding is automatic |
| Continue interrupted run | YES |
| Add more trials to completed study | YES |
Key: --resume continues existing study.db. Seeding from source_studies in config happens automatically on first run - don't confuse seeding with resuming!
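The decision table above can be sketched as a small helper. This is a sketch, not part of the actual CLI — `build_command` is a hypothetical name, and the `3_results/study.db` existence check is an assumed heuristic for "a previous run exists":

```python
from pathlib import Path

def build_command(study_dir: Path, n_trials: int) -> list[str]:
    # Sketch of the --resume rule above: resume only when a previous
    # run already produced a study.db; first runs (even seeded ones)
    # start fresh. Helper name and check are illustrative assumptions.
    cmd = ["python", "run_optimization.py", "--start", "--trials", str(n_trials)]
    if (study_dir / "3_results" / "study.db").exists():
        cmd.append("--resume")
    return cmd
```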
Study Structure (100% standardized)
Studies are organized by geometry type:
studies/
├── M1_Mirror/ # Mirror optimization studies
│ ├── m1_mirror_adaptive_V14/
│ ├── m1_mirror_cost_reduction_V3/
│ └── m1_mirror_cost_reduction_V4/
├── Simple_Bracket/ # Bracket studies
├── UAV_Arm/ # UAV arm studies
├── Drone_Gimbal/ # Gimbal studies
├── Simple_Beam/ # Beam studies
└── _Other/ # Test/experimental studies
Individual study structure:
studies/{geometry_type}/{study_name}/
├── optimization_config.json # Problem definition
├── run_optimization.py # FEA optimization script
├── run_turbo_optimization.py # GNN-Turbo acceleration (optional)
├── README.md # MANDATORY documentation
├── STUDY_REPORT.md # Results template
├── 1_setup/
│ ├── optimization_config.json # Config copy for reference
│ └── model/
│ ├── Model.prt # NX part file
│ ├── Model_sim1.sim # NX simulation
│ └── Model_fem1.fem # FEM definition
├── 2_iterations/ # FEA trial folders (trial_NNNN/)
│ ├── trial_0001/ # Zero-padded, NEVER reset
│ ├── trial_0002/
│ └── ...
├── 3_results/
│ ├── study.db # Optuna-compatible database
│ ├── optimization.log # Logs
│ └── turbo_report.json # NN results (if run)
└── 3_insights/ # Study Insights (SYS_16)
├── zernike_*.html # Zernike WFE visualizations
├── stress_*.html # Stress field visualizations
└── design_space_*.html # Parameter exploration
IMPORTANT: When creating a new study, always place it under the appropriate geometry type folder!
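A quick layout check against the structure above can be sketched as follows. Both `check_study_layout` and the `REQUIRED` subset are illustrative assumptions, not an existing Atomizer utility:

```python
from pathlib import Path

# Minimal subset of required entries from the layout above (illustrative).
REQUIRED = [
    "optimization_config.json",
    "run_optimization.py",
    "README.md",
    "1_setup",
    "2_iterations",
    "3_results",
]

def check_study_layout(study_dir: Path) -> list[str]:
    """Return the required entries missing from a study folder."""
    return [name for name in REQUIRED if not (study_dir / name).exists()]
```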
Available Extractors (SYS_12)
| ID | Physics | Function | Notes |
|---|---|---|---|
| E1 | Displacement | `extract_displacement()` | mm |
| E2 | Frequency | `extract_frequency()` | Hz |
| E3 | Von Mises Stress | `extract_solid_stress()` | Specify `element_type`! |
| E4 | BDF Mass | `extract_mass_from_bdf()` | kg |
| E5 | CAD Mass | `extract_mass_from_expression()` | kg |
| E8-10 | Zernike WFE (standard) | `extract_zernike_*()` | nm (mirrors) |
| E12-14 | Phase 2 | Principal stress, strain energy, SPC forces | |
| E15-18 | Phase 3 | Temperature, heat flux, modal mass | |
| E20 | Zernike Analytic | `extract_zernike_analytic()` | nm (parabola-based) |
| E22 | Zernike OPD | `extract_zernike_opd()` | nm (RECOMMENDED) |
Critical: For stress extraction, specify the element type:
- Shell (CQUAD4): `element_type='cquad4'`
- Solid (CTETRA): `element_type='ctetra'`
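The mapping can be wrapped in a tiny helper. Only the `element_type` values come from the docs above; the helper itself is hypothetical:

```python
# Map a mesh kind to the element_type kwarg used by extract_solid_stress()
# (values from the table above; this helper is a hypothetical convenience).
ELEMENT_TYPES = {"shell": "cquad4", "solid": "ctetra"}

def stress_kwargs(mesh_kind: str) -> dict:
    """Build the element_type kwarg, failing loudly on unknown kinds."""
    try:
        return {"element_type": ELEMENT_TYPES[mesh_kind]}
    except KeyError:
        raise ValueError(f"unknown mesh kind: {mesh_kind!r}")
```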
Protocol System Overview
┌─────────────────────────────────────────────────────────────────┐
│ Layer 0: BOOTSTRAP (.claude/skills/00_BOOTSTRAP.md) │
│ Purpose: Task routing, quick reference │
└─────────────────────────────────────────────────────────────────┘
▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 1: OPERATIONS (docs/protocols/operations/OP_*.md) │
│ OP_01: Create Study OP_02: Run Optimization │
│ OP_03: Monitor OP_04: Analyze Results │
│ OP_05: Export Data OP_06: Troubleshoot │
└─────────────────────────────────────────────────────────────────┘
▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 2: SYSTEM (docs/protocols/system/SYS_*.md) │
│ SYS_10: IMSO (single-obj) SYS_11: Multi-objective │
│ SYS_12: Extractors SYS_13: Dashboard │
│ SYS_14: Neural Accel SYS_15: Method Selector │
│ SYS_16: Study Insights │
└─────────────────────────────────────────────────────────────────┘
▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 3: EXTENSIONS (docs/protocols/extensions/EXT_*.md) │
│ EXT_01: Create Extractor EXT_02: Create Hook │
│ EXT_03: Create Protocol EXT_04: Create Skill │
└─────────────────────────────────────────────────────────────────┘
Subagent Routing
For complex tasks, Claude should spawn specialized subagents:
| Task | Subagent Type | Context to Load |
|---|---|---|
| Create study from description | general-purpose | `core/study-creation-core.md`, SYS_12 |
| Explore codebase | Explore | (built-in) |
| Plan architecture | Plan | (built-in) |
| NX API lookup | general-purpose | Use MCP siemens-docs tools |
Environment Setup
CRITICAL: Always use the atomizer conda environment:
conda activate atomizer
python run_optimization.py
DO NOT:
- Install packages with pip/conda (everything is installed)
- Create new virtual environments
- Use system Python
NX Open Requirements:
- NX 2506 installed at `C:\Program Files\Siemens\NX2506\`
- Use `run_journal.exe` for NX automation
Template Registry
Available study templates for quick creation:
| Template | Objectives | Extractors | Example Study |
|---|---|---|---|
| `multi_objective_structural` | mass, stress, stiffness | E1, E3, E4 | bracket_pareto_3obj |
| `frequency_optimization` | frequency, mass | E2, E4 | uav_arm_optimization |
| `mirror_wavefront` | Zernike RMS | E8-E10 | m1_mirror_zernike |
| `shell_structural` | mass, stress | E1, E3, E4 | beam_pareto_4var |
| `thermal_structural` | temperature, stress | E3, E15 | (template only) |
Python utility for templates:
# List all templates
python -m optimization_engine.templates
# Get template details in code
from optimization_engine.templates import get_template, suggest_template
template = suggest_template(n_objectives=2, physics_type="structural")
Auto-Documentation Protocol
When Claude creates/modifies extractors or protocols:
- Code change → Update `optimization_engine/extractors/__init__.py`
- Doc update → Update `SYS_12_EXTRACTOR_LIBRARY.md`
- Quick ref → Update `.claude/skills/01_CHEATSHEET.md`
- Commit → Use structured message: `feat: Add E{N} {name} extractor`
Key Principles
- Conversation first - Don't ask user to edit JSON manually
- Validate everything - Catch errors before FEA runs
- Explain decisions - Say why you chose a sampler/protocol
- NEVER modify master files - Copy NX files to study directory
- ALWAYS reuse code - Check extractors before writing new code
- Proactive documentation - Update docs after code changes
Base Classes (Phase 2 - Code Deduplication)
New studies should use these base classes instead of duplicating code:
ConfigDrivenRunner (FEA Optimization)
# run_optimization.py - Now just ~30 lines instead of ~300
from optimization_engine.base_runner import ConfigDrivenRunner
runner = ConfigDrivenRunner(__file__)
runner.run() # Handles --discover, --validate, --test, --run
ConfigDrivenSurrogate (Neural Acceleration)
# run_nn_optimization.py - Now just ~30 lines instead of ~600
from optimization_engine.generic_surrogate import ConfigDrivenSurrogate
surrogate = ConfigDrivenSurrogate(__file__)
surrogate.run() # Handles --train, --turbo, --all
Templates: optimization_engine/templates/run_*_template.py
CRITICAL: NXSolver Initialization Pattern
NEVER pass full config dict to NXSolver. This causes TypeError: expected str, bytes or os.PathLike object, not dict.
WRONG
self.nx_solver = NXSolver(self.config) # ❌ NEVER DO THIS
CORRECT - FEARunner Pattern
Always wrap NXSolver in a FEARunner class with explicit parameters:
```python
class FEARunner:
    def __init__(self, config: Dict):
        self.config = config
        self.nx_solver = None
        self.master_model_dir = SETUP_DIR / "model"

    def setup(self):
        import re
        nx_settings = self.config.get('nx_settings', {})
        nx_install_dir = nx_settings.get('nx_install_path',
                                        'C:\\Program Files\\Siemens\\NX2506')
        version_match = re.search(r'NX(\d+)', nx_install_dir)
        nastran_version = version_match.group(1) if version_match else "2506"
        self.nx_solver = NXSolver(
            master_model_dir=str(self.master_model_dir),
            nx_install_dir=nx_install_dir,
            nastran_version=nastran_version,
            timeout=nx_settings.get('simulation_timeout_s', 600),
            use_iteration_folders=True,
            study_name=self.config.get('study_name', 'my_study')
        )

    def run_fea(self, params, iter_num):
        if self.nx_solver is None:
            self.setup()
        # ... run simulation
```
Reference implementations:
- `studies/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/m1_mirror_adaptive_V15/run_optimization.py`
Skill Registry (Phase 3 - Consolidated Skills)
All skills now have YAML frontmatter with metadata for versioning and dependency tracking.
| Skill ID | Name | Type | Version | Location |
|---|---|---|---|---|
| SKILL_000 | Bootstrap | bootstrap | 2.0 | .claude/skills/00_BOOTSTRAP.md |
| SKILL_001 | Cheatsheet | reference | 2.0 | .claude/skills/01_CHEATSHEET.md |
| SKILL_002 | Context Loader | loader | 2.0 | .claude/skills/02_CONTEXT_LOADER.md |
| SKILL_CORE_001 | Study Creation Core | core | 2.4 | .claude/skills/core/study-creation-core.md |
Deprecated Skills
| Old File | Reason | Replacement |
|---|---|---|
| `create-study.md` | Duplicate of core skill | `core/study-creation-core.md` |
Skill Metadata Format
All skills use YAML frontmatter:
```yaml
---
skill_id: SKILL_XXX
version: X.X
last_updated: YYYY-MM-DD
type: bootstrap|reference|loader|core|module
code_dependencies:
  - path/to/code.py
requires_skills:
  - SKILL_YYY
replaces: old-skill.md  # if applicable
---
```
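A minimal reader for the flat frontmatter fields could look like this. It is a sketch only — `parse_frontmatter` is a hypothetical helper, and real skill files (with list fields and nesting) should go through an actual YAML parser:

```python
def parse_frontmatter(text: str) -> dict:
    """Read top-level key: value pairs from a skill file's frontmatter.
    Sketch only -- list fields and nesting need a real YAML parser."""
    head = text.split("---")[1]  # content between the first two --- markers
    meta = {}
    for line in head.strip().splitlines():
        if ":" in line and not line.lstrip().startswith("-"):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta
```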
Subagent Commands (Phase 5 - Specialized Agents)
Atomizer provides specialized subagent commands for complex tasks:
| Command | Purpose | When to Use |
|---|---|---|
| `/study-builder` | Create new optimization studies | "create study", "set up optimization" |
| `/nx-expert` | NX Open API help, model automation | "how to in NX", "update mesh" |
| `/protocol-auditor` | Validate configs and code quality | "validate config", "check study" |
| `/results-analyzer` | Analyze optimization results | "analyze results", "best solution" |
Command Files
.claude/commands/
├── study-builder.md # Create studies from descriptions
├── nx-expert.md # NX Open / Simcenter expertise
├── protocol-auditor.md # Config and code validation
├── results-analyzer.md # Results analysis and reporting
└── dashboard.md # Dashboard control
Subagent Invocation Pattern
```python
# Master agent delegates to specialized subagent
Task(
    subagent_type='general-purpose',
    prompt='''
    Load context from .claude/commands/study-builder.md

    User request: "{user's request}"

    Follow the workflow in the command file.
    ''',
    description='Study builder task'
)
```
Auto-Documentation (Phase 4 - Self-Expanding Knowledge)
Atomizer can auto-generate documentation from code:
# Generate all documentation
python -m optimization_engine.auto_doc all
# Generate only extractor docs
python -m optimization_engine.auto_doc extractors
# Generate only template docs
python -m optimization_engine.auto_doc templates
Generated Files:
- `docs/generated/EXTRACTORS.md` - Full extractor reference (auto-generated)
- `docs/generated/EXTRACTOR_CHEATSHEET.md` - Quick reference table
- `docs/generated/TEMPLATES.md` - Study templates reference
When to Run Auto-Doc:
- After adding a new extractor
- After modifying template registry
- Before major releases
Trial Management System (v2.3)
New unified trial management ensures consistency across all optimization methods:
Key Components
| Component | Path | Purpose |
|---|---|---|
| `TrialManager` | `optimization_engine/utils/trial_manager.py` | Unified trial folder + DB management |
| `DashboardDB` | `optimization_engine/utils/dashboard_db.py` | Optuna-compatible database wrapper |
Trial Naming Convention
2_iterations/
├── trial_0001/ # Zero-padded, monotonically increasing
├── trial_0002/ # NEVER reset, NEVER overwritten
├── trial_0003/
└── ...
Key principles:
- Trial numbers NEVER reset (monotonically increasing)
- Folders NEVER get overwritten
- Database is always in sync with filesystem
- Surrogate predictions (5K) are NOT trials - only FEA results
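The folder-naming rule above can be sketched as follows. This is illustrative — `TrialManager` implements the real logic, and `next_trial_dir` is a hypothetical name:

```python
from pathlib import Path

def next_trial_dir(iterations_dir: Path) -> Path:
    """Next zero-padded trial folder. Numbers only ever grow and an
    existing folder is never reused (sketch of TrialManager's rule)."""
    nums = [
        int(p.name.split("_")[1])
        for p in iterations_dir.glob("trial_*")
        if p.is_dir() and p.name.split("_")[1].isdigit()
    ]
    return iterations_dir / f"trial_{max(nums, default=0) + 1:04d}"
```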
Usage
```python
from optimization_engine.utils.trial_manager import TrialManager

tm = TrialManager(study_dir)

# Start new trial
trial = tm.new_trial(params={'rib_thickness': 10.5})

# After FEA completes
tm.complete_trial(
    trial_number=trial['trial_number'],
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True
)
```
Database Schema (Optuna-Compatible)
The DashboardDB class creates an Optuna-compatible schema for dashboard integration:
- `trials` - Main trial records with state, datetime, value
- `trial_values` - Objective values (supports multiple objectives)
- `trial_params` - Design parameter values
- `trial_user_attributes` - Metadata (source, solve_time, etc.)
- `studies` - Study metadata (directions, name)
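A simplified sketch of that table set with `sqlite3` is shown below. Column lists are abbreviated assumptions, not the exact DashboardDB DDL (the real wrapper matches Optuna's full schema):

```python
import sqlite3

# Abbreviated schema (assumed columns); the real DashboardDB follows
# Optuna's complete DDL.
SCHEMA = """
CREATE TABLE IF NOT EXISTS studies (
    study_id INTEGER PRIMARY KEY, study_name TEXT);
CREATE TABLE IF NOT EXISTS trials (
    trial_id INTEGER PRIMARY KEY, study_id INTEGER, state TEXT,
    datetime_start TEXT, datetime_complete TEXT);
CREATE TABLE IF NOT EXISTS trial_values (
    trial_id INTEGER, objective INTEGER, value REAL);
CREATE TABLE IF NOT EXISTS trial_params (
    trial_id INTEGER, param_name TEXT, param_value REAL);
CREATE TABLE IF NOT EXISTS trial_user_attributes (
    trial_id INTEGER, key TEXT, value_json TEXT);
"""

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create the abbreviated trial tables and return the connection."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```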
Version Info
| Component | Version | Last Updated |
|---|---|---|
| ATOMIZER_CONTEXT | 1.8 | 2025-12-28 |
| BaseOptimizationRunner | 1.0 | 2025-12-07 |
| GenericSurrogate | 1.0 | 2025-12-07 |
| Study State Detector | 1.0 | 2025-12-07 |
| Template Registry | 1.0 | 2025-12-07 |
| Extractor Library | 1.4 | 2025-12-12 |
| Method Selector | 2.1 | 2025-12-07 |
| Protocol System | 2.1 | 2025-12-12 |
| Skill System | 2.1 | 2025-12-12 |
| Auto-Doc Generator | 1.0 | 2025-12-07 |
| Subagent Commands | 1.0 | 2025-12-07 |
| FEARunner Pattern | 1.0 | 2025-12-12 |
| Study Insights | 1.0 | 2025-12-20 |
| TrialManager | 1.0 | 2025-12-28 |
| DashboardDB | 1.0 | 2025-12-28 |
| GNN-Turbo System | 2.3 | 2025-12-28 |
Atomizer: Where engineers talk, AI optimizes.