Atomizer/.claude/ATOMIZER_CONTEXT.md
Anto01 ea437d360e docs: Major documentation overhaul - restructure folders, update tagline, add Getting Started guide
- Restructure docs/ folder (remove numeric prefixes):
  - 04_USER_GUIDES -> guides/
  - 05_API_REFERENCE -> api/
  - 06_PHYSICS -> physics/
  - 07_DEVELOPMENT -> development/
  - 08_ARCHIVE -> archive/
  - 09_DIAGRAMS -> diagrams/

- Replace tagline 'Talk, don't click' with 'LLM-driven optimization framework' in 9 files

- Create comprehensive docs/GETTING_STARTED.md:
  - Prerequisites and quick setup
  - Project structure overview
  - First study tutorial (Claude or manual)
  - Dashboard usage guide
  - Neural acceleration introduction

- Rewrite docs/00_INDEX.md with correct paths and modern structure

- Archive obsolete files:
  - 01_PROTOCOLS.md -> archive/historical/01_PROTOCOLS_legacy.md
  - 03_GETTING_STARTED.md -> archive/historical/
  - ATOMIZER_PODCAST_BRIEFING.md -> archive/marketing/

- Update timestamps to 2026-01-20 across all key files

- Update .gitignore to exclude docs/generated/

- Version bump: ATOMIZER_CONTEXT v1.8 -> v2.0
2026-01-20 10:03:45 -05:00


# Atomizer Session Context
<!--
ATOMIZER CONTEXT LOADER v1.0
This file is the SINGLE SOURCE OF TRUTH for new Claude sessions.
Load this FIRST on every new session, then route to specific protocols.
-->
## What is Atomizer?
**Atomizer** is an LLM-first FEA (Finite Element Analysis) optimization framework. Users describe optimization problems in natural language, and Claude orchestrates the entire workflow: model introspection, config generation, optimization execution, and results analysis.
**Philosophy**: LLM-driven optimization. Engineers describe what they want; AI handles the rest.
---
## Session Initialization Checklist
On EVERY new session, perform these steps:
### Step 1: Identify Working Directory
```
If in: c:\Users\Antoine\Atomizer\ → Project root (full capabilities)
If in: c:\Users\Antoine\Atomizer\studies\* → Inside a study (load study context)
If elsewhere: → Limited context (warn user)
```
### Step 2: Detect Study Context
If working directory contains `optimization_config.json`:
1. Read the config to understand the study
2. Check `3_results/study.db` for optimization status
3. Summarize study state to user
**Python utility for study detection**:
```bash
# Get study state for current directory
python -m optimization_engine.study_state .
# Get all studies in Atomizer
python -c "from optimization_engine.study_state import get_all_studies; from pathlib import Path; [print(f'{s[\"study_name\"]}: {s[\"status\"]}') for s in get_all_studies(Path('.'))]"
```
### Step 3: Route to Task Protocol
Use keyword matching to load appropriate context:
| User Intent | Keywords | Load Protocol | Action |
|-------------|----------|---------------|--------|
| Create study | "create", "new", "set up", "optimize" | OP_01 + SYS_12 | Launch study builder |
| Run optimization | "run", "start", "execute", "trials" | OP_02 + SYS_15 | Execute optimization |
| Check progress | "status", "progress", "how many" | OP_03 | Query study.db |
| Analyze results | "results", "best", "Pareto", "analyze" | OP_04 | Generate analysis |
| Neural acceleration | "neural", "surrogate", "turbo", "NN", "SAT" | SYS_14 + SYS_16 | Method selection |
| NX/CAD help | "NX", "model", "mesh", "expression" | MCP + nx-docs | Use Siemens MCP |
| Physics insights | "zernike", "stress view", "insight" | SYS_16 | Generate insights |
| Troubleshoot | "error", "failed", "fix", "debug" | OP_06 | Diagnose issues |
---
## Quick Reference
### Core Commands
```bash
# Optimization workflow
python run_optimization.py --start --trials 50 # Run optimization
python run_optimization.py --start --resume # Continue interrupted run
python run_optimization.py --test # Single trial test
# Neural acceleration
python run_nn_optimization.py --turbo --nn-trials 5000 # Fast NN exploration
python -m optimization_engine.method_selector config.json study.db # Get recommendation
# Dashboard
cd atomizer-dashboard && npm run dev # Start at http://localhost:3003
```
### When to Use --resume
| Scenario | Use --resume? |
|----------|---------------|
| First run of new study | NO |
| First run with seeding (e.g., V15 from V14) | NO - seeding is automatic |
| Continue interrupted run | YES |
| Add more trials to completed study | YES |
**Key**: `--resume` continues existing `study.db`. Seeding from `source_studies` in config happens automatically on first run - don't confuse seeding with resuming!
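For illustration, a minimal config fragment that seeds a new study from an earlier one might look like the following (the `source_studies` and `study_name` keys appear elsewhere in this document; the full config schema is not shown here):

```json
{
  "study_name": "m1_mirror_adaptive_V15",
  "source_studies": [
    "studies/M1_Mirror/m1_mirror_adaptive_V14"
  ]
}
```

On the first run, trials from the listed studies are used as seeds automatically; `--resume` remains only for continuing this study's own `study.db`.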
### Study Structure (100% standardized)
**Studies are organized by geometry type**:
```
studies/
├── M1_Mirror/ # Mirror optimization studies
│ ├── m1_mirror_adaptive_V14/
│ ├── m1_mirror_cost_reduction_V3/
│ └── m1_mirror_cost_reduction_V4/
├── Simple_Bracket/ # Bracket studies
├── UAV_Arm/ # UAV arm studies
├── Drone_Gimbal/ # Gimbal studies
├── Simple_Beam/ # Beam studies
└── _Other/ # Test/experimental studies
```
**Individual study structure**:
```
studies/{geometry_type}/{study_name}/
├── optimization_config.json      # Problem definition
├── run_optimization.py           # FEA optimization script
├── run_turbo_optimization.py     # GNN-Turbo acceleration (optional)
├── README.md                     # MANDATORY documentation
├── STUDY_REPORT.md               # Results template
├── 1_setup/
│   ├── optimization_config.json  # Config copy for reference
│   └── model/
│       ├── Model.prt             # NX part file
│       ├── Model_sim1.sim        # NX simulation
│       └── Model_fem1.fem        # FEM definition
├── 2_iterations/                 # FEA trial folders (trial_NNNN/)
│   ├── trial_0001/               # Zero-padded, NEVER reset
│   ├── trial_0002/
│   └── ...
├── 3_results/
│   ├── study.db                  # Optuna-compatible database
│   ├── optimization.log          # Logs
│   └── turbo_report.json         # NN results (if run)
└── 3_insights/                   # Study Insights (SYS_16)
    ├── zernike_*.html            # Zernike WFE visualizations
    ├── stress_*.html             # Stress field visualizations
    └── design_space_*.html       # Parameter exploration
```
**IMPORTANT**: When creating a new study, always place it under the appropriate geometry type folder!
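The layout above can be scaffolded in a few lines. This is an illustrative sketch (Atomizer does not necessarily ship such a helper); the folder names are taken from the structure shown:

```python
from pathlib import Path

# Standard study skeleton, per the structure above
# (hypothetical helper, not an Atomizer API).
SUBDIRS = ["1_setup/model", "2_iterations", "3_results", "3_insights"]

def scaffold_study(studies_root: Path, geometry_type: str, study_name: str) -> Path:
    """Create the standard folder layout for a new study under its geometry type."""
    study_dir = studies_root / geometry_type / study_name
    for sub in SUBDIRS:
        (study_dir / sub).mkdir(parents=True, exist_ok=True)
    # MANDATORY documentation stub
    (study_dir / "README.md").touch()
    return study_dir
```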
### Available Extractors (SYS_12)
| ID | Physics | Function | Notes |
|----|---------|----------|-------|
| E1 | Displacement | `extract_displacement()` | mm |
| E2 | Frequency | `extract_frequency()` | Hz |
| E3 | Von Mises Stress | `extract_solid_stress()` | **Specify element_type!** |
| E4 | BDF Mass | `extract_mass_from_bdf()` | kg |
| E5 | CAD Mass | `extract_mass_from_expression()` | kg |
| E8-10 | Zernike WFE (standard) | `extract_zernike_*()` | nm (mirrors) |
| E12-14 | Phase 2 | Principal stress, strain energy, SPC forces | |
| E15-18 | Phase 3 | Temperature, heat flux, modal mass | |
| E20 | Zernike Analytic | `extract_zernike_analytic()` | nm (parabola-based) |
| E22 | **Zernike OPD** | `extract_zernike_opd()` | nm (**RECOMMENDED**) |
**Critical**: For stress extraction, specify element type:
- Shell (CQUAD4): `element_type='cquad4'`
- Solid (CTETRA): `element_type='ctetra'`
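A small guard function makes the rule explicit. This is an illustrative sketch, not part of the SYS_12 extractor library itself:

```python
# Hypothetical validation helper illustrating the element_type rule above;
# extract_solid_stress() itself belongs to Atomizer's extractor library.
VALID_ELEMENT_TYPES = {
    'cquad4',  # shell elements
    'ctetra',  # solid elements
}

def check_element_type(element_type: str) -> str:
    """Normalize and validate the element_type argument for stress extraction."""
    et = element_type.lower()
    if et not in VALID_ELEMENT_TYPES:
        raise ValueError(
            f"Unknown element_type '{element_type}'; "
            f"expected one of {sorted(VALID_ELEMENT_TYPES)}"
        )
    return et
```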
---
## Protocol System Overview
```
┌────────────────────────────────────────────────────────────┐
│ Layer 0: BOOTSTRAP (.claude/skills/00_BOOTSTRAP.md)        │
│ Purpose: Task routing, quick reference                     │
└────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────┐
│ Layer 1: OPERATIONS (docs/protocols/operations/OP_*.md)    │
│ OP_01: Create Study         OP_02: Run Optimization        │
│ OP_03: Monitor              OP_04: Analyze Results         │
│ OP_05: Export Data          OP_06: Troubleshoot            │
└────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────┐
│ Layer 2: SYSTEM (docs/protocols/system/SYS_*.md)           │
│ SYS_10: IMSO (single-obj)   SYS_11: Multi-objective        │
│ SYS_12: Extractors          SYS_13: Dashboard              │
│ SYS_14: Neural Accel        SYS_15: Method Selector        │
│ SYS_16: SAT (Self-Aware Turbo) - VALIDATED v3, WS=205.58   │
│ SYS_17: Context Engineering                                │
└────────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────────┐
│ Layer 3: EXTENSIONS (docs/protocols/extensions/EXT_*.md)   │
│ EXT_01: Create Extractor    EXT_02: Create Hook            │
│ EXT_03: Create Protocol     EXT_04: Create Skill           │
└────────────────────────────────────────────────────────────┘
```
---
## Subagent Routing
For complex tasks, Claude should spawn specialized subagents:
| Task | Subagent Type | Context to Load |
|------|---------------|-----------------|
| Create study from description | `general-purpose` | core/study-creation-core.md, SYS_12 |
| Explore codebase | `Explore` | (built-in) |
| Plan architecture | `Plan` | (built-in) |
| NX API lookup | `general-purpose` | Use MCP siemens-docs tools |
---
## Environment Setup
**CRITICAL**: Always use the `atomizer` conda environment:
```bash
conda activate atomizer
python run_optimization.py
```
**DO NOT**:
- Install packages with pip/conda (everything is installed)
- Create new virtual environments
- Use system Python
**NX Open Requirements**:
- NX 2506 installed at `C:\Program Files\Siemens\NX2506\`
- Use `run_journal.exe` for NX automation
---
## Template Registry
Available study templates for quick creation:
| Template | Objectives | Extractors | Example Study |
|----------|------------|------------|---------------|
| `multi_objective_structural` | mass, stress, stiffness | E1, E3, E4 | bracket_pareto_3obj |
| `frequency_optimization` | frequency, mass | E2, E4 | uav_arm_optimization |
| `mirror_wavefront` | Zernike RMS | E8-E10 | m1_mirror_zernike |
| `shell_structural` | mass, stress | E1, E3, E4 | beam_pareto_4var |
| `thermal_structural` | temperature, stress | E3, E15 | (template only) |
**Python utility for templates**:
```bash
# List all templates
python -m optimization_engine.templates
```
```python
# Get template details in code
from optimization_engine.templates import get_template, suggest_template
template = suggest_template(n_objectives=2, physics_type="structural")
```
---
## Auto-Documentation Protocol
When Claude creates/modifies extractors or protocols:
1. **Code change** → Update `optimization_engine/extractors/__init__.py`
2. **Doc update** → Update `SYS_12_EXTRACTOR_LIBRARY.md`
3. **Quick ref** → Update `.claude/skills/01_CHEATSHEET.md`
4. **Commit** → Use structured message: `feat: Add E{N} {name} extractor`
---
## Key Principles
1. **Conversation first** - Don't ask user to edit JSON manually
2. **Validate everything** - Catch errors before FEA runs
3. **Explain decisions** - Say why you chose a sampler/protocol
4. **NEVER modify master files** - Copy NX files to study directory
5. **ALWAYS reuse code** - Check extractors before writing new code
6. **Proactive documentation** - Update docs after code changes
---
## Base Classes (Phase 2 - Code Deduplication)
New studies should use these base classes instead of duplicating code:
### ConfigDrivenRunner (FEA Optimization)
```python
# run_optimization.py - Now just ~30 lines instead of ~300
from optimization_engine.base_runner import ConfigDrivenRunner
runner = ConfigDrivenRunner(__file__)
runner.run() # Handles --discover, --validate, --test, --run
```
### ConfigDrivenSurrogate (Neural Acceleration)
```python
# run_nn_optimization.py - Now just ~30 lines instead of ~600
from optimization_engine.generic_surrogate import ConfigDrivenSurrogate
surrogate = ConfigDrivenSurrogate(__file__)
surrogate.run() # Handles --train, --turbo, --all
```
**Templates**: `optimization_engine/templates/run_*_template.py`
---
## CRITICAL: NXSolver Initialization Pattern
**NEVER pass full config dict to NXSolver.** This causes `TypeError: expected str, bytes or os.PathLike object, not dict`.
### WRONG
```python
self.nx_solver = NXSolver(self.config) # ❌ NEVER DO THIS
```
### CORRECT - FEARunner Pattern
Always wrap NXSolver in a `FEARunner` class with explicit parameters:
```python
import re
from typing import Dict

class FEARunner:
    def __init__(self, config: Dict):
        self.config = config
        self.nx_solver = None
        self.master_model_dir = SETUP_DIR / "model"  # SETUP_DIR: the study's 1_setup/ path

    def setup(self):
        nx_settings = self.config.get('nx_settings', {})
        nx_install_dir = nx_settings.get('nx_install_path', 'C:\\Program Files\\Siemens\\NX2506')
        version_match = re.search(r'NX(\d+)', nx_install_dir)
        nastran_version = version_match.group(1) if version_match else "2506"
        self.nx_solver = NXSolver(
            master_model_dir=str(self.master_model_dir),
            nx_install_dir=nx_install_dir,
            nastran_version=nastran_version,
            timeout=nx_settings.get('simulation_timeout_s', 600),
            use_iteration_folders=True,
            study_name=self.config.get('study_name', 'my_study')
        )

    def run_fea(self, params, iter_num):
        if self.nx_solver is None:
            self.setup()
        # ... run simulation
```
**Reference implementations**:
- `studies/M1_Mirror/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/M1_Mirror/m1_mirror_adaptive_V15/run_optimization.py`
---
## Skill Registry (Phase 3 - Consolidated Skills)
All skills now have YAML frontmatter with metadata for versioning and dependency tracking.
| Skill ID | Name | Type | Version | Location |
|----------|------|------|---------|----------|
| SKILL_000 | Bootstrap | bootstrap | 2.0 | `.claude/skills/00_BOOTSTRAP.md` |
| SKILL_001 | Cheatsheet | reference | 2.0 | `.claude/skills/01_CHEATSHEET.md` |
| SKILL_002 | Context Loader | loader | 2.0 | `.claude/skills/02_CONTEXT_LOADER.md` |
| SKILL_CORE_001 | Study Creation Core | core | 2.4 | `.claude/skills/core/study-creation-core.md` |
### Deprecated Skills
| Old File | Reason | Replacement |
|----------|--------|-------------|
| `create-study.md` | Duplicate of core skill | `core/study-creation-core.md` |
### Skill Metadata Format
All skills use YAML frontmatter:
```yaml
---
skill_id: SKILL_XXX
version: X.X
last_updated: YYYY-MM-DD
type: bootstrap|reference|loader|core|module
code_dependencies:
  - path/to/code.py
requires_skills:
  - SKILL_YYY
replaces: old-skill.md  # if applicable
---
```
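A minimal reader for this frontmatter can be written without a YAML dependency. This sketch (not Atomizer's actual loader) handles the flat `key: value` pairs and ignores nested lists:

```python
# Illustrative frontmatter reader for skill files (Atomizer's real loader
# may differ). Splits the leading '---' block and parses simple
# 'key: value' pairs, skipping list items and trailing comments.
def read_frontmatter(text: str) -> dict:
    lines = text.splitlines()
    if not lines or lines[0].strip() != '---':
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == '---':
            break  # end of frontmatter block
        if ':' in line and not line.lstrip().startswith('-'):
            key, _, value = line.partition(':')
            meta[key.strip()] = value.split('#')[0].strip()
    return meta
```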
---
## Subagent Commands (Phase 5 - Specialized Agents)
Atomizer provides specialized subagent commands for complex tasks:
| Command | Purpose | When to Use |
|---------|---------|-------------|
| `/study-builder` | Create new optimization studies | "create study", "set up optimization" |
| `/nx-expert` | NX Open API help, model automation | "how to in NX", "update mesh" |
| `/protocol-auditor` | Validate configs and code quality | "validate config", "check study" |
| `/results-analyzer` | Analyze optimization results | "analyze results", "best solution" |
### Command Files
```
.claude/commands/
├── study-builder.md      # Create studies from descriptions
├── nx-expert.md          # NX Open / Simcenter expertise
├── protocol-auditor.md   # Config and code validation
├── results-analyzer.md   # Results analysis and reporting
└── dashboard.md          # Dashboard control
```
### Subagent Invocation Pattern
```python
# Master agent delegates to specialized subagent
Task(
    subagent_type='general-purpose',
    prompt='''
    Load context from .claude/commands/study-builder.md
    User request: "{user's request}"
    Follow the workflow in the command file.
    ''',
    description='Study builder task'
)
```
---
## Auto-Documentation (Phase 4 - Self-Expanding Knowledge)
Atomizer can auto-generate documentation from code:
```bash
# Generate all documentation
python -m optimization_engine.auto_doc all
# Generate only extractor docs
python -m optimization_engine.auto_doc extractors
# Generate only template docs
python -m optimization_engine.auto_doc templates
```
**Generated Files**:
- `docs/generated/EXTRACTORS.md` - Full extractor reference (auto-generated)
- `docs/generated/EXTRACTOR_CHEATSHEET.md` - Quick reference table
- `docs/generated/TEMPLATES.md` - Study templates reference
**When to Run Auto-Doc**:
1. After adding a new extractor
2. After modifying template registry
3. Before major releases
---
## Trial Management System (v2.3)
New unified trial management ensures consistency across all optimization methods:
### Key Components
| Component | Path | Purpose |
|-----------|------|---------|
| `TrialManager` | `optimization_engine/utils/trial_manager.py` | Unified trial folder + DB management |
| `DashboardDB` | `optimization_engine/utils/dashboard_db.py` | Optuna-compatible database wrapper |
### Trial Naming Convention
```
2_iterations/
├── trial_0001/ # Zero-padded, monotonically increasing
├── trial_0002/ # NEVER reset, NEVER overwritten
├── trial_0003/
└── ...
```
**Key principles**:
- Trial numbers **NEVER reset** (monotonically increasing)
- Folders **NEVER get overwritten**
- Database is always in sync with filesystem
- Surrogate predictions (5K) are NOT trials - only FEA results
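The never-reset rule falls out of always scanning for the current maximum trial number. A sketch of the idea (TrialManager's actual implementation may differ):

```python
import re
from pathlib import Path

# Matches the zero-padded trial folder convention, e.g. trial_0042
TRIAL_RE = re.compile(r'trial_(\d{4})$')

def next_trial_folder(iterations_dir: Path) -> Path:
    """Return the next trial folder path, never reusing an existing number."""
    existing = [
        int(m.group(1))
        for p in iterations_dir.iterdir() if p.is_dir()
        for m in [TRIAL_RE.match(p.name)] if m
    ]
    nxt = max(existing, default=0) + 1
    return iterations_dir / f"trial_{nxt:04d}"
```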
### Usage
```python
from optimization_engine.utils.trial_manager import TrialManager

tm = TrialManager(study_dir)

# Start new trial
trial = tm.new_trial(params={'rib_thickness': 10.5})

# After FEA completes
tm.complete_trial(
    trial_number=trial['trial_number'],
    objectives={'wfe_40_20': 5.63, 'mass_kg': 118.67},
    weighted_sum=42.5,
    is_feasible=True
)
```
### Database Schema (Optuna-Compatible)
The `DashboardDB` class creates Optuna-compatible schema for dashboard integration:
- `trials` - Main trial records with state, datetime, value
- `trial_values` - Objective values (supports multiple objectives)
- `trial_params` - Design parameter values
- `trial_user_attributes` - Metadata (source, solve_time, etc.)
- `studies` - Study metadata (directions, name)
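As an illustration of how a dashboard query against this schema might look, the following builds a tiny in-memory stand-in and pulls objective values per trial. The column names are assumptions for illustration, not the exact Optuna DDL:

```python
import sqlite3

# Minimal stand-in for the Optuna-compatible schema described above.
# Column names are illustrative assumptions, not the exact Optuna DDL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trials (trial_id INTEGER PRIMARY KEY, number INTEGER, state TEXT);
CREATE TABLE trial_values (trial_id INTEGER, objective INTEGER, value REAL);
INSERT INTO trials VALUES (1, 0, 'COMPLETE'), (2, 1, 'COMPLETE');
INSERT INTO trial_values VALUES (1, 0, 5.63), (1, 1, 118.67),
                                (2, 0, 4.90), (2, 1, 121.02);
""")

# Objective 0 for completed trials, best first (minimization assumed)
rows = conn.execute("""
    SELECT t.number, v.value
    FROM trials t JOIN trial_values v ON v.trial_id = t.trial_id
    WHERE t.state = 'COMPLETE' AND v.objective = 0
    ORDER BY v.value ASC
""").fetchall()
```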
---
## Version Info
| Component | Version | Last Updated |
|-----------|---------|--------------|
| ATOMIZER_CONTEXT | 2.0 | 2026-01-20 |
| Documentation Structure | 2.0 | 2026-01-20 |
| BaseOptimizationRunner | 1.0 | 2025-12-07 |
| GenericSurrogate | 1.0 | 2025-12-07 |
| Study State Detector | 1.0 | 2025-12-07 |
| Template Registry | 1.0 | 2025-12-07 |
| Extractor Library | 1.4 | 2025-12-12 |
| Method Selector | 2.1 | 2025-12-07 |
| Protocol System | 2.1 | 2025-12-12 |
| Skill System | 2.1 | 2025-12-12 |
| Auto-Doc Generator | 1.0 | 2025-12-07 |
| Subagent Commands | 1.0 | 2025-12-07 |
| FEARunner Pattern | 1.0 | 2025-12-12 |
| Study Insights | 1.0 | 2025-12-20 |
| TrialManager | 1.0 | 2025-12-28 |
| DashboardDB | 1.0 | 2025-12-28 |
| GNN-Turbo System | 2.3 | 2025-12-28 |
---
*Atomizer: LLM-driven structural optimization for engineering.*