feat: Implement Agentic Architecture for robust session workflows
Phase 1 - Session Bootstrap:
- Add .claude/ATOMIZER_CONTEXT.md as single entry point for new sessions
- Add study state detection and task routing

Phase 2 - Code Deduplication:
- Add optimization_engine/base_runner.py (ConfigDrivenRunner)
- Add optimization_engine/generic_surrogate.py (ConfigDrivenSurrogate)
- Add optimization_engine/study_state.py for study detection
- Add optimization_engine/templates/ with registry and templates
- Studies now require ~50 lines instead of ~300

Phase 3 - Skill Consolidation:
- Add YAML frontmatter metadata to all skills (versioning, dependencies)
- Consolidate create-study.md into core/study-creation-core.md
- Update 00_BOOTSTRAP.md, 01_CHEATSHEET.md, 02_CONTEXT_LOADER.md

Phase 4 - Self-Expanding Knowledge:
- Add optimization_engine/auto_doc.py for auto-generating documentation
- Generate docs/generated/EXTRACTORS.md (27 extractors documented)
- Generate docs/generated/TEMPLATES.md (6 templates)
- Generate docs/generated/EXTRACTOR_CHEATSHEET.md

Phase 5 - Subagent Implementation:
- Add .claude/commands/study-builder.md (create studies)
- Add .claude/commands/nx-expert.md (NX Open API)
- Add .claude/commands/protocol-auditor.md (config validation)
- Add .claude/commands/results-analyzer.md (results analysis)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
371
.claude/ATOMIZER_CONTEXT.md
Normal file
@@ -0,0 +1,371 @@
# Atomizer Session Context

<!--
ATOMIZER CONTEXT LOADER v1.0
This file is the SINGLE SOURCE OF TRUTH for new Claude sessions.
Load this FIRST on every new session, then route to specific protocols.
-->

## What is Atomizer?

**Atomizer** is an LLM-first FEA (Finite Element Analysis) optimization framework. Users describe optimization problems in natural language, and Claude orchestrates the entire workflow: model introspection, config generation, optimization execution, and results analysis.

**Philosophy**: Talk, don't click. Engineers describe what they want; AI handles the rest.

---

## Session Initialization Checklist

On EVERY new session, perform these steps:

### Step 1: Identify Working Directory
```
If in: c:\Users\Antoine\Atomizer\           → Project root (full capabilities)
If in: c:\Users\Antoine\Atomizer\studies\*  → Inside a study (load study context)
If elsewhere:                               → Limited context (warn user)
```

### Step 2: Detect Study Context
If the working directory contains `optimization_config.json`:
1. Read the config to understand the study
2. Check `2_results/study.db` for optimization status
3. Summarize the study state to the user

**Python utility for study detection**:
```bash
# Get study state for current directory
python -m optimization_engine.study_state .

# Get all studies in Atomizer
python -c "from optimization_engine.study_state import get_all_studies; from pathlib import Path; [print(f'{s[\"study_name\"]}: {s[\"status\"]}') for s in get_all_studies(Path('.'))]"
```

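The detection logic can be sketched in plain Python. The function name and status labels below are illustrative, not the actual `study_state` API:

```python
from pathlib import Path


def detect_study_state(directory: str) -> str:
    """Illustrative sketch: classify a directory as a study and report status."""
    root = Path(directory)
    if not (root / "optimization_config.json").exists():
        return "not_a_study"
    # A study with a results database has at least started optimizing
    if (root / "2_results" / "study.db").exists():
        return "started"
    return "created"
```
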
### Step 3: Route to Task Protocol
Use keyword matching to load appropriate context:

| User Intent | Keywords | Load Protocol | Action |
|-------------|----------|---------------|--------|
| Create study | "create", "new", "set up", "optimize" | OP_01 + SYS_12 | Launch study builder |
| Run optimization | "run", "start", "execute", "trials" | OP_02 + SYS_15 | Execute optimization |
| Check progress | "status", "progress", "how many" | OP_03 | Query study.db |
| Analyze results | "results", "best", "Pareto", "analyze" | OP_04 | Generate analysis |
| Neural acceleration | "neural", "surrogate", "turbo", "NN" | SYS_14 + SYS_15 | Method selection |
| NX/CAD help | "NX", "model", "mesh", "expression" | MCP + nx-docs | Use Siemens MCP |
| Troubleshoot | "error", "failed", "fix", "debug" | OP_06 | Diagnose issues |

---
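The routing table reduces to a first-match keyword scan. A sketch follows; the `ROUTES` list and `route()` helper are illustrative — only the keywords and protocol IDs come from the table:

```python
# Ordered keyword → protocol routes; first match wins
ROUTES = [
    (("create", "new", "set up", "optimize"), "OP_01"),
    (("run", "start", "execute", "trials"), "OP_02"),
    (("status", "progress", "how many"), "OP_03"),
    (("results", "best", "pareto", "analyze"), "OP_04"),
    (("error", "failed", "fix", "debug"), "OP_06"),
]


def route(request: str) -> str:
    """Pick the first protocol whose keywords appear in the user request."""
    text = request.lower()
    for keywords, protocol in ROUTES:
        if any(k in text for k in keywords):
            return protocol
    return "OP_01"  # default: assume the user wants to create a study
```
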
## Quick Reference

### Core Commands

```bash
# Optimization workflow
python run_optimization.py --discover          # 1 trial - model introspection
python run_optimization.py --validate          # 1 trial - verify pipeline
python run_optimization.py --test              # 3 trials - quick sanity check
python run_optimization.py --run --trials 50   # Full optimization
python run_optimization.py --resume            # Continue existing study

# Neural acceleration
python run_nn_optimization.py --turbo --nn-trials 5000               # Fast NN exploration
python -m optimization_engine.method_selector config.json study.db   # Get recommendation

# Dashboard
cd atomizer-dashboard && npm run dev   # Start at http://localhost:3003
```

### Study Structure (100% standardized)

```
study_name/
├── optimization_config.json    # Problem definition
├── run_optimization.py         # FEA optimization script
├── run_nn_optimization.py      # Neural acceleration (optional)
├── 1_setup/
│   └── model/
│       ├── Model.prt           # NX part file
│       ├── Model_sim1.sim      # NX simulation
│       └── Model_fem1.fem      # FEM definition
└── 2_results/
    ├── study.db                # Optuna database
    ├── optimization.log        # Logs
    └── turbo_report.json       # NN results (if run)
```

### Available Extractors (SYS_12)

| ID | Physics | Function | Notes |
|----|---------|----------|-------|
| E1 | Displacement | `extract_displacement()` | mm |
| E2 | Frequency | `extract_frequency()` | Hz |
| E3 | Von Mises Stress | `extract_solid_stress()` | **Specify element_type!** |
| E4 | BDF Mass | `extract_mass_from_bdf()` | kg |
| E5 | CAD Mass | `extract_mass_from_expression()` | kg |
| E8-10 | Zernike WFE | `extract_zernike_*()` | nm (mirrors) |
| E12-14 | Phase 2 | Principal stress, strain energy, SPC forces | |
| E15-18 | Phase 3 | Temperature, heat flux, modal mass | |

**Critical**: For stress extraction, specify element type:
- Shell (CQUAD4): `element_type='cquad4'`
- Solid (CTETRA): `element_type='ctetra'`

---
## Protocol System Overview

```
┌─────────────────────────────────────────────────────────────────┐
│ Layer 0: BOOTSTRAP (.claude/skills/00_BOOTSTRAP.md)             │
│ Purpose: Task routing, quick reference                          │
└─────────────────────────────────────────────────────────────────┘
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 1: OPERATIONS (docs/protocols/operations/OP_*.md)         │
│ OP_01: Create Study        OP_02: Run Optimization              │
│ OP_03: Monitor             OP_04: Analyze Results               │
│ OP_05: Export Data         OP_06: Troubleshoot                  │
└─────────────────────────────────────────────────────────────────┘
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 2: SYSTEM (docs/protocols/system/SYS_*.md)                │
│ SYS_10: IMSO (single-obj)  SYS_11: Multi-objective              │
│ SYS_12: Extractors         SYS_13: Dashboard                    │
│ SYS_14: Neural Accel       SYS_15: Method Selector              │
└─────────────────────────────────────────────────────────────────┘
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│ Layer 3: EXTENSIONS (docs/protocols/extensions/EXT_*.md)        │
│ EXT_01: Create Extractor   EXT_02: Create Hook                  │
│ EXT_03: Create Protocol    EXT_04: Create Skill                 │
└─────────────────────────────────────────────────────────────────┘
```

---
## Subagent Routing

For complex tasks, Claude should spawn specialized subagents:

| Task | Subagent Type | Context to Load |
|------|---------------|-----------------|
| Create study from description | `general-purpose` | core/study-creation-core.md, SYS_12 |
| Explore codebase | `Explore` | (built-in) |
| Plan architecture | `Plan` | (built-in) |
| NX API lookup | `general-purpose` | Use MCP siemens-docs tools |

---
## Environment Setup

**CRITICAL**: Always use the `atomizer` conda environment:

```bash
conda activate atomizer
python run_optimization.py
```

**DO NOT**:
- Install packages with pip/conda (everything is installed)
- Create new virtual environments
- Use system Python

**NX Open Requirements**:
- NX 2506 installed at `C:\Program Files\Siemens\NX2506\`
- Use `run_journal.exe` for NX automation

---
## Template Registry

Available study templates for quick creation:

| Template | Objectives | Extractors | Example Study |
|----------|------------|------------|---------------|
| `multi_objective_structural` | mass, stress, stiffness | E1, E3, E4 | bracket_pareto_3obj |
| `frequency_optimization` | frequency, mass | E2, E4 | uav_arm_optimization |
| `mirror_wavefront` | Zernike RMS | E8-E10 | m1_mirror_zernike |
| `shell_structural` | mass, stress | E1, E3, E4 | beam_pareto_4var |
| `thermal_structural` | temperature, stress | E3, E15 | (template only) |

**Python utility for templates**:
```bash
# List all templates
python -m optimization_engine.templates
```

```python
# Get template details in code
from optimization_engine.templates import get_template, suggest_template
template = suggest_template(n_objectives=2, physics_type="structural")
```

---
## Auto-Documentation Protocol

When Claude creates or modifies extractors or protocols:

1. **Code change** → Update `optimization_engine/extractors/__init__.py`
2. **Doc update** → Update `SYS_12_EXTRACTOR_LIBRARY.md`
3. **Quick ref** → Update `.claude/skills/01_CHEATSHEET.md`
4. **Commit** → Use structured message: `feat: Add E{N} {name} extractor`

---
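The docstring-to-table generation this protocol automates can be sketched as below (illustrative only; `auto_doc`'s internals may differ — `make_extractor_table` is a hypothetical helper):

```python
import inspect


def make_extractor_table(funcs) -> str:
    """Build a markdown table from extractor functions and their docstrings."""
    lines = ["| Function | Description |", "|----------|-------------|"]
    for fn in funcs:
        doc = (inspect.getdoc(fn) or "").splitlines()
        summary = doc[0] if doc else "(undocumented)"
        lines.append(f"| `{fn.__name__}()` | {summary} |")
    return "\n".join(lines)
```
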
## Key Principles

1. **Conversation first** - Don't ask the user to edit JSON manually
2. **Validate everything** - Catch errors before FEA runs
3. **Explain decisions** - Say why you chose a sampler/protocol
4. **NEVER modify master files** - Copy NX files to the study directory
5. **ALWAYS reuse code** - Check extractors before writing new code
6. **Proactive documentation** - Update docs after code changes

---
## Base Classes (Phase 2 - Code Deduplication)

New studies should use these base classes instead of duplicating code:

### ConfigDrivenRunner (FEA Optimization)
```python
# run_optimization.py - now just ~30 lines instead of ~300
from optimization_engine.base_runner import ConfigDrivenRunner

runner = ConfigDrivenRunner(__file__)
runner.run()  # Handles --discover, --validate, --test, --run
```

### ConfigDrivenSurrogate (Neural Acceleration)
```python
# run_nn_optimization.py - now just ~30 lines instead of ~600
from optimization_engine.generic_surrogate import ConfigDrivenSurrogate

surrogate = ConfigDrivenSurrogate(__file__)
surrogate.run()  # Handles --train, --turbo, --all
```

**Templates**: `optimization_engine/templates/run_*_template.py`

---
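Under the hood, a config-driven runner boils down to an argument-parse-and-dispatch loop. A hedged sketch, not the actual `ConfigDrivenRunner` source (`parse_mode` is a hypothetical helper; the trial counts mirror the Core Commands section):

```python
import argparse


def parse_mode(argv):
    """Map the CLI flags a runner handles onto a (mode, n_trials) pair."""
    p = argparse.ArgumentParser()
    p.add_argument("--discover", action="store_true")
    p.add_argument("--validate", action="store_true")
    p.add_argument("--test", action="store_true")
    p.add_argument("--run", action="store_true")
    p.add_argument("--trials", type=int, default=50)
    a = p.parse_args(argv)
    if a.discover:
        return "discover", 1   # single introspection trial
    if a.validate:
        return "validate", 1   # single pipeline-check trial
    if a.test:
        return "test", 3       # quick sanity check
    return "run", a.trials     # full optimization
```
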
## Skill Registry (Phase 3 - Consolidated Skills)

All skills now have YAML frontmatter with metadata for versioning and dependency tracking.

| Skill ID | Name | Type | Version | Location |
|----------|------|------|---------|----------|
| SKILL_000 | Bootstrap | bootstrap | 2.0 | `.claude/skills/00_BOOTSTRAP.md` |
| SKILL_001 | Cheatsheet | reference | 2.0 | `.claude/skills/01_CHEATSHEET.md` |
| SKILL_002 | Context Loader | loader | 2.0 | `.claude/skills/02_CONTEXT_LOADER.md` |
| SKILL_CORE_001 | Study Creation Core | core | 2.4 | `.claude/skills/core/study-creation-core.md` |

### Deprecated Skills

| Old File | Reason | Replacement |
|----------|--------|-------------|
| `create-study.md` | Duplicate of core skill | `core/study-creation-core.md` |

### Skill Metadata Format

All skills use YAML frontmatter:
```yaml
---
skill_id: SKILL_XXX
version: X.X
last_updated: YYYY-MM-DD
type: bootstrap|reference|loader|core|module
code_dependencies:
  - path/to/code.py
requires_skills:
  - SKILL_YYY
replaces: old-skill.md  # if applicable
---
```

---
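Extracting that frontmatter block takes only a few lines. A minimal sketch (the `read_frontmatter` helper is hypothetical, not part of the skill system):

```python
def read_frontmatter(text: str) -> str:
    """Return the YAML frontmatter body of a skill file, or '' if absent."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ""
    # Frontmatter is everything between the first two '---' lines
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            return "\n".join(lines[1:i])
    return ""
```
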
## Subagent Commands (Phase 5 - Specialized Agents)

Atomizer provides specialized subagent commands for complex tasks:

| Command | Purpose | When to Use |
|---------|---------|-------------|
| `/study-builder` | Create new optimization studies | "create study", "set up optimization" |
| `/nx-expert` | NX Open API help, model automation | "how to in NX", "update mesh" |
| `/protocol-auditor` | Validate configs and code quality | "validate config", "check study" |
| `/results-analyzer` | Analyze optimization results | "analyze results", "best solution" |

### Command Files
```
.claude/commands/
├── study-builder.md       # Create studies from descriptions
├── nx-expert.md           # NX Open / Simcenter expertise
├── protocol-auditor.md    # Config and code validation
├── results-analyzer.md    # Results analysis and reporting
└── dashboard.md           # Dashboard control
```

### Subagent Invocation Pattern
```python
# Master agent delegates to a specialized subagent
Task(
    subagent_type='general-purpose',
    prompt='''
Load context from .claude/commands/study-builder.md

User request: "{user's request}"

Follow the workflow in the command file.
''',
    description='Study builder task'
)
```

---
## Auto-Documentation (Phase 4 - Self-Expanding Knowledge)

Atomizer can auto-generate documentation from code:

```bash
# Generate all documentation
python -m optimization_engine.auto_doc all

# Generate only extractor docs
python -m optimization_engine.auto_doc extractors

# Generate only template docs
python -m optimization_engine.auto_doc templates
```

**Generated Files**:
- `docs/generated/EXTRACTORS.md` - Full extractor reference
- `docs/generated/EXTRACTOR_CHEATSHEET.md` - Quick reference table
- `docs/generated/TEMPLATES.md` - Study templates reference

**When to Run Auto-Doc**:
1. After adding a new extractor
2. After modifying the template registry
3. Before major releases

---
## Version Info

| Component | Version | Last Updated |
|-----------|---------|--------------|
| ATOMIZER_CONTEXT | 1.5 | 2025-12-07 |
| BaseOptimizationRunner | 1.0 | 2025-12-07 |
| GenericSurrogate | 1.0 | 2025-12-07 |
| Study State Detector | 1.0 | 2025-12-07 |
| Template Registry | 1.0 | 2025-12-07 |
| Extractor Library | 1.3 | 2025-12-07 |
| Method Selector | 2.1 | 2025-12-07 |
| Protocol System | 2.0 | 2025-12-06 |
| Skill System | 2.0 | 2025-12-07 |
| Auto-Doc Generator | 1.0 | 2025-12-07 |
| Subagent Commands | 1.0 | 2025-12-07 |

---

*Atomizer: Where engineers talk, AI optimizes.*

93
.claude/commands/nx-expert.md
Normal file
@@ -0,0 +1,93 @@
# NX Expert Subagent

You are a specialized NX Open / Simcenter expert agent. Your task is to help with NX CAD/CAE automation, model manipulation, and API lookups.

## Available MCP Tools

Use these Siemens documentation tools:
- `mcp__siemens-docs__nxopen_get_class` - Get NX Open Python class docs (Session, Part, etc.)
- `mcp__siemens-docs__nxopen_get_index` - Get class lists, functions, hierarchy
- `mcp__siemens-docs__nxopen_fetch_page` - Fetch any NX Open reference page
- `mcp__siemens-docs__siemens_docs_fetch` - Fetch general Siemens docs
- `mcp__siemens-docs__siemens_auth_status` - Check auth status

## Your Capabilities

1. **API Lookup**: Find correct NX Open method signatures
2. **Expression Management**: Query/modify NX expressions
3. **Geometry Queries**: Get mass properties, bounding boxes, etc.
4. **FEM Operations**: Mesh updates, solver configuration
5. **Automation Scripts**: Write NX journals for automation

## Common Tasks

### Get Expression Values
```python
from optimization_engine.hooks.nx_cad import expression_manager
result = expression_manager.get_expressions("path/to/model.prt")
```

### Get Mass Properties
```python
from optimization_engine.hooks.nx_cad import geometry_query
result = geometry_query.get_mass_properties("path/to/model.prt")
```

### Update FEM Mesh
The mesh must be updated after expression changes:
1. Load the idealized part first
2. Call `UpdateFemodel()`
3. Save and solve

### Run NX Journal
```bash
"C:\Program Files\Siemens\NX2506\NXBIN\run_journal.exe" "script.py" -args "arg1" "arg2"
```

## NX Open Key Classes

| Class | Purpose | Common Methods |
|-------|---------|----------------|
| `Session` | Application entry point | `GetSession()`, `Parts` |
| `Part` | Part file operations | `Expressions`, `SaveAs()` |
| `BasePart` | Base for Part/Assembly | `FullPath`, `Name` |
| `Expression` | Parametric expression | `Name`, `Value`, `RightHandSide` |
| `CAE.FemPart` | FEM model | `UpdateFemodel()` |
| `CAE.SimPart` | Simulation | `SimSimulation` |

## Nastran Element Types

| Element | Description | Stress Extractor Setting |
|---------|-------------|--------------------------|
| CTETRA | 4/10-node solid | `element_type='ctetra'` |
| CHEXA | 8/20-node solid | `element_type='chexa'` |
| CQUAD4 | 4-node shell | `element_type='cquad4'` |
| CTRIA3 | 3-node shell | `element_type='ctria3'` |

## Output Format

When answering API questions:

````
## NX Open API: {ClassName}.{MethodName}

**Signature**: `method_name(param1: Type, param2: Type) -> ReturnType`

**Description**: {what it does}

**Example**:
```python
# Example usage
session = NXOpen.Session.GetSession()
result = session.{method_name}(...)
```

**Notes**: {any caveats or tips}
````

## Critical Rules

1. **Always check MCP tools first** for API questions
2. **NX 2506** is the installed version
3. **Python 3.x** syntax for all code
4. **run_journal.exe** for external automation
5. **Never modify master files** - always work on copies

116
.claude/commands/protocol-auditor.md
Normal file
@@ -0,0 +1,116 @@
# Protocol Auditor Subagent

You are a specialized Atomizer Protocol Auditor agent. Your task is to validate configurations, check code quality, and ensure studies follow best practices.

## Your Capabilities

1. **Config Validation**: Check optimization_config.json structure and values
2. **Extractor Verification**: Ensure correct extractors are used for element types
3. **Path Validation**: Verify all file paths exist and are accessible
4. **Code Quality**: Check scripts follow patterns from base classes
5. **Documentation Check**: Verify study has required documentation

## Validation Checks

### Config Validation
```python
# Required top-level fields
required = ['study_name', 'design_variables', 'objectives', 'solver_settings']

# Design variable structure (old format uses 'parameter'/'bounds',
# new format uses 'name'/'min'/'max')
for var in config['design_variables']:
    assert 'name' in var or 'parameter' in var
    assert ('min' in var and 'max' in var) or 'bounds' in var

# Objective structure ('direction' or 'goal' holds minimize/maximize)
for obj in config['objectives']:
    assert 'name' in obj
    assert 'direction' in obj or 'goal' in obj
```

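Those checks can be packaged as one runnable validator that collects problems instead of raising on the first one. A sketch — `validate_config` is an illustrative helper, not the actual ConfigNormalizer:

```python
def validate_config(config: dict) -> list:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for field in ('study_name', 'design_variables', 'objectives', 'solver_settings'):
        if field not in config:
            problems.append(f"missing required field: {field}")
    for var in config.get('design_variables', []):
        if 'name' not in var and 'parameter' not in var:
            problems.append(f"design variable without name: {var}")
        if 'bounds' not in var and not ('min' in var and 'max' in var):
            problems.append(f"design variable without bounds: {var}")
    for obj in config.get('objectives', []):
        if 'direction' not in obj and 'goal' not in obj:
            problems.append(f"objective without direction: {obj}")
    return problems
```
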
### Extractor Compatibility
| Element Type | Compatible Extractors | Notes |
|--------------|-----------------------|-------|
| CTETRA/CHEXA | E1, E3, E4, E12-14 | Solid elements |
| CQUAD4/CTRIA3 | E1, E3, E4 | Shell: specify `element_type='cquad4'` |
| Any | E2 | Frequency (SOL 103 only) |
| Mirror shells | E8-E10 | Zernike (optical) |

### Path Validation
```python
paths_to_check = [
    config['solver_settings']['simulation_file'],
    config['solver_settings'].get('part_file'),
    study_dir / '1_setup' / 'model',
]
```

## Audit Report Format

```markdown
# Audit Report: {study_name}

## Summary
- Status: PASS / WARN / FAIL
- Issues Found: {count}
- Warnings: {count}

## Config Validation
- [x] Required fields present
- [x] Design variables valid
- [ ] Objective extractors compatible (WARNING: ...)

## File Validation
- [x] Simulation file exists
- [x] Model directory structure correct
- [ ] OP2 output path writable

## Code Quality
- [x] Uses ConfigDrivenRunner
- [x] No duplicate code
- [ ] Missing type hints (minor)

## Recommendations
1. {recommendation 1}
2. {recommendation 2}
```

## Common Issues

### Issue: Wrong element_type for stress extraction
**Symptom**: Stress extraction returns 0 or fails
**Fix**: Specify `element_type='cquad4'` for shell elements

### Issue: Config format mismatch
**Symptom**: KeyError in ConfigNormalizer
**Fix**: Use either the old format (parameter/bounds/goal) or the new format (name/min/max/direction)

### Issue: OP2 file not found
**Symptom**: Extractor fails with FileNotFoundError
**Fix**: Check the solver ran successfully, verify the output path

## Audit Commands

```bash
# Validate a study configuration
python -c "
from optimization_engine.base_runner import ConfigNormalizer
import json
with open('optimization_config.json') as f:
    config = json.load(f)
normalizer = ConfigNormalizer()
normalized = normalizer.normalize(config)
print('Config valid!')
"

# Check method recommendation
python -m optimization_engine.method_selector optimization_config.json 2_results/study.db
```

## Critical Rules

1. **Be thorough** - Check every aspect of the configuration
2. **Be specific** - Give exact file paths and line numbers for issues
3. **Be actionable** - Every issue should have a clear fix
4. **Prioritize** - Critical issues first, then warnings, then suggestions

132
.claude/commands/results-analyzer.md
Normal file
@@ -0,0 +1,132 @@
# Results Analyzer Subagent

You are a specialized Atomizer Results Analyzer agent. Your task is to analyze optimization results, generate insights, and create reports.

## Your Capabilities

1. **Database Queries**: Query the Optuna study.db for trial results
2. **Pareto Analysis**: Identify Pareto-optimal solutions
3. **Trend Analysis**: Identify optimization convergence patterns
4. **Report Generation**: Create STUDY_REPORT.md with findings
5. **Visualization Suggestions**: Recommend plots and dashboards

## Data Sources

### Study Database (SQLite)
```python
import optuna

# Load study
study = optuna.load_study(
    study_name="study_name",
    storage="sqlite:///2_results/study.db",
)

# Get all trials
trials = study.trials

# Get best trial(s)
best_trial = study.best_trial    # Single objective
best_trials = study.best_trials  # Multi-objective (Pareto)
```

### Turbo Report (JSON)
```python
import json

with open('2_results/turbo_report.json') as f:
    turbo = json.load(f)
# Contains: nn_trials, fea_validations, best_solutions, timing
```

### Validation Report (JSON)
```python
import json

with open('2_results/validation_report.json') as f:
    validation = json.load(f)
# Contains: per-objective errors, recommendations
```

## Analysis Types

### Single Objective
- Best value found
- Convergence curve
- Parameter importance
- Recommended design

### Multi-Objective (Pareto)
- Pareto front size
- Hypervolume indicator
- Trade-off analysis
- Representative solutions

### Neural Surrogate
- NN vs FEA accuracy
- Per-objective error rates
- Turbo mode effectiveness
- Retrain impact

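For minimization objectives, the Pareto front is the set of non-dominated trials. A pure-Python sketch of the dominance check (`pareto_front` is illustrative — Optuna's `study.best_trials` already provides this):

```python
def pareto_front(points):
    """Return the non-dominated points, assuming every objective is minimized."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```
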
## Report Format

```markdown
# Optimization Report: {study_name}

## Executive Summary
- **Best Solution**: {values}
- **Total Trials**: {count} FEA + {count} NN
- **Optimization Time**: {duration}

## Results

### Pareto Front (if multi-objective)
| Rank | {obj1} | {obj2} | {obj3} | {var1} | {var2} |
|------|--------|--------|--------|--------|--------|
| 1    | ...    | ...    | ...    | ...    | ...    |

### Best Single Solution
| Parameter | Value | Unit   |
|-----------|-------|--------|
| {var1}    | {val} | {unit} |

### Convergence
- Trials to 90% optimal: {n}
- Final improvement rate: {rate}%

## Neural Surrogate Performance (if applicable)
| Objective | NN Error | CV Ratio | Quality |
|-----------|----------|----------|---------|
| mass      | 2.1%     | 0.4      | Good    |
| stress    | 5.3%     | 1.2      | Fair    |

## Recommendations
1. {recommendation}
2. {recommendation}

## Next Steps
- [ ] Validate top 3 solutions with full FEA
- [ ] Consider refining search around best region
- [ ] Export results for manufacturing
```

## Query Examples

```python
import optuna
import numpy as np

# Top 10 trials by first objective (ascending, i.e. minimization)
trials_sorted = sorted(
    study.trials,
    key=lambda t: t.values[0] if t.values else float('inf'),
)[:10]

# Pareto front (multi-objective studies)
pareto_trials = list(study.best_trials)

# Statistics over completed trials only
values = [t.values[0] for t in study.trials
          if t.state == optuna.trial.TrialState.COMPLETE]
print(f"Mean: {np.mean(values):.3f}, Std: {np.std(values):.3f}")
```

## Critical Rules

1. **Only analyze completed trials** - Check `trial.state == COMPLETE`
2. **Handle NaN/None values** - Some trials may have failed
3. **Use appropriate metrics** - Hypervolume for multi-obj, best value for single
4. **Include uncertainty** - Report standard deviations where appropriate
5. **Be actionable** - Every insight should lead to a decision

73
.claude/commands/study-builder.md
Normal file
@@ -0,0 +1,73 @@
# Study Builder Subagent

You are a specialized Atomizer Study Builder agent. Your task is to create a complete optimization study from the user's description.

## Context Loading

Load these files first:
1. `.claude/skills/core/study-creation-core.md` - Core study creation patterns
2. `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` - Available extractors
3. `optimization_engine/templates/registry.json` - Study templates

## Your Capabilities

1. **Model Introspection**: Analyze NX .prt/.sim files to discover expressions, mesh types
2. **Config Generation**: Create optimization_config.json with proper structure
3. **Script Generation**: Create run_optimization.py using ConfigDrivenRunner
4. **Template Selection**: Choose an appropriate template based on the problem type

## Workflow

1. **Gather Requirements**
   - What is the model file path (.prt, .sim)?
   - What are the design variables (expressions to vary)?
   - What objectives to optimize (mass, stress, frequency, etc.)?
   - Any constraints?

2. **Introspect Model** (if available)
   ```python
   from optimization_engine.hooks.nx_cad.model_introspection import introspect_study
   info = introspect_study("path/to/study/")
   ```

3. **Select Template**
   - Multi-objective structural → `multi_objective_structural`
   - Frequency optimization → `frequency_optimization`
   - Mass minimization → `single_objective_mass`
   - Mirror wavefront → `mirror_wavefront`

4. **Generate Config** following the schema in study-creation-core.md

5. **Generate Scripts** using templates from:
   - `optimization_engine/templates/run_optimization_template.py`
   - `optimization_engine/templates/run_nn_optimization_template.py`

## Output Format

Return a structured report:
```
## Study Created: {study_name}

### Files Generated
- optimization_config.json
- run_optimization.py
- run_nn_optimization.py (if applicable)

### Configuration Summary
- Design Variables: {count}
- Objectives: {list}
- Constraints: {list}
- Recommended Trials: {number}

### Next Steps
1. Run `python run_optimization.py --discover` to validate model
2. Run `python run_optimization.py --validate` to test pipeline
3. Run `python run_optimization.py --run` to start optimization
```

## Critical Rules

1. **NEVER copy code from existing studies** - Use templates and base classes
2. **ALWAYS use ConfigDrivenRunner** - No custom objective functions
3. **ALWAYS validate paths** before generating config
4. **Use element_type='auto'** unless explicitly specified

@@ -1,6 +1,16 @@
+---
+skill_id: SKILL_000
+version: 2.0
+last_updated: 2025-12-07
+type: bootstrap
+code_dependencies: []
+requires_skills: []
+---
+
 # Atomizer LLM Bootstrap
 
-**Version**: 1.0
+**Version**: 2.0
 **Updated**: 2025-12-07
 **Purpose**: First file any LLM session reads. Provides instant orientation and task routing.
 
 ---
@@ -61,7 +71,7 @@ User Request
|
||||
|
||||
| User Intent | Keywords | Protocol | Skill to Load | Privilege |
|
||||
|-------------|----------|----------|---------------|-----------|
|
||||
| Create study | "new", "set up", "create", "optimize" | OP_01 | **create-study-wizard.md** | user |
|
||||
| Create study | "new", "set up", "create", "optimize" | OP_01 | **core/study-creation-core.md** | user |
|
||||
| Run optimization | "start", "run", "execute", "begin" | OP_02 | - | user |
|
||||
| Monitor progress | "status", "progress", "trials", "check" | OP_03 | - | user |
|
||||
| Analyze results | "results", "best", "compare", "pareto" | OP_04 | - | user |
|
||||
@@ -107,15 +117,14 @@ See `02_CONTEXT_LOADER.md` for complete loading rules.

 **Quick Reference**:
 ```
-CREATE_STUDY → create-study-wizard.md (PRIMARY)
-             → Use: from optimization_engine.study_wizard import StudyWizard, create_study
-             → modules/extractors-catalog.md (if asks about extractors)
+CREATE_STUDY → core/study-creation-core.md (PRIMARY)
+             → SYS_12_EXTRACTOR_LIBRARY.md (extractor reference)
              → modules/zernike-optimization.md (if telescope/mirror)
              → modules/neural-acceleration.md (if >50 trials)

 RUN_OPTIMIZATION → OP_02_RUN_OPTIMIZATION.md
                  → SYS_10_IMSO.md (if adaptive)
                  → SYS_13_DASHBOARD_TRACKING.md (if monitoring)
                  → SYS_15_METHOD_SELECTOR.md (method recommendation)
                  → SYS_14_NEURAL_ACCELERATION.md (if neural/turbo)

 DEBUG → OP_06_TROUBLESHOOT.md
       → Relevant SYS_* based on error type

@@ -1,6 +1,19 @@
+---
+skill_id: SKILL_001
+version: 2.0
+last_updated: 2025-12-07
+type: reference
+code_dependencies:
+  - optimization_engine/extractors/__init__.py
+  - optimization_engine/method_selector.py
+requires_skills:
+  - SKILL_000
+---
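The frontmatter above gives each skill machine-readable metadata. A hypothetical loader could read it to resolve `requires_skills` before loading a skill; the parser below deliberately handles only this flat `key: value` / `- item` shape, not general YAML:

```python
def parse_frontmatter(text: str) -> dict:
    """Parse the flat YAML-like frontmatter used by the skill files.

    Handles only `key: value`, `key: []`, and `- item` entries --
    a sketch, not a general YAML parser.
    """
    meta, key = {}, None
    for raw in text.strip().splitlines():
        line = raw.strip()
        if line == "---" or not line:
            continue
        if line.startswith("- ") and isinstance(meta.get(key), list):
            # Continuation of the list opened by the previous key.
            meta[key].append(line[2:].strip())
        else:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            meta[key] = [] if value in ("", "[]") else value
    return meta

skill = parse_frontmatter("""\
---
skill_id: SKILL_001
version: 2.0
requires_skills:
  - SKILL_000
---
""")
```

With this, a session can assert that SKILL_000 is loaded before SKILL_001.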

 # Atomizer Quick Reference Cheatsheet

-**Version**: 1.0
+**Version**: 2.0
 **Updated**: 2025-12-07
 **Purpose**: Rapid lookup for common operations. "I want X → Use Y"

 ---

@@ -1,6 +1,17 @@
+---
+skill_id: SKILL_002
+version: 2.0
+last_updated: 2025-12-07
+type: loader
+code_dependencies: []
+requires_skills:
+  - SKILL_000
+---

 # Atomizer Context Loader

-**Version**: 1.0
+**Version**: 2.0
 **Updated**: 2025-12-07
 **Purpose**: Define what documentation to load based on task type. Ensures LLM sessions have exactly the context needed.

 ---
@@ -22,26 +33,29 @@

 **Always Load**:
 ```
-.claude/skills/core/study-creation-core.md
+.claude/skills/core/study-creation-core.md (SKILL_CORE_001)
 ```

 **Load If**:
 | Condition | Load |
 |-----------|------|
-| User asks about extractors | `modules/extractors-catalog.md` |
+| User asks about extractors | `docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md` |
 | Telescope/mirror/optics mentioned | `modules/zernike-optimization.md` |
-| >50 trials OR "neural" OR "surrogate" | `modules/neural-acceleration.md` |
+| >50 trials OR "neural" OR "surrogate" | `docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md` |
+| Multi-objective (2+ goals) | `docs/protocols/system/SYS_11_MULTI_OBJECTIVE.md` |
+| Method selection needed | `docs/protocols/system/SYS_15_METHOD_SELECTOR.md` |
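The "Load If" table is effectively a rule list: a predicate over the user request selects an extra document. A hypothetical sketch — predicates are simplified keyword checks, not the framework's actual detection logic, while the document paths come from the table:

```python
# Illustrative encoding of the "Load If" table; predicates are simplified.
LOAD_RULES = [
    (lambda req, trials: "extractor" in req,
     "docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md"),
    (lambda req, trials: any(w in req for w in ("telescope", "mirror", "optics")),
     "modules/zernike-optimization.md"),
    (lambda req, trials: trials > 50 or "neural" in req or "surrogate" in req,
     "docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md"),
]

def context_stack(request: str, trials: int = 0) -> list[str]:
    """Always load the core skill, then append every matching conditional doc."""
    req = request.lower()
    stack = [".claude/skills/core/study-creation-core.md"]
    stack += [doc for pred, doc in LOAD_RULES if pred(req, trials)]
    return stack
```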

 **Example Context Stack**:
 ```
 # Simple bracket optimization
 core/study-creation-core.md
 SYS_12_EXTRACTOR_LIBRARY.md

 # Mirror optimization with neural acceleration
 core/study-creation-core.md
 modules/zernike-optimization.md
-modules/neural-acceleration.md
+SYS_14_NEURAL_ACCELERATION.md
+SYS_15_METHOD_SELECTOR.md
 ```

 ---
@@ -254,9 +268,10 @@ Load Stack:
 User: "I need to optimize my M1 mirror's wavefront error with 200 trials"

 Load Stack:
-1. core/study-creation-core.md       # Core study creation
-2. modules/zernike-optimization.md   # Zernike-specific patterns
-3. modules/neural-acceleration.md    # Neural acceleration for 200 trials
+1. core/study-creation-core.md       # Core study creation
+2. modules/zernike-optimization.md   # Zernike-specific patterns
+3. SYS_14_NEURAL_ACCELERATION.md     # Neural acceleration for 200 trials
+4. SYS_15_METHOD_SELECTOR.md         # Method recommendation
 ```

 ### Example 3: Multi-Objective Structural
@@ -281,8 +296,8 @@ Load Stack:
 User: "I need to extract thermal gradients from my results"

 Load Stack:
-1. EXT_01_CREATE_EXTRACTOR.md        # Extractor creation guide
-2. modules/extractors-catalog.md     # Reference existing patterns
+1. EXT_01_CREATE_EXTRACTOR.md        # Extractor creation guide
+2. SYS_12_EXTRACTOR_LIBRARY.md       # Reference existing patterns
 ```

 ---

@@ -1,7 +1,20 @@
+---
+skill_id: SKILL_CORE_001
+version: 2.4
+last_updated: 2025-12-07
+type: core
+code_dependencies:
+  - optimization_engine/base_runner.py
+  - optimization_engine/extractors/__init__.py
+  - optimization_engine/templates/registry.json
+requires_skills: []
+replaces: create-study.md
+---

 # Study Creation Core Skill

-**Last Updated**: December 6, 2025
-**Version**: 2.3 - Added Model Introspection
+**Version**: 2.4
+**Updated**: 2025-12-07
 **Type**: Core Skill

 You are helping the user create a complete Atomizer optimization study from a natural language description.

File diff suppressed because it is too large