feat: Add MLP surrogate with Turbo Mode for 100x faster optimization
Neural Acceleration (MLP Surrogate): - Add run_nn_optimization.py with hybrid FEA/NN workflow - MLP architecture: 4-layer (64->128->128->64) with BatchNorm/Dropout - Three workflow modes: - --all: Sequential export->train->optimize->validate - --hybrid-loop: Iterative Train->NN->Validate->Retrain cycle - --turbo: Aggressive single-best validation (RECOMMENDED) - Turbo mode: 5000 NN trials + 50 FEA validations in ~12 minutes - Separate nn_study.db to avoid overloading dashboard Performance Results (bracket_pareto_3obj study): - NN prediction errors: mass 1-5%, stress 1-4%, stiffness 5-15% - Found minimum mass designs at boundary (angle~30deg, thick~30mm) - 100x speedup vs pure FEA exploration Protocol Operating System: - Add .claude/skills/ with Bootstrap, Cheatsheet, Context Loader - Add docs/protocols/ with operations (OP_01-06) and system (SYS_10-14) - Update SYS_14_NEURAL_ACCELERATION.md with MLP Turbo Mode docs NX Automation: - Add optimization_engine/hooks/ for NX CAD/CAE automation - Add study_wizard.py for guided study creation - Fix FEM mesh update: load idealized part before UpdateFemodel() New Study: - bracket_pareto_3obj: 3-objective Pareto (mass, stress, stiffness) - 167 FEA trials + 5000 NN trials completed - Demonstrates full hybrid workflow 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
160
docs/protocols/README.md
Normal file
@@ -0,0 +1,160 @@
# Atomizer Protocol Operating System (POS)

**Version**: 1.0
**Last Updated**: 2025-12-05

---

## Overview

This directory contains the **Protocol Operating System (POS)** - a 4-layer documentation architecture optimized for LLM consumption.

---

## Directory Structure

```
protocols/
├── README.md                       # This file
├── operations/                     # Layer 2: How-to guides
│   ├── OP_01_CREATE_STUDY.md
│   ├── OP_02_RUN_OPTIMIZATION.md
│   ├── OP_03_MONITOR_PROGRESS.md
│   ├── OP_04_ANALYZE_RESULTS.md
│   ├── OP_05_EXPORT_TRAINING_DATA.md
│   └── OP_06_TROUBLESHOOT.md
├── system/                         # Layer 3: Core specifications
│   ├── SYS_10_IMSO.md
│   ├── SYS_11_MULTI_OBJECTIVE.md
│   ├── SYS_12_EXTRACTOR_LIBRARY.md
│   ├── SYS_13_DASHBOARD_TRACKING.md
│   └── SYS_14_NEURAL_ACCELERATION.md
└── extensions/                     # Layer 4: Extensibility guides
    ├── EXT_01_CREATE_EXTRACTOR.md
    ├── EXT_02_CREATE_HOOK.md
    ├── EXT_03_CREATE_PROTOCOL.md
    ├── EXT_04_CREATE_SKILL.md
    └── templates/
```

---

## Layer Descriptions

### Layer 1: Bootstrap (`.claude/skills/`)

Entry point for LLM sessions. Contains:

- `00_BOOTSTRAP.md` - Quick orientation and task routing
- `01_CHEATSHEET.md` - "I want X → Use Y" lookup
- `02_CONTEXT_LOADER.md` - What to load per task
- `PROTOCOL_EXECUTION.md` - Meta-protocol for execution

### Layer 2: Operations (`operations/`)

Day-to-day how-to guides:

- **OP_01**: Create an optimization study
- **OP_02**: Run optimization
- **OP_03**: Monitor progress
- **OP_04**: Analyze results
- **OP_05**: Export training data
- **OP_06**: Troubleshoot issues

### Layer 3: System (`system/`)

Core technical specifications:

- **SYS_10**: Intelligent Multi-Strategy Optimization (IMSO)
- **SYS_11**: Multi-Objective Support (MANDATORY)
- **SYS_12**: Extractor Library
- **SYS_13**: Real-Time Dashboard Tracking
- **SYS_14**: Neural Network Acceleration

### Layer 4: Extensions (`extensions/`)

Guides for extending Atomizer:

- **EXT_01**: Create a new extractor
- **EXT_02**: Create a lifecycle hook
- **EXT_03**: Create a new protocol
- **EXT_04**: Create a new skill

---

## Protocol Template

All protocols follow this structure:

```markdown
# {LAYER}_{NUMBER}_{NAME}.md

<!--
PROTOCOL: {Full Name}
LAYER: {Operations|System|Extensions}
VERSION: {Major.Minor}
STATUS: {Active|Draft|Deprecated}
LAST_UPDATED: {YYYY-MM-DD}
PRIVILEGE: {user|power_user|admin}
LOAD_WITH: [{dependencies}]
-->

## Overview
{1-3 sentence description}

## When to Use
| Trigger | Action |
|---------|--------|

## Quick Reference
{Tables, key parameters}

## Detailed Specification
{Full content}

## Examples
{Working examples}

## Troubleshooting
| Symptom | Cause | Solution |

## Cross-References
- Depends On: []
- Used By: []
```

---

## Quick Navigation

### By Task

| I want to... | Protocol |
|--------------|----------|
| Create a study | [OP_01](operations/OP_01_CREATE_STUDY.md) |
| Run optimization | [OP_02](operations/OP_02_RUN_OPTIMIZATION.md) |
| Check progress | [OP_03](operations/OP_03_MONITOR_PROGRESS.md) |
| Analyze results | [OP_04](operations/OP_04_ANALYZE_RESULTS.md) |
| Export neural data | [OP_05](operations/OP_05_EXPORT_TRAINING_DATA.md) |
| Fix errors | [OP_06](operations/OP_06_TROUBLESHOOT.md) |
| Add extractor | [EXT_01](extensions/EXT_01_CREATE_EXTRACTOR.md) |

### By Protocol Number

| # | Name | Layer |
|---|------|-------|
| 10 | IMSO | [System](system/SYS_10_IMSO.md) |
| 11 | Multi-Objective | [System](system/SYS_11_MULTI_OBJECTIVE.md) |
| 12 | Extractors | [System](system/SYS_12_EXTRACTOR_LIBRARY.md) |
| 13 | Dashboard | [System](system/SYS_13_DASHBOARD_TRACKING.md) |
| 14 | Neural | [System](system/SYS_14_NEURAL_ACCELERATION.md) |

---

## Privilege Levels

| Level | Operations | System | Extensions |
|-------|------------|--------|------------|
| user | All OP_* | Read SYS_* | None |
| power_user | All OP_* | Read SYS_* | EXT_01, EXT_02 |
| admin | All | All | All |
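The privilege matrix above can also be enforced mechanically in tooling. A minimal sketch, assuming a hypothetical `can_access` helper and `PRIVILEGES` table (neither exists in the Atomizer codebase; this only mirrors the table above):

```python
# Hypothetical enforcement of the privilege matrix above.
# "all"/"read" grant access to every protocol in that layer;
# a set grants access only to the listed protocol IDs.
PRIVILEGES = {
    "user":       {"OP": "all", "SYS": "read", "EXT": set()},
    "power_user": {"OP": "all", "SYS": "read", "EXT": {"EXT_01", "EXT_02"}},
    "admin":      {"OP": "all", "SYS": "all",  "EXT": "all"},
}


def can_access(level: str, protocol: str) -> bool:
    """Return True if `level` may use `protocol` (e.g. 'EXT_01_CREATE_EXTRACTOR')."""
    rules = PRIVILEGES[level]
    layer = protocol.split("_")[0]          # 'OP', 'SYS', or 'EXT'
    rule = rules.get(layer)
    if rule in ("all", "read"):             # read access still counts as access
        return True
    # Set-based rules match on the protocol ID prefix, e.g. 'EXT_01'
    return isinstance(rule, set) and protocol[:6] in rule
```
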
---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial Protocol Operating System |
395
docs/protocols/extensions/EXT_01_CREATE_EXTRACTOR.md
Normal file
@@ -0,0 +1,395 @@
# EXT_01: Create New Extractor

<!--
PROTOCOL: Create New Physics Extractor
LAYER: Extensions
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: power_user
LOAD_WITH: [SYS_12_EXTRACTOR_LIBRARY]
-->

## Overview

This protocol guides you through creating a new physics extractor for the centralized extractor library. Follow it when you need to extract results not covered by existing extractors.

**Privilege Required**: power_user or admin

---

## When to Use

| Trigger | Action |
|---------|--------|
| Need physics not in library | Follow this protocol |
| "create extractor", "new extractor" | Follow this protocol |
| Custom result extraction needed | Follow this protocol |

**First**: Check [SYS_12_EXTRACTOR_LIBRARY](../system/SYS_12_EXTRACTOR_LIBRARY.md) - the functionality may already exist!

---

## Quick Reference

**Create in**: `optimization_engine/extractors/`
**Export from**: `optimization_engine/extractors/__init__.py`
**Document in**: Update SYS_12 and this protocol

**Template location**: `docs/protocols/extensions/templates/extractor_template.py`

---

## Step-by-Step Guide

### Step 1: Verify Need

Before creating:
1. Check existing extractors in [SYS_12](../system/SYS_12_EXTRACTOR_LIBRARY.md)
2. Search the codebase: `grep -r "your_physics" optimization_engine/`
3. Confirm there is no existing solution

### Step 1.5: Research NX Open APIs (REQUIRED for NX extractors)

**If the extractor needs NX Open APIs** (not just pyNastran OP2 parsing):

```
# 1. Search for relevant NX Open APIs
siemens_docs_search("inertia properties NXOpen")
siemens_docs_search("mass properties body NXOpen.CAE")

# 2. Fetch detailed documentation for promising classes
siemens_docs_fetch("NXOpen.MeasureManager")
siemens_docs_fetch("NXOpen.UF.UFWeight")

# 3. Get method signatures
siemens_docs_search("AskMassProperties NXOpen")
```

**When to use NX Open vs pyNastran:**

| Data Source | Tool | Example |
|-------------|------|---------|
| OP2 results (stress, disp, freq) | pyNastran | `extract_displacement()` |
| CAD properties (mass, inertia) | NX Open | New extractor with NXOpen API |
| BDF data (mesh, properties) | pyNastran | `extract_mass_from_bdf()` |
| NX expressions | NX Open | `extract_mass_from_expression()` |
| FEM model data | NX Open CAE | Needs `NXOpen.CAE.*` APIs |

**Document the APIs used** in the extractor docstring:

```python
def extract_inertia(part_file: Path) -> Dict[str, Any]:
    """
    Extract mass and inertia properties from NX part.

    NX Open APIs Used:
    - NXOpen.MeasureManager.NewMassProperties()
    - NXOpen.MeasureBodies.InformationUnit
    - NXOpen.UF.UFWeight.AskProps()

    See: docs.sw.siemens.com for full API reference
    """
```

### Step 2: Create Extractor File

Create `optimization_engine/extractors/extract_{physics}.py`:

```python
"""
Extract {Physics Name} from FEA results.

Author: {Your Name}
Created: {Date}
Version: 1.0
"""

from pathlib import Path
from typing import Dict, Any, Optional, Union

from pyNastran.op2.op2 import OP2


def extract_{physics}(
    op2_file: Union[str, Path],
    subcase: int = 1,
    # Add other parameters as needed
) -> Dict[str, Any]:
    """
    Extract {physics description} from OP2 file.

    Args:
        op2_file: Path to the OP2 results file
        subcase: Subcase number to extract (default: 1)

    Returns:
        Dictionary containing:
        - '{main_result}': The primary result value
        - '{secondary}': Additional result info
        - 'subcase': The subcase extracted

    Raises:
        FileNotFoundError: If OP2 file doesn't exist
        KeyError: If subcase not found in results
        ValueError: If result data is invalid

    Example:
        >>> result = extract_{physics}('model.op2', subcase=1)
        >>> print(result['{main_result}'])
        123.45
    """
    op2_file = Path(op2_file)

    if not op2_file.exists():
        raise FileNotFoundError(f"OP2 file not found: {op2_file}")

    # Read OP2 file
    op2 = OP2()
    op2.read_op2(str(op2_file))

    # Extract your physics
    # TODO: Implement extraction logic

    # Example for a displacement-like result:
    if subcase not in op2.displacements:
        raise KeyError(f"Subcase {subcase} not found in results")

    data = op2.displacements[subcase]
    # Process data...

    return {
        '{main_result}': computed_value,
        '{secondary}': secondary_value,
        'subcase': subcase,
    }


# Optional: Class-based extractor for complex cases
class {Physics}Extractor:
    """
    Class-based extractor for {physics} with state management.

    Use when extraction requires multiple steps or configuration.
    """

    def __init__(self, op2_file: Union[str, Path], **config):
        self.op2_file = Path(op2_file)
        self.config = config
        self._op2 = None

    def _load_op2(self):
        """Lazily load the OP2 file."""
        if self._op2 is None:
            self._op2 = OP2()
            self._op2.read_op2(str(self.op2_file))
        return self._op2

    def extract(self, subcase: int = 1) -> Dict[str, Any]:
        """Extract results for the given subcase."""
        op2 = self._load_op2()
        # Implementation here
        pass
```

### Step 3: Add to `__init__.py`

Edit `optimization_engine/extractors/__init__.py`:

```python
# Add import
from .extract_{physics} import extract_{physics}
# Or for a class
from .extract_{physics} import {Physics}Extractor

# Add to __all__
__all__ = [
    # ... existing exports ...
    'extract_{physics}',
    '{Physics}Extractor',
]
```

### Step 4: Write Tests

Create `tests/test_extract_{physics}.py`:

```python
"""Tests for {physics} extractor."""

import pytest
from pathlib import Path
from optimization_engine.extractors import extract_{physics}


class TestExtract{Physics}:
    """Test suite for {physics} extraction."""

    @pytest.fixture
    def sample_op2(self, tmp_path):
        """Create or copy a sample OP2 for testing."""
        # Either copy an existing test file or create a mock
        pass

    def test_basic_extraction(self, sample_op2):
        """Test that basic extraction works."""
        result = extract_{physics}(sample_op2)
        assert '{main_result}' in result
        assert isinstance(result['{main_result}'], float)

    def test_file_not_found(self):
        """Test error handling for a missing file."""
        with pytest.raises(FileNotFoundError):
            extract_{physics}('nonexistent.op2')

    def test_invalid_subcase(self, sample_op2):
        """Test error handling for an invalid subcase."""
        with pytest.raises(KeyError):
            extract_{physics}(sample_op2, subcase=999)
```

### Step 5: Document

#### Update SYS_12_EXTRACTOR_LIBRARY.md

Add to the Quick Reference table:
```markdown
| E{N} | {Physics} | `extract_{physics}()` | .op2 | {unit} |
```

Add a detailed section:
```markdown
### E{N}: {Physics} Extraction

**Module**: `optimization_engine.extractors.extract_{physics}`

\`\`\`python
from optimization_engine.extractors import extract_{physics}

result = extract_{physics}(op2_file, subcase=1)
{main_result} = result['{main_result}']
\`\`\`
```

#### Update skills/modules/extractors-catalog.md

Add an entry following the existing pattern.

### Step 6: Validate

```bash
# Run tests
pytest tests/test_extract_{physics}.py -v

# Test import
python -c "from optimization_engine.extractors import extract_{physics}; print('OK')"

# Test with a real file
python -c "
from optimization_engine.extractors import extract_{physics}
result = extract_{physics}('path/to/test.op2')
print(result)
"
```

---

## Extractor Design Guidelines

### Do's

- Return dictionaries with clear keys
- Include metadata (subcase, units, etc.)
- Handle edge cases gracefully
- Provide clear error messages
- Document all parameters and returns
- Write tests

### Don'ts

- Don't re-parse the OP2 multiple times in one call
- Don't hardcode paths
- Don't swallow exceptions silently
- Don't return raw pyNastran objects
- Don't modify input files

### Naming Conventions

| Type | Convention | Example |
|------|------------|---------|
| File | `extract_{physics}.py` | `extract_thermal.py` |
| Function | `extract_{physics}` | `extract_thermal` |
| Class | `{Physics}Extractor` | `ThermalExtractor` |
| Return key | lowercase_with_underscores | `max_temperature` |

---

## Examples

### Example: Thermal Gradient Extractor

```python
"""Extract thermal gradients from temperature results."""

from pathlib import Path
from typing import Dict, Any

from pyNastran.op2.op2 import OP2
import numpy as np


def extract_thermal_gradient(
    op2_file: Path,
    subcase: int = 1,
    direction: str = 'magnitude'
) -> Dict[str, Any]:
    """
    Extract thermal gradient from the temperature field.

    Args:
        op2_file: Path to OP2 file
        subcase: Subcase number
        direction: 'magnitude', 'x', 'y', or 'z'

    Returns:
        Dictionary with gradient results
    """
    op2 = OP2()
    op2.read_op2(str(op2_file))

    temps = op2.temperatures[subcase]
    # Calculate the gradient from temps, defining
    # max_grad, mean_grad, and location...

    return {
        'max_gradient': max_grad,
        'mean_gradient': mean_grad,
        'max_gradient_location': location,
        'direction': direction,
        'subcase': subcase,
        'unit': 'K/mm'
    }
```

---

## Troubleshooting

| Issue | Cause | Solution |
|-------|-------|----------|
| Import error | Not added to `__init__.py` | Add the export |
| "No module" | Wrong file location | Check the path |
| KeyError | Wrong OP2 data structure | Debug the OP2 contents |
| Tests fail | Missing test data | Create fixtures |
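The Do's above (dict results, metadata keys, snake_case return keys) can also be checked mechanically in tests. A minimal sketch; `validate_extractor_result` is a hypothetical helper, not part of the library:

```python
# Hypothetical sanity check for an extractor's return value against the
# design guidelines: must be a dict, must carry 'subcase' metadata, and
# every key must be lowercase_with_underscores.
import re


def validate_extractor_result(result):
    """Return a list of guideline violations (an empty list means OK)."""
    if not isinstance(result, dict):
        return ["extractor must return a dict"]
    issues = []
    if "subcase" not in result:
        issues.append("missing 'subcase' metadata key")
    for key in result:
        if not re.fullmatch(r"[a-z][a-z0-9_]*", str(key)):
            issues.append(f"key {key!r} is not lowercase_with_underscores")
    return issues
```
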
---

## Cross-References

- **Reference**: [SYS_12_EXTRACTOR_LIBRARY](../system/SYS_12_EXTRACTOR_LIBRARY.md)
- **Template**: `templates/extractor_template.py`
- **Related**: [EXT_02_CREATE_HOOK](./EXT_02_CREATE_HOOK.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
366
docs/protocols/extensions/EXT_02_CREATE_HOOK.md
Normal file
@@ -0,0 +1,366 @@
# EXT_02: Create Lifecycle Hook

<!--
PROTOCOL: Create Lifecycle Hook Plugin
LAYER: Extensions
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: power_user
LOAD_WITH: []
-->

## Overview

This protocol guides you through creating lifecycle hooks that execute at specific points during optimization. Hooks enable custom logic injection without modifying core code.

**Privilege Required**: power_user or admin

---

## When to Use

| Trigger | Action |
|---------|--------|
| Need custom logic at a specific point | Follow this protocol |
| "create hook", "callback" | Follow this protocol |
| Want to log/validate/modify at runtime | Follow this protocol |

---

## Quick Reference

**Hook Points Available**:

| Hook Point | When It Runs | Use Case |
|------------|--------------|----------|
| `pre_mesh` | Before meshing | Validate geometry |
| `post_mesh` | After meshing | Check mesh quality |
| `pre_solve` | Before solver | Log trial start |
| `post_solve` | After solver | Validate results |
| `post_extraction` | After extraction | Custom metrics |
| `post_calculation` | After objectives | Derived quantities |
| `custom_objective` | Custom objective | Complex objectives |

**Create in**: `optimization_engine/plugins/{hook_point}/`

---

## Step-by-Step Guide

### Step 1: Identify Hook Point

Choose the appropriate hook point:

```
Trial Flow:
│
├─► PRE_MESH         → Validate model before meshing
│
├─► POST_MESH        → Check mesh quality
│
├─► PRE_SOLVE        → Log trial start, validate inputs
│
├─► POST_SOLVE       → Check solve success, capture timing
│
├─► POST_EXTRACTION  → Compute derived quantities
│
├─► POST_CALCULATION → Final validation, logging
│
└─► CUSTOM_OBJECTIVE → Custom objective functions
```

### Step 2: Create Hook File

Create `optimization_engine/plugins/{hook_point}/{hook_name}.py`:

```python
"""
{Hook Description}

Author: {Your Name}
Created: {Date}
Version: 1.0
Hook Point: {hook_point}
"""

from typing import Dict, Any


def {hook_name}_hook(context: Dict[str, Any]) -> Dict[str, Any]:
    """
    {Description of what this hook does}.

    Args:
        context: Dictionary containing:
        - trial_number: Current trial number
        - design_params: Current design parameters
        - results: Results so far (if post-extraction)
        - config: Optimization config
        - working_dir: Path to working directory

    Returns:
        Dictionary with computed values or modifications.
        Return an empty dict if no modifications are needed.

    Example:
        >>> result = {hook_name}_hook({'trial_number': 1, ...})
        >>> print(result)
        {'{computed_key}': 123.45}
    """
    # Access context
    trial_num = context.get('trial_number')
    design_params = context.get('design_params', {})
    results = context.get('results', {})

    # Your logic here
    # ...

    # Return computed values
    return {
        '{computed_key}': computed_value,
    }


def register_hooks(hook_manager):
    """
    Register this hook with the hook manager.

    This function is called automatically when plugins are loaded.

    Args:
        hook_manager: The HookManager instance
    """
    hook_manager.register_hook(
        hook_point='{hook_point}',
        function={hook_name}_hook,
        name='{hook_name}_hook',
        description='{Brief description}',
        priority=100,  # Lower = runs earlier
        enabled=True
    )
```

### Step 3: Test Hook

```python
# Test in isolation
from optimization_engine.plugins.{hook_point}.{hook_name} import {hook_name}_hook

test_context = {
    'trial_number': 1,
    'design_params': {'thickness': 5.0},
    'results': {'max_stress': 200.0},
}

result = {hook_name}_hook(test_context)
print(result)
```

### Step 4: Enable Hook

Hooks are auto-discovered from the plugins directory. To verify:

```python
from optimization_engine.plugins.hook_manager import HookManager

manager = HookManager()
manager.discover_plugins()
print(manager.list_hooks())
```

---

## Hook Examples

### Example 1: Safety Factor Calculator (post_calculation)

```python
"""Calculate safety factor after stress extraction."""


def safety_factor_hook(context):
    """Calculate safety factor from stress results."""
    results = context.get('results', {})
    config = context.get('config', {})

    max_stress = results.get('max_von_mises', 0)
    yield_strength = config.get('material', {}).get('yield_strength', 250)

    if max_stress > 0:
        safety_factor = yield_strength / max_stress
    else:
        safety_factor = float('inf')

    return {
        'safety_factor': safety_factor,
        'yield_strength': yield_strength,
    }


def register_hooks(hook_manager):
    hook_manager.register_hook(
        hook_point='post_calculation',
        function=safety_factor_hook,
        name='safety_factor_hook',
        description='Calculate safety factor from stress',
        priority=100,
        enabled=True
    )
```

### Example 2: Trial Logger (pre_solve)

```python
"""Log trial information before solve."""

import json
from datetime import datetime
from pathlib import Path


def trial_logger_hook(context):
    """Log trial start information."""
    trial_num = context.get('trial_number')
    design_params = context.get('design_params', {})
    working_dir = context.get('working_dir', Path('.'))

    log_entry = {
        'trial': trial_num,
        'timestamp': datetime.now().isoformat(),
        'params': design_params,
    }

    log_file = working_dir / 'trial_log.jsonl'
    with open(log_file, 'a') as f:
        f.write(json.dumps(log_entry) + '\n')

    return {}  # No modifications


def register_hooks(hook_manager):
    hook_manager.register_hook(
        hook_point='pre_solve',
        function=trial_logger_hook,
        name='trial_logger_hook',
        description='Log trial parameters before solve',
        priority=10,  # Run early
        enabled=True
    )
```

### Example 3: Mesh Quality Check (post_mesh)

```python
"""Validate mesh quality after meshing."""


def mesh_quality_hook(context):
    """Check mesh quality metrics."""
    mesh_file = context.get('mesh_file')

    # Check quality metrics
    quality_issues = []

    # ... quality checks ...

    if quality_issues:
        context['warnings'] = context.get('warnings', []) + quality_issues

    return {
        'mesh_quality_passed': len(quality_issues) == 0,
        'mesh_issues': quality_issues,
    }


def register_hooks(hook_manager):
    hook_manager.register_hook(
        hook_point='post_mesh',
        function=mesh_quality_hook,
        name='mesh_quality_hook',
        description='Validate mesh quality',
        priority=50,
        enabled=True
    )
```

---

## Hook Context Reference

### Standard Context Keys

| Key | Type | Available At | Description |
|-----|------|--------------|-------------|
| `trial_number` | int | All | Current trial number |
| `design_params` | dict | All | Design parameter values |
| `config` | dict | All | Optimization config |
| `working_dir` | Path | All | Study working directory |
| `model_file` | Path | pre_mesh+ | NX model file path |
| `mesh_file` | Path | post_mesh+ | Mesh file path |
| `op2_file` | Path | post_solve+ | Results file path |
| `results` | dict | post_extraction+ | Extracted results |
| `objectives` | dict | post_calculation | Computed objectives |

### Priority Guidelines

| Priority Range | Use For |
|----------------|---------|
| 1-50 | Critical hooks that must run first |
| 50-100 | Standard hooks |
| 100-150 | Logging and monitoring |
| 150+ | Cleanup and finalization |

---

## Managing Hooks

### Enable/Disable at Runtime

```python
hook_manager.disable_hook('my_hook')
hook_manager.enable_hook('my_hook')
```

### Check Hook Status

```python
hooks = hook_manager.list_hooks()
for hook in hooks:
    print(f"{hook['name']}: {'enabled' if hook['enabled'] else 'disabled'}")
```

### Hook Execution Order

Hooks at the same point run in priority order (lower first):
```
Priority 10:  trial_logger_hook
Priority 50:  mesh_quality_hook
Priority 100: safety_factor_hook
```
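The ordering above amounts to a priority-sorted dispatch. A minimal sketch of the idea; the real `HookManager` internals may differ, and the hook records here are plain dicts for illustration:

```python
# Illustrative priority-ordered hook dispatch: run enabled hooks at one
# hook point, lowest priority first, merging their returned dicts into
# a single result (later hooks can overwrite earlier keys).
def run_hooks(hooks, context):
    """Run all enabled hooks at one hook point, lowest priority first."""
    results = {}
    for hook in sorted(hooks, key=lambda h: h["priority"]):
        if hook["enabled"]:
            results.update(hook["function"](context) or {})
    return results
```
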
---

## Troubleshooting

| Issue | Cause | Solution |
|-------|-------|----------|
| Hook not running | Not registered | Check the `register_hooks` function |
| Wrong hook point | Misnamed directory | Check that the directory name matches the hook point |
| Context missing key | Wrong hook point | Use the hook point appropriate for the data needed |
| Hook error crashes trial | Unhandled exception | Add try/except in the hook |
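For the last row above, a defensive pattern is to wrap the hook body so an unexpected error becomes a reported entry instead of crashing the trial. A minimal sketch; `safe_hook` is illustrative, not part of the actual Atomizer API:

```python
# Hypothetical wrapper turning hook exceptions into result entries so a
# buggy hook degrades gracefully instead of aborting the trial.
import traceback


def safe_hook(hook_fn):
    """Wrap a hook so exceptions become an error entry, not a crash."""
    def wrapper(context):
        try:
            return hook_fn(context) or {}
        except Exception as exc:
            return {
                "hook_error": f"{hook_fn.__name__}: {exc}",
                "hook_traceback": traceback.format_exc(),
            }
    return wrapper
```
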
---

## Cross-References

- **Related**: [EXT_01_CREATE_EXTRACTOR](./EXT_01_CREATE_EXTRACTOR.md)
- **System**: `optimization_engine/plugins/hook_manager.py`
- **Template**: `templates/hook_template.py`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
263
docs/protocols/extensions/EXT_03_CREATE_PROTOCOL.md
Normal file
@@ -0,0 +1,263 @@
|
||||
# EXT_03: Create New Protocol
|
||||
|
||||
<!--
|
||||
PROTOCOL: Create New Protocol Document
|
||||
LAYER: Extensions
|
||||
VERSION: 1.0
|
||||
STATUS: Active
|
||||
LAST_UPDATED: 2025-12-05
|
||||
PRIVILEGE: admin
|
||||
LOAD_WITH: []
|
||||
-->
|
||||
|
||||
## Overview
|
||||
|
||||
This protocol guides you through creating new protocol documents for the Atomizer Protocol Operating System (POS). Use this when adding significant new system capabilities.
|
||||
|
||||
**Privilege Required**: admin
|
||||
|
||||
---
|
||||
|
||||
## When to Use
|
||||
|
||||
| Trigger | Action |
|
||||
|---------|--------|
|
||||
| Adding major new system capability | Follow this protocol |
|
||||
| "create protocol", "new protocol" | Follow this protocol |
|
||||
| Need to document architectural pattern | Follow this protocol |
|
||||
|
||||
---
|
||||
|
||||
## Protocol Types
|
||||
|
||||
| Layer | Prefix | Purpose | Example |
|
||||
|-------|--------|---------|---------|
|
||||
| Operations | OP_ | How-to guides | OP_01_CREATE_STUDY |
|
||||
| System | SYS_ | Core specifications | SYS_10_IMSO |
|
||||
| Extensions | EXT_ | Extensibility guides | EXT_01_CREATE_EXTRACTOR |
|
||||
|
||||
---
|
||||
|
||||
## Step-by-Step Guide
|
||||
|
||||
### Step 1: Determine Protocol Type
|
||||
|
||||
- **Operations (OP_)**: User-facing procedures
|
||||
- **System (SYS_)**: Technical specifications
|
||||
- **Extensions (EXT_)**: Developer guides
|
||||
|
||||
### Step 2: Assign Protocol Number
|
||||
|
||||
**Operations**: Sequential (OP_01, OP_02, ...)
|
||||
**System**: By feature area (SYS_10=optimization, SYS_11=multi-obj, etc.)
|
||||
**Extensions**: Sequential (EXT_01, EXT_02, ...)
|
||||
|
||||
Check existing protocols to avoid conflicts.
|
||||
|
||||
### Step 3: Create Protocol File

Use the template from `templates/protocol_template.md`:

```markdown
# {LAYER}_{NUMBER}_{NAME}.md

<!--
PROTOCOL: {Full Name}
LAYER: {Operations|System|Extensions}
VERSION: 1.0
STATUS: Active
LAST_UPDATED: {YYYY-MM-DD}
PRIVILEGE: {user|power_user|admin}
LOAD_WITH: [{dependencies}]
-->

## Overview

{1-3 sentence description of what this protocol does}

---

## When to Use

| Trigger | Action |
|---------|--------|
| {keyword or condition} | Follow this protocol |

---

## Quick Reference

{Tables with key parameters, commands, or mappings}

---

## Detailed Specification

### Section 1: {Topic}

{Content}

### Section 2: {Topic}

{Content}

---

## Examples

### Example 1: {Scenario}

{Complete working example}

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| {error} | {why} | {fix} |

---

## Cross-References

- **Depends On**: [{protocol}]({path})
- **Used By**: [{protocol}]({path})
- **See Also**: [{related}]({path})

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | {DATE} | Initial release |
```

### Step 4: Write Content

**Required Sections**:

1. Overview - What does this protocol do?
2. When to Use - Trigger conditions
3. Quick Reference - Fast lookup
4. Detailed Specification - Full content
5. Examples - Working examples
6. Troubleshooting - Common issues
7. Cross-References - Related protocols
8. Version History - Changes over time

**Writing Guidelines**:

- Front-load important information
- Use tables for structured data
- Include complete code examples
- Provide troubleshooting for common issues

### Step 5: Update Navigation

**docs/protocols/README.md**:

```markdown
| {NUM} | {Name} | [{Layer}]({layer}/(unknown)) |
```

**.claude/skills/01_CHEATSHEET.md**:

```markdown
| {task} | {LAYER}_{NUM} | {key info} |
```

**.claude/skills/02_CONTEXT_LOADER.md**: add loading rules if needed.

### Step 6: Update Cross-References

Add references in related protocols:

- "Depends On" in the new protocol
- "Used By" or "See Also" in existing protocols

### Step 7: Validate

Before publishing, confirm that:

- [ ] Markdown syntax is valid
- [ ] All links resolve
- [ ] Code examples run as written
- [ ] Formatting is consistent with existing protocols

---

## Protocol Metadata

### Header Comment Block

```markdown
<!--
PROTOCOL: Full Protocol Name
LAYER: Operations|System|Extensions
VERSION: Major.Minor
STATUS: Active|Draft|Deprecated
LAST_UPDATED: YYYY-MM-DD
PRIVILEGE: user|power_user|admin
LOAD_WITH: [SYS_10, SYS_11]
-->
```

### Status Values

| Status | Meaning |
|--------|---------|
| Draft | In development, not ready for use |
| Active | Production ready |
| Deprecated | Being phased out |

### Privilege Levels

| Level | Who Can Use |
|-------|-------------|
| user | All users |
| power_user | Developers who can extend |
| admin | Full system access |

---

## Versioning

### Semantic Versioning

- **Major (X.0)**: Breaking changes
- **Minor (1.X)**: New features, backward compatible
- **Patch (1.0.X)**: Bug fixes (usually omitted for docs)

### Version History Format

```markdown
| Version | Date | Changes |
|---------|------|---------|
| 2.0 | 2025-12-15 | Redesigned architecture |
| 1.1 | 2025-12-05 | Added neural support |
| 1.0 | 2025-11-20 | Initial release |
```

---

## Troubleshooting

| Issue | Cause | Solution |
|-------|-------|----------|
| Protocol not found | Wrong path | Check location and README |
| LLM not loading | Missing from context loader | Update 02_CONTEXT_LOADER.md |
| Broken links | Path changed | Update cross-references |

---

## Cross-References

- **Template**: `templates/protocol_template.md`
- **Navigation**: `docs/protocols/README.md`
- **Context Loading**: `.claude/skills/02_CONTEXT_LOADER.md`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
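Step 7's checks can be partially automated. The sketch below validates the header comment block defined in this protocol; the function name and error messages are illustrative, not part of the system.

```python
import re

# Field names taken from the header comment block defined above
REQUIRED_FIELDS = {"PROTOCOL", "LAYER", "VERSION", "STATUS",
                   "LAST_UPDATED", "PRIVILEGE", "LOAD_WITH"}


def validate_metadata(text: str) -> list:
    """Return a list of problems found in a protocol file's metadata block."""
    match = re.search(r"<!--(.*?)-->", text, re.DOTALL)
    if not match:
        return ["missing <!-- ... --> metadata block"]

    fields = {}
    for line in match.group(1).strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()

    problems = [f"missing field: {name}" for name in REQUIRED_FIELDS - fields.keys()]
    if fields.get("STATUS") not in {"Active", "Draft", "Deprecated", None}:
        problems.append(f"invalid STATUS: {fields['STATUS']}")
    if fields.get("PRIVILEGE") not in {"user", "power_user", "admin", None}:
        problems.append(f"invalid PRIVILEGE: {fields['PRIVILEGE']}")
    return problems
```

Running this over every file in `docs/protocols/` catches missing or misspelled metadata before a broken header reaches the context loader.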
331
docs/protocols/extensions/EXT_04_CREATE_SKILL.md
Normal file
@@ -0,0 +1,331 @@

# EXT_04: Create New Skill

<!--
PROTOCOL: Create New Skill or Module
LAYER: Extensions
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: admin
LOAD_WITH: []
-->

## Overview

This protocol guides you through creating new skills or skill modules for the LLM instruction system. Skills provide task-specific guidance to Claude sessions.

**Privilege Required**: admin

---

## When to Use

| Trigger | Action |
|---------|--------|
| Need new LLM capability | Follow this protocol |
| "create skill", "new skill" | Follow this protocol |
| Task pattern needs documentation | Follow this protocol |

---

## Skill Types

| Type | Location | Purpose | Example |
|------|----------|---------|---------|
| Bootstrap | `.claude/skills/0X_*.md` | LLM orientation | 00_BOOTSTRAP.md |
| Core | `.claude/skills/core/` | Always-load skills | study-creation-core.md |
| Module | `.claude/skills/modules/` | Optional, load-on-demand | extractors-catalog.md |
| Dev | `.claude/skills/DEV_*.md` | Developer workflows | DEV_DOCUMENTATION.md |

---

## Step-by-Step Guide

### Step 1: Determine Skill Type

**Bootstrap (0X_)**: System-level LLM guidance

- Task classification
- Context loading rules
- Execution patterns

**Core**: Essential task skills that are always loaded

- Study creation
- Run optimization (basic)

**Module**: Specialized skills loaded on demand

- Specific extractors
- Domain-specific (Zernike, neural)
- Advanced features

**Dev (DEV_)**: Developer-facing workflows

- Documentation maintenance
- Testing procedures
- Contribution guides

### Step 2: Create Skill File

#### For Core/Module Skills

```markdown
# {Skill Name}

**Version**: 1.0
**Purpose**: {One-line description}

---

## Overview

{What this skill enables Claude to do}

---

## When to Load

This skill should be loaded when:
- {Condition 1}
- {Condition 2}

---

## Quick Reference

{Tables with key patterns, commands}

---

## Detailed Instructions

### Pattern 1: {Name}

{Step-by-step instructions}

**Example**:
\`\`\`python
{code example}
\`\`\`

### Pattern 2: {Name}

{Step-by-step instructions}

---

## Code Templates

### Template 1: {Name}

\`\`\`python
{copy-paste ready code}
\`\`\`

---

## Validation

Before completing:
- [ ] {Check 1}
- [ ] {Check 2}

---

## Related

- **Protocol**: [{related}]({path})
- **Module**: [{related}]({path})
```

### Step 3: Register Skill

#### For Bootstrap Skills

Add to the task classification tree in `00_BOOTSTRAP.md`.

#### For Core Skills

Add to `02_CONTEXT_LOADER.md`:

```yaml
{TASK_TYPE}:
  always_load:
    - core/{skill_name}.md
```

#### For Modules

Add to `02_CONTEXT_LOADER.md`:

```yaml
{TASK_TYPE}:
  load_if:
    - modules/{skill_name}.md: "{condition}"
```

### Step 4: Update Navigation

Add to `01_CHEATSHEET.md` if relevant to common tasks.

### Step 5: Test

Test with a fresh Claude session:

1. Start a new conversation
2. Describe a task that should trigger the skill
3. Verify the correct skill is loaded
4. Verify the skill's instructions are followed

---

## Skill Design Guidelines

### Structure

- **Front-load**: Most important info first
- **Tables**: Use for structured data
- **Code blocks**: Complete, copy-paste ready
- **Checklists**: For validation steps

### Content

- **Task-focused**: What should Claude DO?
- **Prescriptive**: Clear instructions, not options
- **Examples**: Show expected patterns
- **Validation**: How to verify success

### Length Guidelines

| Skill Type | Target Lines | Rationale |
|------------|--------------|-----------|
| Bootstrap | 100-200 | Quick orientation |
| Core | 500-1000 | Comprehensive task guide |
| Module | 150-400 | Focused specialization |

### Avoid

- Duplicating protocol content (reference instead)
- Vague instructions ("consider" → "do")
- Missing examples
- Untested code

---

## Skills vs Protocols

**Skills** teach Claude HOW to interact:

- Conversation patterns
- Code templates
- Validation steps
- User interaction

**Protocols** document WHAT exists:

- Technical specifications
- Configuration options
- Architecture details
- Troubleshooting

Skills REFERENCE protocols; they don't duplicate them.

---

## Examples

### Example: Domain-Specific Module

`modules/thermal-optimization.md`:

```markdown
# Thermal Optimization Module

**Version**: 1.0
**Purpose**: Specialized guidance for thermal FEA optimization

---

## When to Load

Load when:
- "thermal", "temperature", "heat" in user request
- Optimizing for thermal properties

---

## Quick Reference

| Physics | Extractor | Unit |
|---------|-----------|------|
| Max temp | E11 | K |
| Gradient | E12 | K/mm |
| Heat flux | E13 | W/m² |

---

## Objective Patterns

### Minimize Max Temperature

\`\`\`python
from optimization_engine.extractors import extract_temperature

def objective(trial):
    # ... run simulation ...
    temp_result = extract_temperature(op2_file)
    return temp_result['max_temperature']
\`\`\`

### Minimize Thermal Gradient

\`\`\`python
from optimization_engine.extractors import extract_thermal_gradient

def objective(trial):
    # ... run simulation ...
    grad_result = extract_thermal_gradient(op2_file)
    return grad_result['max_gradient']
\`\`\`

---

## Configuration Example

\`\`\`json
{
  "objectives": [
    {
      "name": "max_temperature",
      "type": "minimize",
      "unit": "K",
      "description": "Maximum temperature in component"
    }
  ]
}
\`\`\`

---

## Related

- **Extractors**: E11, E12, E13 in SYS_12
- **Protocol**: See OP_01 for study creation
```

---

## Troubleshooting

| Issue | Cause | Solution |
|-------|-------|----------|
| Skill not loaded | Not in context loader | Add loading rule |
| Wrong skill loaded | Ambiguous triggers | Refine conditions |
| Instructions not followed | Too vague | Make prescriptive |

---

## Cross-References

- **Context Loader**: `.claude/skills/02_CONTEXT_LOADER.md`
- **Bootstrap**: `.claude/skills/00_BOOTSTRAP.md`
- **Related**: [EXT_03_CREATE_PROTOCOL](./EXT_03_CREATE_PROTOCOL.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
186
docs/protocols/extensions/templates/extractor_template.py
Normal file
@@ -0,0 +1,186 @@
"""
Extract {Physics Name} from FEA results.

This is a template for creating new physics extractors.
Copy this file to optimization_engine/extractors/extract_{physics}.py
and customize it for your specific physics extraction.

Author: {Your Name}
Created: {Date}
Version: 1.0
"""

from pathlib import Path
from typing import Dict, Any, Optional, Union

from pyNastran.op2.op2 import OP2


def extract_{physics}(
    op2_file: Union[str, Path],
    subcase: int = 1,
    # Add other parameters specific to your physics
) -> Dict[str, Any]:
    """
    Extract {physics description} from an OP2 file.

    Args:
        op2_file: Path to the OP2 results file
        subcase: Subcase number to extract (default: 1)
        # Document other parameters

    Returns:
        Dictionary containing:
        - '{main_result}': The primary result value ({unit})
        - '{secondary_result}': Secondary result info
        - 'subcase': The subcase extracted
        - 'unit': Unit of the result

    Raises:
        FileNotFoundError: If the OP2 file doesn't exist
        KeyError: If the subcase is not found in the results
        ValueError: If the result data is invalid

    Example:
        >>> result = extract_{physics}('model.op2', subcase=1)
        >>> print(result['{main_result}'])
        123.45
        >>> print(result['unit'])
        '{unit}'
    """
    # Convert to Path for consistency
    op2_file = Path(op2_file)

    # Validate that the file exists
    if not op2_file.exists():
        raise FileNotFoundError(f"OP2 file not found: {op2_file}")

    # Read the OP2 file
    op2 = OP2()
    op2.read_op2(str(op2_file))

    # =========================================
    # CUSTOMIZE: Your extraction logic here
    # =========================================

    # Example: Access displacement data
    # if subcase not in op2.displacements:
    #     raise KeyError(f"Subcase {subcase} not found in displacement results")
    # data = op2.displacements[subcase]

    # Example: Access stress data
    # if subcase not in op2.cquad4_stress:
    #     raise KeyError(f"Subcase {subcase} not found in stress results")
    # stress_data = op2.cquad4_stress[subcase]

    # Example: Process data
    # values = data.data  # numpy array
    # max_value = values.max()
    # max_index = values.argmax()

    # =========================================
    # Replace with your actual computation
    # =========================================
    main_result = 0.0  # TODO: Compute actual value
    secondary_result = 0  # TODO: Compute actual value

    return {
        '{main_result}': main_result,
        '{secondary_result}': secondary_result,
        'subcase': subcase,
        'unit': '{unit}',
    }


# Optional: Class-based extractor for complex cases
class {Physics}Extractor:
    """
    Class-based extractor for {physics} with state management.

    Use this pattern when:
    - Extraction requires multiple steps
    - You need to cache the OP2 data
    - Configuration is complex

    Example:
        >>> extractor = {Physics}Extractor('model.op2', config={'option': value})
        >>> result = extractor.extract(subcase=1)
        >>> print(result)
    """

    def __init__(
        self,
        op2_file: Union[str, Path],
        bdf_file: Optional[Union[str, Path]] = None,
        **config
    ):
        """
        Initialize the extractor.

        Args:
            op2_file: Path to the OP2 results file
            bdf_file: Optional path to the BDF mesh file (for node coordinates)
            **config: Additional configuration options
        """
        self.op2_file = Path(op2_file)
        self.bdf_file = Path(bdf_file) if bdf_file else None
        self.config = config
        self._op2 = None  # Lazy-loaded

    def _load_op2(self) -> OP2:
        """Lazily load the OP2 file (caches the result)."""
        if self._op2 is None:
            self._op2 = OP2()
            self._op2.read_op2(str(self.op2_file))
        return self._op2

    def extract(self, subcase: int = 1) -> Dict[str, Any]:
        """
        Extract results for the given subcase.

        Args:
            subcase: Subcase number

        Returns:
            Dictionary with extraction results
        """
        op2 = self._load_op2()

        # TODO: Implement your extraction logic
        # Use self.config for configuration options

        return {
            '{main_result}': 0.0,
            'subcase': subcase,
        }

    def extract_all_subcases(self) -> Dict[int, Dict[str, Any]]:
        """
        Extract results for all available subcases.

        Returns:
            Dictionary mapping subcase number to results
        """
        op2 = self._load_op2()

        # TODO: Find available subcases
        # available_subcases = list(op2.displacements.keys())

        results = {}
        # for sc in available_subcases:
        #     results[sc] = self.extract(subcase=sc)

        return results


# =========================================
# After creating your extractor:
# 1. Add to optimization_engine/extractors/__init__.py:
#      from .extract_{physics} import extract_{physics}
#      __all__ = [..., 'extract_{physics}']
#
# 2. Update docs/protocols/system/SYS_12_EXTRACTOR_LIBRARY.md
#    - Add to the Quick Reference table
#    - Add a detailed section with an example
#
# 3. Create a test file: tests/test_extract_{physics}.py
# =========================================
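As a starting point for the test file mentioned in step 3 above, a test can assert the return contract the template promises before checking any physics values. `check_extractor_contract` and the sample dictionary below are illustrative, not part of the library.

```python
def check_extractor_contract(result: dict, main_key: str) -> None:
    """Assert the minimal return contract promised by the extractor template."""
    # Every extractor returns its primary value plus 'subcase' and 'unit'
    assert main_key in result, f"missing primary result key: {main_key}"
    assert isinstance(result.get("subcase"), int), "subcase must be an int"
    assert isinstance(result.get("unit"), str), "unit must be a string"
    assert isinstance(result[main_key], (int, float)), "primary result must be numeric"
```

In `tests/test_extract_{physics}.py` this would run against the dict returned by your extractor on a known OP2 file.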
213
docs/protocols/extensions/templates/hook_template.py
Normal file
@@ -0,0 +1,213 @@
"""
{Hook Name} - Lifecycle Hook Plugin

This is a template for creating new lifecycle hooks.
Copy this file to optimization_engine/plugins/{hook_point}/{hook_name}.py

Available hook points:
- pre_mesh: Before meshing
- post_mesh: After meshing
- pre_solve: Before solver execution
- post_solve: After solver completion
- post_extraction: After result extraction
- post_calculation: After objective calculation
- custom_objective: Custom objective functions

Author: {Your Name}
Created: {Date}
Version: 1.0
Hook Point: {hook_point}
"""

from typing import Dict, Any, Optional
from pathlib import Path
import json
from datetime import datetime


def {hook_name}_hook(context: Dict[str, Any]) -> Dict[str, Any]:
    """
    {Description of what this hook does}.

    This hook runs at the {hook_point} stage of the optimization trial.

    Args:
        context: Dictionary containing trial context:
            - trial_number (int): Current trial number
            - design_params (dict): Current design parameter values
            - config (dict): Optimization configuration
            - working_dir (Path): Study working directory

            For post_solve and later:
            - op2_file (Path): Path to OP2 results file
            - solve_success (bool): Whether the solve succeeded
            - solve_time (float): Solve duration in seconds

            For post_extraction and later:
            - results (dict): Extracted results so far

            For post_calculation:
            - objectives (dict): Computed objective values
            - constraints (dict): Constraint values

    Returns:
        Dictionary with computed values or modifications.
        These values are added to the trial context.
        Return an empty dict {} if no modifications are needed.

    Raises:
        Exception: Any exception will be logged but won't stop the trial
            unless you want it to (raise optuna.TrialPruned instead)

    Example:
        >>> context = {'trial_number': 1, 'design_params': {'x': 5.0}}
        >>> result = {hook_name}_hook(context)
        >>> print(result)
        {{'{computed_key}': 123.45}}
    """
    # =========================================
    # Access context values
    # =========================================
    trial_num = context.get('trial_number', 0)
    design_params = context.get('design_params', {})
    config = context.get('config', {})
    working_dir = context.get('working_dir', Path('.'))

    # For post_solve hooks and later:
    # op2_file = context.get('op2_file')
    # solve_success = context.get('solve_success', False)

    # For post_extraction hooks and later:
    # results = context.get('results', {})

    # For post_calculation hooks:
    # objectives = context.get('objectives', {})
    # constraints = context.get('constraints', {})

    # =========================================
    # Your hook logic here
    # =========================================

    # Example: Log trial start (pre_solve hook)
    # print(f"[Hook] Trial {trial_num} starting with params: {design_params}")

    # Example: Compute derived quantity (post_extraction hook)
    # max_stress = results.get('max_von_mises', 0)
    # yield_strength = config.get('material', {}).get('yield_strength', 250)
    # safety_factor = yield_strength / max(max_stress, 1e-6)

    # Example: Write a log file (post_calculation hook)
    # log_entry = {
    #     'trial': trial_num,
    #     'timestamp': datetime.now().isoformat(),
    #     'objectives': context.get('objectives', {}),
    # }
    # with open(working_dir / 'trial_log.jsonl', 'a') as f:
    #     f.write(json.dumps(log_entry) + '\n')

    # =========================================
    # Return computed values
    # =========================================

    # Values returned here are added to the context
    # and can be accessed by later hooks or the optimizer

    return {
        # '{computed_key}': computed_value,
    }


def register_hooks(hook_manager) -> None:
    """
    Register this hook with the hook manager.

    This function is called automatically when plugins are discovered.
    It must be named exactly 'register_hooks' and take one argument.

    Args:
        hook_manager: The HookManager instance from optimization_engine
    """
    hook_manager.register_hook(
        hook_point='{hook_point}',  # pre_mesh, post_mesh, pre_solve, etc.
        function={hook_name}_hook,
        name='{hook_name}_hook',
        description='{Brief description of what this hook does}',
        priority=100,  # Lower number = runs earlier (1-200 typical range)
        enabled=True  # Set to False to disable by default
    )


# =========================================
# Optional: Helper functions
# =========================================

def _helper_function(data: Any) -> Any:
    """
    Private helper function for the hook.

    Keep hook logic clean by extracting complex operations
    into helper functions.
    """
    pass


# =========================================
# After creating your hook:
#
# 1. Place it in the correct directory:
#      optimization_engine/plugins/{hook_point}/{hook_name}.py
#
# 2. The hook is auto-discovered - no __init__.py changes needed
#
# 3. Test the hook:
#      python -c "
#      from optimization_engine.plugins.hook_manager import HookManager
#      hm = HookManager()
#      hm.discover_plugins()
#      print(hm.list_hooks())
#      "
#
# 4. Update documentation if significant:
#    - Add to the examples section of EXT_02_CREATE_HOOK.md
# =========================================


# =========================================
# Example hooks for reference
# =========================================

def example_logger_hook(context: Dict[str, Any]) -> Dict[str, Any]:
    """Example: Simple trial logger for pre_solve."""
    trial = context.get('trial_number', 0)
    params = context.get('design_params', {})
    print(f"[LOG] Trial {trial} starting: {params}")
    return {}


def example_safety_factor_hook(context: Dict[str, Any]) -> Dict[str, Any]:
    """Example: Safety factor calculator for post_extraction."""
    results = context.get('results', {})
    max_stress = results.get('max_von_mises', 0)

    if max_stress > 0:
        safety_factor = 250.0 / max_stress  # Assuming 250 MPa yield
    else:
        safety_factor = float('inf')

    return {'safety_factor': safety_factor}


def example_validator_hook(context: Dict[str, Any]) -> Dict[str, Any]:
    """Example: Result validator for post_solve."""
    import optuna

    solve_success = context.get('solve_success', False)
    op2_file = context.get('op2_file')

    if not solve_success:
        raise optuna.TrialPruned("Solve failed")

    if op2_file and not Path(op2_file).exists():
        raise optuna.TrialPruned("OP2 file not generated")

    return {'validation_passed': True}
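The `register_hooks` contract can also be exercised without the real plugin system by passing in a stand-in manager that records calls. `_RecordingHookManager` below is a test double written for this sketch (the real `HookManager` lives in `optimization_engine.plugins.hook_manager`), and the concrete hook values are illustrative.

```python
class _RecordingHookManager:
    """Test double that records register_hook() calls (signature from the template above)."""

    def __init__(self):
        self.hooks = []

    def register_hook(self, hook_point, function, name, description,
                      priority=100, enabled=True):
        self.hooks.append({"hook_point": hook_point, "name": name,
                           "priority": priority, "enabled": enabled})


def register_hooks(hook_manager):
    """Concrete version of the template's register_hooks, filled in for illustration."""
    def logger_hook(context):
        print(f"[LOG] Trial {context.get('trial_number', 0)} starting")
        return {}

    hook_manager.register_hook(
        hook_point="pre_solve",
        function=logger_hook,
        name="logger_hook",
        description="Logs each trial before the solver runs",
        priority=100,
        enabled=True,
    )


hm = _RecordingHookManager()
register_hooks(hm)
```

This checks that registration happens with the expected hook point and priority before the plugin ever touches a live optimization run.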
112
docs/protocols/extensions/templates/protocol_template.md
Normal file
@@ -0,0 +1,112 @@
# {LAYER}_{NUMBER}_{NAME}

<!--
PROTOCOL: {Full Protocol Name}
LAYER: {Operations|System|Extensions}
VERSION: 1.0
STATUS: Active
LAST_UPDATED: {YYYY-MM-DD}
PRIVILEGE: {user|power_user|admin}
LOAD_WITH: [{dependency_protocols}]
-->

## Overview

{1-3 sentence description of what this protocol does and why it exists.}

---

## When to Use

| Trigger | Action |
|---------|--------|
| {keyword or user intent} | Follow this protocol |
| {condition} | Follow this protocol |

---

## Quick Reference

{Key information in table format for fast lookup}

| Parameter | Default | Description |
|-----------|---------|-------------|
| {param} | {value} | {description} |

---

## Detailed Specification

### Section 1: {Topic}

{Detailed content}

```python
# Code example if applicable
```

### Section 2: {Topic}

{Detailed content}

---

## Configuration

{If applicable, show configuration examples}

```json
{
  "setting": "value"
}
```

---

## Examples

### Example 1: {Scenario Name}

{Complete working example with context}

```python
# Full working code example
```

### Example 2: {Scenario Name}

{Another example showing a different use case}

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| {error message or symptom} | {root cause} | {how to fix} |
| {symptom} | {cause} | {solution} |

---

## Cross-References

- **Depends On**: [{protocol_name}]({relative_path})
- **Used By**: [{protocol_name}]({relative_path})
- **See Also**: [{related_doc}]({path})

---

## Implementation Files

{If applicable, list the code files that implement this protocol}

- `path/to/file.py` - {description}
- `path/to/other.py` - {description}

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | {YYYY-MM-DD} | Initial release |
403
docs/protocols/operations/OP_01_CREATE_STUDY.md
Normal file
@@ -0,0 +1,403 @@
|
||||
# OP_01: Create Optimization Study
|
||||
|
||||
<!--
|
||||
PROTOCOL: Create Optimization Study
|
||||
LAYER: Operations
|
||||
VERSION: 1.0
|
||||
STATUS: Active
|
||||
LAST_UPDATED: 2025-12-05
|
||||
PRIVILEGE: user
|
||||
LOAD_WITH: [core/study-creation-core.md]
|
||||
-->
|
||||
|
||||
## Overview
|
||||
|
||||
This protocol guides you through creating a complete Atomizer optimization study from scratch. It covers gathering requirements, generating configuration files, and validating setup.
|
||||
|
||||
**Skill to Load**: `.claude/skills/core/study-creation-core.md`
|
||||
|
||||
---
|
||||
|
||||
## When to Use
|
||||
|
||||
| Trigger | Action |
|
||||
|---------|--------|
|
||||
| "new study", "create study" | Follow this protocol |
|
||||
| "set up optimization" | Follow this protocol |
|
||||
| "optimize my design" | Follow this protocol |
|
||||
| User provides NX model | Assess and follow this protocol |
|
||||
|
||||
---
|
||||
|
||||
## Quick Reference
|
||||
|
||||
**Required Outputs**:
|
||||
| File | Purpose | Location |
|
||||
|------|---------|----------|
|
||||
| `optimization_config.json` | Design vars, objectives, constraints | `1_setup/` |
|
||||
| `run_optimization.py` | Execution script | Study root |
|
||||
| `README.md` | Engineering documentation | Study root |
|
||||
| `STUDY_REPORT.md` | Results template | Study root |
|
||||
|
||||
**Study Structure**:
|
||||
```
|
||||
studies/{study_name}/
|
||||
├── 1_setup/
|
||||
│ ├── model/ # NX files (.prt, .sim, .fem)
|
||||
│ └── optimization_config.json
|
||||
├── 2_results/ # Created during run
|
||||
├── README.md # MANDATORY
|
||||
├── STUDY_REPORT.md # MANDATORY
|
||||
└── run_optimization.py
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Detailed Steps

### Step 1: Gather Requirements

**Ask the user**:
1. What are you trying to optimize? (objective)
2. What can you change? (design variables)
3. What limits must be respected? (constraints)
4. Where are your NX files?

**Example Dialog**:
```
User: "I want to optimize my bracket"
You: "What should I optimize for - minimum mass, maximum stiffness,
      target frequency, or something else?"
User: "Minimize mass while keeping stress below 250 MPa"
```

### Step 2: Analyze Model (Introspection)

**MANDATORY**: When user provides NX files, run comprehensive introspection:

```python
from optimization_engine.hooks.nx_cad.model_introspection import (
    introspect_part,
    introspect_simulation,
    introspect_op2,
    introspect_study
)

# Introspect the part file to get expressions, mass, features
part_info = introspect_part("C:/path/to/model.prt")

# Introspect the simulation to get solutions, BCs, loads
sim_info = introspect_simulation("C:/path/to/model.sim")

# If OP2 exists, check what results are available
op2_info = introspect_op2("C:/path/to/results.op2")

# Or introspect entire study directory at once
study_info = introspect_study("studies/my_study/")
```

**Introspection Report Contents**:

| Source | Information Extracted |
|--------|----------------------|
| `.prt` | Expressions (count, values, types), bodies, mass, material, features |
| `.sim` | Solutions, boundary conditions, loads, materials, mesh info, output requests |
| `.op2` | Available results (displacement, stress, strain, SPC forces, etc.), subcases |

**Generate Introspection Report** at study creation:
1. Save report to `studies/{study_name}/MODEL_INTROSPECTION.md`
2. Include summary of what's available for optimization
3. List potential design variables (expressions)
4. List extractable results (from OP2)

**Key Questions Answered by Introspection**:
- What expressions exist? (potential design variables)
- What solution types? (static, modal, etc.)
- What results are available in OP2? (displacement, stress, SPC forces)
- Multi-solution required? (static + modal = set `solution_name=None`)
### Step 3: Select Protocol

Based on objectives:

| Scenario | Protocol | Sampler |
|----------|----------|---------|
| Single objective | Protocol 10 (IMSO) | TPE, CMA-ES, or GP |
| 2-3 objectives | Protocol 11 | NSGA-II |
| >50 trials, need speed | Protocol 14 | + Neural acceleration |

See [SYS_10_IMSO](../system/SYS_10_IMSO.md), [SYS_11_MULTI_OBJECTIVE](../system/SYS_11_MULTI_OBJECTIVE.md).
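The selection table can be sketched as a tiny helper. This is an illustrative sketch only; the function name and return shape are assumptions, not part of the Atomizer codebase:

```python
# Illustrative sketch of the Step 3 selection table - names are assumptions,
# not project API.
def select_protocol(n_objectives: int, neural_acceleration: bool = False) -> dict:
    """Map study characteristics to a protocol/sampler recommendation."""
    if n_objectives == 1:
        choice = {"protocol": "protocol_10_single_objective", "sampler": "TPESampler"}
    elif 2 <= n_objectives <= 3:
        choice = {"protocol": "protocol_11_multi_objective", "sampler": "NSGAIISampler"}
    else:
        raise ValueError("The table above only covers 1-3 objectives")
    if neural_acceleration:  # >50 trials and speed matters -> Protocol 14
        choice["acceleration"] = "protocol_14_neural"
    return choice
```

The `protocol` strings mirror the `optimization_settings.protocol` field shown in Step 5's config.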
### Step 4: Select Extractors

Match physics to extractors from [SYS_12_EXTRACTOR_LIBRARY](../system/SYS_12_EXTRACTOR_LIBRARY.md):

| Need | Extractor ID | Function |
|------|--------------|----------|
| Max displacement | E1 | `extract_displacement()` |
| Natural frequency | E2 | `extract_frequency()` |
| Von Mises stress | E3 | `extract_solid_stress()` |
| Mass from BDF | E4 | `extract_mass_from_bdf()` |
| Mass from NX | E5 | `extract_mass_from_expression()` |
| Wavefront error | E8-E10 | Zernike extractors |
### Step 5: Generate Configuration

Create `optimization_config.json`:

```json
{
  "study_name": "bracket_optimization",
  "description": "Minimize bracket mass while meeting stress constraint",

  "design_variables": [
    {
      "name": "thickness",
      "type": "continuous",
      "min": 2.0,
      "max": 10.0,
      "unit": "mm",
      "description": "Wall thickness"
    }
  ],

  "objectives": [
    {
      "name": "mass",
      "type": "minimize",
      "unit": "kg",
      "description": "Total bracket mass"
    }
  ],

  "constraints": [
    {
      "name": "max_stress",
      "type": "less_than",
      "value": 250.0,
      "unit": "MPa",
      "description": "Maximum allowable von Mises stress"
    }
  ],

  "simulation": {
    "model_file": "1_setup/model/bracket.prt",
    "sim_file": "1_setup/model/bracket.sim",
    "solver": "nastran",
    "solution_name": null
  },

  "optimization_settings": {
    "protocol": "protocol_10_single_objective",
    "sampler": "TPESampler",
    "n_trials": 50
  }
}
```
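A minimal, stdlib-only validation sketch for this file. The required-key set and the bounds check are assumptions drawn from the example config above, not a published schema:

```python
import json

# Assumed required top-level keys, based on the example config above.
REQUIRED_KEYS = {"study_name", "design_variables", "objectives",
                 "simulation", "optimization_settings"}

def validate_config(text: str) -> list:
    """Return a list of problems found in an optimization_config.json payload."""
    cfg = json.loads(text)
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - cfg.keys())]
    # Continuous design variables must have min < max
    for var in cfg.get("design_variables", []):
        if var.get("type") == "continuous" and not var.get("min", 0) < var.get("max", 0):
            problems.append(f"bad bounds for variable {var.get('name')!r}")
    return problems
```

An empty return list means the checks passed; anything else should block the run (see the Step 9 checklist item "Config validates").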
### Step 6: Generate run_optimization.py

```python
#!/usr/bin/env python
"""
{study_name} - Optimization Runner
Generated by Atomizer LLM
"""
import sys
from pathlib import Path

import optuna

# Add optimization engine to path
sys.path.insert(0, str(Path(__file__).parent.parent.parent))

from optimization_engine.nx_solver import NXSolver
from optimization_engine.extractors import extract_solid_stress

# Paths
STUDY_DIR = Path(__file__).parent
MODEL_DIR = STUDY_DIR / "1_setup" / "model"
RESULTS_DIR = STUDY_DIR / "2_results"

def objective(trial):
    """Optimization objective function."""
    # Sample design variables
    thickness = trial.suggest_float("thickness", 2.0, 10.0)

    # Update NX model and solve
    nx_solver = NXSolver(...)
    result = nx_solver.run_simulation(
        sim_file=MODEL_DIR / "bracket.sim",
        working_dir=MODEL_DIR,
        expression_updates={"thickness": thickness}
    )

    if not result['success']:
        raise optuna.TrialPruned("Simulation failed")

    # Extract results using library extractors
    op2_file = result['op2_file']
    stress_result = extract_solid_stress(op2_file)
    max_stress = stress_result['max_von_mises']

    # Check constraint
    if max_stress > 250.0:
        raise optuna.TrialPruned(f"Stress constraint violated: {max_stress} MPa")

    # Return objective
    mass = extract_mass(...)
    return mass

if __name__ == "__main__":
    # Run optimization
    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=50)
```
### Step 7: Generate Documentation

**README.md** (11 sections required):
1. Engineering Problem
2. Mathematical Formulation
3. Optimization Algorithm
4. Simulation Pipeline
5. Result Extraction Methods
6. Neural Acceleration (if applicable)
7. Study File Structure
8. Results Location
9. Quick Start
10. Configuration Reference
11. References

**STUDY_REPORT.md** (template):
```markdown
# Study Report: {study_name}

## Executive Summary
- Trials completed: _pending_
- Best objective: _pending_
- Constraint satisfaction: _pending_

## Optimization Progress
_To be filled after run_

## Best Designs Found
_To be filled after run_

## Recommendations
_To be filled after analysis_
```
### Step 8: Validate NX Model File Chain

**CRITICAL**: NX simulation files have parent-child dependencies. ALL linked files must be copied to the study folder.

**Required File Chain Check**:
```
.sim (Simulation)
 └── .fem (FEM)
      └── _i.prt (Idealized Part)   ← OFTEN MISSING!
           └── .prt (Geometry Part)
```

**Validation Steps**:
1. Open the `.sim` file in NX
2. Go to **Assemblies → Assembly Navigator** or check **Part Navigator**
3. Identify ALL child components (especially `*_i.prt` idealized parts)
4. Copy ALL linked files to `1_setup/model/`

**Common Issue**: The `_i.prt` (idealized part) is often forgotten. Without it:
- `UpdateFemodel()` runs but mesh doesn't change
- Geometry changes don't propagate to FEM
- All optimization trials produce identical results

**File Checklist**:

| File Pattern | Description | Required |
|--------------|-------------|----------|
| `*.prt` | Geometry part | ✅ Always |
| `*_i.prt` | Idealized part | ✅ If FEM uses idealization |
| `*.fem` | FEM file | ✅ Always |
| `*.sim` | Simulation file | ✅ Always |

**Introspection should report**:
- List of all parts referenced by .sim
- Warning if any referenced parts are missing from study folder
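The file checklist can be partially automated by filename convention. A sketch only: real validation should come from the Step 2 introspection of the `.sim` assembly, and `check_file_chain` is a hypothetical helper name, not project API:

```python
from pathlib import Path

def check_file_chain(model_dir: str) -> list:
    """Warn about missing members of the .sim -> .fem -> _i.prt -> .prt chain.

    Checks by filename convention only (an assumption); it cannot see the
    actual assembly links inside the .sim file.
    """
    d = Path(model_dir)
    warnings = []
    for ext in (".sim", ".fem", ".prt"):
        if not any(p.suffix == ext for p in d.iterdir()):
            warnings.append(f"no {ext} file found in {d}")
    # The idealized part is the most commonly forgotten link
    if not any(p.name.endswith("_i.prt") for p in d.glob("*.prt")):
        warnings.append("no *_i.prt (idealized part) found - mesh updates may silently no-op")
    return warnings
```

An empty list means the conventional chain looks complete; any warning should be resolved before the first trial.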
### Step 9: Final Validation Checklist

Before running:

- [ ] NX files exist in `1_setup/model/`
- [ ] **ALL child parts copied** (especially `*_i.prt`)
- [ ] Expression names match model
- [ ] Config validates (JSON schema)
- [ ] `run_optimization.py` has no syntax errors
- [ ] README.md has all 11 sections
- [ ] STUDY_REPORT.md template exists
---

## Examples

### Example 1: Simple Bracket

```
User: "Optimize my bracket.prt for minimum mass, stress < 250 MPa"

Generated config:
- 1 design variable (thickness)
- 1 objective (minimize mass)
- 1 constraint (stress < 250)
- Protocol 10, TPE sampler
- 50 trials
```

### Example 2: Multi-Objective Beam

```
User: "Minimize mass AND maximize stiffness for my beam"

Generated config:
- 2 design variables (width, height)
- 2 objectives (minimize mass, maximize stiffness)
- Protocol 11, NSGA-II sampler
- 50 trials (Pareto front)
```

### Example 3: Telescope Mirror

```
User: "Minimize wavefront error at 40deg vs 20deg reference"

Generated config:
- Multiple design variables (mount positions)
- 1 objective (minimize relative WFE)
- Zernike extractor E9
- Protocol 10
```
---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "Expression not found" | Name mismatch | Verify expression names in NX |
| "No feasible designs" | Constraints too tight | Relax constraint values |
| Config validation fails | Missing required field | Check JSON schema |
| Import error | Wrong path | Check sys.path setup |

---

## Cross-References

- **Depends On**: [SYS_12_EXTRACTOR_LIBRARY](../system/SYS_12_EXTRACTOR_LIBRARY.md)
- **Next Step**: [OP_02_RUN_OPTIMIZATION](./OP_02_RUN_OPTIMIZATION.md)
- **Skill**: `.claude/skills/core/study-creation-core.md`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
---

**File**: `docs/protocols/operations/OP_02_RUN_OPTIMIZATION.md` (new file, 297 lines)
# OP_02: Run Optimization

<!--
PROTOCOL: Run Optimization
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->
## Overview

This protocol covers executing optimization runs, including pre-flight validation, execution modes, monitoring, and handling common issues.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "start", "run", "execute" | Follow this protocol |
| "begin optimization" | Follow this protocol |
| Study setup complete | Execute this protocol |

---

## Quick Reference

**Start Command**:
```bash
conda activate atomizer
cd studies/{study_name}
python run_optimization.py
```

**Common Options**:

| Flag | Purpose |
|------|---------|
| `--n-trials 100` | Override trial count |
| `--resume` | Continue interrupted run |
| `--test` | Run single trial for validation |
| `--export-training` | Export data for neural training |

---

## Pre-Flight Checklist

Before running, verify:

- [ ] **Environment**: `conda activate atomizer`
- [ ] **Config exists**: `1_setup/optimization_config.json`
- [ ] **Script exists**: `run_optimization.py`
- [ ] **Model files**: NX files in `1_setup/model/`
- [ ] **No conflicts**: No other optimization running on same study
- [ ] **Disk space**: Sufficient for results

**Quick Validation**:
```bash
python run_optimization.py --test
```
This runs a single trial to verify setup.
---

## Execution Modes

### 1. Standard Run

```bash
python run_optimization.py
```
Uses settings from `optimization_config.json`.

### 2. Override Trials

```bash
python run_optimization.py --n-trials 100
```
Override trial count from config.

### 3. Resume Interrupted

```bash
python run_optimization.py --resume
```
Continues from last completed trial.

### 4. Neural Acceleration

```bash
python run_optimization.py --neural
```
Requires trained surrogate model.

### 5. Export Training Data

```bash
python run_optimization.py --export-training
```
Saves BDF/OP2 for neural network training.
---

## Monitoring Progress

### Option 1: Console Output
The script prints progress:
```
Trial 15/50 complete. Best: 0.234 kg
Trial 16/50 complete. Best: 0.234 kg
```

### Option 2: Dashboard
See [SYS_13_DASHBOARD_TRACKING](../system/SYS_13_DASHBOARD_TRACKING.md).

```bash
# Start dashboard (separate terminal)
cd atomizer-dashboard/backend && python -m uvicorn api.main:app --port 8000
cd atomizer-dashboard/frontend && npm run dev

# Open browser
http://localhost:3000
```

### Option 3: Query Database

```bash
python -c "
import optuna
study = optuna.load_study('study_name', 'sqlite:///2_results/study.db')
print(f'Trials: {len(study.trials)}')
print(f'Best value: {study.best_value}')
"
```

### Option 4: Optuna Dashboard

```bash
optuna-dashboard sqlite:///2_results/study.db
# Open http://localhost:8080
```
---

## During Execution

### What Happens Per Trial

1. **Sample parameters**: Optuna suggests design variable values
2. **Update model**: NX expressions updated via journal
3. **Solve**: NX Nastran runs FEA simulation
4. **Extract results**: Extractors read OP2 file
5. **Evaluate**: Check constraints, compute objectives
6. **Record**: Trial stored in Optuna database

### Normal Output

```
[2025-12-05 10:15:30] Trial 1 started
[2025-12-05 10:17:45] NX solve complete (135.2s)
[2025-12-05 10:17:46] Extraction complete
[2025-12-05 10:17:46] Trial 1 complete: mass=0.342 kg, stress=198.5 MPa

[2025-12-05 10:17:47] Trial 2 started
...
```

### Expected Timing

| Operation | Typical Time |
|-----------|--------------|
| NX solve | 30s - 30min |
| Extraction | <1s |
| Per trial total | 1-30 min |
| 50 trials | 1-24 hours |
---

## Handling Issues

### Trial Failed / Pruned

```
[WARNING] Trial 12 pruned: Stress constraint violated (312.5 MPa > 250 MPa)
```
**Normal behavior** - optimizer learns from failures.

### NX Session Timeout

```
[ERROR] NX session timeout after 600s
```
**Solution**: Increase timeout in config or simplify model.

### Expression Not Found

```
[ERROR] Expression 'thicknes' not found in model
```
**Solution**: Check spelling, verify expression exists in NX.

### OP2 File Missing

```
[ERROR] OP2 file not found: model.op2
```
**Solution**: Check NX solve completed. Review NX log file.

### Database Locked

```
[ERROR] Database is locked
```
**Solution**: Another process using database. Wait or kill stale process.
---

## Stopping and Resuming

### Graceful Stop
Press `Ctrl+C` once. Current trial completes, then exits.

### Force Stop
Press `Ctrl+C` twice. Immediate exit (may lose current trial).

### Resume
```bash
python run_optimization.py --resume
```
Continues from last completed trial. Same study database used.
---

## Post-Run Actions

After optimization completes:

1. **Check results**:
   ```bash
   python -c "import optuna; s=optuna.load_study(...); print(s.best_params)"
   ```

2. **View in dashboard**: `http://localhost:3000`

3. **Generate report**: See [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)

4. **Update STUDY_REPORT.md**: Fill in results template
---

## Protocol Integration

### With Protocol 10 (IMSO)
If enabled, optimization runs in two phases:
1. Characterization (10-30 trials)
2. Optimization (remaining trials)

Dashboard shows phase transitions.

### With Protocol 11 (Multi-Objective)
If 2+ objectives, uses NSGA-II. Returns Pareto front, not single best.

### With Protocol 13 (Dashboard)
Writes `optimizer_state.json` every trial for real-time updates.

### With Protocol 14 (Neural)
If `--neural` flag, uses trained surrogate for fast evaluation.
---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "ModuleNotFoundError" | Wrong environment | `conda activate atomizer` |
| All trials pruned | Constraints too tight | Relax constraints |
| Very slow | Model too complex | Simplify mesh, increase timeout |
| No improvement | Wrong sampler | Try different algorithm |
| "NX license error" | License unavailable | Check NX license server |

---

## Cross-References

- **Preceded By**: [OP_01_CREATE_STUDY](./OP_01_CREATE_STUDY.md)
- **Followed By**: [OP_03_MONITOR_PROGRESS](./OP_03_MONITOR_PROGRESS.md), [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)
- **Integrates With**: [SYS_10_IMSO](../system/SYS_10_IMSO.md), [SYS_13_DASHBOARD_TRACKING](../system/SYS_13_DASHBOARD_TRACKING.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
---

**File**: `docs/protocols/operations/OP_03_MONITOR_PROGRESS.md` (new file, 246 lines)
# OP_03: Monitor Progress

<!--
PROTOCOL: Monitor Optimization Progress
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: [SYS_13_DASHBOARD_TRACKING]
-->
## Overview

This protocol covers monitoring optimization progress through console output, dashboard, database queries, and Optuna's built-in tools.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "status", "progress" | Follow this protocol |
| "how many trials" | Query database |
| "what's happening" | Check console or dashboard |
| "is it running" | Check process status |

---

## Quick Reference

| Method | Command/URL | Best For |
|--------|-------------|----------|
| Console | Watch terminal output | Quick check |
| Dashboard | `http://localhost:3000` | Visual monitoring |
| Database query | Python one-liner | Scripted checks |
| Optuna Dashboard | `http://localhost:8080` | Detailed analysis |
---

## Monitoring Methods

### 1. Console Output

If running in foreground, watch terminal:
```
[10:15:30] Trial 15/50 started
[10:17:45] Trial 15/50 complete: mass=0.234 kg (best: 0.212 kg)
[10:17:46] Trial 16/50 started
```

### 2. Atomizer Dashboard

**Start Dashboard** (if not running):
```bash
# Terminal 1: Backend
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000

# Terminal 2: Frontend
cd atomizer-dashboard/frontend
npm run dev
```

**View at**: `http://localhost:3000`

**Features**:
- Real-time trial progress bar
- Current optimizer phase (if Protocol 10)
- Pareto front visualization (if multi-objective)
- Parallel coordinates plot
- Convergence chart

### 3. Database Query

**Quick status**:
```bash
python -c "
import optuna
study = optuna.load_study(
    study_name='my_study',
    storage='sqlite:///studies/my_study/2_results/study.db'
)
print(f'Trials completed: {len(study.trials)}')
print(f'Best value: {study.best_value}')
print(f'Best params: {study.best_params}')
"
```
**Detailed status**:
```python
import optuna

study = optuna.load_study(
    study_name='my_study',
    storage='sqlite:///studies/my_study/2_results/study.db'
)

# Trial counts by state
from collections import Counter
states = Counter(t.state.name for t in study.trials)
print(f"Complete: {states.get('COMPLETE', 0)}")
print(f"Pruned: {states.get('PRUNED', 0)}")
print(f"Failed: {states.get('FAIL', 0)}")
print(f"Running: {states.get('RUNNING', 0)}")

# Best trials
if len(study.directions) > 1:
    print(f"Pareto front size: {len(study.best_trials)}")
else:
    print(f"Best value: {study.best_value}")
```
### 4. Optuna Dashboard

```bash
optuna-dashboard sqlite:///studies/my_study/2_results/study.db
# Open http://localhost:8080
```

**Features**:
- Trial history table
- Parameter importance
- Optimization history plot
- Slice plot (parameter vs objective)

### 5. Check Running Processes

```bash
# Linux/Mac
ps aux | grep run_optimization

# Windows
tasklist | findstr python
```
---

## Key Metrics to Monitor

### Trial Progress
- Completed trials vs target
- Completion rate (trials/hour)
- Estimated time remaining

### Objective Improvement
- Current best value
- Improvement trend
- Plateau detection
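Plateau detection can be computed directly from the per-trial objective history. A small sketch, minimization assumed; the function name is illustrative, not project API:

```python
def trials_since_improvement(values: list) -> int:
    """How many trials have passed since the running best last improved.

    `values` is the per-trial objective history (minimization assumed).
    A large return value relative to the total trial count suggests a plateau.
    """
    best = float("inf")
    last_improvement = -1
    for i, v in enumerate(values):
        if v < best:
            best = v
            last_improvement = i
    return len(values) - 1 - last_improvement if values else 0

print(trials_since_improvement([0.342, 0.300, 0.234, 0.250, 0.234, 0.280]))  # → 3
```

A simple policy could flag a plateau when this exceeds, say, half the remaining trial budget.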
### Constraint Satisfaction
- Feasibility rate (% passing constraints)
- Most violated constraint

### For Protocol 10 (IMSO)
- Current phase (Characterization vs Optimization)
- Current strategy (TPE, GP, CMA-ES)
- Characterization confidence

### For Protocol 11 (Multi-Objective)
- Pareto front size
- Hypervolume indicator
- Spread of solutions
---

## Interpreting Results

### Healthy Optimization
```
Trial 45/50: mass=0.198 kg (best: 0.195 kg)
Feasibility rate: 78%
```
- Progress toward target
- Reasonable feasibility rate (60-90%)
- Gradual improvement

### Potential Issues

**All Trials Pruned**:
```
Trial 20 pruned: constraint violated
Trial 21 pruned: constraint violated
...
```
→ Constraints too tight. Consider relaxing.

**No Improvement**:
```
Trial 30: best=0.234 (unchanged since trial 8)
Trial 31: best=0.234 (unchanged since trial 8)
```
→ May have converged, or stuck in local minimum.

**High Failure Rate**:
```
Failed: 15/50 (30%)
```
→ Model issues. Check NX logs.
---

## Real-Time State File

If using Protocol 10, check:
```bash
cat studies/my_study/2_results/intelligent_optimizer/optimizer_state.json
```

```json
{
  "timestamp": "2025-12-05T10:15:30",
  "trial_number": 29,
  "total_trials": 50,
  "current_phase": "adaptive_optimization",
  "current_strategy": "GP_UCB",
  "is_multi_objective": false
}
```
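The state file can be turned into a one-line status summary with the stdlib. A sketch assuming only the fields shown above:

```python
import json

def summarize_state(state_json: str) -> str:
    """One-line progress summary from an optimizer_state.json payload."""
    s = json.loads(state_json)
    pct = 100.0 * s["trial_number"] / s["total_trials"]
    return (f"trial {s['trial_number']}/{s['total_trials']} ({pct:.0f}%) "
            f"- phase: {s['current_phase']}, strategy: {s['current_strategy']}")
```

Useful for scripted checks (e.g., a cron job that pings a chat channel with the summary).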
---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| Dashboard shows old data | Backend not running | Start backend |
| "No study found" | Wrong path | Check study name and path |
| Trial count not increasing | Process stopped | Check if still running |
| Dashboard not updating | Polling issue | Refresh browser |

---

## Cross-References

- **Preceded By**: [OP_02_RUN_OPTIMIZATION](./OP_02_RUN_OPTIMIZATION.md)
- **Followed By**: [OP_04_ANALYZE_RESULTS](./OP_04_ANALYZE_RESULTS.md)
- **Integrates With**: [SYS_13_DASHBOARD_TRACKING](../system/SYS_13_DASHBOARD_TRACKING.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
---

**File**: `docs/protocols/operations/OP_04_ANALYZE_RESULTS.md` (new file, 302 lines)
# OP_04: Analyze Results

<!--
PROTOCOL: Analyze Optimization Results
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->
## Overview

This protocol covers analyzing optimization results, including extracting best solutions, generating reports, comparing designs, and interpreting Pareto fronts.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "results", "what did we find" | Follow this protocol |
| "best design" | Extract best trial |
| "compare", "trade-off" | Pareto analysis |
| "report" | Generate summary |
| Optimization complete | Analyze and document |

---

## Quick Reference

**Key Outputs**:

| Output | Location | Purpose |
|--------|----------|---------|
| Best parameters | `study.best_params` | Optimal design |
| Pareto front | `study.best_trials` | Trade-off solutions |
| Trial history | `study.trials` | Full exploration |
| Intelligence report | `intelligent_optimizer/` | Algorithm insights |

---
## Analysis Methods

### 1. Single-Objective Results

```python
import optuna

study = optuna.load_study(
    study_name='my_study',
    storage='sqlite:///2_results/study.db'
)

# Best result
print(f"Best value: {study.best_value}")
print(f"Best parameters: {study.best_params}")
print(f"Best trial: #{study.best_trial.number}")

# Get full best trial details
best = study.best_trial
print(f"User attributes: {best.user_attrs}")
```

### 2. Multi-Objective Results (Pareto Front)

```python
import optuna

study = optuna.load_study(
    study_name='my_study',
    storage='sqlite:///2_results/study.db'
)

# All Pareto-optimal solutions
pareto_trials = study.best_trials
print(f"Pareto front size: {len(pareto_trials)}")

# Print all Pareto solutions
for trial in pareto_trials:
    print(f"Trial {trial.number}: {trial.values} - {trial.params}")

# Find extremes
# Assuming objectives: [stiffness (max), mass (min)]
best_stiffness = max(pareto_trials, key=lambda t: t.values[0])
lightest = min(pareto_trials, key=lambda t: t.values[1])

print(f"Best stiffness: Trial {best_stiffness.number}")
print(f"Lightest: Trial {lightest.number}")
```

### 3. Parameter Importance

```python
import optuna

study = optuna.load_study(...)

# Parameter importance (which parameters matter most)
importance = optuna.importance.get_param_importances(study)
for param, score in importance.items():
    print(f"{param}: {score:.3f}")
```

### 4. Constraint Analysis

```python
# Find feasibility rate
completed = [t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE]
pruned = [t for t in study.trials if t.state == optuna.trial.TrialState.PRUNED]

feasibility_rate = len(completed) / (len(completed) + len(pruned))
print(f"Feasibility rate: {feasibility_rate:.1%}")

# Analyze why trials were pruned
for trial in pruned[:5]:  # First 5 pruned
    reason = trial.user_attrs.get('pruning_reason', 'Unknown')
    print(f"Trial {trial.number}: {reason}")
```
---

## Visualization

### Using Optuna Dashboard

```bash
optuna-dashboard sqlite:///2_results/study.db
# Open http://localhost:8080
```

**Available Plots**:
- Optimization history
- Parameter importance
- Slice plot (parameter vs objective)
- Parallel coordinates
- Contour plot (2D parameter interaction)

### Using Atomizer Dashboard

Navigate to `http://localhost:3000` and select the study.

**Features**:
- Pareto front plot with normalization
- Parallel coordinates with selection
- Real-time convergence chart

### Custom Visualization

```python
import optuna

study = optuna.load_study(...)

# Plot optimization history
fig = optuna.visualization.plot_optimization_history(study)
fig.show()

# Plot parameter importance
fig = optuna.visualization.plot_param_importances(study)
fig.show()

# Plot Pareto front (multi-objective)
if len(study.directions) > 1:
    fig = optuna.visualization.plot_pareto_front(study)
    fig.show()
```
---
|
||||
|
||||
## Generate Reports
|
||||
|
||||
### Update STUDY_REPORT.md
|
||||
|
||||
After analysis, fill in the template:
|
||||
|
||||
```markdown
|
||||
# Study Report: bracket_optimization
|
||||
|
||||
## Executive Summary
|
||||
- **Trials completed**: 50
|
||||
- **Best mass**: 0.195 kg
|
||||
- **Best parameters**: thickness=4.2mm, width=25.8mm
|
||||
- **Constraint satisfaction**: All constraints met
|
||||
|
||||
## Optimization Progress
|
||||
- Initial best: 0.342 kg (trial 1)
|
||||
- Final best: 0.195 kg (trial 38)
|
||||
- Improvement: 43%
|
||||
|
||||
## Best Designs Found
|
||||
|
||||
### Design 1 (Overall Best)
|
||||
| Parameter | Value |
|
||||
|-----------|-------|
|
||||
| thickness | 4.2 mm |
|
||||
| width | 25.8 mm |
|
||||
|
||||
| Metric | Value | Constraint |
|
||||
|--------|-------|------------|
|
||||
| Mass | 0.195 kg | - |
|
||||
| Max stress | 238.5 MPa | < 250 MPa ✓ |
|
||||
|
||||
## Engineering Recommendations
|
||||
1. Recommended design: Trial 38 parameters
|
||||
2. Safety margin: 4.6% on stress constraint
|
||||
3. Consider manufacturing tolerance analysis
|
||||
```
|
||||
|
||||
### Export to CSV
|
||||
|
||||
```python
|
||||
import pandas as pd
|
||||
|
||||
# All trials to DataFrame
|
||||
trials_data = []
|
||||
for trial in study.trials:
|
||||
if trial.state == optuna.trial.TrialState.COMPLETE:
|
||||
row = {'trial': trial.number, 'value': trial.value}
|
||||
row.update(trial.params)
|
||||
trials_data.append(row)
|
||||
|
||||
df = pd.DataFrame(trials_data)
|
||||
df.to_csv('optimization_results.csv', index=False)
|
||||
```
|
||||
|
||||
### Export Best Design for FEA Validation
|
||||
|
||||
```python
|
||||
# Get best parameters
|
||||
best_params = study.best_params
|
||||
|
||||
# Format for NX expression update
|
||||
for name, value in best_params.items():
|
||||
print(f"{name} = {value}")
|
||||
|
||||
# Or save as JSON
|
||||
import json
|
||||
with open('best_design.json', 'w') as f:
|
||||
json.dump(best_params, f, indent=2)
|
||||
```

---

## Intelligence Report (Protocol 10)

If using Protocol 10, check the intelligence files:

```bash
# Landscape analysis
cat 2_results/intelligent_optimizer/intelligence_report.json

# Characterization progress
cat 2_results/intelligent_optimizer/characterization_progress.json
```

**Key Insights**:
- Landscape classification (smooth/rugged, unimodal/multimodal)
- Algorithm recommendation rationale
- Parameter correlations
- Confidence metrics

---

## Validation Checklist

Before finalizing results:

- [ ] Best solution satisfies all constraints
- [ ] Results are physically reasonable
- [ ] Parameter values within manufacturing limits
- [ ] Consider re-running FEA on the best design to confirm
- [ ] Document any anomalies or surprises
- [ ] Update STUDY_REPORT.md
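
The first checklist item can be partially scripted. A minimal sketch, assuming the 250 MPa stress limit and reusing the example numbers from the report template above (the `LIMITS` dict and field names are illustrative, not real config fields):

```python
# Sanity-check a candidate best design against constraint limits before
# finalizing STUDY_REPORT.md. Limit values are assumptions for illustration.
LIMITS = {"max_stress": 250.0}  # MPa

best_design = {
    "params": {"thickness": 4.2, "width": 25.8},
    "metrics": {"mass": 0.195, "max_stress": 238.5},
}

violations = {}
for name, limit in LIMITS.items():
    value = best_design["metrics"][name]
    margin = (limit - value) / limit
    print(f"{name}: {value} (limit {limit}, margin {margin:.1%})")
    if value >= limit:
        violations[name] = value

print("All constraints met" if not violations else f"VIOLATIONS: {violations}")
```

For the numbers above this prints a 4.6% stress margin, matching the "Safety margin" line in the example report.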

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| Best value seems wrong | Constraint not enforced | Check objective function |
| No Pareto solutions | All trials failed | Check constraints |
| Unexpected best params | Local minimum | Try different starting points |
| Can't load study | Wrong path | Verify database location |

---

## Cross-References

- **Preceded By**: [OP_02_RUN_OPTIMIZATION](./OP_02_RUN_OPTIMIZATION.md), [OP_03_MONITOR_PROGRESS](./OP_03_MONITOR_PROGRESS.md)
- **Related**: [SYS_11_MULTI_OBJECTIVE](../system/SYS_11_MULTI_OBJECTIVE.md) for Pareto analysis
- **Skill**: `.claude/skills/generate-report.md`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
294
docs/protocols/operations/OP_05_EXPORT_TRAINING_DATA.md
Normal file
@@ -0,0 +1,294 @@
# OP_05: Export Training Data

<!--
PROTOCOL: Export Neural Network Training Data
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: [SYS_14_NEURAL_ACCELERATION]
-->

## Overview

This protocol covers exporting FEA simulation data for training neural network surrogates. Proper data export enables Protocol 14 (Neural Acceleration).

---

## When to Use

| Trigger | Action |
|---------|--------|
| "export training data" | Follow this protocol |
| "neural network data" | Follow this protocol |
| Planning >50 trials | Consider export for acceleration |
| Want to train surrogate | Follow this protocol |

---

## Quick Reference

**Export Command**:
```bash
python run_optimization.py --export-training
```

**Output Structure**:
```
atomizer_field_training_data/{study_name}/
├── trial_0001/
│   ├── input/model.bdf
│   ├── output/model.op2
│   └── metadata.json
├── trial_0002/
│   └── ...
└── study_summary.json
```

**Recommended Data Volume**:

| Complexity | Training Samples | Validation Samples |
|------------|------------------|--------------------|
| Simple (2-3 params) | 50-100 | 20-30 |
| Medium (4-6 params) | 100-200 | 30-50 |
| Complex (7+ params) | 200-500 | 50-100 |
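
The sizing table can be read as a small helper; a hedged sketch (thresholds transcribed from the table, function name illustrative):

```python
def recommended_samples(n_params: int) -> tuple[range, range]:
    """Return (training, validation) sample ranges per the table above."""
    if n_params <= 3:
        return range(50, 101), range(20, 31)
    if n_params <= 6:
        return range(100, 201), range(30, 51)
    return range(200, 501), range(50, 101)

train, val = recommended_samples(4)
print(f"train: {train.start}-{train.stop - 1}, val: {val.start}-{val.stop - 1}")
# train: 100-200, val: 30-50
```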

---

## Configuration

### Enable Export in Config

Add to `optimization_config.json`:

```json
{
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/my_study",
    "export_bdf": true,
    "export_op2": true,
    "export_fields": ["displacement", "stress"],
    "include_failed": false
  }
}
```

### Configuration Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enabled` | bool | false | Enable export |
| `export_dir` | string | - | Output directory |
| `export_bdf` | bool | true | Save Nastran input |
| `export_op2` | bool | true | Save binary results |
| `export_fields` | list | all | Which result fields |
| `include_failed` | bool | false | Include failed trials |

---

## Export Workflow

### Step 1: Run with Export Enabled

```bash
conda activate atomizer
cd studies/my_study
python run_optimization.py --export-training
```

Or run a standard optimization with export enabled in the config.

### Step 2: Verify Export

```bash
ls atomizer_field_training_data/my_study/
# Should see trial_0001/, trial_0002/, etc.

# Check a trial
ls atomizer_field_training_data/my_study/trial_0001/
# input/model.bdf
# output/model.op2
# metadata.json
```

### Step 3: Check Metadata

```bash
cat atomizer_field_training_data/my_study/trial_0001/metadata.json
```

```json
{
  "trial_number": 1,
  "design_parameters": {
    "thickness": 5.2,
    "width": 30.0
  },
  "objectives": {
    "mass": 0.234,
    "max_stress": 198.5
  },
  "constraints_satisfied": true,
  "simulation_time": 145.2
}
```

### Step 4: Check Study Summary

```bash
cat atomizer_field_training_data/my_study/study_summary.json
```

```json
{
  "study_name": "my_study",
  "total_trials": 50,
  "successful_exports": 47,
  "failed_exports": 3,
  "design_parameters": ["thickness", "width"],
  "objectives": ["mass", "max_stress"],
  "export_timestamp": "2025-12-05T15:30:00"
}
```

---

## Data Quality Checks

### Verify Sample Count

```python
from pathlib import Path

export_dir = Path("atomizer_field_training_data/my_study")
trials = list(export_dir.glob("trial_*"))
print(f"Exported trials: {len(trials)}")

# Check for missing files
for trial_dir in trials:
    bdf = trial_dir / "input" / "model.bdf"
    op2 = trial_dir / "output" / "model.op2"
    meta = trial_dir / "metadata.json"

    if not all([bdf.exists(), op2.exists(), meta.exists()]):
        print(f"Missing files in {trial_dir}")
```

### Check Parameter Coverage

```python
import json

import pandas as pd

# Load all metadata
params = []
for trial_dir in export_dir.glob("trial_*"):
    with open(trial_dir / "metadata.json") as f:
        meta = json.load(f)
    params.append(meta["design_parameters"])

# Check coverage
df = pd.DataFrame(params)
print(df.describe())

# Look for gaps
for col in df.columns:
    print(f"{col}: min={df[col].min():.2f}, max={df[col].max():.2f}")
```

---

## Space-Filling Sampling

For best neural network training, use space-filling designs:

### Latin Hypercube Sampling

```python
from scipy.stats import qmc

# Generate space-filling samples
n_samples = 100
n_params = 4

sampler = qmc.LatinHypercube(d=n_params)
samples = sampler.random(n=n_samples)

# Scale to parameter bounds
lower = [2.0, 20.0, 5.0, 1.0]
upper = [10.0, 50.0, 15.0, 5.0]
scaled = qmc.scale(samples, lower, upper)
```

### Sobol Sequence

```python
sampler = qmc.Sobol(d=n_params)
samples = sampler.random(n=n_samples)
scaled = qmc.scale(samples, lower, upper)
```

---

## Next Steps After Export

### 1. Parse to Neural Format

```bash
cd atomizer-field
python batch_parser.py ../atomizer_field_training_data/my_study
```

### 2. Split Train/Validation

```python
from sklearn.model_selection import train_test_split

# 80/20 split
train_trials, val_trials = train_test_split(
    all_trials,
    test_size=0.2,
    random_state=42
)
```

### 3. Train Model

```bash
python train_parametric.py \
    --train_dir ../training_data/parsed \
    --val_dir ../validation_data/parsed \
    --epochs 200
```

See [SYS_14_NEURAL_ACCELERATION](../system/SYS_14_NEURAL_ACCELERATION.md) for the full training workflow.

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| No export directory | Export not enabled | Add `training_data_export` to config |
| Missing OP2 files | Solve failed | Check NX log; `include_failed: false` skips failed trials |
| Incomplete metadata | Extraction error | Check extractor logs |
| Low sample count | Too many failures | Relax constraints |

---

## Cross-References

- **Related**: [SYS_14_NEURAL_ACCELERATION](../system/SYS_14_NEURAL_ACCELERATION.md)
- **Preceded By**: [OP_02_RUN_OPTIMIZATION](./OP_02_RUN_OPTIMIZATION.md)
- **Skill**: `.claude/skills/modules/neural-acceleration.md`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
437
docs/protocols/operations/OP_06_TROUBLESHOOT.md
Normal file
@@ -0,0 +1,437 @@
# OP_06: Troubleshoot

<!--
PROTOCOL: Troubleshoot Optimization Issues
LAYER: Operations
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

This protocol provides systematic troubleshooting for common optimization issues, covering NX errors, extraction failures, database problems, and performance issues.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "error", "failed" | Follow this protocol |
| "not working", "crashed" | Follow this protocol |
| "help", "stuck" | Follow this protocol |
| Unexpected behavior | Follow this protocol |

---

## Quick Diagnostic

```bash
# 1. Check environment
conda activate atomizer
python --version  # Should be 3.9+

# 2. Check study structure
ls studies/my_study/
# Should have: 1_setup/, run_optimization.py

# 3. Check model files
ls studies/my_study/1_setup/model/
# Should have: .prt, .sim files

# 4. Test single trial
python run_optimization.py --test
```

---

## Error Categories

### 1. Environment Errors

#### "ModuleNotFoundError: No module named 'optuna'"

**Cause**: Wrong Python environment

**Solution**:
```bash
conda activate atomizer
# Verify
conda list | grep optuna
```

#### "Python version mismatch"

**Cause**: Wrong Python version

**Solution**:
```bash
python --version  # Need 3.9+
conda activate atomizer
```

---

### 2. NX Model Setup Errors

#### "All optimization trials produce identical results"

**Cause**: Missing idealized part (`*_i.prt`) or broken file chain

**Symptoms**:
- Journal shows "FE model updated" but results don't change
- DAT files have the same node coordinates despite different expressions
- OP2 file timestamps update but values are identical

**Root Cause**: NX simulation files have a parent-child hierarchy:
```
.sim → .fem → _i.prt → .prt (geometry)
```

If the `_i.prt` (idealized part) is missing or not properly linked, `UpdateFemodel()` runs but the mesh doesn't regenerate because:
- The FEM mesh is tied to the idealized geometry, not the master geometry
- Without the idealized part updating, the FEM has nothing new to mesh against

**Solution**:
1. **Check the file chain in NX**:
   - Open the `.sim` file
   - Go to **Part Navigator** or **Assembly Navigator**
   - List ALL referenced parts

2. **Copy ALL linked files** to the study folder:
   ```bash
   # Typical file set needed:
   Model.prt          # Geometry
   Model_fem1_i.prt   # Idealized part ← OFTEN MISSING!
   Model_fem1.fem     # FEM file
   Model_sim1.sim     # Simulation file
   ```

3. **Verify links are intact**:
   - Open the model in NX after copying
   - Check that updates propagate: Geometry → Idealized → FEM → Sim

4. **CRITICAL CODE FIX** (already implemented in `solve_simulation.py`):
   The idealized part MUST be explicitly loaded before `UpdateFemodel()`:
   ```python
   # Load the idealized part BEFORE updating the FEM
   for filename in os.listdir(working_dir):
       if '_i.prt' in filename.lower():
           path = os.path.join(working_dir, filename)
           idealized_part, status = theSession.Parts.Open(path)
           break

   # Now UpdateFemodel() will work correctly
   feModel.UpdateFemodel()
   ```
   Without loading the `_i.prt`, NX cannot propagate geometry changes to the mesh.

**Prevention**: Always use introspection to list all parts referenced by a simulation.

---

### 3. NX/Solver Errors

#### "NX session timeout after 600s"

**Cause**: Model too complex or NX stuck

**Solution**:
1. Increase the timeout in the config:
   ```json
   "simulation": {
     "timeout": 1200
   }
   ```
2. Simplify the mesh if possible
3. Check NX license availability

#### "Expression 'xxx' not found in model"

**Cause**: Expression name mismatch

**Solution**:
1. Open the model in NX
2. Go to Tools → Expressions
3. Verify the exact expression name (case-sensitive)
4. Update the config to match

#### "NX license error"

**Cause**: License server unavailable

**Solution**:
1. Check license server status
2. Wait and retry
3. Contact IT if persistent

#### "NX solve failed - check log"

**Cause**: Nastran solver error

**Solution**:
1. Find the log file: `1_setup/model/*.log` or `*.f06`
2. Search for "FATAL" or "ERROR"
3. Common causes:
   - Singular stiffness matrix (constraints issue)
   - Bad mesh (distorted elements)
   - Missing material properties

---
### 4. Extraction Errors

#### "OP2 file not found"

**Cause**: Solve didn't produce output

**Solution**:
1. Check whether the solve completed
2. Look for the `.op2` file in the model directory
3. Check the NX log for solve errors

#### "No displacement data for subcase X"

**Cause**: Wrong subcase number

**Solution**:
1. Check the available subcases in the OP2:
   ```python
   from pyNastran.op2.op2 import OP2
   op2 = OP2()
   op2.read_op2('model.op2')
   print(op2.displacements.keys())
   ```
2. Update the subcase in the extractor call

#### "Element type 'xxx' not supported"

**Cause**: Extractor doesn't support the element type

**Solution**:
1. Check the available types in the extractor
2. Common types: `cquad4`, `ctria3`, `ctetra`, `chexa`
3. May need a different extractor

---

### 5. Database Errors

#### "Database is locked"

**Cause**: Another process is using the database

**Solution**:
1. Check for running processes:
   ```bash
   ps aux | grep run_optimization
   ```
2. Kill the stale process if needed
3. Wait for the other optimization to finish

#### "Study 'xxx' not found"

**Cause**: Wrong study name or path

**Solution**:
1. Check the exact study name in the database:
   ```python
   import optuna
   storage = optuna.storages.RDBStorage('sqlite:///study.db')
   print(storage.get_all_study_summaries())
   ```
2. Use the correct name when loading

#### "IntegrityError: UNIQUE constraint failed"

**Cause**: Duplicate trial number

**Solution**:
1. Don't run multiple optimizations on the same study simultaneously
2. Use the `--resume` flag for continuation

---

### 6. Constraint/Feasibility Errors

#### "All trials pruned"

**Cause**: No feasible region

**Solution**:
1. Check the constraint values:
   ```python
   # In the objective function, print constraint values
   print(f"Stress: {stress}, limit: 250")
   ```
2. Relax constraints
3. Widen design variable bounds

#### "No improvement after N trials"

**Cause**: Stuck in a local minimum, or converged

**Solution**:
1. Check whether the study has truly converged (a good result)
2. Try a different starting region
3. Use a different sampler
4. Increase exploration (lower `n_startup_trials`)

---

### 7. Performance Issues

#### "Trials running very slowly"

**Cause**: Complex model or inefficient extraction

**Solution**:
1. Profile the time per component:
   ```python
   import time
   start = time.time()
   # ... operation ...
   print(f"Took: {time.time() - start:.1f}s")
   ```
2. Simplify the mesh if NX is slow
3. Check that extraction isn't re-parsing the OP2 multiple times

#### "Memory error"

**Cause**: Large OP2 file or many trials

**Solution**:
1. Clear Python memory between trials
2. Don't store all results in memory
3. Use the database for persistence

---

## Diagnostic Commands

### Quick Health Check

```bash
# Environment
conda activate atomizer
python -c "import optuna; print('Optuna OK')"
python -c "import pyNastran; print('pyNastran OK')"

# Study structure
ls -la studies/my_study/

# Config validity
python -c "
import json
with open('studies/my_study/1_setup/optimization_config.json') as f:
    config = json.load(f)
print('Config OK')
print(f'Objectives: {len(config.get(\"objectives\", []))}')
"

# Database status
python -c "
import optuna
study = optuna.load_study(study_name='my_study', storage='sqlite:///studies/my_study/2_results/study.db')
print(f'Trials: {len(study.trials)}')
"
```

### NX Log Analysis

```bash
# Find the latest log
ls -lt studies/my_study/1_setup/model/*.log | head -1

# Search for errors
grep -i "error\|fatal\|fail" studies/my_study/1_setup/model/*.log
```

### Trial Failure Analysis

```python
import optuna

study = optuna.load_study(...)

# Failed trials
failed = [t for t in study.trials
          if t.state == optuna.trial.TrialState.FAIL]
print(f"Failed: {len(failed)}")

for t in failed[:5]:
    print(f"Trial {t.number}: {t.user_attrs}")

# Pruned trials
pruned = [t for t in study.trials
          if t.state == optuna.trial.TrialState.PRUNED]
print(f"Pruned: {len(pruned)}")
```

---

## Recovery Actions

### Reset Study (Start Fresh)

```bash
# Backup first
cp -r studies/my_study/2_results studies/my_study/2_results_backup

# Delete results
rm -rf studies/my_study/2_results/*

# Run fresh
python run_optimization.py
```

### Resume Interrupted Study

```bash
python run_optimization.py --resume
```

### Restore from Backup

```bash
cp -r studies/my_study/2_results_backup/* studies/my_study/2_results/
```

---

## Getting Help

### Information to Provide

When asking for help, include:
1. Error message (full traceback)
2. Config file contents
3. Study structure (`ls -la`)
4. What you tried
5. NX log excerpt (if an NX error)

### Log Locations

| Log | Location |
|-----|----------|
| Optimization | Console output, or redirect to a file |
| NX Solve | `1_setup/model/*.log`, `*.f06` |
| Database | `2_results/study.db` (query with Optuna) |
| Intelligence | `2_results/intelligent_optimizer/*.json` |

---

## Cross-References

- **Related**: All operation protocols
- **System**: [SYS_10_IMSO](../system/SYS_10_IMSO.md), [SYS_12_EXTRACTOR_LIBRARY](../system/SYS_12_EXTRACTOR_LIBRARY.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-12-05 | Initial release |
341
docs/protocols/system/SYS_10_IMSO.md
Normal file
@@ -0,0 +1,341 @@
# SYS_10: Intelligent Multi-Strategy Optimization (IMSO)

<!--
PROTOCOL: Intelligent Multi-Strategy Optimization
LAYER: System
VERSION: 2.1
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

Protocol 10 implements adaptive optimization that automatically characterizes the problem landscape and selects the best optimization algorithm. This two-phase approach combines automated landscape analysis with algorithm-specific optimization.

**Key Innovation**: An adaptive characterization phase that intelligently determines when enough exploration has been done, then transitions to the optimal algorithm.

---

## When to Use

| Trigger | Action |
|---------|--------|
| Single-objective optimization | Use this protocol |
| "adaptive", "intelligent", "IMSO" mentioned | Load this protocol |
| User unsure which algorithm to use | Recommend this protocol |
| Complex landscape suspected | Use this protocol |

**Do NOT use when**: Multi-objective optimization is needed (use SYS_11 instead)

---

## Quick Reference

| Parameter | Default | Range | Description |
|-----------|---------|-------|-------------|
| `min_trials` | 10 | 5-50 | Minimum characterization trials |
| `max_trials` | 30 | 10-100 | Maximum characterization trials |
| `confidence_threshold` | 0.85 | 0.0-1.0 | Stopping confidence level |
| `check_interval` | 5 | 1-10 | Trials between checks |

**Landscape → Algorithm Mapping**:

| Landscape Type | Primary Strategy | Fallback |
|----------------|------------------|----------|
| smooth_unimodal | GP-BO | CMA-ES |
| smooth_multimodal | GP-BO | TPE |
| rugged_unimodal | TPE | CMA-ES |
| rugged_multimodal | TPE | - |
| noisy | TPE | - |
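
Read as code, the mapping is a small lookup table. A hedged sketch — the dict and function names are illustrative, not the actual `strategy_selector.py` API:

```python
# Landscape type -> (primary sampler, fallback); mirrors the table above.
STRATEGY_MAP = {
    "smooth_unimodal":   ("GP-BO", "CMA-ES"),
    "smooth_multimodal": ("GP-BO", "TPE"),
    "rugged_unimodal":   ("TPE", "CMA-ES"),
    "rugged_multimodal": ("TPE", None),
    "noisy":             ("TPE", None),
}

def select_strategy(landscape: str, use_fallback: bool = False) -> str:
    """Return the recommended sampler, or its fallback when requested."""
    primary, fallback = STRATEGY_MAP[landscape]
    return fallback if use_fallback and fallback else primary

print(select_strategy("smooth_unimodal"))                     # GP-BO
print(select_strategy("rugged_unimodal", use_fallback=True))  # CMA-ES
```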

---

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│ PHASE 1: ADAPTIVE CHARACTERIZATION STUDY                    │
│ ─────────────────────────────────────────────────────────── │
│ Sampler: Random/Sobol (unbiased exploration)                │
│ Trials: 10-30 (adapts to problem complexity)                │
│                                                             │
│ Every 5 trials:                                             │
│   → Analyze landscape metrics                               │
│   → Check metric convergence                                │
│   → Calculate characterization confidence                   │
│   → Decide if ready to stop                                 │
│                                                             │
│ Stop when:                                                  │
│   ✓ Confidence ≥ 85%                                        │
│   ✓ OR max trials reached (30)                              │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ TRANSITION: LANDSCAPE ANALYSIS & STRATEGY SELECTION         │
│ ─────────────────────────────────────────────────────────── │
│ Analyze:                                                    │
│   - Smoothness (0-1)                                        │
│   - Multimodality (number of modes)                         │
│   - Parameter correlation                                   │
│   - Noise level                                             │
│                                                             │
│ Classify & Recommend:                                       │
│   smooth_unimodal   → GP-BO (best) or CMA-ES                │
│   smooth_multimodal → GP-BO                                 │
│   rugged_multimodal → TPE                                   │
│   rugged_unimodal   → TPE or CMA-ES                         │
│   noisy             → TPE (most robust)                     │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│ PHASE 2: OPTIMIZATION STUDY                                 │
│ ─────────────────────────────────────────────────────────── │
│ Sampler: Recommended from Phase 1                           │
│ Warm Start: Initialize from best characterization point     │
│ Trials: User-specified (default 50)                         │
└─────────────────────────────────────────────────────────────┘
```

---

## Core Components

### 1. Adaptive Characterization (`adaptive_characterization.py`)

**Confidence Calculation**:
```python
confidence = (
    0.40 * metric_stability_score +    # Are metrics converging?
    0.30 * parameter_coverage_score +  # Explored enough space?
    0.20 * sample_adequacy_score +     # Enough samples for complexity?
    0.10 * landscape_clarity_score     # Clear classification?
)
```

**Stopping Criteria**:
- **Minimum trials**: 10 (baseline data requirement)
- **Maximum trials**: 30 (prevent over-characterization)
- **Confidence threshold**: 85% (high confidence required)
- **Check interval**: Every 5 trials
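
A minimal sketch of the stopping rule implied by these criteria (defaults from the Quick Reference table; the function name is illustrative, not the real `adaptive_characterization.py` API):

```python
def should_stop(n_trials: int, confidence: float,
                min_trials: int = 10, max_trials: int = 30,
                threshold: float = 0.85) -> bool:
    """Stop characterization once confident, bounded by min/max trials.
    In Protocol 10 this check runs every `check_interval` (5) trials."""
    if n_trials < min_trials:
        return False          # not enough baseline data yet
    if n_trials >= max_trials:
        return True           # prevent over-characterization
    return confidence >= threshold

print(should_stop(8, 0.95))   # False - below minimum trials
print(should_stop(15, 0.87))  # True - confidence >= 85%
print(should_stop(30, 0.50))  # True - hit max trials
```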

**Adaptive Behavior**:
```python
# Simple problem (smooth, unimodal, low noise):
if smoothness > 0.6 and unimodal and noise < 0.3:
    required_samples = 10 + dimensionality
    # Stops at ~10-15 trials

# Complex problem (multimodal with N modes):
if multimodal and n_modes > 2:
    required_samples = 10 + 5 * n_modes + 2 * dimensionality
    # Continues to ~20-30 trials
```

### 2. Landscape Analyzer (`landscape_analyzer.py`)

**Metrics Computed**:

| Metric | Method | Interpretation |
|--------|--------|----------------|
| Smoothness (0-1) | Spearman correlation | >0.6: Good for CMA-ES, GP-BO |
| Multimodality | DBSCAN clustering | Detects distinct good regions |
| Correlation | Parameter-objective correlation | Identifies influential params |
| Noise (0-1) | Local consistency check | True simulation instability |

**Landscape Classifications**:
- `smooth_unimodal`: Single smooth bowl
- `smooth_multimodal`: Multiple smooth regions
- `rugged_unimodal`: Single rugged region
- `rugged_multimodal`: Multiple rugged regions
- `noisy`: High noise level
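
The smoothness metric reportedly comes from Spearman correlation. One plausible reading, sketched with the stdlib only (the parameter/objective pairing is an assumption about the real implementation; rank ties are ignored for brevity):

```python
def spearman(xs, ys):
    """Spearman rank correlation (no tie handling)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0.0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

thickness = [2.0, 4.0, 6.0, 8.0, 10.0]
mass = [0.10, 0.19, 0.31, 0.40, 0.52]  # monotone response -> smooth
print(f"smoothness proxy: {abs(spearman(thickness, mass)):.2f}")  # 1.00
```

A strongly monotone parameter-objective relationship gives a proxy near 1.0; a rugged or noisy response pulls it toward 0.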
|
||||
|
||||
### 3. Strategy Selector (`strategy_selector.py`)
|
||||
|
||||
**Algorithm Characteristics**:
|
||||
|
||||
**GP-BO (Gaussian Process Bayesian Optimization)**:
|
||||
- Best for: Smooth, expensive functions (like FEA)
|
||||
- Explicit surrogate model with uncertainty quantification
|
||||
- Acquisition function balances exploration/exploitation
|
||||
|
||||
**CMA-ES (Covariance Matrix Adaptation Evolution Strategy)**:
|
||||
- Best for: Smooth unimodal problems
|
||||
- Fast convergence to local optimum
|
||||
- Adapts search distribution to landscape
|
||||
|
||||
**TPE (Tree-structured Parzen Estimator)**:
|
||||
- Best for: Multimodal, rugged, or noisy problems
|
||||
- Robust to noise and discontinuities
|
||||
- Good global exploration
|
||||
|
||||
### 4. Intelligent Optimizer (`intelligent_optimizer.py`)
|
||||
|
||||
**Workflow**:
|
||||
1. Create characterization study (Random/Sobol sampler)
|
||||
2. Run adaptive characterization with stopping criterion
|
||||
3. Analyze final landscape
|
||||
4. Select optimal strategy
|
||||
5. Create optimization study with recommended sampler
|
||||
6. Warm-start from best characterization point
|
||||
7. Run optimization
|
||||
8. Generate intelligence report
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
Add to `optimization_config.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"intelligent_optimization": {
|
||||
"enabled": true,
|
||||
"characterization": {
|
||||
"min_trials": 10,
|
||||
"max_trials": 30,
|
||||
"confidence_threshold": 0.85,
|
||||
"check_interval": 5
|
||||
},
|
||||
"landscape_analysis": {
|
||||
"min_trials_for_analysis": 10
|
||||
},
|
||||
"strategy_selection": {
|
||||
"allow_cmaes": true,
|
||||
"allow_gpbo": true,
|
||||
"allow_tpe": true
|
||||
}
|
||||
},
|
||||
"trials": {
|
||||
"n_trials": 50
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Usage Example

```python
from pathlib import Path
from optimization_engine.intelligent_optimizer import IntelligentOptimizer

# Create optimizer
optimizer = IntelligentOptimizer(
    study_name="my_optimization",
    study_dir=Path("studies/my_study/2_results"),
    config=optimization_config,
    verbose=True
)

# Define design variables
design_vars = {
    'parameter1': (lower_bound, upper_bound),
    'parameter2': (lower_bound, upper_bound)
}

# Run Protocol 10
results = optimizer.optimize(
    objective_function=my_objective,
    design_variables=design_vars,
    n_trials=50,
    target_value=target,
    tolerance=0.1
)
```

---

## Performance Benefits

**Efficiency**:
- **Simple problems**: Early stop at ~10-15 trials (33% reduction)
- **Complex problems**: Extended characterization at ~20-30 trials
- **Right algorithm**: Uses optimal strategy for landscape type

**Example Performance** (Circular Plate Frequency Tuning):
- TPE alone: ~95 trials to target
- Random search: ~150+ trials
- **Protocol 10**: ~56 trials (**41% reduction**)

---

## Intelligence Reports

Protocol 10 generates three tracking files:

| File | Purpose |
|------|---------|
| `characterization_progress.json` | Metric evolution, confidence progression, stopping decision |
| `intelligence_report.json` | Final landscape classification, parameter correlations, strategy recommendation |
| `strategy_transitions.json` | Phase transitions, algorithm switches, performance metrics |

**Location**: `studies/{study_name}/2_results/intelligent_optimizer/`

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| Characterization takes too long | Complex landscape | Increase `max_trials` or accept longer characterization |
| Wrong algorithm selected | Insufficient exploration | Lower `confidence_threshold` or increase `min_trials` |
| Poor convergence | Mismatch between landscape and algorithm | Review `intelligence_report.json`, consider manual override |
| "No characterization data" | Study not using Protocol 10 | Enable `intelligent_optimization.enabled: true` |

---

## Cross-References

- **Depends On**: None
- **Used By**: [OP_01_CREATE_STUDY](../operations/OP_01_CREATE_STUDY.md), [OP_02_RUN_OPTIMIZATION](../operations/OP_02_RUN_OPTIMIZATION.md)
- **Integrates With**: [SYS_13_DASHBOARD_TRACKING](./SYS_13_DASHBOARD_TRACKING.md)
- **See Also**: [SYS_11_MULTI_OBJECTIVE](./SYS_11_MULTI_OBJECTIVE.md) for multi-objective optimization

---

## Implementation Files

- `optimization_engine/intelligent_optimizer.py` - Main orchestrator
- `optimization_engine/adaptive_characterization.py` - Stopping criterion
- `optimization_engine/landscape_analyzer.py` - Landscape metrics
- `optimization_engine/strategy_selector.py` - Algorithm recommendation

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.1 | 2025-11-20 | Fixed strategy selector timing, multimodality detection, added simulation validation |
| 2.0 | 2025-11-20 | Added adaptive characterization, two-study architecture |
| 1.0 | 2025-11-19 | Initial implementation |

### Version 2.1 Bug Fixes Detail

**Fix #1: Strategy Selector - Use Characterization Trial Count**

*Problem*: The strategy selector used the total trial count (including pruned trials) instead of the characterization trial count, causing the wrong algorithm to be selected after characterization.

*Solution* (`strategy_selector.py`): Use `char_trials = landscape.get('total_trials', trials_completed)` for decisions.

**Fix #2: Improved Multimodality Detection**

*Problem*: False multimodality was detected on smooth continuous surfaces (2 modes detected when the problem was unimodal).

*Solution* (`landscape_analyzer.py`): Added a heuristic - if only 2 modes are found with smoothness > 0.6 and noise < 0.2, reclassify as unimodal (smooth continuous manifold).

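The Fix #2 heuristic can be illustrated as follows. Function and argument names are assumptions for illustration; see `landscape_analyzer.py` for the actual implementation:

```python
def reclassify_modality(n_modes: int, smoothness: float, noise: float) -> str:
    """Heuristic from Fix #2: a smooth, low-noise surface with only 2 detected
    modes is most likely a unimodal continuous manifold, not truly multimodal."""
    if n_modes == 2 and smoothness > 0.6 and noise < 0.2:
        return "unimodal"  # suppress the false positive
    return "multimodal" if n_modes > 1 else "unimodal"

# A smooth surface with 2 detected modes is reclassified as unimodal
print(reclassify_modality(n_modes=2, smoothness=0.8, noise=0.05))
```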
**Fix #3: Simulation Validation**

*Problem*: A 20% pruning rate due to extreme parameters causing mesh/solver failures.

*Solution*: Created `simulation_validator.py` with:
- Hard limits (reject invalid parameters)
- Soft limits (warn about risky parameters)
- Aspect ratio checks
- Model-specific validation rules

*Impact*: Reduced the pruning rate from 20% to ~5%.

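A minimal sketch of the hard/soft-limit idea follows. The function name and rule format are assumptions for illustration, not the actual `simulation_validator.py` API:

```python
def validate_parameters(params: dict, rules: dict) -> tuple[bool, list[str]]:
    """Check trial parameters before running FEA.

    Hard-limit violations reject the trial outright; soft-limit violations
    only produce warnings (the trial still runs).
    """
    warnings = []
    for name, value in params.items():
        rule = rules.get(name)
        if rule is None:
            continue
        hard_lo, hard_hi = rule["hard"]
        if not (hard_lo <= value <= hard_hi):
            return False, [f"{name}={value} outside hard limits {rule['hard']}"]
        soft_lo, soft_hi = rule.get("soft", rule["hard"])
        if not (soft_lo <= value <= soft_hi):
            warnings.append(f"{name}={value} outside soft limits {rule['soft']}")
    return True, warnings

# 45 mm is inside the hard limits but past the soft limit: run, but warn
rules = {"thickness": {"hard": (1.0, 50.0), "soft": (2.0, 40.0)}}
ok, notes = validate_parameters({"thickness": 45.0}, rules)
```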
338
docs/protocols/system/SYS_11_MULTI_OBJECTIVE.md
Normal file
@@ -0,0 +1,338 @@
# SYS_11: Multi-Objective Support

<!--
PROTOCOL: Multi-Objective Optimization Support
LAYER: System
VERSION: 1.0
STATUS: Active (MANDATORY)
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: []
-->

## Overview

**ALL** optimization engines in Atomizer **MUST** support both single-objective and multi-objective optimization without requiring code changes. This protocol ensures system robustness and prevents runtime failures when handling Pareto optimization.

**Key Requirement**: Code must work with both `study.best_trial` (single) and `study.best_trials` (multi) APIs.

---

## When to Use

| Trigger | Action |
|---------|--------|
| 2+ objectives defined in config | Use NSGA-II sampler |
| "pareto", "multi-objective" mentioned | Load this protocol |
| "tradeoff", "competing goals" | Suggest multi-objective approach |
| "minimize X AND maximize Y" | Configure as multi-objective |

---

## Quick Reference

**Single vs. Multi-Objective API**:

| Operation | Single-Objective | Multi-Objective |
|-----------|-----------------|-----------------|
| Best trial | `study.best_trial` | `study.best_trials[0]` |
| Best params | `study.best_params` | `trial.params` |
| Best value | `study.best_value` | `trial.values` (tuple) |
| Direction | `direction='minimize'` | `directions=['minimize', 'maximize']` |
| Sampler | TPE, CMA-ES, GP | NSGA-II (mandatory) |

---

## The Problem This Solves

Previously, optimization components only supported single-objective studies. When used with multi-objective studies:

1. Trials run successfully
2. Trials are saved to the database
3. **CRASH** when compiling results
   - `study.best_trial` raises RuntimeError
   - No tracking files generated
   - Silent failures

**Root Cause**: Optuna has different APIs:

```python
# Single-Objective (works)
study.best_trial   # Returns Trial object
study.best_params  # Returns dict
study.best_value   # Returns float

# Multi-Objective (RAISES RuntimeError)
study.best_trial   # ❌ RuntimeError
study.best_params  # ❌ RuntimeError
study.best_value   # ❌ RuntimeError
study.best_trials  # ✓ Returns LIST of Pareto-optimal trials
```

---

## Solution Pattern

### 1. Always Check Study Type

```python
is_multi_objective = len(study.directions) > 1
```

### 2. Use Conditional Access

```python
if is_multi_objective:
    best_trials = study.best_trials
    if best_trials:
        # Select representative trial (e.g., first Pareto solution)
        representative_trial = best_trials[0]
        best_params = representative_trial.params
        best_value = representative_trial.values  # Tuple
        best_trial_num = representative_trial.number
    else:
        best_params = {}
        best_value = None
        best_trial_num = None
else:
    # Single-objective: safe to use standard API
    best_params = study.best_params
    best_value = study.best_value
    best_trial_num = study.best_trial.number
```

### 3. Return Rich Metadata

Always include in results:

```python
{
    'best_params': best_params,
    'best_value': best_value,  # float or tuple
    'best_trial': best_trial_num,
    'is_multi_objective': is_multi_objective,
    'pareto_front_size': len(study.best_trials) if is_multi_objective else 1,
}
```

---

## Implementation Checklist

When creating or modifying any optimization component:

- [ ] **Study Creation**: Support `directions` parameter
  ```python
  if len(objectives) > 1:
      directions = [obj['type'] for obj in objectives]  # ['minimize', 'maximize']
      study = optuna.create_study(directions=directions, ...)
  else:
      study = optuna.create_study(direction='minimize', ...)
  ```
- [ ] **Result Compilation**: Check `len(study.directions) > 1`
- [ ] **Best Trial Access**: Use conditional logic
- [ ] **Logging**: Print Pareto front size for multi-objective
- [ ] **Reports**: Handle tuple objectives in visualization
- [ ] **Testing**: Test with BOTH single and multi-objective cases

---

## Configuration

**Multi-Objective Config Example**:

```json
{
  "objectives": [
    {
      "name": "stiffness",
      "type": "maximize",
      "description": "Structural stiffness (N/mm)",
      "unit": "N/mm"
    },
    {
      "name": "mass",
      "type": "minimize",
      "description": "Total mass (kg)",
      "unit": "kg"
    }
  ],
  "optimization_settings": {
    "sampler": "NSGAIISampler",
    "n_trials": 50
  }
}
```

**Objective Function Return Format**:

```python
# Single-objective: return float
def objective_single(trial):
    # ... compute ...
    return objective_value  # float

# Multi-objective: return tuple
def objective_multi(trial):
    # ... compute ...
    return (stiffness, mass)  # tuple of floats
```

---

## Semantic Directions

Use semantic direction values - no negative tricks:

```python
# ✅ CORRECT: Semantic directions
objectives = [
    {"name": "stiffness", "type": "maximize"},
    {"name": "mass", "type": "minimize"}
]
# Return: (stiffness, mass) - both positive values

# ❌ WRONG: Negative trick
def objective(trial):
    return (-stiffness, mass)  # Don't negate to fake maximize
```

Optuna handles directions correctly when you specify `directions=['maximize', 'minimize']`.

---

## Testing Protocol

Before marking any optimization component complete:

### Test 1: Single-Objective

```python
# Config with 1 objective
directions = None  # or ['minimize']
# Run optimization
# Verify: completes without errors
```

### Test 2: Multi-Objective

```python
# Config with 2+ objectives
directions = ['minimize', 'minimize']
# Run optimization
# Verify: completes without errors
# Verify: ALL tracking files generated
```

### Test 3: Verify Outputs

- `2_results/study.db` exists
- `2_results/intelligent_optimizer/` has tracking files
- `2_results/optimization_summary.json` exists
- No RuntimeError in logs

---

## NSGA-II Configuration

For multi-objective optimization, use NSGA-II:

```python
import optuna
from optuna.samplers import NSGAIISampler

sampler = NSGAIISampler(
    population_size=50,  # Pareto front population
    mutation_prob=None,  # Auto-computed
    crossover_prob=0.9,  # Recombination rate
    swapping_prob=0.5,   # Gene swapping probability
    seed=42              # Reproducibility
)

study = optuna.create_study(
    directions=['maximize', 'minimize'],
    sampler=sampler,
    study_name="multi_objective_study",
    storage="sqlite:///study.db"
)
```

---

## Pareto Front Handling

### Accessing Pareto Solutions

```python
if is_multi_objective:
    pareto_trials = study.best_trials
    print(f"Found {len(pareto_trials)} Pareto-optimal solutions")

    for trial in pareto_trials:
        print(f"Trial {trial.number}: {trial.values}")
        print(f"  Params: {trial.params}")
```

### Selecting Representative Solution

```python
# Option 1: First Pareto solution
representative = study.best_trials[0]

# Option 2: Weighted selection
def weighted_selection(trials, weights):
    best_score = float('inf')
    best_trial = None
    for trial in trials:
        score = sum(w * v for w, v in zip(weights, trial.values))
        if score < best_score:
            best_score = score
            best_trial = trial
    return best_trial

# Option 3: Knee point (maximum distance from ideal line)
# Requires more complex computation
```

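Option 3 can be sketched as follows: normalize each objective to [0, 1], then pick the Pareto point farthest from the straight line joining the front's two extreme solutions (a common knee-point definition). This is an illustrative helper, shown for two minimized objectives; adapt signs for maximize directions:

```python
def knee_point(values):
    """Return the index of the knee point of a 2-objective Pareto front.

    values: list of (f1, f2) tuples, both objectives minimized.
    """
    # Normalize each objective to [0, 1]
    f1 = [v[0] for v in values]
    f2 = [v[1] for v in values]
    span1 = max(f1) - min(f1) or 1.0
    span2 = max(f2) - min(f2) or 1.0
    pts = [((a - min(f1)) / span1, (b - min(f2)) / span2) for a, b in values]

    # Line through the two extreme points of the normalized front
    (x1, y1), (x2, y2) = min(pts), max(pts)
    best_i, best_d = 0, -1.0
    for i, (x, y) in enumerate(pts):
        # Perpendicular distance from (x, y) to the extreme line
        d = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
        if d > best_d:
            best_i, best_d = i, d
    return best_i
```

Usage: for the front `[(0.0, 1.0), (0.2, 0.3), (1.0, 0.0)]` the middle point is the knee, since both endpoints lie on the extreme line itself.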
---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| RuntimeError on `best_trial` | Multi-objective study using single API | Use conditional check pattern |
| Empty Pareto front | No feasible solutions | Check constraints, relax if needed |
| Only 1 Pareto solution | Objectives not conflicting | Verify objectives are truly competing |
| NSGA-II with single objective | Wrong config | Use TPE/CMA-ES for single-objective |

---

## Cross-References

- **Depends On**: None (mandatory for all)
- **Used By**: All optimization components
- **Integrates With**:
  - [SYS_10_IMSO](./SYS_10_IMSO.md) (selects NSGA-II for multi-objective)
  - [SYS_13_DASHBOARD_TRACKING](./SYS_13_DASHBOARD_TRACKING.md) (Pareto visualization)
- **See Also**: [OP_04_ANALYZE_RESULTS](../operations/OP_04_ANALYZE_RESULTS.md) for Pareto analysis

---

## Implementation Files

Files that implement this protocol:
- `optimization_engine/intelligent_optimizer.py` - `_compile_results()` method
- `optimization_engine/study_continuation.py` - Result handling
- `optimization_engine/hybrid_study_creator.py` - Study creation

Files requiring this protocol:
- [ ] `optimization_engine/study_continuation.py`
- [ ] `optimization_engine/hybrid_study_creator.py`
- [ ] `optimization_engine/intelligent_setup.py`
- [ ] `optimization_engine/llm_optimization_runner.py`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-11-20 | Initial release, mandatory for all engines |

435
docs/protocols/system/SYS_13_DASHBOARD_TRACKING.md
Normal file
@@ -0,0 +1,435 @@
# SYS_13: Real-Time Dashboard Tracking

<!--
PROTOCOL: Real-Time Dashboard Tracking
LAYER: System
VERSION: 1.0
STATUS: Active
LAST_UPDATED: 2025-12-05
PRIVILEGE: user
LOAD_WITH: [SYS_10_IMSO, SYS_11_MULTI_OBJECTIVE]
-->

## Overview

Protocol 13 implements a comprehensive real-time web dashboard for monitoring optimization studies. It provides live visualization of optimizer state, Pareto fronts, parallel coordinates, and trial history with automatic updates every trial.

**Key Feature**: Every trial completion writes state to JSON, enabling live browser updates.

---

## When to Use

| Trigger | Action |
|---------|--------|
| "dashboard", "visualization" mentioned | Load this protocol |
| "real-time", "monitoring" requested | Enable dashboard tracking |
| Multi-objective study | Dashboard shows Pareto front |
| Want to see progress visually | Point to `localhost:3000` |

---

## Quick Reference

**Dashboard URLs**:

| Service | URL | Purpose |
|---------|-----|---------|
| Frontend | `http://localhost:3000` | Main dashboard |
| Backend API | `http://localhost:8000` | REST API |
| Optuna Dashboard | `http://localhost:8080` | Alternative viewer |

**Start Commands**:

```bash
# Backend
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000

# Frontend
cd atomizer-dashboard/frontend
npm run dev
```

---

## Architecture

```
Trial Completion (Optuna)
        │
        ▼
Realtime Callback (optimization_engine/realtime_tracking.py)
        │
        ▼
Write optimizer_state.json
        │
        ▼
Backend API /optimizer-state endpoint
        │
        ▼
Frontend Components (2s polling)
        │
        ▼
User sees live updates in browser
```

---

## Backend Components

### 1. Real-Time Tracking System (`realtime_tracking.py`)

**Purpose**: Write JSON state files after every trial completion.

**Integration** (in `intelligent_optimizer.py`):

```python
from optimization_engine.realtime_tracking import create_realtime_callback

# Create callback
callback = create_realtime_callback(
    tracking_dir=results_dir / "intelligent_optimizer",
    optimizer_ref=self,
    verbose=True
)

# Register with Optuna
study.optimize(objective, n_trials=n_trials, callbacks=[callback])
```

**Data Structure** (`optimizer_state.json`):

```json
{
  "timestamp": "2025-11-21T15:27:28.828930",
  "trial_number": 29,
  "total_trials": 50,
  "current_phase": "adaptive_optimization",
  "current_strategy": "GP_UCB",
  "is_multi_objective": true,
  "study_directions": ["maximize", "minimize"]
}
```

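A minimal version of such a callback can be sketched as below. This is illustrative only; the real `create_realtime_callback` tracks additional state such as the current phase and strategy:

```python
import json
from datetime import datetime
from pathlib import Path

def make_state_callback(tracking_dir: Path, total_trials: int):
    """Return an Optuna-style callback that dumps study state after each trial."""
    tracking_dir.mkdir(parents=True, exist_ok=True)

    def callback(study, trial):
        state = {
            "timestamp": datetime.now().isoformat(),
            "trial_number": trial.number,
            "total_trials": total_trials,
            "is_multi_objective": len(study.directions) > 1,
            "study_directions": [d.name.lower() for d in study.directions],
        }
        # The dashboard polls this file every few seconds
        (tracking_dir / "optimizer_state.json").write_text(json.dumps(state, indent=2))

    return callback
```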
### 2. REST API Endpoints

**Base**: `/api/optimization/studies/{study_id}/`

| Endpoint | Method | Returns |
|----------|--------|---------|
| `/metadata` | GET | Objectives, design vars, constraints with units |
| `/optimizer-state` | GET | Current phase, strategy, progress |
| `/pareto-front` | GET | Pareto-optimal solutions (multi-objective) |
| `/history` | GET | All trial history |
| `/` | GET | List all studies |

**Unit Inference**:

```python
def _infer_objective_unit(objective: Dict) -> str:
    name = objective.get("name", "").lower()
    desc = objective.get("description", "").lower()

    if "frequency" in name or "hz" in desc:
        return "Hz"
    elif "stiffness" in name or "n/mm" in desc:
        return "N/mm"
    elif "mass" in name or "kg" in desc:
        return "kg"
    # ... more patterns
```

---

## Frontend Components

### 1. OptimizerPanel (`components/OptimizerPanel.tsx`)

**Displays**:
- Current phase (Characterization, Exploration, Exploitation, Adaptive)
- Current strategy (TPE, GP, NSGA-II, etc.)
- Progress bar with trial count
- Multi-objective indicator

```
┌─────────────────────────────────┐
│ Intelligent Optimizer Status    │
├─────────────────────────────────┤
│ Phase: [Adaptive Optimization]  │
│ Strategy: [GP_UCB]              │
│ Progress: [████████░░] 29/50    │
│ Multi-Objective: ✓              │
└─────────────────────────────────┘
```

### 2. ParetoPlot (`components/ParetoPlot.tsx`)

**Features**:
- Scatter plot of Pareto-optimal solutions
- Pareto front line connecting optimal points
- **3 Normalization Modes**:
  - **Raw**: Original engineering values
  - **Min-Max**: Scales to [0, 1]
  - **Z-Score**: Standardizes to mean=0, std=1
- Tooltip shows raw values regardless of normalization
- Color-coded: green=feasible, red=infeasible

### 3. ParallelCoordinatesPlot (`components/ParallelCoordinatesPlot.tsx`)

**Features**:
- High-dimensional visualization (objectives + design variables)
- Interactive trial selection
- Normalized [0, 1] axes
- Color coding: green (feasible), red (infeasible), yellow (selected)

```
Stiffness      Mass      support_angle   tip_thickness
    │           │              │               │
    │     ╱─────╲              ╱               │
    │    ╱       ╲────────────╱                │
    │   ╱         ╲                            │
```

### 4. Dashboard Layout

```
┌──────────────────────────────────────────────────┐
│ Study Selection                                  │
├──────────────────────────────────────────────────┤
│ Metrics Grid (Best, Avg, Trials, Pruned)         │
├──────────────────────────────────────────────────┤
│ [OptimizerPanel]        [ParetoPlot]             │
├──────────────────────────────────────────────────┤
│ [ParallelCoordinatesPlot - Full Width]           │
├──────────────────────────────────────────────────┤
│ [Convergence]           [Parameter Space]        │
├──────────────────────────────────────────────────┤
│ [Recent Trials Table]                            │
└──────────────────────────────────────────────────┘
```

---

## Configuration

**In `optimization_config.json`**:

```json
{
  "dashboard_settings": {
    "enabled": true,
    "port": 8000,
    "realtime_updates": true
  }
}
```

**Study Requirements**:
- Must use Protocol 10 (IntelligentOptimizer) for optimizer state
- Must have `optimization_config.json` with objectives and design_variables
- Real-time tracking enabled automatically with Protocol 10

---

## Usage Workflow

### 1. Start Dashboard

```bash
# Terminal 1: Backend
cd atomizer-dashboard/backend
python -m uvicorn api.main:app --reload --port 8000

# Terminal 2: Frontend
cd atomizer-dashboard/frontend
npm run dev
```

### 2. Start Optimization

```bash
cd studies/my_study
conda activate atomizer
python run_optimization.py --n-trials 50
```

### 3. View Dashboard

- Open browser to `http://localhost:3000`
- Select study from dropdown
- Watch real-time updates every trial

### 4. Interact with Plots

- Toggle normalization on Pareto plot
- Click lines in parallel coordinates to select trials
- Hover for detailed trial information

---

## Performance

| Metric | Value |
|--------|-------|
| Backend endpoint latency | ~10ms |
| Frontend polling interval | 2 seconds |
| Real-time write overhead | <5ms per trial |
| Dashboard initial load | <500ms |

---

## Integration with Other Protocols

### Protocol 10 Integration

- Real-time callback integrated into `IntelligentOptimizer.optimize()`
- Tracks phase transitions (characterization → adaptive optimization)
- Reports strategy changes

### Protocol 11 Integration

- Pareto front endpoint checks `len(study.directions) > 1`
- Dashboard conditionally renders Pareto plots
- Uses Optuna's `study.best_trials` for Pareto front

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "No Pareto front data yet" | Single-objective or no trials | Wait for trials, check objectives |
| OptimizerPanel shows "Not available" | Not using Protocol 10 | Enable IntelligentOptimizer |
| Units not showing | Missing unit in config | Add `unit` field or use pattern in description |
| Dashboard not updating | Backend not running | Start backend with uvicorn |
| CORS errors | Backend/frontend mismatch | Check ports, restart both |

---

## Cross-References

- **Depends On**: [SYS_10_IMSO](./SYS_10_IMSO.md), [SYS_11_MULTI_OBJECTIVE](./SYS_11_MULTI_OBJECTIVE.md)
- **Used By**: [OP_03_MONITOR_PROGRESS](../operations/OP_03_MONITOR_PROGRESS.md)
- **See Also**: Optuna Dashboard for alternative visualization

---

## Implementation Files

**Backend**:
- `atomizer-dashboard/backend/api/main.py` - FastAPI app
- `atomizer-dashboard/backend/api/routes/optimization.py` - Endpoints
- `optimization_engine/realtime_tracking.py` - Callback system

**Frontend**:
- `atomizer-dashboard/frontend/src/pages/Dashboard.tsx` - Main page
- `atomizer-dashboard/frontend/src/components/OptimizerPanel.tsx`
- `atomizer-dashboard/frontend/src/components/ParetoPlot.tsx`
- `atomizer-dashboard/frontend/src/components/ParallelCoordinatesPlot.tsx`

---

## Implementation Details

### Backend API Example (FastAPI)

```python
@router.get("/studies/{study_id}/pareto-front")
async def get_pareto_front(study_id: str):
    """Get Pareto-optimal solutions for multi-objective studies."""
    study = optuna.load_study(study_name=study_id, storage=storage)

    if len(study.directions) == 1:
        return {"is_multi_objective": False}

    return {
        "is_multi_objective": True,
        "pareto_front": [
            {
                "trial_number": t.number,
                "values": t.values,
                "params": t.params,
                "user_attrs": dict(t.user_attrs)
            }
            for t in study.best_trials
        ]
    }
```

### Frontend OptimizerPanel (React/TypeScript)

```typescript
export function OptimizerPanel({ studyId }: { studyId: string }) {
  const [state, setState] = useState<OptimizerState | null>(null);

  useEffect(() => {
    const fetchState = async () => {
      const res = await fetch(`/api/optimization/studies/${studyId}/optimizer-state`);
      setState(await res.json());
    };
    fetchState();
    const interval = setInterval(fetchState, 2000); // matches the 2 s polling interval
    return () => clearInterval(interval);
  }, [studyId]);

  return (
    <Card title="Optimizer Status">
      <div>Phase: {state?.current_phase}</div>
      <div>Strategy: {state?.current_strategy}</div>
      <ProgressBar value={state?.trial_number} max={state?.total_trials} />
    </Card>
  );
}
```

### Callback Integration

**CRITICAL**: Every `study.optimize()` call must include the realtime callback:

```python
# In IntelligentOptimizer
self.realtime_callback = create_realtime_callback(
    tracking_dir=self.tracking_dir,
    optimizer_ref=self,
    verbose=self.verbose
)

# Register with ALL optimize calls
self.study.optimize(
    objective_function,
    n_trials=check_interval,
    callbacks=[self.realtime_callback]  # Required for real-time updates
)
```

---

## Chart Library Options

The dashboard supports two chart libraries:

| Feature | Recharts | Plotly |
|---------|----------|--------|
| Load Speed | Fast | Slower (lazy loaded) |
| Interactivity | Basic | Advanced |
| Export | Screenshot | PNG/SVG native |
| 3D Support | No | Yes |
| Real-time Updates | Better | Good |

**Recommendation**: Use Recharts during active optimization, Plotly for post-analysis.

### Quick Start

```bash
# Both backend and frontend
python start_dashboard.py

# Or manually:
cd atomizer-dashboard/backend && python -m uvicorn api.main:app --port 8000
cd atomizer-dashboard/frontend && npm run dev
```

Access at: `http://localhost:3003`

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.2 | 2025-12-05 | Added chart library options |
| 1.1 | 2025-12-05 | Added implementation code snippets |
| 1.0 | 2025-11-21 | Initial release with real-time tracking |

564
docs/protocols/system/SYS_14_NEURAL_ACCELERATION.md
Normal file
@@ -0,0 +1,564 @@
# SYS_14: Neural Network Acceleration

<!--
PROTOCOL: Neural Network Surrogate Acceleration
LAYER: System
VERSION: 2.0
STATUS: Active
LAST_UPDATED: 2025-12-06
PRIVILEGE: user
LOAD_WITH: [SYS_10_IMSO, SYS_11_MULTI_OBJECTIVE]
-->

## Overview

Atomizer provides **neural network surrogate acceleration** enabling 100-1000x faster optimization by replacing expensive FEA evaluations with instant neural predictions.

**Two approaches available**:
1. **MLP Surrogate** (simple, integrated) - 4-layer MLP trained on FEA data, runs within study
2. **GNN Field Predictor** (advanced) - Graph neural network for full field predictions

**Key Innovation**: Train once on FEA data, then explore 5,000-50,000+ designs in the time it takes to run 50 FEA trials.

---

## When to Use

| Trigger | Action |
|---------|--------|
| >50 trials needed | Consider neural acceleration |
| "neural", "surrogate", "NN" mentioned | Load this protocol |
| "fast", "acceleration", "speed" needed | Suggest neural acceleration |
| Training data available | Enable surrogate |

---

## Quick Reference

**Performance Comparison**:

| Metric | Traditional FEA | Neural Network | Improvement |
|--------|-----------------|----------------|-------------|
| Time per evaluation | 10-30 minutes | 4.5 milliseconds | **2,000-500,000x** |
| Trials per hour | 2-6 | 800,000+ | **1000x** |
| Design exploration | ~50 designs | ~50,000 designs | **1000x** |

**Model Types**:

| Model | Purpose | Use When |
|-------|---------|----------|
| **MLP Surrogate** | Direct objective prediction | Simple studies, quick setup |
| Field Predictor GNN | Full displacement/stress fields | Need field visualization |
| Parametric Predictor GNN | Direct objective prediction | Complex geometry, need accuracy |
| Ensemble | Uncertainty quantification | Need confidence bounds |

---


## MLP Surrogate (Recommended for Quick Start)

### Overview

The MLP (Multi-Layer Perceptron) surrogate is a simple but effective neural network that predicts objectives directly from design parameters. It is integrated into the study workflow via `run_nn_optimization.py`.

### Architecture

```
Input Layer (N design variables)
  ↓
Linear(N, 64) + ReLU + BatchNorm + Dropout(0.1)
  ↓
Linear(64, 128) + ReLU + BatchNorm + Dropout(0.1)
  ↓
Linear(128, 128) + ReLU + BatchNorm + Dropout(0.1)
  ↓
Linear(128, 64) + ReLU + BatchNorm + Dropout(0.1)
  ↓
Linear(64, M objectives)
```

**Parameters**: ~34,000 trainable
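
The stack above can be sketched in PyTorch. This is a minimal sketch only: the layer ordering (ReLU before BatchNorm before Dropout) and the `make_mlp_surrogate` helper name follow the diagram, not necessarily the exact code in `run_nn_optimization.py`.

```python
import torch
import torch.nn as nn

def make_mlp_surrogate(n_inputs: int, n_objectives: int) -> nn.Sequential:
    """Build the 4-hidden-layer MLP surrogate sketched in the diagram."""
    def block(n_in: int, n_out: int) -> list:
        # Linear + ReLU + BatchNorm + Dropout, as in the architecture diagram
        return [nn.Linear(n_in, n_out), nn.ReLU(),
                nn.BatchNorm1d(n_out), nn.Dropout(0.1)]

    layers = (block(n_inputs, 64) + block(64, 128) +
              block(128, 128) + block(128, 64))
    layers.append(nn.Linear(64, n_objectives))  # output head: M objectives
    return nn.Sequential(*layers)

# e.g. 4 design variables, 3 objectives (mass, stress, stiffness)
model = make_mlp_surrogate(4, 3)
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
```

With 4 inputs and 3 outputs this comes to roughly 34,000 trainable parameters, matching the count quoted above.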

### Workflow Modes

#### 1. Standard Hybrid Mode (`--all`)

Run all phases sequentially:

```bash
python run_nn_optimization.py --all
```

Phases:
1. **Export**: Extract training data from existing FEA trials
2. **Train**: Train the MLP surrogate (300 epochs by default)
3. **NN-Optimize**: Run 1000 NN trials with NSGA-II
4. **Validate**: Validate the top 10 candidates with FEA

#### 2. Hybrid Loop Mode (`--hybrid-loop`)

Iterative refinement:

```bash
python run_nn_optimization.py --hybrid-loop --iterations 5 --nn-trials 500
```

Each iteration:
1. Train/retrain the surrogate from current FEA data
2. Run NN optimization
3. Validate top candidates with FEA
4. Add validated results to the training set
5. Repeat until convergence (max error < 5%)
#### 3. Turbo Mode (`--turbo`) ⚡ RECOMMENDED
|
||||
|
||||
Aggressive single-best validation:
|
||||
```bash
|
||||
python run_nn_optimization.py --turbo --nn-trials 5000 --batch-size 100 --retrain-every 10
|
||||
```
|
||||
|
||||
Strategy:
|
||||
- Run NN in small batches (100 trials)
|
||||
- Validate ONLY the single best candidate with FEA
|
||||
- Add to training data immediately
|
||||
- Retrain surrogate every N FEA validations
|
||||
- Repeat until total NN budget exhausted
|
||||
|
||||
**Example**: 5,000 NN trials with batch=100 → 50 FEA validations in ~12 minutes
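
The control flow can be sketched as follows. The objective functions here are stand-in stubs for illustration; the real script drives the NN surrogate and the FEA solver:

```python
import random

def turbo_loop(nn_budget=5000, batch_size=100, retrain_every=10, seed=0):
    """Sketch of the Turbo Mode strategy with stubbed NN/FEA evaluations."""
    rng = random.Random(seed)
    nn_predict = lambda x: sum(x)                     # stub surrogate objective
    run_fea = lambda x: sum(x) + rng.gauss(0, 0.01)   # stub "true" objective
    training_data, fea_validations = [], 0

    while fea_validations * batch_size < nn_budget:
        # 1. Explore a small batch with the (cheap) surrogate
        batch = [[rng.random() for _ in range(3)] for _ in range(batch_size)]
        best = min(batch, key=nn_predict)
        # 2. Validate ONLY the single best candidate with FEA
        training_data.append((best, run_fea(best)))
        fea_validations += 1
        # 3. Retrain the surrogate every N FEA validations (stubbed out)
        if fea_validations % retrain_every == 0:
            pass  # retrain on training_data
    return fea_validations, training_data
```

With the defaults (5,000 NN trials, batch=100) the loop performs exactly 50 FEA validations, matching the example above.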

### Configuration

```json
{
  "neural_acceleration": {
    "enabled": true,
    "min_training_points": 50,
    "auto_train": true,
    "epochs": 300,
    "validation_split": 0.2,
    "nn_trials": 1000,
    "validate_top_n": 10,
    "model_file": "surrogate_best.pt",
    "separate_nn_database": true
  }
}
```

**Important**: `separate_nn_database: true` stores NN trials in `nn_study.db` instead of `study.db`, to avoid overloading the dashboard with thousands of NN-only results.

### Typical Accuracy

| Objective | Expected Error |
|-----------|----------------|
| Mass | 1-5% |
| Stress | 1-4% |
| Stiffness | 5-15% |

### Output Files

```
2_results/
├── study.db                    # Main FEA + validated results (dashboard)
├── nn_study.db                 # NN-only results (not in dashboard)
├── surrogate_best.pt           # Trained model weights
├── training_data.json          # Normalized training data
├── nn_optimization_state.json  # NN optimization state
├── nn_pareto_front.json        # NN-predicted Pareto front
├── validation_report.json      # FEA validation results
└── turbo_report.json           # Turbo mode results (if used)
```

---

## GNN Field Predictor (Advanced)

### Core Components

| Component | File | Purpose |
|-----------|------|---------|
| BDF/OP2 Parser | `neural_field_parser.py` | Convert NX files to neural format |
| Data Validator | `validate_parsed_data.py` | Physics and quality checks |
| Field Predictor | `field_predictor.py` | GNN for full field prediction |
| Parametric Predictor | `parametric_predictor.py` | GNN for direct objectives |
| Physics Loss | `physics_losses.py` | Physics-informed training |
| Neural Surrogate | `neural_surrogate.py` | Integration with Atomizer |
| Neural Runner | `runner_with_neural.py` | Optimization with NN acceleration |

### Workflow Diagram

```
Traditional:
Design → NX Model → Mesh → Solve (30 min) → Results → Objective

Neural (after training):
Design → Neural Network (4.5 ms) → Results → Objective
```

---

## Neural Model Types

### 1. Field Predictor GNN

**Use Case**: When you need full field predictions (stress distribution, deformation shape).

```
Input Features (12D per node):
├── Node coordinates (x, y, z)
├── Material properties (E, nu, rho)
├── Boundary conditions (fixed/free per DOF)
└── Load information (force magnitude, direction)

GNN Layers (6 message passing):
├── MeshGraphConv (custom for FEA topology)
├── Layer normalization
├── ReLU activation
└── Dropout (0.1)

Output (per node):
├── Displacement (6 DOF: Tx, Ty, Tz, Rx, Ry, Rz)
└── Von Mises stress (1 value)
```

**Parameters**: ~718,221 trainable

### 2. Parametric Predictor GNN (Recommended)

**Use Case**: Direct optimization objective prediction (fastest option).

```
Design Parameters (ND) → Design Encoder (MLP) → GNN Backbone → Scalar Heads

Output (objectives):
├── mass (grams)
├── frequency (Hz)
├── max_displacement (mm)
└── max_stress (MPa)
```

**Parameters**: ~500,000 trainable

### 3. Ensemble Models

**Use Case**: Uncertainty quantification.

1. Train 3-5 models with different random seeds
2. At inference, run all models
3. Use the mean for prediction, the standard deviation for uncertainty
4. High uncertainty → trigger FEA validation

---

## Training Pipeline

### Step 1: Collect Training Data

Enable export in the workflow config:

```json
{
  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/my_study"
  }
}
```

Output structure:
```
atomizer_field_training_data/my_study/
├── trial_0001/
│   ├── input/model.bdf    # Nastran input
│   ├── output/model.op2   # Binary results
│   └── metadata.json      # Design params + objectives
├── trial_0002/
│   └── ...
└── study_summary.json
```

**Recommended**: 100-500 FEA samples for good generalization.

### Step 2: Parse to Neural Format

```bash
cd atomizer-field
python batch_parser.py ../atomizer_field_training_data/my_study
```

Creates HDF5 + JSON files per trial.

### Step 3: Train Model

**Parametric Predictor** (recommended):
```bash
python train_parametric.py \
    --train_dir ../training_data/parsed \
    --val_dir ../validation_data/parsed \
    --epochs 200 \
    --hidden_channels 128 \
    --num_layers 4
```

**Field Predictor**:
```bash
python train.py \
    --train_dir ../training_data/parsed \
    --epochs 200 \
    --model FieldPredictorGNN \
    --hidden_channels 128 \
    --num_layers 6 \
    --physics_loss_weight 0.3
```

### Step 4: Validate

```bash
python validate.py --checkpoint runs/my_model/checkpoint_best.pt
```

Expected output:
```
Validation Results:
├── Mean Absolute Error: 2.3% (mass), 1.8% (frequency)
├── R² Score: 0.987
├── Inference Time: 4.5ms ± 0.8ms
└── Physics Violations: 0.2%
```

### Step 5: Deploy

```json
{
  "neural_surrogate": {
    "enabled": true,
    "model_checkpoint": "atomizer-field/runs/my_model/checkpoint_best.pt",
    "confidence_threshold": 0.85
  }
}
```

---

## Configuration

### Full Neural Configuration Example

```json
{
  "study_name": "bracket_neural_optimization",

  "surrogate_settings": {
    "enabled": true,
    "model_type": "parametric_gnn",
    "model_path": "models/bracket_surrogate.pt",
    "confidence_threshold": 0.85,
    "validation_frequency": 10,
    "fallback_to_fea": true
  },

  "training_data_export": {
    "enabled": true,
    "export_dir": "atomizer_field_training_data/bracket_study",
    "export_bdf": true,
    "export_op2": true,
    "export_fields": ["displacement", "stress"]
  },

  "neural_optimization": {
    "initial_fea_trials": 50,
    "neural_trials": 5000,
    "retraining_interval": 500,
    "uncertainty_threshold": 0.15
  }
}
```

### Configuration Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `enabled` | bool | false | Enable neural surrogate |
| `model_type` | string | "parametric_gnn" | Model architecture |
| `model_path` | string | - | Path to trained model |
| `confidence_threshold` | float | 0.85 | Min confidence for predictions |
| `validation_frequency` | int | 10 | FEA validation every N trials |
| `fallback_to_fea` | bool | true | Use FEA when uncertain |

---

## Hybrid FEA/Neural Workflow

### Phase 1: FEA Exploration (50-100 trials)
- Run standard FEA optimization
- Export training data automatically
- Build an understanding of the design landscape

### Phase 2: Neural Training
- Parse the collected data
- Train the parametric predictor
- Validate accuracy

### Phase 3: Neural Acceleration (1000s of trials)
- Use the neural network for rapid exploration
- Validate periodically with FEA
- Retrain if the data distribution shifts

### Phase 4: FEA Refinement (10-20 trials)
- Validate top candidates with FEA
- Ensure results are physically accurate
- Generate the final Pareto front

---

## Adaptive Iteration Loop

For complex optimizations, use iterative refinement:

```
┌─────────────────────────────────────────────────────────────────┐
│ Iteration 1:                                                    │
│ ┌──────────────┐    ┌──────────────┐    ┌──────────────┐        │
│ │ Initial FEA  │ -> │   Train NN   │ -> │  NN Search   │        │
│ │  (50-100)    │    │  Surrogate   │    │ (1000 trials)│        │
│ └──────────────┘    └──────────────┘    └──────────────┘        │
│                                                │                │
│ Iteration 2+:                                  ▼                │
│ ┌──────────────┐    ┌──────────────┐    ┌──────────────┐        │
│ │ Validate Top │ -> │  Retrain NN  │ -> │  NN Search   │        │
│ │ NN with FEA  │    │ with new data│    │ (1000 trials)│        │
│ └──────────────┘    └──────────────┘    └──────────────┘        │
└─────────────────────────────────────────────────────────────────┘
```

### Adaptive Configuration

```json
{
  "adaptive_settings": {
    "enabled": true,
    "initial_fea_trials": 50,
    "nn_trials_per_iteration": 1000,
    "fea_validation_per_iteration": 5,
    "max_iterations": 10,
    "convergence_threshold": 0.01,
    "retrain_epochs": 100
  }
}
```

### Convergence Criteria

Stop when any of the following holds:
- No improvement for 2-3 consecutive iterations
- The FEA budget limit is reached
- Objective improvement falls below the 1% threshold
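
A minimal sketch of such a stop test, assuming a list of best-objective values per iteration (minimization); the function name and `patience` parameter are hypothetical, not part of the adaptive runner's API:

```python
def converged(history, patience=3, rel_threshold=0.01):
    """Return True when the relative improvement of the best objective over
    the last `patience` iterations falls below `rel_threshold`."""
    if len(history) <= patience:
        return False  # not enough iterations to judge
    old, new = history[-patience - 1], history[-1]
    improvement = (old - new) / abs(old) if old else 0.0
    return improvement < rel_threshold
```

For example, `converged([10, 9, 8.99, 8.99, 8.99])` reports convergence, while a still-improving run such as `[10, 8, 6, 4, 2]` does not.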

### Output Files

```
studies/my_study/3_results/
├── adaptive_state.json     # Current iteration state
├── surrogate_model.pt      # Trained neural network
└── training_history.json   # NN training metrics
```

---

## Loss Functions

### Data Loss (MSE)
Standard prediction error:
```python
data_loss = MSE(predicted, target)
```

### Physics Loss
Enforce physical constraints:
```python
physics_loss = (
    equilibrium_loss +      # Force balance
    boundary_loss +         # BC satisfaction
    compatibility_loss      # Strain compatibility
)
```

### Combined Training
```python
total_loss = data_loss + 0.3 * physics_loss
```

The physics loss weight is typically 0.1-0.5.
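
Putting the pieces together, a self-contained numeric sketch of the combined objective (numpy stand-ins here; the real physics residual terms live in `physics_losses.py`):

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between prediction and target arrays."""
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

def combined_loss(pred, target, physics_residuals, physics_weight=0.3):
    """data_loss + w * physics_loss, where the physics loss is taken as the
    mean squared residual of the (stubbed) equilibrium/BC/compatibility terms."""
    data_loss = mse(pred, target)
    physics_loss = float(np.mean(np.square(physics_residuals)))
    return data_loss + physics_weight * physics_loss

# Example: perfect predictions, but small residual physics violations
loss = combined_loss([1.0, 2.0], [1.0, 2.0], physics_residuals=[0.1, -0.1])
```

With perfect predictions the total loss is purely the weighted physics term, illustrating how the weight trades data fit against physical consistency.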

---

## Uncertainty Quantification

### Ensemble Method
```python
import numpy as np

# Run each model in the ensemble
predictions = [model_i(x) for model_i in ensemble]

# Statistics across the ensemble (axis 0 = models)
mean_prediction = np.mean(predictions, axis=0)
uncertainty = np.std(predictions, axis=0)

# Decision: fall back to FEA when the ensemble disagrees
if np.max(uncertainty) > threshold:
    result = run_fea(x)
else:
    result = mean_prediction
```

### Confidence Thresholds

| Uncertainty | Action |
|-------------|--------|
| < 5% | Use neural prediction |
| 5-15% | Use neural, flag for validation |
| > 15% | Fall back to FEA |
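
The three bands above can be operationalized as a small dispatch function. This is a sketch: the function name is hypothetical, and treating the 5% and 15% boundaries as inclusive of the middle band is an assumption.

```python
def uncertainty_action(relative_uncertainty: float) -> str:
    """Map a relative uncertainty (std / mean) to the table's three bands."""
    if relative_uncertainty < 0.05:
        return "use_nn"                        # < 5%: trust the prediction
    if relative_uncertainty <= 0.15:
        return "use_nn_flag_for_validation"    # 5-15%: use it, but flag
    return "fallback_to_fea"                   # > 15%: run FEA instead
```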

---

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| High prediction error | Insufficient training data | Collect more FEA samples |
| Out-of-distribution warnings | Design outside training range | Retrain with expanded range |
| Slow inference | Large mesh | Use the parametric predictor instead |
| Physics violations | Low physics loss weight | Increase `physics_loss_weight` |

---

## Cross-References

- **Depends On**: [SYS_10_IMSO](./SYS_10_IMSO.md) for the optimization framework
- **Used By**: [OP_02_RUN_OPTIMIZATION](../operations/OP_02_RUN_OPTIMIZATION.md), [OP_05_EXPORT_TRAINING_DATA](../operations/OP_05_EXPORT_TRAINING_DATA.md)
- **See Also**: [modules/neural-acceleration.md](../../.claude/skills/modules/neural-acceleration.md)

---

## Implementation Files

```
atomizer-field/
├── neural_field_parser.py    # BDF/OP2 parsing
├── field_predictor.py        # Field GNN
├── parametric_predictor.py   # Parametric GNN
├── train.py                  # Field training
├── train_parametric.py       # Parametric training
├── validate.py               # Model validation
├── physics_losses.py         # Physics-informed loss
└── batch_parser.py           # Batch data conversion

optimization_engine/
├── neural_surrogate.py       # Atomizer integration
└── runner_with_neural.py     # Neural runner
```

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.0 | 2025-12-06 | Added MLP Surrogate with Turbo Mode |
| 1.0 | 2025-12-05 | Initial consolidation from neural docs |