---
skill_id: SKILL_001
version: 2.1
last_updated: 2025-12-22
type: reference
code_dependencies:
- optimization_engine/extractors/__init__.py
- optimization_engine/method_selector.py
requires_skills:
- SKILL_000
---
# Atomizer Quick Reference Cheatsheet
**Version**: 2.1
**Updated**: 2025-12-22
**Purpose**: Rapid lookup for common operations. "I want X → Use Y"
---
## Task → Protocol Quick Lookup
| I want to... | Use Protocol | Key Command/Action |
|--------------|--------------|-------------------|
| Create a new optimization study | OP_01 | Place in `studies/{geometry_type}/`, generate config + runner + **README.md** |
| Run an optimization | OP_02 | `conda activate atomizer && python run_optimization.py` |
| Check optimization progress | OP_03 | Query `study.db` or check dashboard at `localhost:3000` |
| See best results | OP_04 | `optuna-dashboard sqlite:///study.db` or dashboard |
| Export neural training data | OP_05 | `python run_optimization.py --export-training` |
| Fix an error | OP_06 | Read error log → follow diagnostic tree |
| Add custom physics extractor | EXT_01 | Create in `optimization_engine/extractors/` |
| Add lifecycle hook | EXT_02 | Create in `optimization_engine/plugins/` |
| Generate physics insight | SYS_16 | `python -m optimization_engine.insights generate <study>` |
---
## Extractor Quick Reference
| Physics | Extractor | Function Call |
|---------|-----------|---------------|
| Max displacement | E1 | `extract_displacement(op2_file, subcase=1)` |
| Natural frequency | E2 | `extract_frequency(op2_file, subcase=1, mode_number=1)` |
| Von Mises stress | E3 | `extract_solid_stress(op2_file, subcase=1, element_type='cquad4')` |
| BDF mass | E4 | `extract_mass_from_bdf(bdf_file)` |
| CAD expression mass | E5 | `extract_mass_from_expression(prt_file, expression_name='p173')` |
| Field data | E6 | `FieldDataExtractor(field_file, result_column, aggregation)` |
| Stiffness (k=F/δ) | E7 | `StiffnessCalculator(...)` |
| Zernike WFE (standard) | E8 | `extract_zernike_from_op2(op2_file, bdf_file, subcase)` |
| Zernike relative | E9 | `extract_zernike_relative_rms(op2_file, bdf_file, target, ref)` |
| Zernike builder | E10 | `ZernikeObjectiveBuilder(op2_finder)` |
| Part mass + material | E11 | `extract_part_mass_material(prt_file)` → mass, volume, material |
| Zernike Analytic | E20 | `extract_zernike_analytic(op2_file, focal_length=5000.0)` |
| **Zernike OPD** | E22 | `extract_zernike_opd(op2_file)` (**most rigorous, RECOMMENDED**) |
> **Mass extraction tip**: Always use E11 (geometry .prt) over E4 (BDF) for accuracy.
> pyNastran under-reports mass ~7% on hex-dominant meshes with tet/pyramid fills.
**Full details**: See `SYS_12_EXTRACTOR_LIBRARY.md` or `modules/extractors-catalog.md`
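The E7 row boils down to Hooke's relation k = F/δ. A minimal standalone sketch of that relation (the function name is illustrative; the real entry point is `StiffnessCalculator`):

```python
# Hypothetical sketch of the k = F/delta relation behind extractor E7
# (function name is illustrative; the real API is StiffnessCalculator).
def stiffness(force_n: float, displacement_mm: float) -> float:
    """Stiffness k = F / delta, in N/mm."""
    if displacement_mm == 0:
        raise ValueError("displacement is zero; stiffness undefined")
    return force_n / displacement_mm

print(stiffness(1000.0, 0.25))  # → 4000.0
```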
---
## Protocol Selection Guide
### Single Objective Optimization
```
Question: Do you have ONE goal to minimize/maximize?
├─ Yes, simple problem (smooth, <10 params)
│  └─► Protocol 10 + CMA-ES or GP-BO sampler
├─ Yes, complex problem (noisy, many params)
│  └─► Protocol 10 + TPE sampler
└─ Not sure about problem characteristics?
   └─► Protocol 10 with adaptive characterization (default)
```
### Multi-Objective Optimization
```
Question: Do you have 2-3 competing goals?
├─ Yes (e.g., minimize mass AND minimize stress)
│  └─► Protocol 11 + NSGA-II sampler
└─ Pareto front needed?
   └─► Protocol 11 (returns best_trials, not best_trial)
```
### Neural Network Acceleration
```
Question: Do you need >50 trials OR a surrogate model?
├─ Yes
│  └─► Protocol 14 (configure surrogate_settings in config)
└─ Training data export needed?
   └─► OP_05_EXPORT_TRAINING_DATA.md
```
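The three decision trees above can be collapsed into one small helper. This is a sketch only: the first return string follows the `protocol_10_single_objective` naming from the config example in this document, while the other two protocol strings are illustrative assumptions, not confirmed Atomizer identifiers.

```python
def pick_protocol(n_objectives: int, n_trials: int, want_surrogate: bool = False) -> str:
    """Collapse the decision trees above into one lookup (illustrative names)."""
    if n_objectives >= 2:
        return "protocol_11_multi_objective"   # NSGA-II / Pareto front
    if n_trials > 50 or want_surrogate:
        return "protocol_14_neural"            # surrogate acceleration
    return "protocol_10_single_objective"      # adaptive IMSO default

print(pick_protocol(2, 30))  # → protocol_11_multi_objective
```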
---
## Configuration Quick Reference
### optimization_config.json Structure
```json
{
  "study_name": "my_study",
  "design_variables": [
    {"name": "thickness", "min": 1.0, "max": 10.0, "unit": "mm"}
  ],
  "objectives": [
    {"name": "mass", "goal": "minimize", "unit": "kg"}
  ],
  "constraints": [
    {"name": "max_stress", "type": "<=", "threshold": 250, "unit": "MPa"}
  ],
  "optimization_settings": {
    "protocol": "protocol_10_single_objective",
    "sampler": "TPESampler",
    "n_trials": 50
  },
  "simulation": {
    "model_file": "model.prt",
    "sim_file": "model.sim",
    "solver": "nastran"
  }
}
```
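Such a config can be loaded and sanity-checked with the standard library alone. A minimal sketch (key names are taken from the example above; the check list is illustrative, not Atomizer's actual validation):

```python
import json

REQUIRED = ("study_name", "design_variables", "objectives", "optimization_settings")

def load_config(text: str) -> dict:
    """Parse the JSON and run a few cheap sanity checks (illustrative only)."""
    cfg = json.loads(text)
    missing = [k for k in REQUIRED if k not in cfg]
    if missing:
        raise KeyError(f"config missing keys: {missing}")
    # every design variable needs a usable range
    for dv in cfg["design_variables"]:
        if not dv["min"] < dv["max"]:
            raise ValueError(f"empty range for design variable {dv['name']!r}")
    return cfg

cfg = load_config('''{
  "study_name": "demo",
  "design_variables": [{"name": "thickness", "min": 1.0, "max": 10.0}],
  "objectives": [{"name": "mass", "goal": "minimize"}],
  "optimization_settings": {"n_trials": 50}
}''')
print(cfg["study_name"])  # → demo
```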
### Sampler Quick Selection
| Sampler | Use When | Protocol |
|---------|----------|----------|
| `TPESampler` | Default, robust to noise | P10 |
| `CMAESSampler` | Smooth, unimodal problems | P10 |
| `GPSampler` | Expensive FEA, few trials | P10 |
| `NSGAIISampler` | Multi-objective (2-3 goals) | P11 |
| `RandomSampler` | Characterization phase only | P10 |
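The table rows above can be encoded as one heuristic lookup. A sketch only; the strings match the table, but the trait flags (`smooth`, `expensive_eval`) are illustrative names, not config keys:

```python
def pick_sampler(n_objectives: int, smooth: bool, n_params: int, expensive_eval: bool) -> str:
    """Encode the sampler selection table above (heuristics only)."""
    if n_objectives >= 2:
        return "NSGAIISampler"   # multi-objective (2-3 goals)
    if expensive_eval:
        return "GPSampler"       # expensive FEA, few trials
    if smooth and n_params < 10:
        return "CMAESSampler"    # smooth, unimodal problems
    # RandomSampler is reserved for the characterization phase
    return "TPESampler"          # robust default

print(pick_sampler(1, False, 20, False))  # → TPESampler
```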
---
## Study File Structure
```
studies/{study_name}/
├── 1_setup/
│   ├── model/                   # NX files (.prt, .sim, .fem)
│   └── optimization_config.json
├── 2_results/
│   ├── study.db                 # Optuna SQLite database
│   ├── optimizer_state.json     # Real-time state (P13)
│   └── trial_logs/
├── README.md                    # MANDATORY: Engineering blueprint
├── STUDY_REPORT.md              # MANDATORY: Results tracking
└── run_optimization.py          # Entrypoint script
```
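The layout above can be scaffolded in a few lines of `pathlib` (a sketch; Atomizer's own study generator under OP_01 may do more, such as writing the config and README templates):

```python
from pathlib import Path

def scaffold_study(root: Path, name: str) -> Path:
    """Create the study skeleton shown above; all files start empty."""
    study = root / "studies" / name
    (study / "1_setup" / "model").mkdir(parents=True, exist_ok=True)
    (study / "2_results" / "trial_logs").mkdir(parents=True, exist_ok=True)
    for filename in ("README.md", "STUDY_REPORT.md", "run_optimization.py"):
        (study / filename).touch()
    return study
```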
---
## Common Commands
```bash
# Activate environment (ALWAYS FIRST)
conda activate atomizer
# Run optimization
python run_optimization.py --start
# Run with specific trial count
python run_optimization.py --start --trials 50
# Resume interrupted optimization
python run_optimization.py --start --resume
# Export training data for neural network
python run_optimization.py --export-training
# View results in Optuna dashboard
optuna-dashboard sqlite:///3_results/study.db
# Check study status
python -c "import optuna; s=optuna.load_study(study_name='my_study', storage='sqlite:///3_results/study.db'); print(f'Trials: {len(s.trials)}')"
```
### When to Use --resume
| Scenario | Command |
|----------|---------|
| **First run of NEW study** | `python run_optimization.py --start --trials 50` |
| **First run with SEEDING** (e.g., V15 from V14) | `python run_optimization.py --start --trials 50` |
| **Continue INTERRUPTED run** | `python run_optimization.py --start --resume` |
| **Add MORE trials to completed study** | `python run_optimization.py --start --trials 20 --resume` |
**Key insight**: `--resume` is for continuing an existing `study.db`, NOT for seeding from prior studies. Seeding happens automatically on first run when `source_studies` is configured.
---
## LAC (Learning Atomizer Core) Commands
```bash
# View LAC statistics
python knowledge_base/lac.py stats
# Generate full LAC report
python knowledge_base/lac.py report
# View pending protocol updates
python knowledge_base/lac.py pending
# Query insights for a context
python knowledge_base/lac.py insights "bracket mass optimization"
```
### Python API Quick Reference
```python
from knowledge_base.lac import get_lac
lac = get_lac()
# Query prior knowledge
insights = lac.get_relevant_insights("bracket mass")
similar = lac.query_similar_optimizations("bracket", ["mass"])
rec = lac.get_best_method_for("bracket", n_objectives=1)
# Record learning
lac.record_insight("success_pattern", "context", "insight", confidence=0.8)
# Record optimization outcome
lac.record_optimization_outcome(study_name="...", geometry_type="...", ...)
```
---
## Error Quick Fixes
| Error | Likely Cause | Quick Fix |
|-------|--------------|-----------|
| "No module named optuna" | Wrong environment | `conda activate atomizer` |
| "NX session timeout" | Model too complex | Increase `timeout` in config |
| "OP2 file not found" | Solve failed | Check NX log for errors |
| "No feasible solutions" | Constraints too tight | Relax constraint thresholds |
| "NSGA-II requires >1 objective" | Wrong protocol | Use P10 for single-objective |
| "Expression not found" | Wrong parameter name | Verify expression names in NX |
| **All trials identical results** | **Missing `*_i.prt`** | **Copy idealized part to study folder!** |
**Full troubleshooting**: See `OP_06_TROUBLESHOOT.md`
---
## CRITICAL: NX FEM Mesh Update
**If all optimization trials produce identical results, the mesh is NOT updating!**
### Required Files for Mesh Updates
```
studies/{study}/1_setup/model/
├── Model.prt          # Geometry
├── Model_fem1_i.prt   # Idealized part ← MUST EXIST!
├── Model_fem1.fem     # FEM
└── Model_sim1.sim     # Simulation
```
### Why It Matters
The `*_i.prt` (idealized part) MUST be:
1. **Present** in the study folder
2. **Loaded** before `UpdateFemodel()` (already implemented in `solve_simulation.py`)
Without it, `UpdateFemodel()` runs but the mesh doesn't change!
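A pre-flight check along these lines can catch the missing file before any trials run. This is an illustrative helper, not part of `solve_simulation.py`; it assumes the idealized part is named `{fem_stem}_i.prt` next to each `.fem`, as in the layout above:

```python
from pathlib import Path

def missing_idealized_parts(model_dir: Path) -> list[str]:
    """Return stems of .fem files that have no matching *_i.prt beside them."""
    return [
        fem.stem
        for fem in sorted(model_dir.glob("*.fem"))
        if not (model_dir / f"{fem.stem}_i.prt").exists()
    ]
```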
---
## Privilege Levels
| Level | Can Create Studies | Can Add Extractors | Can Add Protocols |
|-------|-------------------|-------------------|------------------|
| user | ✓ | ✗ | ✗ |
| power_user | ✓ | ✓ | ✗ |
| admin | ✓ | ✓ | ✓ |
---
## Dashboard URLs
| Service | URL | Purpose |
|---------|-----|---------|
| Atomizer Dashboard | `http://localhost:3000` | Real-time optimization monitoring |
| Optuna Dashboard | `http://localhost:8080` | Trial history, parameter importance |
| API Backend | `http://localhost:5000` | REST API for dashboard |
---
## Protocol Numbers Reference
| # | Name | Purpose |
|---|------|---------|
| 10 | IMSO | Intelligent Multi-Strategy Optimization (adaptive) |
| 11 | Multi-Objective | NSGA-II for Pareto optimization |
| 12 | Extractor Library | Physics extraction catalog |
| 13 | Dashboard | Real-time tracking and visualization |
| 14 | Neural | Surrogate model acceleration |
| 15 | Method Selector | Recommends optimization strategy |
| 16 | Study Insights | Physics visualizations (Zernike, stress, modal) |
---
## Study Insights Quick Reference (SYS_16)
Generate physics-focused visualizations from FEA results.
### Available Insight Types
| Type | Purpose | Data Required |
|------|---------|---------------|
| `zernike_dashboard` | **RECOMMENDED: Unified WFE dashboard** | OP2 with displacements |
| `zernike_wfe` | WFE with Standard/OPD toggle | OP2 with displacements |
| `zernike_opd_comparison` | Compare Standard vs OPD methods | OP2 with displacements |
| `stress_field` | Von Mises stress contours | OP2 with stresses |
| `modal` | Mode shapes + frequencies | OP2 with eigenvectors |
| `thermal` | Temperature distribution | OP2 with temperatures |
| `design_space` | Parameter-objective landscape | study.db with 5+ trials |
### Zernike Method Comparison
| Method | Use | RMS Difference |
|--------|-----|----------------|
| **Standard (Z-only)** | Quick analysis | Baseline |
| **OPD (X,Y,Z)** ← RECOMMENDED | Any surface with lateral displacement | **+8-11% higher** (more accurate) |
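One simplified way to see the direction of the effect: treat the OPD value as the full displacement magnitude per node (a deliberate simplification; the real OPD method projects displacement along the optical path). Including lateral X,Y motion can then only add to each node's contribution, so the RMS never falls below the Z-only value. A toy illustration with made-up numbers:

```python
import math

def rms(vals):
    """Root-mean-square of a sequence."""
    return math.sqrt(sum(v * v for v in vals) / len(vals))

# Toy nodal displacements (x, y, z); units arbitrary, numbers made up.
nodes = [(0.10, 0.00, 1.00), (0.05, 0.02, 0.90), (0.00, 0.08, 1.10)]

rms_z   = rms([z for _, _, z in nodes])                    # Standard (Z-only)
rms_opd = rms([math.hypot(x, y, z) for x, y, z in nodes])  # full-vector proxy for OPD
assert rms_opd >= rms_z  # lateral motion can only add, never subtract
```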
### Commands
```bash
# List available insights for a study
python -m optimization_engine.insights list
# Generate all insights
python -m optimization_engine.insights generate studies/my_study
# Generate specific insight
python -m optimization_engine.insights generate studies/my_study --type zernike_wfe
```
### Python API
```python
from optimization_engine.insights import get_insight, list_available_insights
from pathlib import Path
study_path = Path("studies/my_study")
# Check what's available
available = list_available_insights(study_path)
# Generate Zernike WFE insight
insight = get_insight('zernike_wfe', study_path)
result = insight.generate()
print(result.html_path) # Path to generated HTML
print(result.summary) # Key metrics dict
```
**Output**: HTMLs saved to `{study}/3_insights/`
---
## CRITICAL: NXSolver Initialization Pattern
**NEVER pass full config dict to NXSolver. Use named parameters:**
```python
# WRONG - causes TypeError
self.nx_solver = NXSolver(self.config)  # ❌

# CORRECT - use FEARunner pattern from V14/V15
import re

nx_settings = self.config.get('nx_settings', {})
nx_install_dir = nx_settings.get('nx_install_path', 'C:\\Program Files\\Siemens\\NX2506')

# Extract version from the install path, e.g. "NX2506" -> "2506"
version_match = re.search(r'NX(\d+)', nx_install_dir)
nastran_version = version_match.group(1) if version_match else "2506"

self.nx_solver = NXSolver(
    master_model_dir=str(self.master_model_dir),  # Path to 1_setup/model
    nx_install_dir=nx_install_dir,
    nastran_version=nastran_version,
    timeout=nx_settings.get('simulation_timeout_s', 600),
    use_iteration_folders=True,
    study_name="my_study_name"
)
```
### FEARunner Class Pattern
Always wrap NXSolver in a `FEARunner` class for:
- Lazy initialization (setup on first use)
- Clean separation of NX setup from optimization logic
- Consistent error handling
```python
from pathlib import Path
from typing import Dict, Optional

SETUP_DIR = Path("studies/my_study/1_setup")  # defined per study in the real runner

class FEARunner:
    def __init__(self, config: Dict):
        self.config = config
        self.nx_solver = None
        self.master_model_dir = SETUP_DIR / "model"

    def setup(self):
        # Initialize NX and the solver here (runs lazily, on first use)
        ...

    def run_fea(self, params: Dict, trial_num: int) -> Optional[Dict]:
        if self.nx_solver is None:
            self.setup()
        # Run simulation...
```
**Reference implementations**:
- `studies/m1_mirror_adaptive_V14/run_optimization.py`
- `studies/m1_mirror_adaptive_V15/run_optimization.py`